[GitHub] [hadoop] avijayanhwx opened a new pull request #1536: HDDS-2164 : om.db.checkpoints is getting filling up fast.

2019-09-26 Thread GitBox
avijayanhwx opened a new pull request #1536: HDDS-2164 : om.db.checkpoints is 
getting filling up fast.
URL: https://github.com/apache/hadoop/pull/1536
 
 
   Fixed an issue where checkpoint cleanup was not happening.
   Changed the two-step process in the OM DB checkpoint servlet (creating a tar 
file for the OM DB, then writing it to the output stream) into a single-step 
process (streaming the compressed tar file directly to the output stream).
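   The single-pass idea can be sketched in plain JDK code. The actual servlet streams a tar of the RocksDB checkpoint; the sketch below uses `ZipOutputStream` instead of tar so it stays self-contained, and the class and method names are illustrative, not the real OM code. The key point it shows is writing archive entries directly to the destination stream with no intermediate archive file:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class CheckpointStreamer {

  // Write every file under checkpointDir into a compressed archive,
  // streaming straight to `out` (e.g. the servlet response stream),
  // so no intermediate archive file is created on disk.
  static void streamCheckpoint(Path checkpointDir, OutputStream out)
      throws IOException {
    try (ZipOutputStream zos = new ZipOutputStream(out)) {
      List<Path> files;
      try (Stream<Path> walk = Files.walk(checkpointDir)) {
        files = walk.filter(Files::isRegularFile)
                    .collect(Collectors.toList());
      }
      for (Path p : files) {
        // Store entries relative to the checkpoint root.
        zos.putNextEntry(new ZipEntry(
            checkpointDir.relativize(p).toString()));
        Files.copy(p, zos);
        zos.closeEntry();
      }
    }
  }
}
```

   Because each file is copied into the archive stream as it is visited, memory and disk usage stay flat regardless of checkpoint size.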
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] avijayanhwx commented on issue #1536: HDDS-2164 : om.db.checkpoints is getting filling up fast.

2019-09-26 Thread GitBox
avijayanhwx commented on issue #1536: HDDS-2164 : om.db.checkpoints is getting 
filling up fast.
URL: https://github.com/apache/hadoop/pull/1536#issuecomment-535799489
 
 
   /label ozone





[GitHub] [hadoop] anuengineer closed pull request #1513: HDDS-2149. Replace FindBugs with SpotBugs

2019-09-26 Thread GitBox
anuengineer closed pull request #1513: HDDS-2149. Replace FindBugs with SpotBugs
URL: https://github.com/apache/hadoop/pull/1513
 
 
   





[GitHub] [hadoop] anuengineer commented on issue #1513: HDDS-2149. Replace FindBugs with SpotBugs

2019-09-26 Thread GitBox
anuengineer commented on issue #1513: HDDS-2149. Replace FindBugs with SpotBugs
URL: https://github.com/apache/hadoop/pull/1513#issuecomment-535799019
 
 
   Thank you for the contribution. I have committed this to the trunk. @elek, 
thank you for the review.





[GitHub] [hadoop] adoroszlai commented on issue #1525: HDDS-2179. ConfigFileGenerator fails with Java 10 or newer

2019-09-26 Thread GitBox
adoroszlai commented on issue #1525: HDDS-2179. ConfigFileGenerator fails with 
Java 10 or newer
URL: https://github.com/apache/hadoop/pull/1525#issuecomment-535796885
 
 
   Thanks @anuengineer for reviewing and committing it.





[jira] [Commented] (HADOOP-16544) update io.netty in branch-2

2019-09-26 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939140#comment-16939140
 ] 

Masatake Iwasaki commented on HADOOP-16544:
---

TestJournalNodeRespectsBindHostKeys failed multiple times, but the cause looks 
like a build environment issue. The test succeeded on my local machine, and 
JournalNode does not use netty.
{noformat}
[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.386 s 
<<< FAILURE! - in 
org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
[ERROR] 
testHttpsBindHostKey(org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys)
  Time elapsed: 2.62 s  <<< ERROR!
java.io.FileNotFoundException: /home/jenkins/.keystore (No such file or 
directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at 
org.mortbay.resource.FileResource.getInputStream(FileResource.java:275)
at 
org.mortbay.jetty.security.SslSelectChannelConnector.createSSLContext(SslSelectChannelConnector.java:624)
...
{noformat}

> update io.netty in branch-2
> ---
>
> Key: HADOOP-16544
> URL: https://issues.apache.org/jira/browse/HADOOP-16544
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: release-blocker
> Attachments: HADOOP-16544-branch-2.001.patch, 
> HADOOP-16544-branch-2.002.patch, HADOOP-16544-branch-2.003.patch, 
> HADOOP-16544-branch-2.004.patch
>
>
> branch-2 pulls in io.netty 3.6.2.Final, which is more than 5 years old.
> The latest is 3.10.6.Final. I know updating netty is sensitive, but it deserves 
> some attention.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hadoop] hadoop-yetus commented on issue #1517: HDDS-2169

2019-09-26 Thread GitBox
hadoop-yetus commented on issue #1517: HDDS-2169
URL: https://github.com/apache/hadoop/pull/1517#issuecomment-535792469
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 11 | https://github.com/apache/hadoop/pull/1517 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1517 |
   | JIRA Issue | HDDS-2169 |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/2/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16588) Update commons-beanutils version to 1.9.4 in branch-2

2019-09-26 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939135#comment-16939135
 ] 

Masatake Iwasaki commented on HADOOP-16588:
---

[~jojochuang] how about excluding commons-beanutils-core too?  like
{noformat}
<dependency>
  <groupId>commons-configuration</groupId>
  <artifactId>commons-configuration</artifactId>
  <scope>compile</scope>
  <exclusions>
    <exclusion>
      <groupId>commons-beanutils</groupId>
      <artifactId>commons-beanutils-core</artifactId>
    </exclusion>
    <exclusion>
      <groupId>commons-digester</groupId>
      <artifactId>commons-digester</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{noformat}


> Update commons-beanutils version to 1.9.4 in branch-2
> -
>
> Key: HADOOP-16588
> URL: https://issues.apache.org/jira/browse/HADOOP-16588
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Critical
>  Labels: release-blocker
> Attachments: HADOOP-16588.branch-2.001.patch
>
>
> Similar to HADOOP-16542 but we need to do it differently.
> In branch-2, we pull in commons-beanutils through commons-configuration 1.6 
> --> commons-digester 1.8
> {noformat}
> [INFO] +- commons-configuration:commons-configuration:jar:1.6:compile
> [INFO] |  +- commons-digester:commons-digester:jar:1.8:compile
> [INFO] |  |  \- commons-beanutils:commons-beanutils:jar:1.7.0:compile
> [INFO] |  \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile
> {noformat}
> I have a patch to update version of the transitive dependency.






[jira] [Commented] (HADOOP-16588) Update commons-beanutils version to 1.9.4 in branch-2

2019-09-26 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939134#comment-16939134
 ] 

Masatake Iwasaki commented on HADOOP-16588:
---

{noformat}
[INFO] +- commons-configuration:commons-configuration:jar:1.6:compile
[INFO] |  \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile
[INFO] +- commons-digester:commons-digester:jar:1.8:compile
[INFO] +- commons-beanutils:commons-beanutils:jar:1.9.4:compile
{noformat}
This is from the dependency tree of hadoop-common with the patch applied. 
commons-configuration depends on commons-beanutils-core, which is a 
dependency-reduced commons-beanutils and was removed in BEANUTILS-379.

I think both commons-beanutils and commons-beanutils-core could be affected by 
CVE-2019-10086 due to the existence of the relevant class.
{noformat}
$ jar tvf ./share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar | grep 
BeanUtilsBean.class
 16336 Thu Aug 28 16:18:06 JST 2008 
org/apache/commons/beanutils/BeanUtilsBean.class
 12623 Thu Aug 28 16:18:06 JST 2008 
org/apache/commons/beanutils/locale/LocaleBeanUtilsBean.class

$ jar tvf ./share/hadoop/common/lib/commons-beanutils-1.9.4.jar | grep 
BeanUtilsBean.class
 12870 Sun Jul 28 18:16:38 JST 2019 
org/apache/commons/beanutils/locale/LocaleBeanUtilsBean.class
 18035 Sun Jul 28 18:16:38 JST 2019 
org/apache/commons/beanutils/BeanUtilsBean.class
{noformat}
commons-beanutils-core could be in front of commons-beanutils in the classpath.
{noformat}
$ bin/hadoop classpath --glob | sed -z 's/:/\n/g' | grep beanutils
/home/iwasakims/dist/hadoop-2.10.0-SNAPSHOT/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar
/home/iwasakims/dist/hadoop-2.10.0-SNAPSHOT/share/hadoop/common/lib/commons-beanutils-1.9.4.jar
/home/iwasakims/dist/hadoop-2.10.0-SNAPSHOT/share/hadoop/yarn/lib/commons-beanutils-core-1.8.0.jar
/home/iwasakims/dist/hadoop-2.10.0-SNAPSHOT/share/hadoop/yarn/lib/commons-beanutils-1.9.4.jar
{noformat}
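A quick way to check which copy actually wins at runtime is to ask the class loader where a class was loaded from. The helper below is a generic sketch, not an existing Hadoop utility:

```java
import java.security.CodeSource;

public class JarLocator {

  // Returns the location of the jar that actually served the named class,
  // or "(bootstrap)" for classes loaded by the bootstrap class loader.
  static String jarOf(String className) throws ClassNotFoundException {
    CodeSource cs = Class.forName(className)
        .getProtectionDomain().getCodeSource();
    return cs == null ? "(bootstrap)" : cs.getLocation().toString();
  }

  public static void main(String[] args) throws ClassNotFoundException {
    // On a Hadoop classpath, jarOf("org.apache.commons.beanutils.BeanUtilsBean")
    // would reveal whether commons-beanutils-core-1.8.0.jar or
    // commons-beanutils-1.9.4.jar is first. Demonstrated here with a JDK class,
    // which is bootstrap-loaded.
    System.out.println(jarOf("java.lang.String"));
  }
}
```

Running this on the cluster classpath would confirm whether the unpatched commons-beanutils-core classes shadow the 1.9.4 ones.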

> Update commons-beanutils version to 1.9.4 in branch-2
> -
>
> Key: HADOOP-16588
> URL: https://issues.apache.org/jira/browse/HADOOP-16588
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Critical
>  Labels: release-blocker
> Attachments: HADOOP-16588.branch-2.001.patch
>
>
> Similar to HADOOP-16542 but we need to do it differently.
> In branch-2, we pull in commons-beanutils through commons-configuration 1.6 
> --> commons-digester 1.8
> {noformat}
> [INFO] +- commons-configuration:commons-configuration:jar:1.6:compile
> [INFO] |  +- commons-digester:commons-digester:jar:1.8:compile
> [INFO] |  |  \- commons-beanutils:commons-beanutils:jar:1.7.0:compile
> [INFO] |  \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile
> {noformat}
> I have a patch to update version of the transitive dependency.






[GitHub] [hadoop] anuengineer commented on issue #1525: HDDS-2179. ConfigFileGenerator fails with Java 10 or newer

2019-09-26 Thread GitBox
anuengineer commented on issue #1525: HDDS-2179. ConfigFileGenerator fails with 
Java 10 or newer
URL: https://github.com/apache/hadoop/pull/1525#issuecomment-535789540
 
 
   The failures are not related to this patch. I have committed this patch to 
the trunk.





[GitHub] [hadoop] anuengineer closed pull request #1525: HDDS-2179. ConfigFileGenerator fails with Java 10 or newer

2019-09-26 Thread GitBox
anuengineer closed pull request #1525: HDDS-2179. ConfigFileGenerator fails 
with Java 10 or newer
URL: https://github.com/apache/hadoop/pull/1525
 
 
   





[jira] [Updated] (HADOOP-16606) checksum link from hadoop web site is broken.

2019-09-26 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16606:
---
Fix Version/s: asf-site
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Merged the PR.

> checksum link from hadoop web site is broken.
> -
>
> Key: HADOOP-16606
> URL: https://issues.apache.org/jira/browse/HADOOP-16606
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Akira Ajisaka
>Priority: Blocker
> Fix For: asf-site
>
>
> Post HADOOP-16494, the artifacts generated for a release don't include the *mds* 
> file, but the Hadoop web site's binary tarball link still points to an mds file 
> that doesn't exist. This breaks the Hadoop website.
> For the 3.2.1 release, I manually generated the *mds* file and pushed it into 
> the artifacts folder so that the Hadoop website link is not broken.
> The same issue will happen for the 3.1.3 release also.
> I am referring to the https://hadoop.apache.org/releases.html page for the 
> checksum hyperlink.
> cc:/ [~vinodkv] [~tangzhankun] [~aajisaka]






[GitHub] [hadoop] hadoop-yetus commented on issue #1535: HADOOP-16593. [YARN] Polish the protobuf plugin for hadoop-yarn-csi

2019-09-26 Thread GitBox
hadoop-yetus commented on issue #1535: HADOOP-16593. [YARN] Polish the protobuf 
plugin for hadoop-yarn-csi
URL: https://github.com/apache/hadoop/pull/1535#issuecomment-535786035
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 73 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests.  Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 58 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1212 | trunk passed |
   | +1 | compile | 1064 | trunk passed |
   | +1 | mvnsite | 64 | trunk passed |
   | +1 | shadedclient | 3226 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 54 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 38 | the patch passed |
   | +1 | compile | 1017 | the patch passed |
   | +1 | javac | 1017 | the patch passed |
   | +1 | mvnsite | 65 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 801 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 54 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 24 | hadoop-project in the patch passed. |
   | +1 | unit | 43 | hadoop-yarn-csi in the patch passed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 5651 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.0 Server=19.03.0 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1535/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1535 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml |
   | uname | Linux f2816d4bfb63 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2adcc3c |
   | Default Java | 1.8.0_222 |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1535/1/testReport/ |
   | Max. process+thread count | 307 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi U: . |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1535/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] anuengineer closed pull request #1534: HDDS-2193. Adding container related metrics in SCM.

2019-09-26 Thread GitBox
anuengineer closed pull request #1534: HDDS-2193. Adding container related 
metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1534
 
 
   





[GitHub] [hadoop] anuengineer commented on issue #1534: HDDS-2193. Adding container related metrics in SCM.

2019-09-26 Thread GitBox
anuengineer commented on issue #1534: HDDS-2193. Adding container related 
metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1534#issuecomment-535784216
 
 
   Thank you for the contribution. I have committed this patch to the trunk.





[GitHub] [hadoop] dineshchitlangia commented on issue #1519: HDDS-2174. Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-26 Thread GitBox
dineshchitlangia commented on issue #1519: HDDS-2174. Delete GDPR Encryption 
Key from metadata when a Key is deleted
URL: https://github.com/apache/hadoop/pull/1519#issuecomment-535781468
 
 
   Thank you @anuengineer  for review/commit.





[GitHub] [hadoop] anuengineer commented on issue #1519: HDDS-2174. Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-26 Thread GitBox
anuengineer commented on issue #1519: HDDS-2174. Delete GDPR Encryption Key 
from metadata when a Key is deleted
URL: https://github.com/apache/hadoop/pull/1519#issuecomment-535780292
 
 
   Thank you for the contribution. I have committed this patch to the trunk.





[GitHub] [hadoop] anuengineer closed pull request #1519: HDDS-2174. Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-26 Thread GitBox
anuengineer closed pull request #1519: HDDS-2174. Delete GDPR Encryption Key 
from metadata when a Key is deleted
URL: https://github.com/apache/hadoop/pull/1519
 
 
   





[GitHub] [hadoop] ChenSammi commented on issue #1469: HDDS-2034. Async RATIS pipeline creation and destroy through heartbea…

2019-09-26 Thread GitBox
ChenSammi commented on issue #1469: HDDS-2034. Async RATIS pipeline creation 
and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#issuecomment-535777599
 
 
   @anuengineer and @xiaoyuyao, should I provide a new patch on trunk now, or 
wait until the whole communication channel design comes out next week?





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1511: HDDS-2162. Make Kerberos related configuration support HA style config.

2019-09-26 Thread GitBox
bharatviswa504 commented on a change in pull request #1511: HDDS-2162. Make 
Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328903326
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -309,13 +305,33 @@ private OzoneManager(OzoneConfiguration conf) throws 
IOException,
   AuthenticationException {
 super(OzoneVersionInfo.OZONE_VERSION_INFO);
 Preconditions.checkNotNull(conf);
-configuration = conf;
+configuration = new OzoneConfiguration(conf);
 
 Review comment:
   Here configuration = new OzoneConfiguration(conf) makes a copy, and later I 
change configuration property values on this new object so that the original 
configuration is not affected.
   
   I verified this with the snippet below:
   ```
   
   OzoneConfiguration configuration = new OzoneConfiguration();
   configuration.set(OZONE_METADATA_DIRS,
   folder.newFolder().getAbsolutePath());
   
   OzoneConfiguration configuration1 = new 
OzoneConfiguration(configuration);
   configuration1.set(OZONE_METADATA_DIRS, "bharat");
   
   System.out.println(configuration.get(OZONE_METADATA_DIRS));
   System.out.println(configuration1.get(OZONE_METADATA_DIRS));
   ```
   
/var/folders/g5/fk451xl14vdf891pq7b6m6v0gp/T/junit852875409842836/junit4506171024308775995
   bharat
   





[jira] [Updated] (HADOOP-16593) [YARN] Polish the protobuf plugin for hadoop-yarn-csi

2019-09-26 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HADOOP-16593:
---
Component/s: build

> [YARN] Polish the protobuf plugin for hadoop-yarn-csi
> -
>
> Key: HADOOP-16593
> URL: https://issues.apache.org/jira/browse/HADOOP-16593
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> As discussed here:
> https://github.com/apache/hadoop/pull/1496#discussion_r326931072
> We should align the execution id in the parent pom.






[GitHub] [hadoop] Apache9 commented on issue #1535: HADOOP-16593. [YARN] Polish the protobuf plugin for hadoop-yarn-csi

2019-09-26 Thread GitBox
Apache9 commented on issue #1535: HADOOP-16593. [YARN] Polish the protobuf 
plugin for hadoop-yarn-csi
URL: https://github.com/apache/hadoop/pull/1535#issuecomment-535771148
 
 
   The property protobuf-compile.version is only used in Ozone, so remove it 
from the Hadoop pom. And grpc.version is only used in YARN CSI, so remove it 
from the parent pom as well.





[GitHub] [hadoop] Apache9 opened a new pull request #1535: HADOOP-16593. [YARN] Polish the protobuf plugin for hadoop-yarn-csi

2019-09-26 Thread GitBox
Apache9 opened a new pull request #1535: HADOOP-16593. [YARN] Polish the 
protobuf plugin for hadoop-yarn-csi
URL: https://github.com/apache/hadoop/pull/1535
 
 
   
   





[jira] [Commented] (HADOOP-16579) Upgrade to Apache Curator 4.2.0 in Hadoop

2019-09-26 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939081#comment-16939081
 ] 

Wei-Chiu Chuang commented on HADOOP-16579:
--

[~daryn] FYI, I think you'd be interested in making the connection to ZooKeeper 
more secure.

> Upgrade to Apache Curator 4.2.0 in Hadoop
> -
>
> Key: HADOOP-16579
> URL: https://issues.apache.org/jira/browse/HADOOP-16579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Mate Szalay-Beko
>Assignee: Norbert Kalmár
>Priority: Major
>
> Currently in Hadoop we are using [ZooKeeper version 
> 3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90].
>  ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains 
> many new features (including SSL related improvements which can be very 
> important for production use; see [the release 
> notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).
> Apache Curator is a high-level ZooKeeper client library that makes it easier 
> to use the low level ZooKeeper API. Currently [in Hadoop we are using Curator 
> 2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91]
>  and [in Ozone we use Curator 
> 2.12.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/pom.ozone.xml#L146].
> Curator 2.x supports only the ZooKeeper 3.4.x releases, while Curator 
> 3.x is compatible only with the new ZooKeeper 3.5.x releases. Fortunately, 
> the latest Curator 4.x versions are compatible with both ZooKeeper 3.4.x and 
> 3.5.x. (see [the relevant Curator 
> page|https://curator.apache.org/zk-compatibility.html]). Many Apache projects 
> have already migrated to Curator 4 (like HBase, Phoenix, Druid, etc.), other 
> components are doing it right now (e.g. Hive).
> *The aims of this task are* to:
>  - change Curator version in Hadoop to the latest stable 4.x version 
> (currently 4.2.0)
>  - also make sure we don't have multiple ZooKeeper versions in the classpath 
> to avoid runtime problems (it is 
> [recommended|https://curator.apache.org/zk-compatibility.html] to exclude the 
> ZooKeeper that comes with Curator, so that only a single ZooKeeper version is 
> used at runtime in Hadoop)
> In this ticket we still don't want to change the default ZooKeeper version in 
> Hadoop, we only want to make it possible for the community to be able to 
> build / use Hadoop with the new ZooKeeper (e.g. if they need to secure the 
> ZooKeeper communication with SSL, what is only supported in the new ZooKeeper 
> version). Upgrading to Curator 4.x should keep Hadoop to be compatible with 
> both ZooKeeper 3.4 and 3.5.
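
The exclusion the ticket recommends could be sketched as a Maven dependency fragment like the one below. The coordinates shown (org.apache.curator:curator-framework, org.apache.zookeeper:zookeeper) are the standard published artifacts, but which Curator modules Hadoop actually declares is not shown in this thread, so treat this as an illustrative sketch rather than the actual patch:
{noformat}
<dependency>
  <groupId>org.apache.curator</groupId>
  <artifactId>curator-framework</artifactId>
  <version>4.2.0</version>
  <exclusions>
    <!-- Use the ZooKeeper version pinned by Hadoop, not Curator's. -->
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{noformat}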






[jira] [Updated] (HADOOP-16579) Upgrade to Apache Curator 4.2.0 in Hadoop

2019-09-26 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16579:
-
Target Version/s: 3.3.0

> Upgrade to Apache Curator 4.2.0 in Hadoop
> -
>
> Key: HADOOP-16579
> URL: https://issues.apache.org/jira/browse/HADOOP-16579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Mate Szalay-Beko
>Assignee: Norbert Kalmár
>Priority: Major
>
> Currently in Hadoop we are using [ZooKeeper version 
> 3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90].
>  ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains 
> many new features (including SSL related improvements which can be very 
> important for production use; see [the release 
> notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).
> Apache Curator is a high level ZooKeeper client library, that makes it easier 
> to use the low level ZooKeeper API. Currently [in Hadoop we are using Curator 
> 2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91]
>  and [in Ozone we use Curator 
> 2.12.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/pom.ozone.xml#L146].
> Curator 2.x supports only the ZooKeeper 3.4.x releases, while Curator 3.x 
> is compatible only with the new ZooKeeper 3.5.x releases. Fortunately, the 
> latest Curator 4.x versions are compatible with both ZooKeeper 3.4.x and 
> 3.5.x (see [the relevant Curator 
> page|https://curator.apache.org/zk-compatibility.html]). Many Apache projects 
> have already migrated to Curator 4 (e.g. HBase, Phoenix, Druid), and other 
> components are doing so right now (e.g. Hive).
> *The aims of this task are* to:
>  - change Curator version in Hadoop to the latest stable 4.x version 
> (currently 4.2.0)
>  - also make sure we don't have multiple ZooKeeper versions on the classpath 
> to avoid runtime problems (it is 
> [recommended|https://curator.apache.org/zk-compatibility.html] to exclude the 
> ZooKeeper that comes with Curator, so that only a single ZooKeeper version is 
> used at runtime in Hadoop)
> In this ticket we still don't want to change the default ZooKeeper version in 
> Hadoop; we only want to make it possible for the community to build / use 
> Hadoop with the new ZooKeeper (e.g. if they need to secure the ZooKeeper 
> communication with SSL, which is only supported in the new ZooKeeper 
> version). Upgrading to Curator 4.x should keep Hadoop compatible with both 
> ZooKeeper 3.4 and 3.5.
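The exclusion recommended above can be sketched in a Maven pom like this (a sketch only; the artifact chosen, `curator-recipes`, and the version properties are illustrative assumptions, not the actual HADOOP-16579 patch):

```xml
<!-- Pull in Curator 4.x but exclude its transitive ZooKeeper, then
     declare the single ZooKeeper version Hadoop should use at runtime. -->
<dependency>
  <groupId>org.apache.curator</groupId>
  <artifactId>curator-recipes</artifactId>
  <version>4.2.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <!-- or a 3.5.x version when building against the new ZooKeeper -->
  <version>3.4.13</version>
</dependency>
```

With Curator 4.x in soft-compatibility mode, the ZooKeeper version actually on the classpath, not Curator's own declared dependency, determines which server line the client runs against.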






[jira] [Assigned] (HADOOP-16593) [YARN] Polish the protobuf plugin for hadoop-yarn-csi

2019-09-26 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reassigned HADOOP-16593:
--

Assignee: Duo Zhang

> [YARN] Polish the protobuf plugin for hadoop-yarn-csi
> -
>
> Key: HADOOP-16593
> URL: https://issues.apache.org/jira/browse/HADOOP-16593
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> As discussed here:
> https://github.com/apache/hadoop/pull/1496#discussion_r326931072
> We should align the execution id in the parent pom.






[jira] [Work started] (HADOOP-16593) [YARN] Polish the protobuf plugin for hadoop-yarn-csi

2019-09-26 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16593 started by Duo Zhang.
--
> [YARN] Polish the protobuf plugin for hadoop-yarn-csi
> -
>
> Key: HADOOP-16593
> URL: https://issues.apache.org/jira/browse/HADOOP-16593
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> As discussed here:
> https://github.com/apache/hadoop/pull/1496#discussion_r326931072
> We should align the execution id in the parent pom.






[jira] [Commented] (HADOOP-16598) Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes" to all active branches

2019-09-26 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939080#comment-16939080
 ] 

Duo Zhang commented on HADOOP-16598:


I assume branch-3.0 and branch-2.8 will be dead soon? If not, I will also 
upload patches for them. [~djp].

> Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate 
> protobuf classes" to all active branches
> ---
>
> Key: HADOOP-16598
> URL: https://issues.apache.org/jira/browse/HADOOP-16598
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HADOOP-16598-branch-2.9.patch, 
> HADOOP-16598-branch-2.patch, HADOOP-16598-branch-3.1.patch, 
> HADOOP-16598-branch-3.2.patch
>
>







[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1511: HDDS-2162. Make Kerberos related configuration support HA style config.

2019-09-26 Thread GitBox
bharatviswa504 commented on a change in pull request #1511: HDDS-2162. Make 
Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328897127
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerConfiguration.java
 ##
 @@ -119,10 +119,13 @@ public void testDefaultPortIfNotSpecified() throws 
Exception {
 String omNode1Id = "omNode1";
 String omNode2Id = "omNode2";
 String omNodesKeyValue = omNode1Id + "," + omNode2Id;
-conf.set(OMConfigKeys.OZONE_OM_NODES_KEY, omNodesKeyValue);
+String serviceID = "service1";
+conf.set(OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY, serviceID);
+conf.set(OMConfigKeys.OZONE_OM_NODES_KEY + "." + serviceID,
+omNodesKeyValue);
 
-String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode1Id);
-String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode2Id);
+String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode1Id);
+String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode2Id);
 
 
 Review comment:
   Discussed offline. From my understanding, this is being done so as to share 
config across all OMs. This PR does not change any config loading code of OM 
HA; it just adds the Kerberos/DB config as described in the Jira description. 
   
   Anu said we don't require the current approach, and we shall continue the 
discussion later to see how we can do it.
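For reference, the HA-style key layout this test exercises looks roughly like the following (key strings are assumed from the `OMConfigKeys` constant names; the service/node IDs are the test's own examples and the hosts are placeholders):

```
ozone.om.service.ids = service1
ozone.om.nodes.service1 = omNode1,omNode2
ozone.om.address.service1.omNode1 = host1.example.com:9862
ozone.om.address.service1.omNode2 = host2.example.com:9862
```

Each per-node key is the base key suffixed with `.<serviceId>.<nodeId>`, which is what `getOMAddrKeyWithSuffix(serviceID, omNodeId)` computes in the test.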


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[jira] [Updated] (HADOOP-16598) Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes" to all active branches

2019-09-26 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HADOOP-16598:
---
Attachment: HADOOP-16598-branch-2.9.patch

> Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate 
> protobuf classes" to all active branches
> ---
>
> Key: HADOOP-16598
> URL: https://issues.apache.org/jira/browse/HADOOP-16598
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HADOOP-16598-branch-2.9.patch, 
> HADOOP-16598-branch-2.patch, HADOOP-16598-branch-3.1.patch, 
> HADOOP-16598-branch-3.2.patch
>
>







[jira] [Updated] (HADOOP-16598) Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes" to all active branches

2019-09-26 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HADOOP-16598:
---
Attachment: HADOOP-16598-branch-2.patch

> Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate 
> protobuf classes" to all active branches
> ---
>
> Key: HADOOP-16598
> URL: https://issues.apache.org/jira/browse/HADOOP-16598
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HADOOP-16598-branch-2.patch, 
> HADOOP-16598-branch-3.1.patch, HADOOP-16598-branch-3.2.patch
>
>







[jira] [Updated] (HADOOP-16612) Track Azure Blob File System client-perceived latency

2019-09-26 Thread Jeetesh Mangwani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeetesh Mangwani updated HADOOP-16612:
--
Description: Track the end-to-end performance of ADLS Gen 2 REST APIs by 
measuring latencies in the Hadoop ABFS driver.  (was: Track the performance of 
Hadoop ABFS driver by measuring the latencies of ADLS Gen 2 REST APIs.)

> Track Azure Blob File System client-perceived latency
> -
>
> Key: HADOOP-16612
> URL: https://issues.apache.org/jira/browse/HADOOP-16612
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, hdfs-client
>Reporter: Jeetesh Mangwani
>Priority: Major
>
> Track the end-to-end performance of ADLS Gen 2 REST APIs by measuring 
> latencies in the Hadoop ABFS driver.






[jira] [Created] (HADOOP-16612) Track Azure Blob File System client-perceived latency

2019-09-26 Thread Jeetesh Mangwani (Jira)
Jeetesh Mangwani created HADOOP-16612:
-

 Summary: Track Azure Blob File System client-perceived latency
 Key: HADOOP-16612
 URL: https://issues.apache.org/jira/browse/HADOOP-16612
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure, hdfs-client
Reporter: Jeetesh Mangwani


Track the performance of Hadoop ABFS driver by measuring the latencies of ADLS 
Gen 2 REST APIs.






[GitHub] [hadoop] anuengineer commented on a change in pull request #1511: HDDS-2162. Make Kerberos related configuration support HA style config.

2019-09-26 Thread GitBox
anuengineer commented on a change in pull request #1511: HDDS-2162. Make 
Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328890013
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -309,13 +305,33 @@ private OzoneManager(OzoneConfiguration conf) throws 
IOException,
   AuthenticationException {
 super(OzoneVersionInfo.OZONE_VERSION_INFO);
 Preconditions.checkNotNull(conf);
-configuration = conf;
+configuration = new OzoneConfiguration(conf);
 
 Review comment:
   But you just lost the reference to the original object. I am slightly 
confused here.





[GitHub] [hadoop] anuengineer commented on a change in pull request #1511: HDDS-2162. Make Kerberos related configuration support HA style config.

2019-09-26 Thread GitBox
anuengineer commented on a change in pull request #1511: HDDS-2162. Make 
Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328889881
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerConfiguration.java
 ##
 @@ -119,10 +119,13 @@ public void testDefaultPortIfNotSpecified() throws 
Exception {
 String omNode1Id = "omNode1";
 String omNode2Id = "omNode2";
 String omNodesKeyValue = omNode1Id + "," + omNode2Id;
-conf.set(OMConfigKeys.OZONE_OM_NODES_KEY, omNodesKeyValue);
+String serviceID = "service1";
+conf.set(OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY, serviceID);
+conf.set(OMConfigKeys.OZONE_OM_NODES_KEY + "." + serviceID,
+omNodesKeyValue);
 
-String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode1Id);
-String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode2Id);
+String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode1Id);
+String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode2Id);
 
 
 Review comment:
   > Not got your last part what is proposed.
   
   
https://docs.microsoft.com/en-us/windows/win32/ad/name-formats-for-unique-spns
   
   OzoneManager/host1.example.com/CN=hrdb,OU=mktg,DC=example,DC=com
   OzoneManager/host2.example.com/CN=hrdb,OU=mktg,DC=example,DC=com
   OzoneManager/host3.example.com/CN=hrdb,OU=mktg,DC=example,DC=com
   
   This is all we need, is what I am trying to say. 
   > Suppose the user wants to use different keytab file location/principal 
name it will also help in this situation.
   
   Why would you want separate identities to communicate with the same service? 
Can you give me an example of why this would be needed? Moreover, why support 
that identity via naming tricks in Ozone instead of creating a new SPN in the 
Kerberos domain?
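A single principal pattern plus Hadoop's `_HOST` substitution is usually enough to give each OM host its own SPN without per-node configuration. A sketch (key names assumed from Ozone's Kerberos configuration style; realm and keytab path are placeholders):

```xml
<property>
  <name>ozone.om.kerberos.principal</name>
  <value>om/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>ozone.om.kerberos.keytab.file</name>
  <value>/etc/security/keytabs/om.service.keytab</value>
</property>
```

At startup each OM resolves `_HOST` to its own fully qualified hostname, yielding distinct per-node SPNs such as `om/host1.example.com@EXAMPLE.COM` from one shared config entry.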
   
   





[GitHub] [hadoop] timmylicheng commented on issue #1431: HDDS-1569 Support creating multiple pipelines with same datanode

2019-09-26 Thread GitBox
timmylicheng commented on issue #1431: HDDS-1569 Support creating multiple 
pipelines with same datanode
URL: https://github.com/apache/hadoop/pull/1431#issuecomment-535753019
 
 
   /retest





[jira] [Updated] (HADOOP-16598) Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes" to all active branches

2019-09-26 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HADOOP-16598:
---
Attachment: HADOOP-16598-branch-3.1.patch

> Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate 
> protobuf classes" to all active branches
> ---
>
> Key: HADOOP-16598
> URL: https://issues.apache.org/jira/browse/HADOOP-16598
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HADOOP-16598-branch-3.1.patch, 
> HADOOP-16598-branch-3.2.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] DadanielZ commented on a change in pull request #1481: HADOOP-16587: Made auth endpoints configurable for MSI and refresh token flows

2019-09-26 Thread GitBox
DadanielZ commented on a change in pull request #1481: HADOOP-16587: Made auth 
endpoints configurable for MSI and refresh token flows
URL: https://github.com/apache/hadoop/pull/1481#discussion_r32948
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 ##
 @@ -635,4 +641,24 @@ void setWriteBufferSize(int bufferSize) {
   void setEnableFlush(boolean enableFlush) {
 this.enableFlush = enableFlush;
   }
+
+  private String getMsiAuthEndpoint() throws IOException {
+String authEndpoint = getPasswordString(
+FS_AZURE_ACCOUNT_OAUTH_MSI_ENDPOINT);
+if (Utils.isEmpty(authEndpoint)) {
+  authEndpoint =
+  AuthConfigurations.DEFAULT_FS_AZURE_ACCOUNT_OAUTH_MSI_ENDPOINT;
+}
+return authEndpoint;
+  }
+
+  private String getRefreshTokenAuthEndpoint() throws IOException {
+String authEndpoint = getPasswordString(
+FS_AZURE_ACCOUNT_OAUTH_REFRESH_TOKEN_ENDPOINT);
+if (Utils.isEmpty(authEndpoint)) {
 
 Review comment:
   minor: this new function isEmpty() might not be necessary
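The fall-back pattern in the diff, read a config value and substitute a default when it is unset, can be expressed without a new helper. A minimal sketch (the endpoint constant and method name are illustrative assumptions, not the actual `AbfsConfiguration` code):

```java
public class ConfigFallbackDemo {
    // Hypothetical default, standing in for
    // AuthConfigurations.DEFAULT_FS_AZURE_ACCOUNT_OAUTH_MSI_ENDPOINT.
    static final String DEFAULT_MSI_ENDPOINT =
        "http://169.254.169.254/metadata/identity/oauth2/token";

    /** Return the configured endpoint, or the default when null/empty. */
    static String msiEndpoint(String configured) {
        return (configured == null || configured.isEmpty())
            ? DEFAULT_MSI_ENDPOINT
            : configured;
    }

    public static void main(String[] args) {
        System.out.println(msiEndpoint(null));       // falls back to default
        System.out.println(msiEndpoint("https://custom.example/token"));
    }
}
```

The null/empty check is all the proposed `Utils.isEmpty()` does, so an inline condition (or an existing utility already on the classpath) avoids introducing a new function.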





[GitHub] [hadoop] DadanielZ commented on a change in pull request #1481: HADOOP-16587: Made auth endpoints configurable for MSI and refresh token flows

2019-09-26 Thread GitBox
DadanielZ commented on a change in pull request #1481: HADOOP-16587: Made auth 
endpoints configurable for MSI and refresh token flows
URL: https://github.com/apache/hadoop/pull/1481#discussion_r32536
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java
 ##
 @@ -109,15 +112,12 @@ public static AzureADToken 
getTokenUsingClientCreds(String authEndpoint,
* @return {@link AzureADToken} obtained using the creds
* @throws IOException throws IOException if there is a failure in obtaining 
the token
*/
-  public static AzureADToken getTokenFromMsi(String tenantGuid, String 
clientId,
+  public static AzureADToken getTokenFromMsi(final String authEndpoint, String 
tenantGuid, String clientId,
 
 Review comment:
   since you are adding `final`, could you also update the other params? :)





[GitHub] [hadoop] DadanielZ commented on a change in pull request #1481: HADOOP-16587: Made auth endpoints configurable for MSI and refresh token flows

2019-09-26 Thread GitBox
DadanielZ commented on a change in pull request #1481: HADOOP-16587: Made auth 
endpoints configurable for MSI and refresh token flows
URL: https://github.com/apache/hadoop/pull/1481#discussion_r32576
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java
 ##
 @@ -141,14 +141,16 @@ public static AzureADToken getTokenFromMsi(String 
tenantGuid, String clientId,
   /**
* Gets Azure Active Directory token using refresh token.
*
+   * @param authEndpoint the OAuth 2.0 token endpoint associated
+   * with the user's directory (obtain from
+   * Active Directory configuration)
* @param clientId the client ID (GUID) of the client web app obtained from 
Azure Active Directory configuration
* @param refreshToken the refresh token
* @return {@link AzureADToken} obtained using the refresh token
* @throws IOException throws IOException if there is a failure in 
connecting to Azure AD
*/
-  public static AzureADToken getTokenUsingRefreshToken(String clientId,
+  public static AzureADToken getTokenUsingRefreshToken(final String 
authEndpoint, String clientId,
 
 Review comment:
   same here.





[GitHub] [hadoop] DadanielZ commented on a change in pull request #1508: HADOOP-16548 : Disable Flush() over config

2019-09-26 Thread GitBox
DadanielZ commented on a change in pull request #1508: HADOOP-16548 : Disable 
Flush() over config
URL: https://github.com/apache/hadoop/pull/1508#discussion_r328886182
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
 ##
 @@ -199,7 +202,7 @@ private void maybeThrowLastError() throws IOException {
*/
   @Override
   public void flush() throws IOException {
-if (supportFlush) {
+if (!disableOutputStreamFlush) {
 
 Review comment:
   `supportFlush && !disableOutputStreamFlush` ?
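A minimal sketch of the combined guard the reviewer suggests (the flag names mirror the fields discussed in the diff; this is an illustration, not the actual AbfsOutputStream code):

```java
public class FlushGuardDemo {
    /**
     * Flush proceeds only when flushing is supported AND the user has not
     * explicitly disabled OutputStream#flush() via configuration.
     */
    static boolean shouldFlush(boolean supportFlush,
                               boolean disableOutputStreamFlush) {
        return supportFlush && !disableOutputStreamFlush;
    }

    public static void main(String[] args) {
        System.out.println(shouldFlush(true, false));  // true: flush proceeds
        System.out.println(shouldFlush(true, true));   // false: user opted out
        System.out.println(shouldFlush(false, false)); // false: not supported
    }
}
```

This keeps the two configs orthogonal: the existing enable-flush key gates support entirely, while the new disable key lets a user suppress only explicit flush() calls.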





[GitHub] [hadoop] DadanielZ commented on a change in pull request #1508: HADOOP-16548 : Disable Flush() over config

2019-09-26 Thread GitBox
DadanielZ commented on a change in pull request #1508: HADOOP-16548 : Disable 
Flush() over config
URL: https://github.com/apache/hadoop/pull/1508#discussion_r328885317
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
 ##
 @@ -52,6 +52,7 @@
   public static final String FS_AZURE_ATOMIC_RENAME_KEY = 
"fs.azure.atomic.rename.key";
   public static final String FS_AZURE_READ_AHEAD_QUEUE_DEPTH = 
"fs.azure.readaheadqueue.depth";
   public static final String FS_AZURE_ENABLE_FLUSH = "fs.azure.enable.flush";
+  public static final String FS_AZURE_DISABLE_OUTPUTSTREAM_FLUSH = 
"fs.azure.disable.outputstream.flush";
 
 Review comment:
   Could you also add documentation here for this new config? It would make it 
clearer how it differs from FS_AZURE_ENABLE_FLUSH.





[GitHub] [hadoop] DadanielZ commented on a change in pull request #1508: HADOOP-16548 : Disable Flush() over config

2019-09-26 Thread GitBox
DadanielZ commented on a change in pull request #1508: HADOOP-16548 : Disable 
Flush() over config
URL: https://github.com/apache/hadoop/pull/1508#discussion_r328886665
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFlush.java
 ##
 @@ -208,43 +208,44 @@ public Void call() throws Exception {
   }
 
   @Test
-  public void testFlushWithFlushEnabled() throws Exception {
-testFlush(true);
+  public void testFlushWithOutputStreamFlushEnabled() throws Exception {
 
 Review comment:
   How about adding tests for combinations of `supportFlush` and 
`disableOutputStreamFlush`?





[jira] [Commented] (HADOOP-16600) StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1

2019-09-26 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939059#comment-16939059
 ] 

Duo Zhang commented on HADOOP-16600:


{quote}
maybe there's been some classpath ordering thing which has hid this for me?
{quote}

Oh, I missed the mockito-core dependency... I thought that since we explicitly 
declare mockito-all 1.8.5 in our own pom, it would supersede all other 
transitive dependencies. But we may get a newer mockito-core dependency 
transitively, so if we load the newer mockito-core first we will be safe; 
otherwise there will be a compile error...

Anyway, I think we'd better explicitly depend on 1.10.19 so we can get a stable 
result? What do you think [~ste...@apache.org].

Thanks.

> StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1
> -
>
> Key: HADOOP-16600
> URL: https://issues.apache.org/jira/browse/HADOOP-16600
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16600.branch-3.1.v1.patch
>
>
> details see HADOOP-15398
> Problem: Hadoop trunk compilation is failing.
> Root cause: the compilation error comes from 
> org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase: "The method 
> getArgumentAt(int, Class) is undefined for the type InvocationOnMock".
> StagingTestBase uses the getArgumentAt(int, Class) method, which is not 
> available in mockito-all 1.8.5; it is only available from version 
> 2.0.0-beta, as in the following code:
> {code:java}
> InitiateMultipartUploadRequest req = invocation.getArgumentAt(
> 0, InitiateMultipartUploadRequest.class);
> {code}






[jira] [Commented] (HADOOP-16606) checksum link from hadoop web site is broken.

2019-09-26 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939054#comment-16939054
 ] 

Akira Ajisaka commented on HADOOP-16606:


Discussion thread: 
https://lists.apache.org/thread.html/db2f5d5d8600c405293ebfb3bfc415e200e59f72605c5a920a461c09@%3Cgeneral.hadoop.apache.org%3E

bq. mds file may be provided. Should we keep both i.e mds and sha512?

I just don't want to have two checksum files for a single file.

> checksum link from hadoop web site is broken.
> -
>
> Key: HADOOP-16606
> URL: https://issues.apache.org/jira/browse/HADOOP-16606
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Akira Ajisaka
>Priority: Blocker
>
> Post HADOOP-16494, the artifacts generated for a release don't include the 
> *mds* file, but the Hadoop web site's binary tarball link still points to an 
> mds file that doesn't exist. This breaks the Hadoop website.
> For the 3.2.1 release, I manually generated the *mds* file and pushed it 
> into the artifacts folder so that the Hadoop website link is not broken.
> The same issue will happen for the 3.1.3 release as well.
> I am referring https://hadoop.apache.org/releases.html page for checksum 
> hyperlink.
> cc:/ [~vinodkv] [~tangzhankun] [~aajisaka]






[GitHub] [hadoop] hadoop-yetus commented on issue #1534: HDDS-2193. Adding container related metrics in SCM.

2019-09-26 Thread GitBox
hadoop-yetus commented on issue #1534: HDDS-2193. Adding container related 
metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1534#issuecomment-535724358
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1800 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 66 | Maven dependency ordering for branch |
   | -1 | mvninstall | 31 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 30 | hadoop-ozone in trunk failed. |
   | -1 | compile | 20 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 60 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 860 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 21 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 958 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 30 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-hdds in the patch failed. |
   | -1 | compile | 20 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-hdds in the patch failed. |
   | -1 | javac | 20 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 57 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 719 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 30 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 29 | hadoop-hdds in the patch failed. |
   | -1 | unit | 24 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 4201 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1534 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b3121e605a17 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2adcc3c |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/4/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | javac | 

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…

2019-09-26 Thread GitBox
bharatviswa504 commented on a change in pull request #1528: HDDS-2181. Ozone 
Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#discussion_r328864850
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
 ##
 @@ -177,7 +178,8 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMClientResponse omClientResponse = null;
 try {
   // check Acl
-  checkBucketAcls(ozoneManager, volumeName, bucketName, keyName);
+  checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
 
 Review comment:
   What I mean here is that for a Key/File/Directory create there will be no 
existing ACL entry for the key, so should we perform checkBucketAcls as before?
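The fallback being asked about can be sketched as follows; `checkCreateAcls` and the two maps are hypothetical stand-ins for the real OM ACL helpers (`checkKeyAcls`/`checkBucketAcls`), used only to illustrate falling back to the bucket ACL when the key has no entry yet:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the reviewer's suggestion: a create request
// targets a key that does not exist yet, so there is no key-level ACL
// entry to check and the parent bucket's ACL should be consulted.
public class AclCheckSketch {

    // Stand-ins for the OM metadata: path -> granted ACL type.
    static final Map<String, String> KEY_ACLS = new HashMap<>();
    static final Map<String, String> BUCKET_ACLS = new HashMap<>();

    static boolean checkCreateAcls(String volume, String bucket, String key) {
        String keyPath = volume + "/" + bucket + "/" + key;
        if (KEY_ACLS.containsKey(keyPath)) {
            // Key already has an ACL entry: check it directly.
            return "CREATE".equals(KEY_ACLS.get(keyPath));
        }
        // No entry for a not-yet-created key: fall back to the bucket ACL.
        return "CREATE".equals(BUCKET_ACLS.get(volume + "/" + bucket));
    }

    public static void main(String[] args) {
        BUCKET_ACLS.put("vol1/bucket1", "CREATE");
        // newKey has no ACL entry, so the bucket ACL decides.
        System.out.println(checkCreateAcls("vol1", "bucket1", "newKey"));
    }
}
```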


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1534: HDDS-2193. Adding container related metrics in SCM.

2019-09-26 Thread GitBox
hadoop-yetus commented on issue #1534: HDDS-2193. Adding container related 
metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1534#issuecomment-535720142
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 164 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 94 | Maven dependency ordering for branch |
   | -1 | mvninstall | 59 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 37 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 77 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1168 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 23 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1282 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 44 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 34 | Maven dependency ordering for patch |
   | -1 | mvninstall | 40 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 65 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 899 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 18 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 29 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 18 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 29 | hadoop-hdds in the patch failed. |
   | -1 | unit | 20 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3141 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1534 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fc007d0fa81f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b1e55cf |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/1/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | javac | 

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1511: HDDS-2162. Make Kerberos related configuration support HA style config.

2019-09-26 Thread GitBox
bharatviswa504 commented on a change in pull request #1511: HDDS-2162. Make 
Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328861588
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerConfiguration.java
 ##
 @@ -119,10 +119,13 @@ public void testDefaultPortIfNotSpecified() throws 
Exception {
 String omNode1Id = "omNode1";
 String omNode2Id = "omNode2";
 String omNodesKeyValue = omNode1Id + "," + omNode2Id;
-conf.set(OMConfigKeys.OZONE_OM_NODES_KEY, omNodesKeyValue);
+String serviceID = "service1";
+conf.set(OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY, serviceID);
+conf.set(OMConfigKeys.OZONE_OM_NODES_KEY + "." + serviceID,
+omNodesKeyValue);
 
-String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode1Id);
-String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode2Id);
+String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode1Id);
+String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode2Id);
 
 
 Review comment:
   > I have an uber comment on this JIRA. Under Ozone, what we really need is 
3 + 3 = six Kerberos identities.
   > 
   > Why don't we just follow the standard Kerberos SPN names? Simply take one 
config key from the user, either the SPN or the file path to the service 
Kerberos identity.
   > 
   > Once you have this, we don't have to do any second guessing or munging of 
names with any other strings -- after all it is just a service on a host. The 
code is simpler and, best of all, it is simple enough for anyone to 
understand. In other words, what evil are we trying to prevent here with all 
the service name munging ?
   
   I did not get the last part of what is being proposed.
   
   This is done for the Kerberos settings and also for other configs like the 
OM DB dirs and the HTTP/HTTPS addresses. It also helps when the user wants to 
use a different keytab file location/principal name per node.
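The suffixing pattern used by these HA-style keys (base key + service id + node id) can be sketched as below; the key-name constants are assumed to mirror the values behind `OMConfigKeys`, and `addKeySuffixes` is an illustrative helper, not the real utility:

```java
// Sketch of how HA-style, service-scoped config keys are composed.
// The string constants are assumed values mirroring OMConfigKeys.
public class OmHaKeySketch {

    static final String OZONE_OM_SERVICE_IDS_KEY = "ozone.om.service.ids";
    static final String OZONE_OM_NODES_KEY = "ozone.om.nodes";
    static final String OZONE_OM_ADDRESS_KEY = "ozone.om.address";

    // Append each non-empty suffix with a dot separator, e.g.
    // "ozone.om.address" -> "ozone.om.address.service1.omNode1".
    static String addKeySuffixes(String key, String... suffixes) {
        StringBuilder sb = new StringBuilder(key);
        for (String suffix : suffixes) {
            if (suffix != null && !suffix.isEmpty()) {
                sb.append('.').append(suffix);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Node list key is scoped by service id only.
        System.out.println(addKeySuffixes(OZONE_OM_NODES_KEY, "service1"));
        // Per-node keys are scoped by service id and node id.
        System.out.println(
            addKeySuffixes(OZONE_OM_ADDRESS_KEY, "service1", "omNode1"));
    }
}
```

The same composition would apply to keytab and principal keys, which is what allows a different value per node.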
   







[GitHub] [hadoop] hadoop-yetus commented on issue #1534: HDDS-2193. Adding container related metrics in SCM.

2019-09-26 Thread GitBox
hadoop-yetus commented on issue #1534: HDDS-2193. Adding container related 
metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1534#issuecomment-535719371
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | -1 | mvninstall | 32 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 29 | hadoop-ozone in trunk failed. |
   | -1 | compile | 18 | hadoop-hdds in trunk failed. |
   | -1 | compile | 14 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1034 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1132 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 33 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 19 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | -1 | mvninstall | 38 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 29 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 61 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 904 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 19 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 34 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 2824 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1534 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c6a0cff0280b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b1e55cf |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/2/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | javac | 


[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1511: HDDS-2162. Make Kerberos related configuration support HA style config.

2019-09-26 Thread GitBox
bharatviswa504 commented on a change in pull request #1511: HDDS-2162. Make 
Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328860862
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -309,13 +305,33 @@ private OzoneManager(OzoneConfiguration conf) throws 
IOException,
   AuthenticationException {
 super(OzoneVersionInfo.OZONE_VERSION_INFO);
 Preconditions.checkNotNull(conf);
-configuration = conf;
+configuration = new OzoneConfiguration(conf);
 
 Review comment:
   This is being done so as not to change the original values of the 
configuration object that was passed in.
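The defensive copy under discussion can be illustrated with plain `java.util.Properties` as a stand-in for `OzoneConfiguration` (which is assumed to have a copy constructor behaving like the copy below):

```java
import java.util.Properties;

// Minimal illustration (not Ozone's actual classes) of why the
// constructor copies the passed-in configuration: later mutations
// must not leak back into the caller's object.
public class DefensiveCopyDemo {

    private final Properties configuration;

    DefensiveCopyDemo(Properties conf) {
        // Copy instead of aliasing, mirroring `new OzoneConfiguration(conf)`.
        this.configuration = new Properties();
        this.configuration.putAll(conf);
        // A mutation made after construction stays local to this copy.
        this.configuration.setProperty("ozone.om.internal.flag", "true");
    }

    public static void main(String[] args) {
        Properties callerConf = new Properties();
        callerConf.setProperty("ozone.om.address", "host1:9862");
        new DefensiveCopyDemo(callerConf);
        // The caller's object is unchanged by the constructor's mutation.
        System.out.println(callerConf.containsKey("ozone.om.internal.flag"));
    }
}
```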





[GitHub] [hadoop] hadoop-yetus commented on issue #1534: HDDS-2193. Adding container related metrics in SCM.

2019-09-26 Thread GitBox
hadoop-yetus commented on issue #1534: HDDS-2193. Adding container related 
metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1534#issuecomment-535718246
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 66 | Maven dependency ordering for branch |
   | -1 | mvninstall | 31 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 30 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 61 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 849 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 951 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | -1 | mvninstall | 36 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 29 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-hdds in the patch failed. |
   | -1 | compile | 20 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-hdds in the patch failed. |
   | -1 | javac | 20 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 62 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 720 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 31 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 20 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 29 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 2443 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1534 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 86ecbcbe0081 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b1e55cf |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1534/3/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | javac | 

[GitHub] [hadoop] anuengineer commented on a change in pull request #1511: HDDS-2162. Make Kerberos related configuration support HA style config.

2019-09-26 Thread GitBox
anuengineer commented on a change in pull request #1511: HDDS-2162. Make 
Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328839876
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerConfiguration.java
 ##
 @@ -119,10 +119,13 @@ public void testDefaultPortIfNotSpecified() throws 
Exception {
 String omNode1Id = "omNode1";
 String omNode2Id = "omNode2";
 String omNodesKeyValue = omNode1Id + "," + omNode2Id;
-conf.set(OMConfigKeys.OZONE_OM_NODES_KEY, omNodesKeyValue);
+String serviceID = "service1";
+conf.set(OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY, serviceID);
+conf.set(OMConfigKeys.OZONE_OM_NODES_KEY + "." + serviceID,
+omNodesKeyValue);
 
-String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode1Id);
-String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode2Id);
+String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode1Id);
+String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode2Id);
 
 
 Review comment:
   Don't we need to verify that these strings are in the expected format here?
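
The suffixed-key scheme being tested above can be illustrated with a small self-contained sketch. The key names and the helper below are stand-ins modeled on the OM config keys discussed in the diff, not the actual Hadoop code:

```java
// Sketch of HA-style config key composition: a base key is suffixed with a
// service id and (optionally) a node id, e.g. "ozone.om.address.service1.omNode1".
public class OmHaKeySketch {
  static final String OZONE_OM_NODES_KEY = "ozone.om.nodes";      // assumed base key
  static final String OZONE_OM_ADDRESS_KEY = "ozone.om.address";  // assumed base key

  // Append each non-null suffix to the base key, separated by dots.
  static String addKeySuffixes(String key, String... suffixes) {
    StringBuilder sb = new StringBuilder(key);
    for (String s : suffixes) {
      if (s != null) {
        sb.append('.').append(s);
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    String serviceId = "service1";
    // Key listing the OM node ids for one service.
    System.out.println(addKeySuffixes(OZONE_OM_NODES_KEY, serviceId));
    // Per-node RPC address key for that service.
    System.out.println(addKeySuffixes(OZONE_OM_ADDRESS_KEY, serviceId, "omNode1"));
  }
}
```

With this scheme the reviewer's question amounts to asserting that the composed keys follow the `base.serviceId.nodeId` pattern.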


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1511: HDDS-2162. Make Kerberos related configuration support HA style config.

2019-09-26 Thread GitBox
anuengineer commented on a change in pull request #1511: HDDS-2162. Make 
Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328851483
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -309,13 +305,33 @@ private OzoneManager(OzoneConfiguration conf) throws 
IOException,
   AuthenticationException {
 super(OzoneVersionInfo.OZONE_VERSION_INFO);
 Preconditions.checkNotNull(conf);
-configuration = conf;
+configuration = new OzoneConfiguration(conf);
 
 Review comment:
   This function is passed a conf object of type OzoneConfiguration. Why are we 
allocating a new object before assigning that configuration object to the 
member variable?
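
The idiom in question is a defensive copy. A minimal illustration with a toy configuration class (a stand-in, not the Hadoop `Configuration` API) shows the usual motivation: the component's snapshot is isolated from later mutations by the caller:

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for a configuration object. Copy-constructing at assignment
// time snapshots the properties, so later caller-side mutations do not leak
// into the component -- the usual rationale for `new OzoneConfiguration(conf)`.
class ToyConf {
  private final Map<String, String> props = new HashMap<>();

  ToyConf() { }

  // Copy constructor: snapshot the other object's properties.
  ToyConf(ToyConf other) {
    this.props.putAll(other.props);
  }

  void set(String k, String v) { props.put(k, v); }
  String get(String k) { return props.get(k); }
}

public class DefensiveCopySketch {
  public static void main(String[] args) {
    ToyConf caller = new ToyConf();
    caller.set("ozone.om.address", "host1:9862");

    ToyConf component = new ToyConf(caller);      // defensive copy
    caller.set("ozone.om.address", "host2:9862"); // later caller mutation

    // The component's view is unaffected by the caller's change.
    System.out.println(component.get("ozone.om.address")); // host1:9862
  }
}
```

Whether that isolation is actually wanted here is exactly the reviewer's question; without the copy, the two references would share one mutable object.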





[GitHub] [hadoop] bharatviswa504 opened a new pull request #1534: HDD-2193. Adding container related metrics in SCM.

2019-09-26 Thread GitBox
bharatviswa504 opened a new pull request #1534: HDD-2193. Adding container 
related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1534
 
 
   
   





[GitHub] [hadoop] sidseth commented on issue #1516: HADOOP-16599. Allow a SignerInitializer to be specified along with a

2019-09-26 Thread GitBox
sidseth commented on issue #1516: HADOOP-16599. Allow a SignerInitializer to be 
specified along with a
URL: https://github.com/apache/hadoop/pull/1516#issuecomment-535707344
 
 
   > Have you thought about how to do an integration test here? I could imagine 
a custom signer which just forwards to the AWS signer
   Wasn't planning on adding any integration tests. Most of this can be tested 
quite easily with unit tests.
   
   > and what about collecting metrics on this, e.g. #of signing requests made. 
We could have another callback under 
org.apache.hadoop.fs.s3a.S3AInstrumentation which the signers could use to pass 
this info back
   The default usage (no custom signers) will not be able to use any 
instrumentation, and I don't think we want to force a wrapper Signer just for 
instrumentation (it may not even be possible, given Signers cannot access 
configs and we would not know the real signer in a wrapper). Instrumentation 
could be passed as a parameter to the SignerInitializer that is being added as 
part of this patch. I'll defer to you on whether adding the Instrumentation to 
the interface makes sense; I don't know enough about S3AInstrumentation and 
its usage.





[GitHub] [hadoop] sidseth commented on a change in pull request #1516: HADOOP-16599. Allow a SignerInitializer to be specified along with a

2019-09-26 Thread GitBox
sidseth commented on a change in pull request #1516: HADOOP-16599. Allow a 
SignerInitializer to be specified along with a
URL: https://github.com/apache/hadoop/pull/1516#discussion_r328848779
 
 

 ##
 File path: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
 ##
 @@ -1879,3 +1879,61 @@ To disable checksum verification in `distcp`, use the 
`-skipcrccheck` option:
 hadoop distcp -update -skipcrccheck -numListstatusThreads 40 
/user/alice/datasets s3a://alice-backup/datasets
 ```
 
+###  Advanced - Custom Signers
 
 Review comment:
   Explicitly keeping the signer documentation vague. This is not a feature 
that's going to be used by a lot of people. The default signers will change 
with the SDK version - and are mentioned in the documentation already.
   I'd prefer not having a page which talks only about signing (the new auth 
page) - again, I don't want to call this out since it's not something that the 
majority of users will want to touch. Delegation is already a separate page 
from what I can tell.





[GitHub] [hadoop] sidseth commented on a change in pull request #1516: HADOOP-16599. Allow a SignerInitializer to be specified along with a

2019-09-26 Thread GitBox
sidseth commented on a change in pull request #1516: HADOOP-16599. Allow a 
SignerInitializer to be specified along with a
URL: https://github.com/apache/hadoop/pull/1516#discussion_r328847643
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/SignerManager.java
 ##
 @@ -17,14 +17,19 @@
  */
 package org.apache.hadoop.fs.s3a;
 
-import com.amazonaws.auth.Signer;
-import com.amazonaws.auth.SignerFactory;
 import java.io.Closeable;
 import java.io.IOException;
+import java.util.LinkedList;
 
 Review comment:
   Moved to s3a.auth.
   
   What needs to change in the imports?
   It's using
   ```
   java.*
   \n
   Everything other than org.apache.*
   \n
   org.apache.*
   \n
   static imports
   ```





[GitHub] [hadoop] sidseth commented on a change in pull request #1516: HADOOP-16599. Allow a SignerInitializer to be specified along with a

2019-09-26 Thread GitBox
sidseth commented on a change in pull request #1516: HADOOP-16599. Allow a 
SignerInitializer to be specified along with a
URL: https://github.com/apache/hadoop/pull/1516#discussion_r328846983
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DelegationTokenProvider.java
 ##
 @@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+
+import org.apache.hadoop.classification.InterfaceAudience.Public;
+import org.apache.hadoop.classification.InterfaceStability.Evolving;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.security.token.TokenIdentifier;
+
+/**
+ * Interface for S3A Delegation Token access.
+ */
+@Public
 
 Review comment:
   Done.





[GitHub] [hadoop] sidseth commented on a change in pull request #1516: HADOOP-16599. Allow a SignerInitializer to be specified along with a

2019-09-26 Thread GitBox
sidseth commented on a change in pull request #1516: HADOOP-16599. Allow a 
SignerInitializer to be specified along with a
URL: https://github.com/apache/hadoop/pull/1516#discussion_r328846913
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DelegationTokenProvider.java
 ##
 @@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
 
 Review comment:
   Changed.
   On a related note - what are your thoughts on moving some of these 
delegation and auth interfaces to a new module - something like s3a-plugins? 
That makes it easier for downstream projects to have a limited dependency 
which doesn't pull in all of S3AFileSystem, the aws-sdk, etc. It would be a 
separate JIRA, of course.





[GitHub] [hadoop] sidseth commented on a change in pull request #1516: HADOOP-16599. Allow a SignerInitializer to be specified along with a

2019-09-26 Thread GitBox
sidseth commented on a change in pull request #1516: HADOOP-16599. Allow a 
SignerInitializer to be specified along with a
URL: https://github.com/apache/hadoop/pull/1516#discussion_r328846360
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
 ##
 @@ -352,9 +352,12 @@ private Constants() {
 
   /**
* List of custom Signers. The signer class will be loaded, and the signer
-   * name will be associated with this signer class in the S3 SDK. e.g. Single
-   * CustomSigner -> 'CustomSigner:org.apache...CustomSignerClass Multiple
-   * CustomSigners -> 
'CSigner1:CustomSignerClass1,CSigner2:CustomerSignerClass2
+   * name will be associated with this signer class in the S3 SDK.
+   * Examples
+   * CustomSigner -> 'CustomSigner:org.apache...CustomSignerClass'
 
 Review comment:
   Ack. Surprised the pre-commit didn't catch this.
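
The `SignerName:SignerClass` list format quoted in the diff above can be parsed along these lines. This is an illustrative sketch of the format only (the class names are made up, and this is not the actual S3A SignerManager code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Parse a comma-separated list of "SignerName:SignerClassName" pairs,
// e.g. "CSigner1:com.example.Signer1,CSigner2:com.example.Signer2".
public class SignerListSketch {
  static Map<String, String> parseSigners(String value) {
    Map<String, String> signers = new LinkedHashMap<>();
    for (String entry : value.split(",")) {
      String[] parts = entry.trim().split(":");
      if (parts.length != 2 || parts[0].isEmpty() || parts[1].isEmpty()) {
        // Reject entries that are not exactly "name:class".
        throw new IllegalArgumentException("Invalid signer entry: " + entry);
      }
      signers.put(parts[0], parts[1]);
    }
    return signers;
  }

  public static void main(String[] args) {
    Map<String, String> m =
        parseSigners("CSigner1:com.example.Signer1,CSigner2:com.example.Signer2");
    System.out.println(m); // signer name -> class name, in declaration order
  }
}
```

Each parsed name/class pair would then be registered with the SDK's signer factory so the SDK can instantiate the class by name.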





[GitHub] [hadoop] sidseth commented on a change in pull request #1516: HADOOP-16599. Allow a SignerInitializer to be specified along with a

2019-09-26 Thread GitBox
sidseth commented on a change in pull request #1516: HADOOP-16599. Allow a 
SignerInitializer to be specified along with a
URL: https://github.com/apache/hadoop/pull/1516#discussion_r328846243
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/AwsSignerInitializer.java
 ##
 @@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
 
 Review comment:
   Have moved it under fs.s3a.auth (not fs.s3a.auth.impl). This is an interface 
which is meant to be implemented by others.
   Removed the interface* annotation in favor of package-info.





[GitHub] [hadoop] hadoop-yetus commented on issue #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…

2019-09-26 Thread GitBox
hadoop-yetus commented on issue #1528: HDDS-2181. Ozone Manager should send 
correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#issuecomment-535703552
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 74 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 43 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 24 | hadoop-ozone in trunk failed. |
   | -1 | compile | 18 | hadoop-hdds in trunk failed. |
   | -1 | compile | 12 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 53 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 941 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1026 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 29 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 31 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 26 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-hdds in the patch failed. |
   | -1 | compile | 16 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-hdds in the patch failed. |
   | -1 | javac | 16 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 53 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 790 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 28 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 16 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 26 | hadoop-hdds in the patch failed. |
   | -1 | unit | 19 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 2459 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1528 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fe6484579c06 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 06998a1 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/3/artifact/out/patch-compile-hadoop-hdds.txt
 

[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…

2019-09-26 Thread GitBox
xiaoyuyao commented on a change in pull request #1528: HDDS-2181. Ozone Manager 
should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#discussion_r328844216
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRenameRequest.java
 ##
 @@ -118,7 +119,8 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMException.ResultCodes.INVALID_KEY_NAME);
   }
   // check Acl
-  checkKeyAcls(ozoneManager, volumeName, bucketName, fromKeyName);
+  checkKeyAcls(ozoneManager, volumeName, bucketName, toKeyName,
+  IAccessAuthorizer.ACLType.CREATE);
 
 Review comment:
   As discussed offline, we should have a DELETE check for fromKeyName and a 
CREATE check for toKeyName.
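
The agreed rule (DELETE on the source key, CREATE on the destination key) can be sketched with stand-in types. `checkKeyAcls` below is a hypothetical stub, not the Ozone Manager implementation:

```java
import java.util.EnumSet;
import java.util.Set;

// Sketch of the rename ACL rule: a rename removes the old name and creates
// the new one, so the caller needs DELETE on the source and CREATE on the
// destination.
public class RenameAclSketch {
  enum AclType { CREATE, DELETE, READ, WRITE }

  // Hypothetical stub standing in for OM's checkKeyAcls(...).
  static void checkKeyAcls(Set<AclType> granted, String key, AclType required) {
    if (!granted.contains(required)) {
      throw new SecurityException(required + " denied on " + key);
    }
  }

  static void checkRenameAcls(Set<AclType> granted,
                              String fromKey, String toKey) {
    checkKeyAcls(granted, fromKey, AclType.DELETE); // removing the old name
    checkKeyAcls(granted, toKey, AclType.CREATE);   // creating the new name
  }

  public static void main(String[] args) {
    Set<AclType> granted = EnumSet.of(AclType.CREATE, AclType.DELETE);
    checkRenameAcls(granted, "vol/bucket/from", "vol/bucket/to");
    System.out.println("rename allowed");
  }
}
```

Checking only CREATE on the destination (as in the patched line being reviewed) would let a caller rename away a key it has no right to delete, which is what the two-check rule prevents.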





[GitHub] [hadoop] hadoop-yetus commented on issue #1478: HDFS-14856 Fetch file ACLs while mounting external store

2019-09-26 Thread GitBox
hadoop-yetus commented on issue #1478: HDFS-14856 Fetch file ACLs while 
mounting external store
URL: https://github.com/apache/hadoop/pull/1478#issuecomment-535700755
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 62 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1256 | trunk passed |
   | +1 | compile | 1068 | trunk passed |
   | +1 | checkstyle | 177 | trunk passed |
   | +1 | mvnsite | 110 | trunk passed |
   | +1 | shadedclient | 1124 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 113 | trunk passed |
   | 0 | spotbugs | 43 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 224 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 86 | the patch passed |
   | +1 | compile | 1010 | the patch passed |
   | +1 | javac | 1010 | the patch passed |
   | +1 | checkstyle | 173 | root: The patch generated 0 new + 452 unchanged - 
1 fixed = 452 total (was 453) |
   | +1 | mvnsite | 109 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 795 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 117 | the patch passed |
   | +1 | findbugs | 242 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 5220 | hadoop-hdfs in the patch failed. |
   | -1 | unit | 33 | hadoop-fs2img in the patch failed. |
   | -1 | asflicense | 53 | The patch generated 2 ASF License warnings. |
   | | | 12071 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.TestLeaseRecovery |
   |   | hadoop.hdfs.TestDFSStripedOutputStream |
   |   | hadoop.hdfs.tools.TestDFSZKFailoverController |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   |   | hadoop.hdfs.TestHDFSFileSystemContract |
   |   | hadoop.hdfs.TestWriteBlockGetsBlockLengthHint |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1478/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1478 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 64825d0c72c7 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 06998a1 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1478/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1478/7/artifact/out/patch-unit-hadoop-tools_hadoop-fs2img.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1478/7/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1478/7/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 3669 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs hadoop-tools/hadoop-fs2img U: 
. |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1478/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] anuengineer commented on issue #1513: HDDS-2149. Replace FindBugs with SpotBugs

2019-09-26 Thread GitBox
anuengineer commented on issue #1513: HDDS-2149. Replace FindBugs with SpotBugs
URL: https://github.com/apache/hadoop/pull/1513#issuecomment-535695781
 
 
   This patch is not working for me on a Mac. If there are clear instructions 
to make it work, I can test it out and commit this. Otherwise, I am going to 
pass.
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1442: HADOOP-16570. S3A committers encounter scale issues

2019-09-26 Thread GitBox
hadoop-yetus commented on issue #1442: HADOOP-16570. S3A committers encounter 
scale issues
URL: https://github.com/apache/hadoop/pull/1442#issuecomment-535694799
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1416 | trunk passed |
   | +1 | compile | 38 | trunk passed |
   | +1 | checkstyle | 26 | trunk passed |
   | +1 | mvnsite | 40 | trunk passed |
   | +1 | shadedclient | 934 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 28 | trunk passed |
   | 0 | spotbugs | 72 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 69 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 38 | the patch passed |
   | +1 | compile | 32 | the patch passed |
   | +1 | javac | 32 | the patch passed |
   | -0 | checkstyle | 22 | hadoop-tools/hadoop-aws: The patch generated 8 new 
+ 46 unchanged - 0 fixed = 54 total (was 46) |
   | +1 | mvnsite | 37 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 966 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 26 | the patch passed |
   | +1 | findbugs | 70 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 91 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 4021 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1442/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1442 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d71ebd4c0744 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 06998a1 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1442/8/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1442/8/testReport/ |
   | Max. process+thread count | 345 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1442/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] anuengineer commented on issue #1526: HDDS-2180. Add Object ID and update ID on VolumeList Object.

2019-09-26 Thread GitBox
anuengineer commented on issue #1526: HDDS-2180. Add Object ID and update ID on 
VolumeList Object.
URL: https://github.com/apache/hadoop/pull/1526#issuecomment-535684699
 
 
   > None that I know of. However we should not commit a patch without CI on 
principle.
   
   This has been broken for quite a while, so I rely on Jenkins/Yetus and 
hand-building before applying the commits. 





[GitHub] [hadoop] vivekratnavel commented on issue #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…

2019-09-26 Thread GitBox
vivekratnavel commented on issue #1528: HDDS-2181. Ozone Manager should send 
correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#issuecomment-535683415
 
 
   /retest





[jira] [Commented] (HADOOP-16605) NPE in TestAdlSdkConfiguration failing in yetus

2019-09-26 Thread Sneha Vijayarajan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938968#comment-16938968
 ] 

Sneha Vijayarajan commented on HADOOP-16605:


[~ste...@apache.org] - A possible cause is that the correct version of the wildfly-openssl library is not being picked up (1.0.7.Final is the expected version, updated as part of https://issues.apache.org/jira/browse/HADOOP-16460).

The build log should show which version of the wildfly library was bundled. Could you please share a link to a failed build?

> NPE in TestAdlSdkConfiguration failing in yetus
> ---
>
> Key: HADOOP-16605
> URL: https://issues.apache.org/jira/browse/HADOOP-16605
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Sneha Vijayarajan
>Priority: Major
>
> Yetus builds are failing with NPE in TestAdlSdkConfiguration if they go near 
> hadoop-azure-datalake. Assuming HADOOP-16438 until proven differently, though 
> HADOOP-16371 may have done something too (how?), something which wasn't 
> picked up as yetus didn't know that hadoo-azuredatalake was affected.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on issue #1526: HDDS-2180. Add Object ID and update ID on VolumeList Object.

2019-09-26 Thread GitBox
arp7 commented on issue #1526: HDDS-2180. Add Object ID and update ID on 
VolumeList Object.
URL: https://github.com/apache/hadoop/pull/1526#issuecomment-535680717
 
 
   None that I know of. However we should not commit a patch without CI on 
principle.





[jira] [Commented] (HADOOP-16548) ABFS: Config to enable/disable flush operation

2019-09-26 Thread Sneha Vijayarajan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938959#comment-16938959
 ] 

Sneha Vijayarajan commented on HADOOP-16548:


Hi [~ste...@apache.org], kindly requesting your review. 

> ABFS: Config to enable/disable flush operation
> --
>
> Key: HADOOP-16548
> URL: https://issues.apache.org/jira/browse/HADOOP-16548
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Bilahari T H
>Assignee: Sneha Vijayarajan
>Priority: Minor
> Attachments: HADOOP-16548.001.patch
>
>
> Make flush operation enabled/disabled through configuration. This is part of 
> performance improvements for ABFS driver.






[GitHub] [hadoop] hadoop-yetus commented on issue #1442: HADOOP-16570. S3A committers encounter scale issues

2019-09-26 Thread GitBox
hadoop-yetus commented on issue #1442: HADOOP-16570. S3A committers encounter 
scale issues
URL: https://github.com/apache/hadoop/pull/1442#issuecomment-535678029
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1266 | trunk passed |
   | +1 | compile | 32 | trunk passed |
   | +1 | checkstyle | 27 | trunk passed |
   | +1 | mvnsite | 38 | trunk passed |
   | +1 | shadedclient | 872 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 28 | trunk passed |
   | 0 | spotbugs | 64 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 62 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 34 | the patch passed |
   | +1 | compile | 27 | the patch passed |
   | +1 | javac | 27 | the patch passed |
   | -0 | checkstyle | 21 | hadoop-tools/hadoop-aws: The patch generated 8 new 
+ 46 unchanged - 0 fixed = 54 total (was 46) |
   | +1 | mvnsite | 31 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 874 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | the patch passed |
   | +1 | findbugs | 67 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 80 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 3682 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1442/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1442 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c9b80b627287 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 06998a1 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1442/7/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1442/7/testReport/ |
   | Max. process+thread count | 417 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1442/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] anuengineer commented on issue #1526: HDDS-2180. Add Object ID and update ID on VolumeList Object.

2019-09-26 Thread GitBox
anuengineer commented on issue #1526: HDDS-2180. Add Object ID and update ID on 
VolumeList Object.
URL: https://github.com/apache/hadoop/pull/1526#issuecomment-535675877
 
 
   > Was this committed without a pre-commit run from Anzix?
   
   Looks like it was not run. I did look at the Yetus output, but it is too noisy to make sense of.
   
   





[GitHub] [hadoop] anuengineer edited a comment on issue #1526: HDDS-2180. Add Object ID and update ID on VolumeList Object.

2019-09-26 Thread GitBox
anuengineer edited a comment on issue #1526: HDDS-2180. Add Object ID and 
update ID on VolumeList Object.
URL: https://github.com/apache/hadoop/pull/1526#issuecomment-535675877
 
 
   > Was this committed without a pre-commit run from Anzix?
   
   Looks like it was not run. I did look at the Yetus output, but it is too noisy to make sense of.
   
   Is this commit causing any issues? 





[GitHub] [hadoop] hadoop-yetus commented on issue #1519: HDDS-2174. Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-26 Thread GitBox
hadoop-yetus commented on issue #1519: HDDS-2174. Delete GDPR Encryption Key 
from metadata when a Key is deleted
URL: https://github.com/apache/hadoop/pull/1519#issuecomment-535671164
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | -1 | mvninstall | 31 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 27 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1067 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1165 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 32 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | -1 | mvninstall | 37 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
   | -1 | compile | 23 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 23 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 32 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 895 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 18 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 33 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 19 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 22 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 2764 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1519 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 27d12699ba98 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 06998a1 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/6/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 

[GitHub] [hadoop] hadoop-yetus commented on issue #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…

2019-09-26 Thread GitBox
hadoop-yetus commented on issue #1528: HDDS-2181. Ozone Manager should send 
correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#issuecomment-535665911
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 44 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 30 | hadoop-ozone in trunk failed. |
   | -1 | compile | 20 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 857 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 21 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 957 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 32 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 33 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 29 | hadoop-ozone in the patch failed. |
   | -1 | compile | 23 | hadoop-hdds in the patch failed. |
   | -1 | compile | 21 | hadoop-ozone in the patch failed. |
   | -1 | javac | 23 | hadoop-hdds in the patch failed. |
   | -1 | javac | 21 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 57 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 704 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 19 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 20 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 27 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 2343 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1528 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 56f659e81bb7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 06998a1 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/2/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/2/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/2/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/2/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/2/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/2/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/2/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/2/artifact/out/patch-compile-hadoop-hdds.txt
 |

[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…

2019-09-26 Thread GitBox
vivekratnavel commented on a change in pull request #1528: HDDS-2181. Ozone 
Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#discussion_r328797836
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRenameRequest.java
 ##
 @@ -118,7 +119,8 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMException.ResultCodes.INVALID_KEY_NAME);
   }
   // check Acl
-  checkKeyAcls(ozoneManager, volumeName, bucketName, fromKeyName);
+  checkKeyAcls(ozoneManager, volumeName, bucketName, toKeyName,
+  IAccessAuthorizer.ACLType.CREATE);
 
 Review comment:
   @bharatviswa504 You mean the DELETE acl on fromKeyName? I think it might be a good idea to also check for the DELETE acl on fromKeyName.
   
   @xiaoyuyao What do you think?
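
   The two checks being discussed can be sketched as follows. This is an editor's illustrative sketch in plain Java, not the actual Ozone `checkKeyAcls`/`IAccessAuthorizer` API (the class, map, and method bodies are invented): a rename verifies the DELETE acl on the source key and the CREATE acl on the destination key.

```java
import java.util.EnumSet;
import java.util.HashMap;
import java.util.Map;

public class RenameAclSketch {
  enum ACLType { READ, WRITE, CREATE, DELETE }

  // Toy model: ACL types granted to the current user, per key.
  static final Map<String, EnumSet<ACLType>> ACLS = new HashMap<>();

  // Throws if the caller lacks the required ACL type on the key.
  static void checkKeyAcl(String key, ACLType required) {
    EnumSet<ACLType> granted =
        ACLS.getOrDefault(key, EnumSet.noneOf(ACLType.class));
    if (!granted.contains(required)) {
      throw new SecurityException("missing " + required + " on " + key);
    }
  }

  // Rename, checked as proposed in the review: DELETE on the source key
  // (the caller removes it) plus CREATE on the destination key.
  static void renameKey(String fromKey, String toKey) {
    checkKeyAcl(fromKey, ACLType.DELETE);
    checkKeyAcl(toKey, ACLType.CREATE);
    // ... the actual key-table update would happen here ...
  }

  public static void main(String[] args) {
    ACLS.put("vol/bucket/src", EnumSet.of(ACLType.DELETE));
    ACLS.put("vol/bucket/dst", EnumSet.of(ACLType.CREATE));
    renameKey("vol/bucket/src", "vol/bucket/dst"); // both checks pass
    System.out.println("rename permitted");
  }
}
```

   Checking only CREATE on the destination would let a user move away a key they cannot delete; the extra DELETE check on the source closes that gap.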





[GitHub] [hadoop] steveloughran commented on issue #1442: HADOOP-16570. S3A committers encounter scale issues

2019-09-26 Thread GitBox
steveloughran commented on issue #1442: HADOOP-16570. S3A committers encounter 
scale issues
URL: https://github.com/apache/hadoop/pull/1442#issuecomment-535658748
 
 
   Latest test run: s3 ireland. There's a new unit test which, with the current values, takes 1 min; I plan to cut the numbers back, just leaving it as-is for now to be confident that there are no scale problems with these values. I think I'll declare many more blocks per file.
   
   The slow parts of the test are actually:
   * the non-serialized creation of all the pendingset files; that can be massively speeded up
   * the actual listing of files to commit. That's a sequential operation at the start of the commit; I will look at it a bit to see if there are some easy opportunities for speedups, as that would matter in production. Maybe moving off fancy Java 8 stuff to simple loops will help there.
   
   As that list process is the one for the staging committers, it is only listing the consistent cluster FS (i.e. HDFS), so S3 perf won't matter. In real jobs the time to POST commits will dominate, and with this patch every pendingset file is loaded and processed in parallel.
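
   The load-in-parallel pattern mentioned here can be sketched with a plain executor. This is an editor's illustration only (the `load` stand-in and class names are invented, not the S3A committer code): list the files sequentially, then deserialize and process each pendingset file on a thread pool.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelPendingLoad {
  // Stand-in for deserializing one pendingset file.
  static String load(String path) { return "commits-from-" + path; }

  static List<String> loadAll(List<String> paths, int threads) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    try {
      List<Future<String>> futures = new ArrayList<>();
      for (String p : paths) {
        futures.add(pool.submit(() -> load(p)));  // load files in parallel
      }
      List<String> results = new ArrayList<>();
      for (Future<String> f : futures) {
        results.add(f.get());  // preserves submission order; propagates failures
      }
      return results;
    } finally {
      pool.shutdown();
    }
  }

  public static void main(String[] args) throws Exception {
    // prints [commits-from-t1.pendingset, commits-from-t2.pendingset]
    System.out.println(loadAll(List.of("t1.pendingset", "t2.pendingset"), 2));
  }
}
```

   Collecting the futures in submission order keeps the result deterministic even though the loads run concurrently.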





[GitHub] [hadoop] hadoop-yetus commented on issue #1442: HADOOP-16570. S3A committers encounter scale issues

2019-09-26 Thread GitBox
hadoop-yetus commented on issue #1442: HADOOP-16570. S3A committers encounter 
scale issues
URL: https://github.com/apache/hadoop/pull/1442#issuecomment-535653705
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 10 | https://github.com/apache/hadoop/pull/1442 does not 
apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1442 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1442/6/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16520) race condition in DDB table init and waiting threads

2019-09-26 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938914#comment-16938914
 ] 

Steve Loughran commented on HADOOP-16520:
-

see also HADOOP-16349

> race condition in DDB table init and waiting threads
> 
>
> Key: HADOOP-16520
> URL: https://issues.apache.org/jira/browse/HADOOP-16520
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> s3guard threads waiting for table creation completion can be scheduled before 
> the creating thread, look for the version marker and then fail.
> window will be sleep times in AWS SDK Table.waitForActive();
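
The window described above can be sketched as a bounded retry loop. This is an editor's illustration with hypothetical names (`Store`, `awaitVersionMarker`), not the actual S3Guard code: a waiting thread polls for the version marker instead of failing on the first miss, giving the creating thread time to finish.

```java
public class VersionMarkerWait {
  // Stand-in for the table: returns null until the creator writes the marker.
  interface Store { String getVersionMarker(); }

  static String awaitVersionMarker(Store store, int attempts, long sleepMs)
      throws InterruptedException {
    for (int i = 0; i < attempts; i++) {
      String marker = store.getVersionMarker();
      if (marker != null) {
        return marker;        // creating thread has finished initializing
      }
      Thread.sleep(sleepMs);  // back off instead of failing immediately
    }
    throw new IllegalStateException("version marker not found after retries");
  }

  public static void main(String[] args) throws InterruptedException {
    // Simulate the creator finishing ~20ms after the waiter starts polling.
    long start = System.currentTimeMillis();
    Store store = () -> System.currentTimeMillis() - start > 20 ? "v1" : null;
    System.out.println(awaitVersionMarker(store, 50, 10)); // prints v1
  }
}
```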






[jira] [Assigned] (HADOOP-16489) S3Guard operations log has tombstone/PUT swapped

2019-09-26 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-16489:
---

Assignee: Steve Loughran

> S3Guard operations log has tombstone/PUT swapped
> 
>
> Key: HADOOP-16489
> URL: https://issues.apache.org/jira/browse/HADOOP-16489
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.0
>
>
> HADOOP-16384 added a log of ongoing operations, e.g PUT/DELETE/TOMBSTONE; but 
> the put/tombstone values are inverted.






[jira] [Resolved] (HADOOP-16489) S3Guard operations log has tombstone/PUT swapped

2019-09-26 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16489.
-
Fix Version/s: 3.3.0
   Resolution: Duplicate

fixed this in HADOOP-16430

> S3Guard operations log has tombstone/PUT swapped
> 
>
> Key: HADOOP-16489
> URL: https://issues.apache.org/jira/browse/HADOOP-16489
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
> Fix For: 3.3.0
>
>
> HADOOP-16384 added a log of ongoing operations, e.g PUT/DELETE/TOMBSTONE; but 
> the put/tombstone values are inverted.






[jira] [Assigned] (HADOOP-15672) add s3guard CLI command to generate session keys for an assumed role

2019-09-26 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-15672:
---

Assignee: Steve Loughran

> add s3guard CLI command to generate session keys for an assumed role
> 
>
> Key: HADOOP-15672
> URL: https://issues.apache.org/jira/browse/HADOOP-15672
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.0
>
>
> the aws cli 
> [get-session-token|https://docs.aws.amazon.com/cli/latest/reference/sts/get-session-token.html]
>  can generate the keys for short-lived session.
> I'd like something similar in an s3guard command, e.g. "create-role-keys", 
> which would take the existing (full) credentials and optionally: 
>  * ARN of role to adopt
>  * duration
>  * name
>  * restrictions as path to a JSON file or just stdin
>  * output format
>  * whether to use a per-bucket binding for the credentials in the property 
> names generated
>  * MFA secrets
> output formats
> * A JCEKS file (with chosen passwd? For better hive use: append/replace 
> entries in existing file); saved through the hadoop FS APIs to HDFS, file:// 
> or elsewhere
> * hadoop config XML
> * spark properties
> The goal here is to have a workflow where you can generate role credentials 
> to use for a limited time, store them in a JCEKS file and then share them in 
> your jobs. This can be for: Jenkins, Oozie, build files, ..
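
One of the requested output formats ("hadoop config XML") is easy to picture. Below is an editor's sketch, not part of any patch: it renders a set of session credentials as the standard `fs.s3a.*` properties. The property names are the real S3A ones; the class and the credential values are invented for illustration, and real code would also need XML escaping of the values.

```java
public class SessionCredsXml {
  // Render session credentials as a Hadoop configuration XML fragment.
  static String toConfigXml(String accessKey, String secretKey, String token) {
    StringBuilder sb = new StringBuilder("<configuration>\n");
    appendProp(sb, "fs.s3a.access.key", accessKey);
    appendProp(sb, "fs.s3a.secret.key", secretKey);
    appendProp(sb, "fs.s3a.session.token", token);
    return sb.append("</configuration>\n").toString();
  }

  static void appendProp(StringBuilder sb, String name, String value) {
    sb.append("  <property>\n")
      .append("    <name>").append(name).append("</name>\n")
      .append("    <value>").append(value).append("</value>\n")
      .append("  </property>\n");
  }

  public static void main(String[] args) {
    // Fake credentials; a real tool would obtain these from STS.
    System.out.print(toConfigXml("AKIAEXAMPLE", "secret", "session-token"));
  }
}
```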






[jira] [Resolved] (HADOOP-15672) add s3guard CLI command to generate session keys for an assumed role

2019-09-26 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15672.
-
Fix Version/s: 3.3.0
   Resolution: Duplicate

> add s3guard CLI command to generate session keys for an assumed role
> 
>
> Key: HADOOP-15672
> URL: https://issues.apache.org/jira/browse/HADOOP-15672
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Minor
> Fix For: 3.3.0
>
>
> the aws cli 
> [get-session-token|https://docs.aws.amazon.com/cli/latest/reference/sts/get-session-token.html]
>  can generate the keys for short-lived session.
> I'd like something similar in an s3guard command, e.g. "create-role-keys", 
> which would take the existing (full) credentials and optionally: 
>  * ARN of role to adopt
>  * duration
>  * name
>  * restrictions as path to a JSON file or just stdin
>  * output format
>  * whether to use a per-bucket binding for the credentials in the property 
> names generated
>  * MFA secrets
> output formats
> * A JCEKS file (with chosen passwd? For better hive use: append/replace 
> entries in existing file); saved through the hadoop FS APIs to HDFS, file:// 
> or elsewhere
> * hadoop config XML
> * spark properties
> The goal here is to have a workflow where you can generate role credentials 
> to use for a limited time, store them in a JCEKS file and then share them in 
> your jobs. This can be for: Jenkins, Oozie, build files, ..






[jira] [Commented] (HADOOP-15672) add s3guard CLI command to generate session keys for an assumed role

2019-09-26 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938905#comment-16938905
 ] 

Steve Loughran commented on HADOOP-15672:
-

I don't think we need this any more. I have successfully issued session 
delegation tokens and then loaded them back from a file for authentication.

That is: you can use hadoop dfsutil to save a token which you can then pass on 
to others via email, etc. This includes encryption. Closing as DONE.

> add s3guard CLI command to generate session keys for an assumed role
> 
>
> Key: HADOOP-15672
> URL: https://issues.apache.org/jira/browse/HADOOP-15672
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Minor
>
> the aws cli 
> [get-session-token|https://docs.aws.amazon.com/cli/latest/reference/sts/get-session-token.html]
>  can generate keys for a short-lived session.
> I'd like something similar in an s3guard command, e.g. "create-role-keys", 
> which would take the existing (full) credentials and optionally: 
>  * ARN of role to adopt
>  * duration
>  * name
>  * restrictions as path to a JSON file or just stdin
>  * output format
>  * whether to use a per-bucket binding for the credentials in the property 
> names generated
>  * MFA secrets
> output formats
> * A JCEKS file (with chosen passwd? For better hive use: append/replace 
> entries in existing file); saved through the hadoop FS APIs to HDFS, file:// 
> or elsewhere
> * hadoop config XML
> * spark properties
> The goal here is to have a workflow where you can generate role credentials 
> to use for a limited time, store them in a JCEKS file and then share them in 
> your jobs. This can be for: Jenkins, Oozie, build files, ..
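
As a sketch of the "hadoop config XML" output format mentioned above, a per-bucket binding of generated session credentials might look like the following. The property names are the standard S3A temporary-credential ones; the bucket name and values are placeholders, and the exact set of properties such a command would emit is an assumption:

```xml
<configuration>
  <!-- Per-bucket binding: session credentials for bucket "mybucket" only. -->
  <property>
    <name>fs.s3a.bucket.mybucket.aws.credentials.provider</name>
    <value>org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider</value>
  </property>
  <property>
    <name>fs.s3a.bucket.mybucket.access.key</name>
    <value>SESSION-ACCESS-KEY</value>
  </property>
  <property>
    <name>fs.s3a.bucket.mybucket.secret.key</name>
    <value>SESSION-SECRET-KEY</value>
  </property>
  <property>
    <name>fs.s3a.bucket.mybucket.session.token</name>
    <value>SESSION-TOKEN</value>
  </property>
</configuration>
```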



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…

2019-09-26 Thread GitBox
bharatviswa504 commented on a change in pull request #1528: HDDS-2181. Ozone 
Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#discussion_r328782828
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRenameRequest.java
 ##
 @@ -118,7 +119,8 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMException.ResultCodes.INVALID_KEY_NAME);
   }
   // check Acl
-  checkKeyAcls(ozoneManager, volumeName, bucketName, fromKeyName);
+  checkKeyAcls(ozoneManager, volumeName, bucketName, toKeyName,
+  IAccessAuthorizer.ACLType.CREATE);
 
 Review comment:
   Do we need to check the original key's ACLs here as well?
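
For illustration, a rename authorization typically needs checks on both ends of the operation, which is what the question above is getting at. This is a hypothetical sketch, not the actual Ozone ACL API; the function and data-shape names are invented:

```python
def check_rename_acls(acl_of, user, from_key, to_key):
    """Authorize a rename: the caller needs rights on the source key
    (modeled here as DELETE, since the source entry disappears) and
    CREATE on the destination.

    acl_of maps a key name to a dict of {acl_type: {users}}.
    """
    can_remove_src = user in acl_of(from_key).get("DELETE", set())
    can_create_dst = user in acl_of(to_key).get("CREATE", set())
    return can_remove_src and can_create_dst

acls = {
    "vol/buck/old": {"DELETE": {"alice"}},
    "vol/buck/new": {"CREATE": {"alice"}},
}
print(check_rename_acls(lambda k: acls.get(k, {}), "alice",
                        "vol/buck/old", "vol/buck/new"))  # True
```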


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1497: HDDS-2001. Update Ratis version to 0.4.0.

2019-09-26 Thread GitBox
hadoop-yetus commented on issue #1497: HDDS-2001. Update Ratis version to 0.4.0.
URL: https://github.com/apache/hadoop/pull/1497#issuecomment-535646732
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 964 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ ozone-0.4.1 Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for branch |
   | +1 | mvninstall | 681 | ozone-0.4.1 passed |
   | +1 | compile | 388 | ozone-0.4.1 passed |
   | +1 | checkstyle | 79 | ozone-0.4.1 passed |
   | +1 | mvnsite | 0 | ozone-0.4.1 passed |
   | +1 | shadedclient | 879 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | ozone-0.4.1 passed |
   | 0 | spotbugs | 421 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 615 | ozone-0.4.1 passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 40 | Maven dependency ordering for patch |
   | +1 | mvninstall | 553 | the patch passed |
   | +1 | compile | 390 | the patch passed |
   | +1 | javac | 390 | the patch passed |
   | +1 | checkstyle | 90 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 719 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | the patch passed |
   | +1 | findbugs | 633 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 322 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2382 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 9391 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestContainerReplicationEndToEnd |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1497/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1497 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 006cbfc41bc7 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | ozone-0.4.1 / 2eb41fb |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1497/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1497/2/testReport/ |
   | Max. process+thread count | 5361 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds hadoop-hdds/container-service hadoop-ozone 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1497/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…

2019-09-26 Thread GitBox
bharatviswa504 commented on a change in pull request #1528: HDDS-2181. Ozone 
Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#discussion_r328782110
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
 ##
 @@ -162,7 +163,8 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMClientResponse omClientResponse = null;
 try {
   // check Acl
-  checkBucketAcls(ozoneManager, volumeName, bucketName, keyName);
+  checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
 
 Review comment:
   Also, our acceptance tests will fail.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…

2019-09-26 Thread GitBox
bharatviswa504 commented on a change in pull request #1528: HDDS-2181. Ozone 
Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#discussion_r328781990
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
 ##
 @@ -162,7 +163,8 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMClientResponse omClientResponse = null;
 try {
   // check Acl
-  checkBucketAcls(ozoneManager, volumeName, bucketName, keyName);
+  checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
 
 Review comment:
   This requires a change in OzoneNativeAuthorizer, as this newly created key 
does not have an entry in the key table (if it is being created for the first time).
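
For illustration, the fallback the comment is pointing at could look like the sketch below: a CREATE check on a key with no key-table entry yet has to be authorized against the parent bucket's ACLs instead. Names here are illustrative, not the real OzoneNativeAuthorizer API:

```python
def check_create_acl(key_table, bucket_acls, key, user):
    """Authorize CREATE on a key.

    key_table maps existing keys to {acl_type: {users}};
    bucket_acls holds the parent bucket's {acl_type: {users}}.
    """
    if key in key_table:
        # Existing key: evaluate the key's own ACLs.
        return user in key_table[key].get("CREATE", set())
    # New key: no key-table entry yet, so fall back to the bucket's CREATE ACL.
    return user in bucket_acls.get("CREATE", set())

bucket_acls = {"CREATE": {"alice"}}
key_table = {}  # the key does not exist yet
print(check_create_acl(key_table, bucket_acls, "vol/buck/k1", "alice"))  # True
```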





[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…

2019-09-26 Thread GitBox
vivekratnavel commented on a change in pull request #1528: HDDS-2181. Ozone 
Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#discussion_r328781882
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
 ##
 @@ -177,7 +178,8 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMClientResponse omClientResponse = null;
 try {
   // check Acl
-  checkBucketAcls(ozoneManager, volumeName, bucketName, keyName);
+  checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
 
 Review comment:
   I can take care of this as part of 
https://issues.apache.org/jira/browse/HDDS-2191





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…

2019-09-26 Thread GitBox
bharatviswa504 commented on a change in pull request #1528: HDDS-2181. Ozone 
Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#discussion_r328781534
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
 ##
 @@ -177,7 +178,8 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMClientResponse omClientResponse = null;
 try {
   // check Acl
-  checkBucketAcls(ozoneManager, volumeName, bucketName, keyName);
+  checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
 
 Review comment:
   This requires a change in OzoneNativeAuthorizer, as this newly created key 
does not have an entry in the key table (if it is being created for the first time).





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…

2019-09-26 Thread GitBox
bharatviswa504 commented on a change in pull request #1528: HDDS-2181. Ozone 
Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#discussion_r328781404
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequest.java
 ##
 @@ -127,7 +128,8 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMClientResponse omClientResponse = null;
 try {
   // check Acl
-  checkBucketAcls(ozoneManager, volumeName, bucketName, keyName);
+  checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
 
 Review comment:
   This requires a change in OzoneNativeAuthorizer, as this newly created key 
does not have an entry in the key table.





[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…

2019-09-26 Thread GitBox
vivekratnavel commented on a change in pull request #1528: HDDS-2181. Ozone 
Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#discussion_r328781094
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
 ##
 @@ -143,7 +143,7 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 try {
   // check Acl
   if (ozoneManager.getAclsEnabled()) {
-checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+checkAcls(ozoneManager, OzoneObj.ResourceType.BUCKET,
 
 Review comment:
   Created https://issues.apache.org/jira/browse/HDDS-2191 to take care of this





[GitHub] [hadoop] ashvina commented on a change in pull request #1478: HDFS-14856 Fetch file ACLs while mounting external store

2019-09-26 Thread GitBox
ashvina commented on a change in pull request #1478: HDFS-14856 Fetch file ACLs 
while mounting external store
URL: https://github.com/apache/hadoop/pull/1478#discussion_r328767391
 
 

 ##
 File path: 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/UGIResolver.java
 ##
 @@ -132,4 +138,134 @@ public FsPermission permission(FileStatus s) {
 return s.getPermission();
   }
 
+  private long resolve(AclStatus aclStatus) {
+return buildPermissionStatus(
+user(aclStatus), group(aclStatus), permission(aclStatus).toShort());
+  }
+
+  /**
+   * Get the locally mapped user for external {@link AclStatus}.
+   *
+   * @param aclStatus AclStatus on external store.
+   * @return locally mapped user name.
+   */
+  public String user(AclStatus aclStatus) {
 
 Review comment:
   The user mapping behavior can differ in a subclass. For example, 
`SingleUGIResolver` maps all users to a single user.
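
For illustration, the two mapping behaviours discussed here can be sketched as below. Class and method names are hypothetical, not the actual UGIResolver API:

```python
class DefaultResolver:
    def user(self, external_user):
        # Pass the external store's owner through unchanged.
        return external_user

class SingleUserResolver(DefaultResolver):
    """Maps every external owner to one configured local user,
    mirroring what SingleUGIResolver does for all files."""

    def __init__(self, local_user):
        self.local_user = local_user

    def user(self, external_user):
        return self.local_user

print(DefaultResolver().user("alice"))            # alice
print(SingleUserResolver("hdfs").user("alice"))   # hdfs
```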





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1505: HDDS-2166. Some RPC metrics are missing from SCM prometheus endpoint

2019-09-26 Thread GitBox
xiaoyuyao commented on a change in pull request #1505: HDDS-2166. Some RPC 
metrics are missing from SCM prometheus endpoint
URL: https://github.com/apache/hadoop/pull/1505#discussion_r328759936
 
 

 ##
 File path: 
hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestPrometheusMetricsSink.java
 ##
 @@ -71,6 +73,50 @@ public void testPublish() throws IOException {
 metrics.shutdown();
   }
 
+  @Test
+  public void testPublishWithSameName() throws IOException {
+//GIVEN
+MetricsSystem metrics = DefaultMetricsSystem.instance();
+
+metrics.init("test");
+PrometheusMetricsSink sink = new PrometheusMetricsSink();
+metrics.register("Prometheus", "Prometheus", sink);
+metrics.register("FooBar", "fooBar", (MetricsSource) (collector, all) -> {
+  collector.addRecord("RpcMetrics").add(new MetricsTag(PORT_INFO, "1234"))
+  .addGauge(COUNTER_INFO, 123).endRecord();
+
+  collector.addRecord("RpcMetrics").add(new MetricsTag(
+  PORT_INFO, "2345")).addGauge(COUNTER_INFO, 234).endRecord();
+});
+
+metrics.start();
+metrics.publishMetricsNow();
+
+ByteArrayOutputStream stream = new ByteArrayOutputStream();
+OutputStreamWriter writer = new OutputStreamWriter(stream, UTF_8);
+
+//WHEN
+sink.writeMetrics(writer);
+writer.flush();
+
+//THEN
+String writtenMetrics = stream.toString(UTF_8.name());
+System.out.println(writtenMetrics);
 
 Review comment:
   NIT: debug message can be removed.
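
The test above registers two records that share a name and differ only in the port tag. Why both must survive can be sketched with a minimal renderer keyed by (name, tags) rather than by name alone; this is hypothetical code, not the actual PrometheusMetricsSink implementation:

```python
def render(records):
    """Render (name, tags, value) records in Prometheus text format.

    Keying by name alone would drop one of two records that share a
    name but differ in tags; keying by (name, sorted tags) keeps both.
    """
    out = {}
    for name, tags, value in records:
        key = (name, tuple(sorted(tags.items())))
        labels = ",".join('%s="%s"' % kv for kv in sorted(tags.items()))
        out[key] = "%s{%s} %s" % (name, labels, value)
    return "\n".join(out.values())

metrics = [
    ("rpc_metrics_counter", {"port": "1234"}, 123),
    ("rpc_metrics_counter", {"port": "2345"}, 234),
]
print(render(metrics))  # one line per (name, tags) pair
```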




