[GitHub] [hadoop] hadoop-yetus commented on issue #547: HDDS-594. SCM CA: DN sends CSR and uses certificate issued by SCM.

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #547: HDDS-594. SCM CA: DN sends CSR and uses 
certificate issued by SCM.
URL: https://github.com/apache/hadoop/pull/547#issuecomment-470404232
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1301 | trunk passed |
   | +1 | compile | 79 | trunk passed |
   | +1 | checkstyle | 30 | trunk passed |
   | +1 | mvnsite | 72 | trunk passed |
   | +1 | shadedclient | 785 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 124 | trunk passed |
   | +1 | javadoc | 64 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for patch |
   | +1 | mvninstall | 78 | the patch passed |
   | +1 | compile | 69 | the patch passed |
   | +1 | javac | 69 | the patch passed |
   | +1 | checkstyle | 25 | the patch passed |
   | +1 | mvnsite | 65 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 770 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 139 | the patch passed |
   | +1 | javadoc | 60 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 73 | common in the patch failed. |
   | -1 | unit | 80 | container-service in the patch failed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 3940 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   |   | hadoop.ozone.container.common.TestDatanodeStateMachine |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/547 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 1f4f9c25055d 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 09a9938 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/13/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/13/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/13/testReport/ |
   | Max. process+thread count | 399 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service U: 
hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/13/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16119) KMS on Hadoop RPC Engine

2019-03-06 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786438#comment-16786438
 ] 

He Xiaoqiao commented on HADOOP-16119:
--

[~jojochuang] Thanks for your quick response, and sorry for the fuzzy wording.
{quote}Regarding delegation tokens – delegation tokens are stored in zookeeper, 
and after HADOOP-14445, delegation tokens are shared among KMS instances.{quote}
My branch is based on branch-2.7 without HADOOP-14445 applied, so that makes 
sense to me. Considering only the community version (including trunk), it seems 
to offer local storage via the Java KeyStore only, with no other choice; please 
correct me if I am wrong. Looking forward to CKTS being open-sourced.
About the "HA" part: I meant that adding, removing, or losing a KMS instance is 
not transparent to the client. The title "HA" may mislead; I think this is also 
a scalability issue, sorry for that. :)
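(For readers unfamiliar with the Java KeyStore backing mentioned above: 
Hadoop's JavaKeyStoreProvider keeps key material in a JCEKS keystore file, the 
kind a jceks://file@... provider URI points at. A minimal sketch of reading 
such a store follows; the path and password are illustrative assumptions, not 
actual KMS configuration.)
{code:java}
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyStore;

// Minimal sketch: open a JCEKS keystore like the one backing
// JavaKeyStoreProvider. Path and password are made up for illustration.
class JceksPeek {
  public static void main(String[] args) throws Exception {
    KeyStore ks = KeyStore.getInstance("JCEKS");
    try (InputStream in = Files.newInputStream(Paths.get("/var/kms/kms.jceks"))) {
      ks.load(in, "none".toCharArray());
    }
    System.out.println("key entries: " + ks.size());
  }
}
{code}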

> KMS on Hadoop RPC Engine
> 
>
> Key: HADOOP-16119
> URL: https://issues.apache.org/jira/browse/HADOOP-16119
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Jonathan Eagles
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: Design doc_ KMS v2.pdf
>
>
> Per discussion on common-dev and text copied here for ease of reference.
> https://lists.apache.org/thread.html/0e2eeaf07b013f17fad6d362393f53d52041828feec53dcddff04808@%3Ccommon-dev.hadoop.apache.org%3E
> {noformat}
> Thanks all for the inputs,
> To offer additional information (while Daryn is working on his stuff),
> optimizing RPC encryption opens up another possibility: migrating KMS
> service to use Hadoop RPC.
> Today's KMS uses HTTPS + REST API, much like webhdfs. It has very
> undesirable performance (a few thousand ops per second) compared to
> NameNode. Unfortunately for each NameNode namespace operation you also need
> to access KMS too.
> Migrating KMS to Hadoop RPC greatly improves its performance (if
> implemented correctly), and RPC encryption would be a prerequisite. So
> please keep that in mind when discussing the Hadoop RPC encryption
> improvements. Cloudera is very interested to help with the Hadoop RPC
> encryption project because a lot of our customers are using at-rest
> encryption, and some of them are starting to hit KMS performance limit.
> This whole "migrating KMS to Hadoop RPC" was Daryn's idea. I heard this
> idea in the meetup and I am very thrilled to see this happening because it
> is a real issue bothering some of our customers, and I suspect it is the
> right solution to address this tech debt.
> {noformat}
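As a rough illustration of the proposal, a KMS on Hadoop RPC would expose a 
versioned protocol interface instead of REST endpoints. The sketch below is a 
hypothetical shape only; the interface and method names are assumptions, not an 
existing Hadoop API.
{code:java}
import java.io.IOException;
import org.apache.hadoop.io.retry.Idempotent;

/** Hypothetical sketch of a KMS contract on Hadoop RPC (names invented). */
public interface KMSRpcProtocol {
  // Hadoop RPC protocols carry a version number for compatibility checks.
  long versionID = 1L;

  /** Generate a new encrypted data encryption key (EDEK) for a key name. */
  @Idempotent
  byte[] generateEncryptedKey(String keyName) throws IOException;

  /** Decrypt an EDEK back into the data encryption key (DEK). */
  @Idempotent
  byte[] decryptEncryptedKey(String keyName, byte[] edek) throws IOException;
}
{code}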






[jira] [Commented] (HADOOP-16161) NetworkTopology#getWeightUsingNetworkLocation return unexpected result

2019-03-06 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786402#comment-16786402
 ] 

He Xiaoqiao commented on HADOOP-16161:
--

Thanks [~elgoiri]. Resubmitted [^HADOOP-16161.003.patch] following the review 
comments; pending Jenkins.

> NetworkTopology#getWeightUsingNetworkLocation return unexpected result
> --
>
> Key: HADOOP-16161
> URL: https://issues.apache.org/jira/browse/HADOOP-16161
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-16161.001.patch, HADOOP-16161.002.patch, 
> HADOOP-16161.003.patch
>
>
> Consider the following scenario:
> 1. There are 4 slaves with a topology like:
> Rack: /IDC/RACK1
>    hostname1
>    hostname2
> Rack: /IDC/RACK2
>    hostname3
>    hostname4
> 2. A reader on hostname1 calculates weights between itself and [hostname1, 
> hostname3, hostname4] via #getWeight; the corresponding values are [0,4,4].
> 3. A reader on a client that is not in the topology, in the same IDC but in 
> none of its racks, calculates weights between itself and [hostname1, 
> hostname3, hostname4] via #getWeightUsingNetworkLocation; the corresponding 
> values are [2,2,2].
> 4. Other readers get similar results.
> The weights in case #3 are clearly not the expected values; the truth is 
> [4,4,4]. This issue can cause readers not to follow the intended order: 
> local -> local rack -> remote rack.
> After digging into the implementation, the root cause is that 
> #getWeightUsingNetworkLocation only calculates the distance between racks 
> rather than between hosts.
> I think we should add the constant 2 to correct the weight returned by 
> #getWeightUsingNetworkLocation.
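To make the arithmetic above concrete, here is a minimal sketch (illustrative 
only, not the actual NetworkTopology code): with host-level information the 
weights are 0 (local host), 2 (same rack), 4 (off rack); when only the reader's 
network location is known, the rack-level distance needs the proposed "+2" 
shift so that an off-rack host still weighs 4.
{code:java}
/** Illustrative sketch only; names and signatures are assumptions. */
class TopologyWeightSketch {
  // Host-level weights: 0 = same host, 2 = same rack, 4 = off rack.
  static int getWeight(String readerHost, String readerRack,
                       String nodeHost, String nodeRack) {
    if (readerHost.equals(nodeHost)) {
      return 0;
    }
    return readerRack.equals(nodeRack) ? 2 : 4;
  }

  // Only the reader's rack is known: the raw rack distance is 0 or 2, so
  // the proposed "+ 2" lifts it onto the host-level scale (2 or 4).
  static int getWeightUsingNetworkLocation(String readerRack, String nodeRack) {
    int rackDistance = readerRack.equals(nodeRack) ? 0 : 2;
    return rackDistance + 2;
  }
}
{code}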






[jira] [Updated] (HADOOP-16161) NetworkTopology#getWeightUsingNetworkLocation return unexpected result

2019-03-06 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HADOOP-16161:
-
Attachment: HADOOP-16161.003.patch

> NetworkTopology#getWeightUsingNetworkLocation return unexpected result
> --
>
> Key: HADOOP-16161
> URL: https://issues.apache.org/jira/browse/HADOOP-16161
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-16161.001.patch, HADOOP-16161.002.patch, 
> HADOOP-16161.003.patch
>
>
> Consider the following scenario:
> 1. There are 4 slaves with a topology like:
> Rack: /IDC/RACK1
>    hostname1
>    hostname2
> Rack: /IDC/RACK2
>    hostname3
>    hostname4
> 2. A reader on hostname1 calculates weights between itself and [hostname1, 
> hostname3, hostname4] via #getWeight; the corresponding values are [0,4,4].
> 3. A reader on a client that is not in the topology, in the same IDC but in 
> none of its racks, calculates weights between itself and [hostname1, 
> hostname3, hostname4] via #getWeightUsingNetworkLocation; the corresponding 
> values are [2,2,2].
> 4. Other readers get similar results.
> The weights in case #3 are clearly not the expected values; the truth is 
> [4,4,4]. This issue can cause readers not to follow the intended order: 
> local -> local rack -> remote rack.
> After digging into the implementation, the root cause is that 
> #getWeightUsingNetworkLocation only calculates the distance between racks 
> rather than between hosts.
> I think we should add the constant 2 to correct the weight returned by 
> #getWeightUsingNetworkLocation.






[jira] [Updated] (HADOOP-13577) Download page must link to https://www.apache.org/dist/ for KEYS, sigs, hashes

2019-03-06 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13577:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Fixed. 
https://github.com/apache/hadoop-site/commit/a2f61eebff79c507f3b1b2c38ef3bef142717dc4

> Download page must link to https://www.apache.org/dist/ for KEYS, sigs, hashes
> --
>
> Key: HADOOP-13577
> URL: https://issues.apache.org/jira/browse/HADOOP-13577
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: site
> Environment: http://hadoop.apache.org/releases.html
>Reporter: Sebb
>Assignee: Akira Ajisaka
>Priority: Major
>
> The download page currently points to 
> https://dist.apache.org/repos/dist/release/hadoop/...
> for KEYS, sigs and hashes.
> However, the dist SVN tree is not designed for this; such files must be 
> downloaded from the ASF mirrors, i.e.
> https://www.apache.org/dist/hadoop/...
> Could you please adjust the links accordingly?
> The links should use HTTPS.






[GitHub] [hadoop] asfgit closed pull request #559: SUBMARINE-41:Fix ASF warning in submarine

2019-03-06 Thread GitBox
asfgit closed pull request #559: SUBMARINE-41:Fix ASF warning in submarine
URL: https://github.com/apache/hadoop/pull/559
 
 
   





[jira] [Updated] (HADOOP-16173) Apply some link checker to hadoop-site and fix dead links

2019-03-06 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16173:
---
Description: Reported by [~busbey]. 
https://github.com/apache/hadoop-site/pull/4#discussion_r262514326

> Apply some link checker to hadoop-site and fix dead links
> -
>
> Key: HADOOP-16173
> URL: https://issues.apache.org/jira/browse/HADOOP-16173
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Priority: Major
>
> Reported by [~busbey]. 
> https://github.com/apache/hadoop-site/pull/4#discussion_r262514326






[jira] [Created] (HADOOP-16173) Apply some link checker to hadoop-site and fix dead links

2019-03-06 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-16173:
--

 Summary: Apply some link checker to hadoop-site and fix dead links
 Key: HADOOP-16173
 URL: https://issues.apache.org/jira/browse/HADOOP-16173
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira Ajisaka









[jira] [Commented] (HADOOP-16167) "hadoop CLASSFILE" prints error messages on Ubuntu 18

2019-03-06 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786346#comment-16786346
 ] 

Daniel Templeton commented on HADOOP-16167:
---

Good point, [~eyang].  I think [~aw] was aiming at having consistency in the 
scripts for how substitution is done, but in the specific cases where we're 
having issues, that's a super easy fix.  [~aw], any comments?

> "hadoop CLASSFILE" prints error messages on Ubuntu 18
> -
>
> Key: HADOOP-16167
> URL: https://issues.apache.org/jira/browse/HADOOP-16167
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.2.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HADOOP-16167.001.patch
>
>
> {noformat}
> # hadoop org.apache.hadoop.conf.Configuration
> /usr/lib/hadoop/bin/../lib/hadoop/libexec//hadoop-functions.sh: line 2366: 
> HADOOP_ORG.APACHE.HADOOP.CONF.CONFIGURATION_USER: bad substitution
> /usr/lib/hadoop/bin/../lib/hadoop/libexec//hadoop-functions.sh: line 2331: 
> HADOOP_ORG.APACHE.HADOOP.CONF.CONFIGURATION_USER: bad substitution
> /usr/lib/hadoop/bin/../lib/hadoop/libexec//hadoop-functions.sh: line 2426: 
> HADOOP_ORG.APACHE.HADOOP.CONF.CONFIGURATION_OPTS: bad substitution
> {noformat}
> The issue is a regression in bash 4.4.  See 
> [here|http://savannah.gnu.org/support/?109649].  The extraneous output can 
> break scripts that read the command output.
> According to [~aw]:
> {quote}Oh, I think I see the bug.  HADOOP_SUBCMD (and equivalents in yarn, 
> hdfs, etc) just needs some special handling when a custom method is being 
> called.  For example, there’s no point in checking to see if it should run 
> with privileges, so just skip over that.  Probably a few other places too.  
> Relatively easy fix.  2 lines of code, maybe.{quote}






[GitHub] [hadoop] hanishakoneru merged pull request #557: HDDS-1175. Serve read requests directly from RocksDB.

2019-03-06 Thread GitBox
hanishakoneru merged pull request #557: HDDS-1175. Serve read requests directly 
from RocksDB.
URL: https://github.com/apache/hadoop/pull/557
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #560: HDDS-1226. ozone-filesystem jar missing in hadoop classpath

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #560: HDDS-1226. ozone-filesystem jar missing 
in hadoop classpath
URL: https://github.com/apache/hadoop/pull/560#issuecomment-470372635
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 1 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 69 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1033 | trunk passed |
   | +1 | compile | 961 | trunk passed |
   | -1 | mvnsite | 112 | hadoop-ozone in trunk failed. |
   | +1 | shadedclient | 654 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | -1 | mvninstall | 108 | hadoop-ozone in the patch failed. |
   | -1 | mvninstall | 14 | dist in the patch failed. |
   | -1 | mvninstall | 11 | ozonefs-lib in the patch failed. |
   | +1 | compile | 905 | the patch passed |
   | +1 | javac | 905 | the patch passed |
   | -1 | mvnsite | 94 | hadoop-ozone in the patch failed. |
   | -1 | mvnsite | 30 | ozonefs-lib in the patch failed. |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 38 | There were no new shelldocs issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 4 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 615 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 27 | ozonefs-lib in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 33 | docs in the patch passed. |
   | -1 | unit | 110 | hadoop-ozone in the patch failed. |
   | +1 | unit | 35 | dist in the patch passed. |
   | -1 | unit | 28 | ozonefs-lib in the patch failed. |
   | +1 | unit | 32 | ozonefs-lib-current in the patch passed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 5847 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-560/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/560 |
   | Optional Tests |  dupname  asflicense  mvnsite  shellcheck  shelldocs  
compile  javac  javadoc  mvninstall  unit  shadedclient  xml  yamllint  |
   | uname | Linux 320e3c4e668a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a55fc36 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-560/2/artifact/out/branch-mvnsite-hadoop-ozone.txt
 |
   | shellcheck | v0.4.6 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-560/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-560/2/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-560/2/artifact/out/patch-mvninstall-hadoop-ozone_ozonefs-lib.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-560/2/artifact/out/patch-mvnsite-hadoop-ozone.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-560/2/artifact/out/patch-mvnsite-hadoop-ozone_ozonefs-lib.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-560/2/artifact/out/patch-javadoc-hadoop-ozone_ozonefs-lib.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-560/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-560/2/artifact/out/patch-unit-hadoop-ozone_ozonefs-lib.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-560/2/testReport/ |
   | Max. process+thread count | 469 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/docs hadoop-ozone hadoop-ozone/dist 
hadoop-ozone/ozonefs-lib hadoop-ozone/ozonefs-lib-current U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-560/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Commented] (HADOOP-16169) ABFS: Bug fix for getPathProperties

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786331#comment-16786331
 ] 

Hadoop QA commented on HADOOP-16169:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 37m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16169 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961483/HADOOP-16169-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6a92c5762a46 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a55fc36 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16027/testReport/ |
| Max. process+thread count | 342 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16027/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> ABFS: Bug fix for getPathProperties
> ---
>
> Key: HADOOP-16169
> URL: https://issues.apache.org/jira/browse/HADOOP-16169

[GitHub] [hadoop] swagle commented on a change in pull request #558: HDDS-1217. Refactor ChillMode rules and chillmode manager.

2019-03-06 Thread GitBox
swagle commented on a change in pull request #558: HDDS-1217. Refactor 
ChillMode rules and chillmode manager.
URL: https://github.com/apache/hadoop/pull/558#discussion_r263220972
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ContainerChillModeRule.java
 ##
 @@ -46,14 +46,20 @@
   private double maxContainer;
 
   private AtomicLong containerWithMinReplicas = new AtomicLong(0);
-  private final SCMChillModeManager chillModeManager;
 
-  public ContainerChillModeRule(Configuration conf,
+  public ContainerChillModeRule(String ruleName, EventQueue eventQueue,
+  Configuration conf,
   List containers, SCMChillModeManager manager) {
+super(manager, ruleName);
 chillModeCutoff = conf.getDouble(
 HddsConfigKeys.HDDS_SCM_CHILLMODE_THRESHOLD_PCT,
 HddsConfigKeys.HDDS_SCM_CHILLMODE_THRESHOLD_PCT_DEFAULT);
-chillModeManager = manager;
+
+if (chillModeCutoff > 1.0 || chillModeCutoff < 0.0) {
 
 Review comment:
   Food for thought: Guava preconditions might improve readability here.
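   A sketch of the bounds check above rewritten with Guava preconditions, as 
suggested (illustrative only, not the committed code):

```java
import com.google.common.base.Preconditions;

class CutoffCheck {
  // Same bounds check as in the diff above, expressed as a precondition:
  // checkArgument throws IllegalArgumentException with the given message.
  static double validateCutoff(double chillModeCutoff) {
    Preconditions.checkArgument(
        chillModeCutoff >= 0.0 && chillModeCutoff <= 1.0,
        "chill mode cutoff must be in [0.0, 1.0], got %s", chillModeCutoff);
    return chillModeCutoff;
  }
}
```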





[GitHub] [hadoop] swagle commented on a change in pull request #558: HDDS-1217. Refactor ChillMode rules and chillmode manager.

2019-03-06 Thread GitBox
swagle commented on a change in pull request #558: HDDS-1217. Refactor 
ChillMode rules and chillmode manager.
URL: https://github.com/apache/hadoop/pull/558#discussion_r263219972
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ChillModeExitRule.java
 ##
 @@ -17,16 +17,79 @@
  */
 package org.apache.hadoop.hdds.scm.chillmode;
 
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+
 /**
- * Interface for defining chill mode exit rules.
+ * Abstract class for ChillModeExitRules. When a new rule is added, the new
+ * rule should extend this abstract class.
+ *
+ * Each rule Should do:
+ * 1. Should add a handler for the event it is looking for during the
+ * initialization of the rule.
+ * 2. Add the rule in ScmChillModeManager to list of the rules.
+ *
  *
  * @param 
  */
-public interface ChillModeExitRule {
 
 Review comment:
   Just food for thought: you could still keep this as an interface and 
provide a default implementation for onMessage. Java 8 default methods avoid 
introducing inheritance purely to share a default implementation.
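   A sketch of that alternative, with simplified method signatures assumed 
purely for illustration:

```java
import org.apache.hadoop.hdds.server.events.EventHandler;
import org.apache.hadoop.hdds.server.events.EventPublisher;

// Keep the exit rule as an interface; onMessage gets a Java 8 default
// implementation so each concrete rule only supplies its own logic.
// validate() and process() are assumed here for illustration.
public interface ChillModeExitRuleSketch<T> extends EventHandler<T> {
  boolean validate();

  void process(T report);

  @Override
  default void onMessage(T report, EventPublisher publisher) {
    process(report);  // shared default behavior for every rule
  }
}
```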





[GitHub] [hadoop] hadoop-yetus commented on issue #527: HDDS-1093. Configuration tab in OM/SCM ui is not displaying the correct values

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #527: HDDS-1093. Configuration tab in OM/SCM ui 
is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#issuecomment-470357905
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 89 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 90 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1211 | trunk passed |
   | +1 | compile | 1048 | trunk passed |
   | +1 | checkstyle | 221 | trunk passed |
   | +1 | mvnsite | 440 | trunk passed |
   | +1 | shadedclient | 738 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 291 | trunk passed |
   | +1 | javadoc | 234 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | -1 | mvninstall | 20 | dist in the patch failed. |
   | -1 | jshint | 195 | The patch generated 3087 new + 0 unchanged - 0 fixed = 
3087 total (was 0) |
   | +1 | compile | 957 | the patch passed |
   | +1 | javac | 957 | the patch passed |
   | +1 | checkstyle | 220 | the patch passed |
   | +1 | mvnsite | 274 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 30 | There were no new shelldocs issues. |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 694 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 323 | the patch passed |
   | +1 | javadoc | 231 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 547 | hadoop-common in the patch failed. |
   | +1 | unit | 113 | common in the patch passed. |
   | +1 | unit | 49 | framework in the patch passed. |
   | +1 | unit | 143 | server-scm in the patch passed. |
   | +1 | unit | 35 | dist in the patch passed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 8812 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.util.TestReadWriteDiskValidator |
   |   | hadoop.util.TestBasicDiskValidator |
   |   | hadoop.util.TestDiskCheckerWithDiskIo |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/527 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  jshint  shellcheck  
shelldocs  |
   | uname | Linux e5b0179c4a92 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 618e009 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/6/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | jshint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/6/artifact/out/diff-patch-jshint.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/6/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/6/testReport/ |
   | Max. process+thread count | 1396 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-hdds/common 
hadoop-hdds/framework hadoop-hdds/server-scm hadoop-ozone/dist U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/6/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #560: HDDS-1226. ozone-filesystem jar missing in hadoop classpath

2019-03-06 Thread GitBox
bharatviswa504 commented on a change in pull request #560: HDDS-1226. 
ozone-filesystem jar missing in hadoop classpath
URL: https://github.com/apache/hadoop/pull/560#discussion_r263211970
 
 

 ##
 File path: hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
 ##
 @@ -110,7 +110,6 @@ run cp 
"${ROOT}/hadoop-ozone/common/src/main/bin/stop-ozone.sh" "sbin/"
 run mkdir -p "./share/hadoop/ozoneplugin"
 run cp 
"${ROOT}/hadoop-ozone/objectstore-service/target/hadoop-ozone-objectstore-service-${HDDS_VERSION}-plugin.jar"
 "./share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin-${HDDS_VERSION}.jar"
 
 
 Review comment:
   We can open a new Jira to remove share/hadoop/ozoneplugin also?





[GitHub] [hadoop] hadoop-yetus commented on issue #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #561: HDDS-1043. Enable token based 
authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#issuecomment-470352560
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1016 | trunk passed |
   | +1 | compile | 968 | trunk passed |
   | +1 | checkstyle | 199 | trunk passed |
   | -1 | mvnsite | 41 | integration-test in trunk failed. |
   | -1 | mvnsite | 37 | ozone-manager in trunk failed. |
   | +1 | shadedclient | 655 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | -1 | findbugs | 34 | ozone-manager in trunk failed. |
   | +1 | javadoc | 217 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | -1 | mvninstall | 19 | dist in the patch failed. |
   | -1 | mvninstall | 27 | integration-test in the patch failed. |
   | +1 | compile | 967 | the patch passed |
   | +1 | cc | 967 | the patch passed |
   | +1 | javac | 967 | the patch passed |
   | +1 | checkstyle | 237 | the patch passed |
   | -1 | mvnsite | 41 | integration-test in the patch failed. |
   | +1 | shellcheck | 1 | There were no new shellcheck issues. |
   | +1 | shelldocs | 30 | There were no new shelldocs issues. |
   | -1 | whitespace | 4 | The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | -1 | whitespace | 5 | The patch has 19851 line(s) with tabs. |
   | +1 | shadedclient | 796 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | +1 | findbugs | 273 | the patch passed |
   | +1 | javadoc | 201 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 85 | common in the patch passed. |
   | +1 | unit | 45 | common in the patch passed. |
   | +1 | unit | 31 | dist in the patch passed. |
   | -1 | unit | 37 | integration-test in the patch failed. |
   | +1 | unit | 47 | ozone-manager in the patch passed. |
   | +1 | unit | 46 | s3gateway in the patch passed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 7266 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/561 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  yamllint  shellcheck  
shelldocs  |
   | uname | Linux 28089b4aa12f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 45f976f |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/7/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/7/artifact/out/branch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
   | shellcheck | v0.4.6 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/7/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/7/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/7/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/7/artifact/out/patch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/7/artifact/out/whitespace-eol.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/7/artifact/out/whitespace-tabs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/7/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/7/testReport/ |
   | Max. process+t

[jira] [Updated] (HADOOP-16169) ABFS: Bug fix for getPathProperties

2019-03-06 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-16169:
-
Description: 
There is a bug in AbfsClient, getPathProperties().
For both xns account and non-xns account, it should use 
AbfsRestOperationType.GetPathStatus.

  was:
There is a bug in AbfsClient, getPathProperties().
For both xns account and non-xns account, it should use 
AbfsRestOperationType.GetPathStatus 


> ABFS: Bug fix for getPathProperties
> ---
>
> Key: HADOOP-16169
> URL: https://issues.apache.org/jira/browse/HADOOP-16169
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-16169-001.patch
>
>
> There is a bug in AbfsClient, getPathProperties().
> For both xns account and non-xns account, it should use 
> AbfsRestOperationType.GetPathStatus.






[jira] [Updated] (HADOOP-16169) ABFS: Bug fix for getPathProperties

2019-03-06 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-16169:
-
Status: Patch Available  (was: Open)

> ABFS: Bug fix for getPathProperties
> ---
>
> Key: HADOOP-16169
> URL: https://issues.apache.org/jira/browse/HADOOP-16169
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-16169-001.patch
>
>
> There is a bug in AbfsClient, getPathProperties().
> For both xns account and non-xns account, it should use 
> AbfsRestOperationType.GetPathStatus.






[jira] [Updated] (HADOOP-16169) ABFS: Bug fix for getPathProperties

2019-03-06 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-16169:
-
Attachment: HADOOP-16169-001.patch

> ABFS: Bug fix for getPathProperties
> ---
>
> Key: HADOOP-16169
> URL: https://issues.apache.org/jira/browse/HADOOP-16169
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-16169-001.patch
>
>
> There is a bug in AbfsClient, getPathProperties().
> For both xns account and non-xns account, it should use 
> AbfsRestOperationType.GetPathStatus 






[jira] [Updated] (HADOOP-16169) ABFS: Bug fix for getPathProperties

2019-03-06 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-16169:
-
Attachment: (was: HADOOP-16169-001.patch)

> ABFS: Bug fix for getPathProperties
> ---
>
> Key: HADOOP-16169
> URL: https://issues.apache.org/jira/browse/HADOOP-16169
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
>
> There is a bug in AbfsClient, getPathProperties().
> For both xns account and non-xns account, it should use 
> AbfsRestOperationType.GetPathStatus 






[GitHub] [hadoop] bharatviswa504 merged pull request #527: HDDS-1093. Configuration tab in OM/SCM ui is not displaying the correct values

2019-03-06 Thread GitBox
bharatviswa504 merged pull request #527: HDDS-1093. Configuration tab in OM/SCM 
ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527
 
 
   





[GitHub] [hadoop] bharatviswa504 commented on issue #527: HDDS-1093. Configuration tab in OM/SCM ui is not displaying the correct values

2019-03-06 Thread GitBox
bharatviswa504 commented on issue #527: HDDS-1093. Configuration tab in OM/SCM 
ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#issuecomment-470348685
 
 
   Thank you @vivekratnavel for the fix.
   Will commit this shortly.





[GitHub] [hadoop] hadoop-yetus commented on issue #558: HDDS-1217. Refactor ChillMode rules and chillmode manager.

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #558: HDDS-1217. Refactor ChillMode rules and 
chillmode manager.
URL: https://github.com/apache/hadoop/pull/558#issuecomment-470348167
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 23 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 974 | trunk passed |
   | +1 | compile | 46 | trunk passed |
   | +1 | checkstyle | 19 | trunk passed |
   | +1 | mvnsite | 30 | trunk passed |
   | +1 | shadedclient | 659 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 36 | trunk passed |
   | +1 | javadoc | 21 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 32 | the patch passed |
   | +1 | compile | 23 | the patch passed |
   | +1 | javac | 23 | the patch passed |
   | -0 | checkstyle | 14 | hadoop-hdds/server-scm: The patch generated 2 new + 
0 unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 28 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 690 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 43 | the patch passed |
   | +1 | javadoc | 17 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 87 | server-scm in the patch failed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 2849 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.chillmode.TestSCMChillModeManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-558/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/558 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux bab87144a106 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 618e009 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-558/2/artifact/out/diff-checkstyle-hadoop-hdds_server-scm.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-558/2/artifact/out/patch-unit-hadoop-hdds_server-scm.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-558/2/testReport/ |
   | Max. process+thread count | 467 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-558/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] vivekratnavel commented on issue #549: HDDS-1213. Support plain text S3 MPU initialization request

2019-03-06 Thread GitBox
vivekratnavel commented on issue #549: HDDS-1213. Support plain text S3 MPU 
initialization request
URL: https://github.com/apache/hadoop/pull/549#issuecomment-470348018
 
 
   I am hitting the same error as @bharatviswa504 
   
   @elek Please find attached the log from docker-compose
   
   [HDDS-1213.log](https://github.com/apache/hadoop/files/2939068/HDDS-1213.log)
   





[GitHub] [hadoop] aw-was-here commented on issue #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-06 Thread GitBox
aw-was-here commented on issue #539: HADOOP-16109. Parquet reading 
S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#issuecomment-470348002
 
 
   TL;DR: squash and rebase regularly
   
   > does Yetus perhaps try to test each commit individually? ... [Later Edit] 
It looks like maybe a GitHub issue.
   
   Yup.
   
   Yetus gets the equivalent of 'git format-patch' and then calls 'git apply'  
or 'patch' or whatever works on top of master (or branch-2 or whatever) with 
it. This has several downsides, but most of them go away if the patch branch is 
regularly rebased and commits are squashed.  The latter fixes 99% of the 
issues.  When commits aren't squashed, you'll see weird stuff like what shows 
up here.  (Remember, the top of the other tree is moving too)
   
   It's important to note that git/github really only provides three ways to 
handle PRs:
   * git merge
   * git diff
   * git format-patch
   
   The first one has lots of issues from a Yetus functionality perspective 
since it taints the source tree and doing multiple checkouts has other issues 
(remember: yetus also runs locally!). The second one doesn't work with binary 
files.  That leaves us with the third one, which has lots of weird 
idiosyncrasies but works when good branch hygiene is in play.  YETUS-724 will 
enable test-patch to switch to using the git diff IFF the format-patch version 
of the file can't be applied.
   





[GitHub] [hadoop] hadoop-yetus commented on issue #567: HDDS-1196. Add a ReplicationStartTimer class.

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #567: HDDS-1196. Add a ReplicationStartTimer 
class.
URL: https://github.com/apache/hadoop/pull/567#issuecomment-470347478
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1055 | trunk passed |
   | +1 | compile | 71 | trunk passed |
   | +1 | checkstyle | 27 | trunk passed |
   | +1 | mvnsite | 72 | trunk passed |
   | +1 | shadedclient | 696 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 99 | trunk passed |
   | +1 | javadoc | 53 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 9 | Maven dependency ordering for patch |
   | +1 | mvninstall | 74 | the patch passed |
   | +1 | compile | 63 | the patch passed |
   | +1 | javac | 63 | the patch passed |
   | -0 | checkstyle | 21 | hadoop-hdds: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 58 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 688 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 114 | the patch passed |
   | +1 | javadoc | 54 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 58 | common in the patch failed. |
   | +1 | unit | 104 | server-scm in the patch passed. |
   | +1 | asflicense | 24 | The patch does not generate ASF License warnings. |
   | | | 3458 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-567/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/567 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux fbd33c068d8a 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 618e009 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-567/2/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-567/2/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-567/2/testReport/ |
   | Max. process+thread count | 445 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-567/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16053) Backport HADOOP-14816 to branch-2

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786278#comment-16786278
 ] 

Hadoop QA commented on HADOOP-16053:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} branch-2.9 Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} hadolint {color} | {color:green}  0m  
2s{color} | {color:green} The patch generated 0 new + 1 unchanged - 14 fixed = 
1 total (was 15) {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
10s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue}  0m 
11s{color} | {color:blue} ASF License check generated no output? {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  8m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:af069b6 |
| JIRA Issue | HADOOP-16053 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961479/HADOOP-16053-branch-2.9-01.patch
 |
| Optional Tests |  dupname  asflicense  hadolint  shellcheck  shelldocs  |
| uname | Linux 925ceaccdf67 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.9 / fc76e98 |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
| Max. process+thread count | 32 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16026/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Backport HADOOP-14816 to branch-2
> -
>
> Key: HADOOP-16053
> URL: https://issues.apache.org/jira/browse/HADOOP-16053
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 2.10.0
>
> Attachments: HADOOP-16053-branch-2-01.patch, 
> HADOOP-16053-branch-2-02.patch, HADOOP-16053-branch-2.9-01.patch
>
>
> Ubuntu Trusty reaches EoL in April 2019; let's upgrade.
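
A hedged sketch of the kind of change being backported (HADOOP-14816 moved 
the precommit Dockerfile off Trusty; the actual base-image target and file 
path in these patches may differ):

{code}
# dev-support/docker/Dockerfile (illustrative diff, not the attached patch)
-FROM ubuntu:trusty
+FROM ubuntu:xenial
{code}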



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16053) Backport HADOOP-14816 to branch-2

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786274#comment-16786274
 ] 

Hadoop QA commented on HADOOP-16053:


(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16026/console in case of 
problems.


> Backport HADOOP-14816 to branch-2
> -
>
> Key: HADOOP-16053
> URL: https://issues.apache.org/jira/browse/HADOOP-16053
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 2.10.0
>
> Attachments: HADOOP-16053-branch-2-01.patch, 
> HADOOP-16053-branch-2-02.patch, HADOOP-16053-branch-2.9-01.patch
>
>
> Ubuntu Trusty reaches EoL in April 2019; let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16053) Backport HADOOP-14816 to branch-2

2019-03-06 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16053:
---
Attachment: HADOOP-16053-branch-2.9-01.patch

> Backport HADOOP-14816 to branch-2
> -
>
> Key: HADOOP-16053
> URL: https://issues.apache.org/jira/browse/HADOOP-16053
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 2.10.0
>
> Attachments: HADOOP-16053-branch-2-01.patch, 
> HADOOP-16053-branch-2-02.patch, HADOOP-16053-branch-2.9-01.patch
>
>
> Ubuntu Trusty reaches EoL in April 2019; let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16053) Backport HADOOP-14816 to branch-2

2019-03-06 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786270#comment-16786270
 ] 

Akira Ajisaka commented on HADOOP-16053:


Attached a patch for branch-2.9.

> Backport HADOOP-14816 to branch-2
> -
>
> Key: HADOOP-16053
> URL: https://issues.apache.org/jira/browse/HADOOP-16053
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 2.10.0
>
> Attachments: HADOOP-16053-branch-2-01.patch, 
> HADOOP-16053-branch-2-02.patch, HADOOP-16053-branch-2.9-01.patch
>
>
> Ubuntu Trusty reaches EoL in April 2019; let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16109) Parquet reading S3AFileSystem causes EOF

2019-03-06 Thread Matt Foley (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780826#comment-16780826
 ] 

Matt Foley edited comment on HADOOP-16109 at 3/7/19 12:53 AM:
--

Hi [~ste...@apache.org], one of my colleagues, Shruti Gumma, has a proposed fix 
which I'll help him post here today.

Edit: Shruti is unable to test on S3, so requests you proceed with your fix.


was (Author: mattf):
Hi [~ste...@apache.org], one of my colleagues, Shruti Gumma, has a proposed fix 
which I'll help him post here today.

> Parquet reading S3AFileSystem causes EOF
> 
>
> Key: HADOOP-16109
> URL: https://issues.apache.org/jira/browse/HADOOP-16109
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.2, 2.8.5, 3.3.0, 3.1.2
>Reporter: Dave Christianson
>Assignee: Steve Loughran
>Priority: Blocker
>
> When using S3AFileSystem to read Parquet files, a specific set of 
> circumstances causes an EOFException that is not thrown when reading the 
> same file from local disk.
> Note this has only been observed under specific circumstances:
>  - when the reader is doing a projection (which causes a seek backwards 
> and puts the filesystem into random mode)
>  - when the file is larger than the readahead buffer size
>  - when the seek behavior of the Parquet reader causes the reader to seek 
> towards the end of the current input stream without reopening, such that 
> the next read on the currently open stream will read past the end of the 
> currently open stream.
> Exception from Parquet reader is as follows:
> {code}
> Caused by: java.io.EOFException: Reached the end of stream with 51 bytes left 
> to read
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:104)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1174)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:805)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
>  at 
> org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.fetchNext(HadoopInputFormatBase.java:206)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.reachedEnd(HadoopInputFormatBase.java:199)
>  at 
> org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:190)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
> The following example program generates the same root behavior (sans 
> finding a Parquet file that happens to trigger this condition) by purposely 
> reading past the already active readahead range on any file >= 1029 bytes 
> in size (a 5-byte read at offset 1024 needs bytes 1024-1028).
> {code:java}
> final Configuration conf = new Configuration();
> conf.set("fs.s3a.readahead.range", "1K");
> conf.set("fs.s3a.experimental.input.fadvise", "random");
> final FileSystem fs = FileSystem.get(path.toUri(), conf);
> // forward seek reading across readahead boundary
> try (FSDataInputStream in = fs.open(path)) {
>   final byte[] temp = new byte[5];
>   in.readByte();
>   in.readFully(1023, temp); // <-- works
> }
> // forward seek reading from end of readahead boundary
> try (FSDataInputStream in = fs.open(path)) {
>   final byte[] temp = new byte[5];
>   in.readByte();
>   in.readFully(1024, temp); // <-- throws EOFException
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16172) Update apache/hadoop:3 to 3.2.0 release

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786261#comment-16786261
 ] 

Hadoop QA commented on HADOOP-16172:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  5m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
1s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} docker-hadoop-3 Compile Tests {color} ||
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 
Image:yetus/hadoop:date2019-03-07 |
| JIRA Issue | HADOOP-16172 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961473/HADOOP-16172-docker-hadoop-3.01.patch
 |
| Optional Tests |  dupname  asflicense  hadolint  shellcheck  shelldocs  |
| uname | Linux 0d958663296d 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | docker-hadoop-3 / 544ee8e |
| maven | version: Apache Maven 3.3.9 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16025/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Update apache/hadoop:3 to 3.2.0 release
> ---
>
> Key: HADOOP-16172
> URL: https://issues.apache.org/jira/browse/HADOOP-16172
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-16172-docker-hadoop-3.01.patch
>
>
> This ticket is opened to update apache/hadoop:3 from the 3.1.1 to the 3.2.0 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #567: HDDS-1196. Add a ReplicationStartTimer class.

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #567: HDDS-1196. Add a ReplicationStartTimer 
class.
URL: https://github.com/apache/hadoop/pull/567#issuecomment-470338100
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1027 | trunk passed |
   | +1 | compile | 84 | trunk passed |
   | +1 | checkstyle | 32 | trunk passed |
   | +1 | mvnsite | 81 | trunk passed |
   | +1 | shadedclient | 772 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 113 | trunk passed |
   | +1 | javadoc | 66 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for patch |
   | +1 | mvninstall | 75 | the patch passed |
   | +1 | compile | 69 | the patch passed |
   | +1 | javac | 69 | the patch passed |
   | -0 | checkstyle | 25 | hadoop-hdds: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 63 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 759 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 115 | the patch passed |
   | +1 | javadoc | 47 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 60 | common in the patch failed. |
   | +1 | unit | 103 | server-scm in the patch passed. |
   | +1 | asflicense | 24 | The patch does not generate ASF License warnings. |
   | | | 3608 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-567/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/567 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux e166862f4f8d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 618e009 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-567/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-567/1/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-567/1/testReport/ |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-567/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #558: HDDS-1217. Refactor ChillMode rules and chillmode manager.

2019-03-06 Thread GitBox
bharatviswa504 commented on issue #558: HDDS-1217. Refactor ChillMode rules and 
chillmode manager.
URL: https://github.com/apache/hadoop/pull/558#issuecomment-470338041
 
 
   Except for the protected checkstyle warning, all remaining review comments 
have been addressed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16172) Update apache/hadoop:3 to 3.2.0 release

2019-03-06 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-16172:

Status: Patch Available  (was: Open)

> Update apache/hadoop:3 to 3.2.0 release
> ---
>
> Key: HADOOP-16172
> URL: https://issues.apache.org/jira/browse/HADOOP-16172
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-16172-docker-hadoop-3.01.patch
>
>
> This ticket is opened to update apache/hadoop:3 from the 3.1.1 to the 3.2.0 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16172) Update apache/hadoop:3 to 3.2.0 release

2019-03-06 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-16172:

Attachment: HADOOP-16172-docker-hadoop-3.01.patch

> Update apache/hadoop:3 to 3.2.0 release
> ---
>
> Key: HADOOP-16172
> URL: https://issues.apache.org/jira/browse/HADOOP-16172
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-16172-docker-hadoop-3.01.patch
>
>
> This ticket is opened to update apache/hadoop:3 from the 3.1.1 to the 3.2.0 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on issue #557: HDDS-1175. Serve read requests directly from RocksDB.

2019-03-06 Thread GitBox
hanishakoneru commented on issue #557: HDDS-1175. Serve read requests directly 
from RocksDB.
URL: https://github.com/apache/hadoop/pull/557#issuecomment-470337074
 
 
   The test failures are unrelated and pass locally. I will merge the PR 
shortly.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16172) Update apache/hadoop:3 to 3.2.0 release

2019-03-06 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-16172:
---

 Summary: Update apache/hadoop:3 to 3.2.0 release
 Key: HADOOP-16172
 URL: https://issues.apache.org/jira/browse/HADOOP-16172
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This ticket is opened to update apache/hadoop:3 from the 3.1.1 to the 3.2.0 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
hadoop-yetus commented on a change in pull request #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263197868
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure.robot
 ##
 @@ -107,5 +138,16 @@ Run ozoneFS tests
 Execute   ls -l GET.txt
 ${rc}  ${result} =  Run And Return Rc And Outputozone fs -ls 
o3fs://abcde.pqrs/
 Should Be Equal As Integers ${rc}1
-Should contain${result} VOLUME_NOT_FOUND
+Should contain${result} not found
+
+
+Secure S3 test Failure
+Run Keyword Install aws cli
+${rc}  ${result} =  Run And Return Rc And Output  aws s3api --endpoint-url 
${ENDPOINT_URL} create-bucket --bucket bucket-test123
 
 Review comment:
   whitespace:tabs in line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
hadoop-yetus commented on a change in pull request #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263197870
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure.robot
 ##
 @@ -16,14 +16,42 @@
 *** Settings ***
 Documentation   Smoke test to start cluster with docker-compose 
environments.
 Library OperatingSystem
+Library String
 Resource../commonlib.robot
 
+*** Variables ***
+${ENDPOINT_URL}   http://s3g:9878
+
+*** Keywords ***
+Install aws cli s3 centos
+Executesudo yum install -y awscli
+Install aws cli s3 debian
+Executesudo apt-get install -y awscli
+
+Install aws cli
+${rc}  ${output} = Run And Return Rc And 
Output   which apt-get
+Run Keyword if '${rc}' == '0'  Install aws cli s3 debian
+${rc}  ${output} = Run And Return Rc And 
Output   yum --help
+Run Keyword if '${rc}' == '0'  Install aws cli s3 centos
+
+Setup credentials
+${hostname}=Executehostname
+Execute kinit -k testuser/${hostname}@EXAMPLE.COM -t 
/etc/security/keytabs/testuser.keytab
+${result} = Executeozone sh s3 getsecret
+${accessKey} =  Get Regexp Matches ${result} 
(?<=awsAccessKey=).*
 
 Review comment:
   whitespace:tabs in line
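   
   As a hedged illustration of where these keywords lead (not part of the 
diff itself): after parsing `ozone sh s3 getsecret`, the credentials would 
typically be wired into the aws CLI roughly like so:
   
   ```bash
   # ${accessKey} and ${secret} are the values the keyword above parses
   # out of 'ozone sh s3 getsecret'.
   aws configure set aws_access_key_id "${accessKey}"
   aws configure set aws_secret_access_key "${secret}"
   # Then s3api calls against the gateway authenticate, e.g.:
   aws s3api --endpoint-url http://s3g:9878 create-bucket --bucket bucket-test123
   ```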
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16169) ABFS: Bug fix for getPathProperties

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786249#comment-16786249
 ] 

Hadoop QA commented on HADOOP-16169:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
14s{color} | {color:red} The patch 19849 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
77m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
18s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16169 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961461/HADOOP-16169-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e8f86a043209 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 45f976f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16024/artifact/out/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16024/artifact/out/whitespace-tabs.txt
 |

[GitHub] [hadoop] hadoop-yetus commented on issue #547: HDDS-594. SCM CA: DN sends CSR and uses certificate issued by SCM.

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #547: HDDS-594. SCM CA: DN sends CSR and uses 
certificate issued by SCM.
URL: https://github.com/apache/hadoop/pull/547#issuecomment-470333886
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for branch |
   | +1 | mvninstall | 993 | trunk passed |
   | +1 | compile | 73 | trunk passed |
   | +1 | checkstyle | 31 | trunk passed |
   | +1 | mvnsite | 79 | trunk passed |
   | +1 | shadedclient | 723 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 122 | trunk passed |
   | +1 | javadoc | 67 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for patch |
   | +1 | mvninstall | 78 | the patch passed |
   | +1 | compile | 70 | the patch passed |
   | +1 | javac | 70 | the patch passed |
   | -0 | checkstyle | 26 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 68 | the patch passed |
   | -1 | whitespace | 0 | The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | -1 | whitespace | 1 | The patch 19849  line(s) with tabs. |
   | +1 | shadedclient | 893 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 128 | the patch passed |
   | +1 | javadoc | 63 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 61 | common in the patch failed. |
   | -1 | unit | 52 | container-service in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 3684 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/547 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 932fd68b209e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 45f976f |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/12/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/12/artifact/out/whitespace-eol.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/12/artifact/out/whitespace-tabs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/12/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/12/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/12/testReport/ |
   | Max. process+thread count | 410 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service U: 
hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/12/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on issue #562: HDDS-1225. Provide docker-compose for OM HA.

2019-03-06 Thread GitBox
hanishakoneru commented on issue #562: HDDS-1225. Provide docker-compose for OM 
HA.
URL: https://github.com/apache/hadoop/pull/562#issuecomment-470333286
 
 
   The unit test failure is unrelated and fails on trunk too. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on issue #547: HDDS-594. SCM CA: DN sends CSR and uses certificate issued by SCM.

2019-03-06 Thread GitBox
xiaoyuyao commented on issue #547: HDDS-594. SCM CA: DN sends CSR and uses 
certificate issued by SCM.
URL: https://github.com/apache/hadoop/pull/547#issuecomment-470332125
 
 
   +1 pending Jenkins. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16169) ABFS: Bug fix for getPathProperties

2019-03-06 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-16169:
-
Status: Open  (was: Patch Available)

> ABFS: Bug fix for getPathProperties
> ---
>
> Key: HADOOP-16169
> URL: https://issues.apache.org/jira/browse/HADOOP-16169
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-16169-001.patch
>
>
> There is a bug in AbfsClient, getPathProperties().
> For both xns and non-xns accounts, it should use 
> AbfsRestOperationType.GetPathStatus.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 opened a new pull request #567: HDDS-1196. Add a ReplicationStartTimer class.

2019-03-06 Thread GitBox
bharatviswa504 opened a new pull request #567: HDDS-1196. Add a 
ReplicationStartTimer class.
URL: https://github.com/apache/hadoop/pull/567
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #539: HADOOP-16109. Parquet reading 
S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#issuecomment-470321465
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 67 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1001 | trunk passed |
   | -1 | compile | 509 | root in trunk failed. |
   | +1 | checkstyle | 168 | trunk passed |
   | +1 | mvnsite | 98 | trunk passed |
   | +1 | shadedclient | 898 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 133 | trunk passed |
   | +1 | javadoc | 69 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | -1 | mvninstall | 27 | hadoop-aws in the patch failed. |
   | -1 | compile | 532 | root in the patch failed. |
   | -1 | javac | 532 | root in the patch failed. |
   | -0 | checkstyle | 176 | root: The patch generated 3 new + 10 unchanged - 0 
fixed = 13 total (was 10) |
   | -1 | mvnsite | 36 | hadoop-aws in the patch failed. |
   | -1 | whitespace | 0 | The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | -1 | whitespace | 1 | The patch 19849  line(s) with tabs. |
   | +1 | shadedclient | 910 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 28 | hadoop-aws in the patch failed. |
   | +1 | javadoc | 83 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 497 | hadoop-common in the patch failed. |
   | -1 | unit | 34 | hadoop-aws in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 5550 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.util.TestReadWriteDiskValidator |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/539 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 3b44191e02d6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 45f976f |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/5/artifact/out/branch-compile-root.txt
 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/5/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/5/artifact/out/patch-compile-root.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/5/artifact/out/patch-compile-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/5/artifact/out/diff-checkstyle-root.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/5/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/5/artifact/out/whitespace-eol.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/5/artifact/out/whitespace-tabs.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/5/artifact/out/patch-findbugs-hadoop-tools_hadoop-aws.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/5/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/5/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/5/testReport/ |
   | Max. process+thread count | 1717 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/5/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[GitHub] [hadoop] ajayydv commented on issue #547: HDDS-594. SCM CA: DN sends CSR and uses certificate issued by SCM.

2019-03-06 Thread GitBox
ajayydv commented on issue #547: HDDS-594. SCM CA: DN sends CSR and uses 
certificate issued by SCM.
URL: https://github.com/apache/hadoop/pull/547#issuecomment-470314298
 
 
   @xiaoyuyao the rebase seems to have pushed a spurious commit; force-reset 
it to the intended commits.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv commented on issue #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
ajayydv commented on issue #561: HDDS-1043. Enable token based authentication 
for S3 api.
URL: https://github.com/apache/hadoop/pull/561#issuecomment-470307739
 
 
   Force-reset to a squashed commit, as the local rebase added spurious 
commits.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #561: HDDS-1043. Enable token based 
authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#issuecomment-470305918
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 7 | https://github.com/apache/hadoop/pull/561 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/561 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/6/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
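   
   When a PR hits this "does not apply to trunk" state, the usual remedy (an 
illustrative sketch, not project policy) is to rebase the branch on current 
trunk, squash fixup commits, and force-push so a fresh, applicable patch is 
generated:
   
   ```bash
   git fetch origin
   git rebase -i origin/trunk     # replay on current trunk; squash fixups
   git push --force-with-lease    # update the PR with the rebased commits
   ```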
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16169) ABFS: Bug fix for getPathProperties

2019-03-06 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786179#comment-16786179
 ] 

Da Zhou commented on HADOOP-16169:
--

All tests passed, using US west account:
non-xns: sharedKey
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0
Tests run: 333, Failures: 1, Errors: 0, Skipped: 207
Tests run: 190, Failures: 0, Errors: 0, Skipped: 15

xns: Oauth
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0
Tests run: 333, Failures: 0, Errors: 0, Skipped: 25
Tests run: 190, Failures: 0, Errors: 0, Skipped: 23

> ABFS: Bug fix for getPathProperties
> ---
>
> Key: HADOOP-16169
> URL: https://issues.apache.org/jira/browse/HADOOP-16169
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-16169-001.patch
>
>
> There is a bug in AbfsClient, getPathProperties().
> For both xns and non-xns accounts, it should use 
> AbfsRestOperationType.GetPathStatus.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16169) ABFS: Bug fix for getPathProperties

2019-03-06 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-16169:
-
Status: Patch Available  (was: Open)

> ABFS: Bug fix for getPathProperties
> ---
>
> Key: HADOOP-16169
> URL: https://issues.apache.org/jira/browse/HADOOP-16169
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-16169-001.patch
>
>
> There is a bug in AbfsClient, getPathProperties().
> For both xns and non-xns accounts, it should use 
> AbfsRestOperationType.GetPathStatus.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16169) ABFS: Bug fix for getPathProperties

2019-03-06 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-16169:
-
Attachment: HADOOP-16169-001.patch

> ABFS: Bug fix for getPathProperties
> ---
>
> Key: HADOOP-16169
> URL: https://issues.apache.org/jira/browse/HADOOP-16169
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-16169-001.patch
>
>
> There is a bug in AbfsClient, getPathProperties().
> For both xns and non-xns accounts, it should use 
> AbfsRestOperationType.GetPathStatus.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #566: HADOOP-15961 Add PathCapabilities to FS and FC to complement StreamCapabilities

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #566: HADOOP-15961 Add PathCapabilities to FS 
and FC to complement StreamCapabilities
URL: https://github.com/apache/hadoop/pull/566#issuecomment-470286377
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 7 | https://github.com/apache/hadoop/pull/566 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/566 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-566/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran opened a new pull request #566: HADOOP-15961 Add PathCapabilities to FS and FC to complement StreamCapabilities

2019-03-06 Thread GitBox
steveloughran opened a new pull request #566: HADOOP-15961 Add PathCapabilities 
to FS and FC to complement StreamCapabilities
URL: https://github.com/apache/hadoop/pull/566
 
 
   Add a PathCapabilities interface to both FileSystem and FileContext to 
declare the capabilities under the path of a filesystem through both the 
FileSystem and FileContext APIs
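   
   For illustration, the interface shape described above would look roughly 
like the following sketch (the exact signature and javadoc are whatever the 
PR defines):
   
   ```java
   import java.io.IOException;
   import org.apache.hadoop.fs.Path;
   
   /** Probe for the capabilities of the store under a given path. */
   public interface PathCapabilities {
     /**
      * Return true iff the filesystem under {@code path} offers the named
      * capability; unknown capability names should map to false rather
      * than raise an exception.
      */
     boolean hasPathCapability(Path path, String capability) throws IOException;
   }
   ```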


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #565: HADOOP-16058 S3A tests to include Terasort

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #565: HADOOP-16058 S3A tests to include 
Terasort 
URL: https://github.com/apache/hadoop/pull/565#issuecomment-470285403
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 5 | https://github.com/apache/hadoop/pull/565 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/565 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-565/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran closed pull request #531: HADOOP-15961 Add PathCapabilities to FS and FC to complement StreamCapabilities

2019-03-06 Thread GitBox
steveloughran closed pull request #531: HADOOP-15961 Add PathCapabilities to FS 
and FC to complement StreamCapabilities
URL: https://github.com/apache/hadoop/pull/531
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran opened a new pull request #565: HADOOP-16058 S3A tests to include Terasort

2019-03-06 Thread GitBox
steveloughran opened a new pull request #565: HADOOP-16058 S3A tests to include 
Terasort 
URL: https://github.com/apache/hadoop/pull/565
 
 
   HADOOP-16058. Add S3A tests to run terasort for the magic and directory 
committers.
   
   Contributed by Steve Loughran.
   
   Contains:
   
   MAPREDUCE-7090. BigMapOutput example doesn't work with paths off cluster fs
   
   MAPREDUCE-7091. Terasort on S3A to switch to new committers
   
   MAPREDUCE-7092. MR examples to work better against cloud stores
   
   Bonus feature: prints the results to show which committers are faster in 
the specific test setup.  As that's a function of latency to the store, 
bandwidth, and job size, it's not at all meaningful, just interesting.
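   
   As a hedged illustration only (bucket name, row count, and jar path are 
placeholders, not taken from this PR), a run of the kind these tests 
automate might look like:
   
   ```bash
   # Generate input, then sort it on S3A using the magic committer.
   hadoop jar hadoop-mapreduce-examples.jar teragen \
     1000000 s3a://example-bucket/terasort/in
   hadoop jar hadoop-mapreduce-examples.jar terasort \
     -Dmapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory \
     -Dfs.s3a.committer.name=magic \
     s3a://example-bucket/terasort/in s3a://example-bucket/terasort/out
   ```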
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #564: HADOOP-13327 Output Stream Specification

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #564: HADOOP-13327 Output Stream Specification
URL: https://github.com/apache/hadoop/pull/564#issuecomment-470284232
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 8 | https://github.com/apache/hadoop/pull/564 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/564 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-564/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] mattf-apache commented on a change in pull request #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-06 Thread GitBox
mattf-apache commented on a change in pull request #539: HADOOP-16109. Parquet 
reading S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#discussion_r263143321
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java
 ##
 @@ -18,31 +18,280 @@
 
 package org.apache.hadoop.fs.contract.s3a;
 
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.contract.AbstractContractSeekTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.S3AInputPolicy;
+import org.apache.hadoop.fs.s3a.S3ATestUtils;
 
+import static com.google.common.base.Preconditions.checkNotNull;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADVISE;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_NORMAL;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_RANDOM;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_SEQUENTIAL;
+import static org.apache.hadoop.fs.s3a.Constants.READAHEAD_RANGE;
+import static 
org.apache.hadoop.fs.s3a.S3ATestConstants.FS_S3A_IMPL_DISABLE_CACHE;
 import static org.apache.hadoop.fs.s3a.S3ATestUtils.maybeEnableS3Guard;
 
 /**
  * S3A contract tests covering file seek.
  */
+@RunWith(Parameterized.class)
 public class ITestS3AContractSeek extends AbstractContractSeekTest {
 
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ITestS3AContractSeek.class);
+
+  protected static final int READAHEAD = 1024;
+
+  private final String seekPolicy;
+
+  public static final int DATASET_LEN = READAHEAD * 2;
+
+  public static final byte[] DATASET = ContractTestUtils.dataset(DATASET_LEN, 
'a', 32);
+
+  /**
+   * Test array for parameterized test runs.
+   * @return a list of parameter tuples.
+   */
+  @Parameterized.Parameters
+  public static Collection<Object[]> params() {
+return Arrays.asList(new Object[][]{
+{INPUT_FADV_RANDOM},
+{INPUT_FADV_NORMAL},
+{INPUT_FADV_SEQUENTIAL},
+});
+  }
+
+  public ITestS3AContractSeek(final String seekPolicy) {
+this.seekPolicy = seekPolicy;
+  }
+
   /**
* Create a configuration, possibly patching in S3Guard options.
+   * The FS is set to be uncached and the readahead and seek policies
+   * of the bucket itself are removed, so as to guarantee that the
+   * parameterized and test settings are used.
* @return a configuration
*/
   @Override
   protected Configuration createConfiguration() {
 Configuration conf = super.createConfiguration();
 // patch in S3Guard options
 maybeEnableS3Guard(conf);
+// purge any per-bucket overrides.
+try {
+  URI bucketURI = new 
URI(checkNotNull(conf.get("fs.contract.test.fs.s3a")));
+  S3ATestUtils.removeBucketOverrides(bucketURI.getHost(), conf,
+  READAHEAD_RANGE,
+  INPUT_FADVISE);
+} catch (URISyntaxException e) {
+  throw new RuntimeException(e);
+}
+// the FS is uncached, so will need clearing in test teardowns.
+S3ATestUtils.disableFilesystemCaching(conf);
+conf.setInt(READAHEAD_RANGE, READAHEAD);
+conf.set(INPUT_FADVISE, seekPolicy);
 return conf;
   }
 
   @Override
   protected AbstractFSContract createContract(Configuration conf) {
 return new S3AContract(conf);
   }
+
+  @Override
+  public void teardown() throws Exception {
+S3AFileSystem fs = getFileSystem();
+if (fs.getConf().getBoolean(FS_S3A_IMPL_DISABLE_CACHE, false)) {
+  fs.close();
+}
+super.teardown();
+  }
+
+  /**
+   * This subclass of the {@code path(path)} operation adds the seek policy
+   * to the end to guarantee uniqueness across different calls of the same
+   * method.
+   *
+   * {@inheritDoc}
+   */
+  @Override
+  protected Path path(final String filepath) throws IOException {
+return super.path(filepath + "-" + seekPolicy);
+  }
+
+  /**
+   * Go to the end, read, then seek back to the previous position to force
+   * the normal seek policy to switch to random IO.
+   * This will call readByte to trigger the second GET.
+   * @param in input stream
+   * @return the byte read
+   * @throws IOException failure.
+   */
+  private byte readAtEndAndReturn(final FSDataInputStream in)
+  throws IOException {
+long pos = in.getPos();
+in.seek(DATASET_LEN - 1);
+in.readByte();
+// go back to start and force a new GET
+in.seek(pos);
+return in.readByte();

[GitHub] [hadoop] steveloughran closed pull request #530: HADOOP-16058 S3A tests to include Terasort

2019-03-06 Thread GitBox
steveloughran closed pull request #530: HADOOP-16058 S3A tests to include 
Terasort
URL: https://github.com/apache/hadoop/pull/530
 
 
   





[GitHub] [hadoop] steveloughran opened a new pull request #564: HADOOP-13327 Output Stream Specification

2019-03-06 Thread GitBox
steveloughran opened a new pull request #564: HADOOP-13327 Output Stream 
Specification
URL: https://github.com/apache/hadoop/pull/564
 
 
   





[GitHub] [hadoop] steveloughran closed pull request #163: HADOOP-13227 outputstream specification

2019-03-06 Thread GitBox
steveloughran closed pull request #163: HADOOP-13227 outputstream specification
URL: https://github.com/apache/hadoop/pull/163
 
 
   





[GitHub] [hadoop] steveloughran closed pull request #532: HADOOP-13327: Add OutputStream + Syncable to the Filesystem Specification

2019-03-06 Thread GitBox
steveloughran closed pull request #532: HADOOP-13327: Add OutputStream + 
Syncable to the Filesystem Specification
URL: https://github.com/apache/hadoop/pull/532
 
 
   





[GitHub] [hadoop] steveloughran closed pull request #61: YARN-4430 registry security validation can fail when downgrading to insecure would work

2019-03-06 Thread GitBox
steveloughran closed pull request #61: YARN-4430 registry security validation 
can fail when downgrading to insecure would work
URL: https://github.com/apache/hadoop/pull/61
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #557: HDDS-1175. Serve read requests directly from RocksDB.

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #557: HDDS-1175. Serve read requests directly 
from RocksDB.
URL: https://github.com/apache/hadoop/pull/557#issuecomment-470271423
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 28 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 112 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1085 | trunk passed |
   | -1 | compile | 608 | root in trunk failed. |
   | +1 | checkstyle | 177 | trunk passed |
   | -1 | mvnsite | 44 | integration-test in trunk failed. |
   | +1 | shadedclient | 997 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 174 | trunk passed |
   | +1 | javadoc | 131 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | -1 | mvninstall | 27 | integration-test in the patch failed. |
   | -1 | compile | 524 | root in the patch failed. |
   | -1 | javac | 524 | root in the patch failed. |
   | +1 | checkstyle | 172 | the patch passed |
   | -1 | mvnsite | 34 | integration-test in the patch failed. |
   | -1 | whitespace | 0 | The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | -1 | whitespace | 1 | The patch has 19849 line(s) with tabs. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 630 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 179 | the patch passed |
   | +1 | javadoc | 97 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 74 | common in the patch failed. |
   | +1 | unit | 36 | common in the patch passed. |
   | -1 | unit | 28 | integration-test in the patch failed. |
   | +1 | unit | 34 | ozone-manager in the patch passed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 5763 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/557 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
   | uname | Linux a0e5bebbec85 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2c3ec37 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/4/artifact/out/branch-compile-root.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/4/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/4/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/4/artifact/out/patch-compile-root.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/4/artifact/out/patch-compile-root.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/4/artifact/out/patch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/4/artifact/out/whitespace-eol.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/4/artifact/out/whitespace-tabs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/4/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/4/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/4/testReport/ |
   | Max. process+thread count | 445 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/4/console |
   

[GitHub] [hadoop] hadoop-yetus commented on issue #562: HDDS-1225. Provide docker-compose for OM HA.

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #562: HDDS-1225. Provide docker-compose for OM 
HA.
URL: https://github.com/apache/hadoop/pull/562#issuecomment-470270408
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 20 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1243 | trunk passed |
   | +1 | compile | 66 | trunk passed |
   | +1 | mvnsite | 28 | trunk passed |
   | +1 | shadedclient | 692 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 28 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 19 | dist in the patch failed. |
   | +1 | compile | 19 | the patch passed |
   | +1 | javac | 19 | the patch passed |
   | +1 | mvnsite | 24 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 14 | The patch generated 0 new + 104 unchanged - 136 
fixed = 104 total (was 240) |
   | -1 | whitespace | 4 | The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | -1 | whitespace | 5 | The patch has 19849 line(s) with tabs. |
   | +1 | shadedclient | 1016 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 17 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 24 | dist in the patch passed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 3398 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-562/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/562 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  yamllint  shellcheck  shelldocs  |
   | uname | Linux db61dcbb1e8a 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2c3ec37 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-562/2/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-562/2/artifact/out/whitespace-eol.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-562/2/artifact/out/whitespace-tabs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-562/2/testReport/ |
   | Max. process+thread count | 309 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-562/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] steveloughran commented on issue #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-06 Thread GitBox
steveloughran commented on issue #539: HADOOP-16109. Parquet reading 
S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#issuecomment-470269174
 
 
   Tested: S3 Ireland.
   





[GitHub] [hadoop] steveloughran commented on issue #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-06 Thread GitBox
steveloughran commented on issue #539: HADOOP-16109. Parquet reading 
S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#issuecomment-470267763
 
 
   New patch: addresses the core comments from @mattf-apache, including adding a new 
test case.
   
   Matt, if you ever look at `readByte()`, it is just calling read and checking 
the return value before casting:
   
   ```java
   public final byte readByte() throws IOException {
       int ch = in.read();
       if (ch < 0)
           throw new EOFException();
       return (byte)(ch);
   }
   ```
   
   The first test was effectively doing this already, but in case something ever 
subclasses it in future, I've added the new test case.
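   For reference, a rough sketch of what such a test case can look like. This is a 
hypothetical illustration rather than the actual patch: it assumes the contract-test 
class context above (`DATASET_LEN`, `getFileSystem()`), `testFile` is a placeholder for 
a file of exactly `DATASET_LEN` bytes, and `LambdaTestUtils.intercept` is the usual 
Hadoop test helper for asserting exceptions:
   
   ```java
   // readByte() at EOF must throw EOFException, unlike read(),
   // which signals end-of-stream by returning -1.
   @Test
   public void testReadByteAtEOF() throws Throwable {
     try (FSDataInputStream in = getFileSystem().open(testFile)) {
       in.seek(DATASET_LEN - 1);
       in.readByte();   // the last byte reads fine
       // the next readByte() is past EOF and must fail
       LambdaTestUtils.intercept(EOFException.class, () -> in.readByte());
     }
   }
   ```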





[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786096#comment-16786096
 ] 

Hadoop QA commented on HADOOP-15625:


**-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 0m 18s | Docker mode activated. |
||| _ Prechecks _ |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 2m 4s | Maven dependency ordering for branch |
| +1 | mvninstall | 17m 7s | trunk passed |
| -1 | compile | 8m 19s | root in trunk failed. |
| +1 | checkstyle | 2m 42s | trunk passed |
| +1 | mvnsite | 1m 36s | trunk passed |
| +1 | shadedclient | 15m 21s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 14s | trunk passed |
| +1 | javadoc | 1m 15s | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 0m 23s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 17s | the patch passed |
| -1 | compile | 8m 47s | root in the patch failed. |
| -1 | javac | 8m 47s | root in the patch failed. |
| -0 | checkstyle | 2m 42s | root: The patch generated 4 new + 33 unchanged - 0 fixed = 37 total (was 33) |
| +1 | mvnsite | 1m 36s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 75 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -1 | whitespace | 0m 15s | The patch has 19849 line(s) with tabs. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 9m 57s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 34s | the patch passed |
| +1 | javadoc | 1m 15s | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 8m 28s | hadoop-common in the patch passed. |
| +1 | unit | 4m 38s | hadoop-aws in the patch passed. |
| +1 | asflicense | 0m 33s | The patch does not generate ASF License warnings. |
| | | 165m 22s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15625 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12961422/HADOOP-15625-015.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
| uname | Linux 23963dc8e437 4.4.0-138-generic #164-

[GitHub] [hadoop] ajayydv commented on a change in pull request #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
ajayydv commented on a change in pull request #561: HDDS-1043. Enable token 
based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263114161
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/AWSV4AuthValidator.java
 ##
 @@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.security;
+
+import org.apache.hadoop.util.StringUtils;
+import org.apache.kerby.util.Hex;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.crypto.Mac;
+import javax.crypto.spec.SecretKeySpec;
+import java.io.UnsupportedEncodingException;
+import java.net.URLDecoder;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.security.GeneralSecurityException;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+
+/**
+ * AWS v4 authentication payload validator. For more details refer to AWS
+ * documentation https://docs.aws.amazon.com/general/latest/gr/
+ * sigv4-create-canonical-request.html.
+ **/
+final class AWSV4AuthValidator {
+
+  private final static Logger LOG =
+  LoggerFactory.getLogger(AWSV4AuthValidator.class);
+  private static final String HMAC_SHA256_ALGORITHM = "HmacSHA256";
+  private static final Charset UTF_8 = Charset.forName("utf-8");
+
+  private AWSV4AuthValidator() {
+  }
+
+  private static String urlDecode(String str) {
+try {
+  return URLDecoder.decode(str, UTF_8.name());
+} catch (UnsupportedEncodingException e) {
+  throw new RuntimeException(e);
+}
+  }
+
+  public static String hash(String payload) throws NoSuchAlgorithmException {
+MessageDigest md = MessageDigest.getInstance("SHA-256");
+md.update(payload.getBytes(UTF_8));
+return String.format("%064x", new java.math.BigInteger(1, md.digest()));
+  }
+
+  private static byte[] sign(byte[] key, String msg) {
+try {
+  SecretKeySpec signingKey = new SecretKeySpec(key, HMAC_SHA256_ALGORITHM);
+  Mac mac = Mac.getInstance(HMAC_SHA256_ALGORITHM);
+  mac.init(signingKey);
+  return mac.doFinal(msg.getBytes(StandardCharsets.UTF_8));
+} catch (GeneralSecurityException gse) {
+  throw new RuntimeException(gse);
+}
+  }
+
+  private static byte[] getSignatureKey(String key, String strToSign) {
+String[] signData = StringUtils.split(StringUtils.split(strToSign,
+'\n')[2], '/');
+String dateStamp = signData[0];
+String regionName = signData[1];
+String serviceName = signData[2];
+byte[] kDate = sign(("AWS4" + key).getBytes(UTF_8), dateStamp);
+byte[] kRegion = sign(kDate, regionName);
+byte[] kService = sign(kRegion, serviceName);
+byte[] kSigning = sign(kService, "aws4_request");
+LOG.info(Hex.encode(kSigning));
+return kSigning;
+  }
+
+  /**
+   * Validate request. Returns true if aws request is legit else returns false.
+   */
+  public static boolean validateRequest(String strToSign, String signature,
+  String userKey) {
+String expectedSignature = Hex.encode(sign(getSignatureKey(
 
 Review comment:
   Added javadoc for it; let me know if you feel strongly about having it in a 
separate function, as it is a one-liner.
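   For anyone following the review, a hypothetical example of the inputs this 
validator consumes. The string-to-sign below is illustrative only, laid out per the 
AWS SigV4 spec (algorithm, timestamp, credential scope, hashed canonical request); 
the signature and secret values are placeholders, and a same-package caller is 
assumed since the class is package-private:
   
   ```java
   // getSignatureKey() parses line 3 of the string-to-sign
   // (date/region/service/aws4_request) to derive the signing key.
   String strToSign = "AWS4-HMAC-SHA256\n"
       + "20190306T120000Z\n"
       + "20190306/us-east-1/s3/aws4_request\n"
       + "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";
   String clientSignature = "<hex signature from the Authorization header>"; // placeholder
   String awsSecret = "<secret fetched from S3SecretManager>";               // placeholder
   boolean valid = AWSV4AuthValidator.validateRequest(strToSign, clientSignature, awsSecret);
   ```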





[GitHub] [hadoop] ajayydv commented on a change in pull request #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
ajayydv commented on a change in pull request #561: HDDS-1043. Enable token 
based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263113793
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneServiceProvider.java
 ##
 @@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.s3;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.security.SecurityUtil;
+
+import javax.enterprise.context.ApplicationScoped;
+import javax.enterprise.inject.Produces;
+import javax.inject.Inject;
+import java.util.concurrent.atomic.AtomicReference;
+
+/**
+ * This class creates the OM service.
+ */
+@ApplicationScoped
+public class OzoneServiceProvider {
+
+  private static final AtomicReference<Text> OM_SERVICE_ADD =
+  new AtomicReference<>();
+
+  @Inject
+  private OzoneConfiguration conf;
+
+
+  @Produces
+  public Text getService() {
+if (OM_SERVICE_ADD.get() == null) {
+  OM_SERVICE_ADD.compareAndSet(null,
+  
SecurityUtil.buildTokenService(OmUtils.getOmAddressForClients(conf)));
 
 Review comment:
   We are adding the OM service so that the token selector can select the right token 
for OM. See OzoneDelegationTokenSelector for more details. For OM HA we might have to 
update this; HDDS-1230 is filed to handle that.
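   To make the mechanics concrete, a minimal sketch using the standard `Credentials` 
API (the address and token variables are placeholders): the `Text` produced above is 
the key under which the OM token is stored and later looked up.
   
   ```java
   // The client stores the OM delegation token under the service name;
   // the token selector later resolves it with the same Text key.
   Text omService = SecurityUtil.buildTokenService(omRpcAddress); // omRpcAddress: placeholder
   credentials.addToken(omService, omDelegationToken);            // omDelegationToken: placeholder
   Token<?> selected = credentials.getToken(omService);
   ```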





[GitHub] [hadoop] ajayydv commented on a change in pull request #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
ajayydv commented on a change in pull request #561: HDDS-1043. Enable token 
based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263112688
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSecretManager.java
 ##
 @@ -327,6 +336,37 @@ public boolean verifySignature(OzoneTokenIdentifier 
identifier,
 }
   }
 
+  /**
+   * Validates whether an S3 identifier is valid.
+   */
+  private byte[] validateS3Token(OzoneTokenIdentifier identifier)
+  throws InvalidToken {
+LOG.trace("Validating S3Token for identifier:{}", identifier);
+String awsSecret;
+try {
+  awsSecret = s3SecretManager.getS3UserSecretString(identifier
+  .getAwsAccessId());
+} catch (IOException e) {
+  LOG.error("Error while validating S3 identifier:{}",
+  identifier, e);
+  throw new InvalidToken("No S3 secret found for S3 identifier:"
 
 Review comment:
   Now, if token validation fails, the RPC connection itself will fail and the S3 
gateway will get an error. Error propagation to the client will depend on the S3 
gateway's error handling.





[GitHub] [hadoop] elek closed pull request #490: HDDS-1113. Remove default dependencies from hadoop-ozone project

2019-03-06 Thread GitBox
elek closed pull request #490: HDDS-1113. Remove default dependencies from 
hadoop-ozone project
URL: https://github.com/apache/hadoop/pull/490
 
 
   





[GitHub] [hadoop] ajayydv commented on a change in pull request #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
ajayydv commented on a change in pull request #561: HDDS-1043. Enable token 
based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263112086
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/AWSV4AuthValidator.java
 ##
+  private static byte[] getSignatureKey(String key, String strToSign) {
 
 Review comment:
   done





[GitHub] [hadoop] ajayydv commented on a change in pull request #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
ajayydv commented on a change in pull request #561: HDDS-1043. Enable token 
based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263112043
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/AWSV4AuthValidator.java
 ##
+  /**
+   * Validate request. Returns true if aws request is legit else returns false.
+   */
+  public static boolean validateRequest(String strToSign, String signature,
+  String userKey) {
+String expectedSignature = Hex.encode(sign(getSignatureKey(
+userKey, strToSign), strToSign));
+return expectedSignature.equals(signature);
+  }
 
 Review comment:
   done





[GitHub] [hadoop] elek commented on a change in pull request #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
elek commented on a change in pull request #561: HDDS-1043. Enable token based 
authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263107858
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/VirtualHostStyleFilter.java
 ##
 @@ -65,6 +65,10 @@ public void filter(ContainerRequestContext requestContext) 
throws
 
 authenticationHeaderParser.setAuthHeader(requestContext.getHeaderString(
 HttpHeaders.AUTHORIZATION));
+
 
 Review comment:
   Yes, there were three lines with one comment about a token, and I didn't find the 
token. But I may be missing something.





[GitHub] [hadoop] steveloughran commented on a change in pull request #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-06 Thread GitBox
steveloughran commented on a change in pull request #539: HADOOP-16109. Parquet 
reading S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#discussion_r263106757
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java
 ##

[GitHub] [hadoop] steveloughran commented on a change in pull request #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-06 Thread GitBox
steveloughran commented on a change in pull request #539: HADOOP-16109. Parquet 
reading S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#discussion_r263105317
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java
 ##

[GitHub] [hadoop] steveloughran commented on a change in pull request #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-06 Thread GitBox
steveloughran commented on a change in pull request #539: HADOOP-16109. Parquet 
reading S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#discussion_r263105044
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java
 ##
 @@ -18,31 +18,280 @@
 
 package org.apache.hadoop.fs.contract.s3a;
 
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.contract.AbstractContractSeekTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.S3AInputPolicy;
+import org.apache.hadoop.fs.s3a.S3ATestUtils;
 
+import static com.google.common.base.Preconditions.checkNotNull;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADVISE;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_NORMAL;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_RANDOM;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_SEQUENTIAL;
+import static org.apache.hadoop.fs.s3a.Constants.READAHEAD_RANGE;
+import static 
org.apache.hadoop.fs.s3a.S3ATestConstants.FS_S3A_IMPL_DISABLE_CACHE;
 import static org.apache.hadoop.fs.s3a.S3ATestUtils.maybeEnableS3Guard;
 
 /**
  * S3A contract tests covering file seek.
  */
+@RunWith(Parameterized.class)
 public class ITestS3AContractSeek extends AbstractContractSeekTest {
 
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ITestS3AContractSeek.class);
+
+  protected static final int READAHEAD = 1024;
+
+  private final String seekPolicy;
+
+  public static final int DATASET_LEN = READAHEAD * 2;
+
+  public static final byte[] DATASET = ContractTestUtils.dataset(DATASET_LEN, 'a', 32);
+
+  /**
+   * Test array for parameterized test runs.
+   * @return a list of parameter tuples.
+   */
+  @Parameterized.Parameters
+  public static Collection<Object[]> params() {
+return Arrays.asList(new Object[][]{
+{INPUT_FADV_RANDOM},
+{INPUT_FADV_NORMAL},
+{INPUT_FADV_SEQUENTIAL},
+});
+  }
+
+  public ITestS3AContractSeek(final String seekPolicy) {
+this.seekPolicy = seekPolicy;
+  }
+
   /**
* Create a configuration, possibly patching in S3Guard options.
+   * The FS is set to be uncached and the readahead and seek policies
+   * of the bucket itself are removed, so as to guarantee that the
+   * parameterized and test settings are the ones actually used.
* @return a configuration
*/
   @Override
   protected Configuration createConfiguration() {
 Configuration conf = super.createConfiguration();
 // patch in S3Guard options
 maybeEnableS3Guard(conf);
+// purge any per-bucket overrides.
+try {
+  URI bucketURI = new URI(checkNotNull(conf.get("fs.contract.test.fs.s3a")));
+  S3ATestUtils.removeBucketOverrides(bucketURI.getHost(), conf,
+  READAHEAD_RANGE,
+  INPUT_FADVISE);
+} catch (URISyntaxException e) {
+  throw new RuntimeException(e);
+}
+// the FS is uncached, so will need clearing in test teardowns.
+S3ATestUtils.disableFilesystemCaching(conf);
+conf.setInt(READAHEAD_RANGE, READAHEAD);
+conf.set(INPUT_FADVISE, seekPolicy);
 return conf;
   }
 
   @Override
   protected AbstractFSContract createContract(Configuration conf) {
 return new S3AContract(conf);
   }
+
+  @Override
+  public void teardown() throws Exception {
+S3AFileSystem fs = getFileSystem();
+if (fs.getConf().getBoolean(FS_S3A_IMPL_DISABLE_CACHE, false)) {
+  fs.close();
+}
+super.teardown();
+  }
+
+  /**
+   * This override of the {@code path(path)} operation adds the seek policy
+   * to the end to guarantee uniqueness across different calls of the same
+   * method.
+   *
+   * {@inheritDoc}
+   */
+  @Override
+  protected Path path(final String filepath) throws IOException {
+return super.path(filepath + "-" + seekPolicy);
+  }
+
+  /**
+   * Go to the end, read, then seek back to the previous position to force
+   * the normal seek policy to switch to random IO.
+   * This calls readByte to trigger the second GET.
+   * @param in input stream
+   * @return the byte read
+   * @throws IOException failure.
+   */
+  private byte readAtEndAndReturn(final FSDataInputStream in)
+  throws IOException {
+long pos = in.getPos();
+in.seek(DATASET_LEN - 1);
+in.readByte();
+// go back to the previous position and force a new GET
+in.seek(pos);
+return in.readByte();
+  }
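   For readers new to the pattern, a minimal standalone sketch of the JUnit 4
   parameterized mechanism the class above relies on; plain strings stand in
   for the S3A fadvise constants, and the class name is illustrative only:

   import java.util.Arrays;
   import java.util.Collection;

   import org.junit.Test;
   import org.junit.runner.RunWith;
   import org.junit.runners.Parameterized;

   import static org.junit.Assert.assertNotNull;

   @RunWith(Parameterized.class)
   public class SeekPolicyParameterizedSketch {

     // one full test run per policy; the real test passes the S3A constants
     @Parameterized.Parameters(name = "seek-policy-{0}")
     public static Collection<Object[]> params() {
       return Arrays.asList(new Object[][]{
           {"random"},
           {"normal"},
           {"sequential"},
       });
     }

     private final String seekPolicy;

     public SeekPolicyParameterizedSketch(final String seekPolicy) {
       this.seekPolicy = seekPolicy;
     }

     @Test
     public void policyIsInjected() {
       // each instantiation sees exactly one policy value
       assertNotNull(seekPolicy);
     }
   }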

[GitHub] [hadoop] steveloughran commented on a change in pull request #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-06 Thread GitBox
steveloughran commented on a change in pull request #539: HADOOP-16109. Parquet 
reading S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#discussion_r263104896
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java
 ##
 @@ -18,31 +18,280 @@
 
 package org.apache.hadoop.fs.contract.s3a;
 
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.contract.AbstractContractSeekTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.S3AInputPolicy;
+import org.apache.hadoop.fs.s3a.S3ATestUtils;
 
+import static com.google.common.base.Preconditions.checkNotNull;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADVISE;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_NORMAL;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_RANDOM;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_SEQUENTIAL;
+import static org.apache.hadoop.fs.s3a.Constants.READAHEAD_RANGE;
+import static org.apache.hadoop.fs.s3a.S3ATestConstants.FS_S3A_IMPL_DISABLE_CACHE;
 import static org.apache.hadoop.fs.s3a.S3ATestUtils.maybeEnableS3Guard;
 
 /**
  * S3A contract tests covering file seek.
  */
+@RunWith(Parameterized.class)
 public class ITestS3AContractSeek extends AbstractContractSeekTest {
 
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ITestS3AContractSeek.class);
+
+  protected static final int READAHEAD = 1024;
+
+  private final String seekPolicy;
+
+  public static final int DATASET_LEN = READAHEAD * 2;
+
+  public static final byte[] DATASET = ContractTestUtils.dataset(DATASET_LEN, 'a', 32);
+
+  /**
+   * Test array for parameterized test runs.
+   * @return a list of parameter tuples.
+   */
+  @Parameterized.Parameters
+  public static Collection<Object[]> params() {
+return Arrays.asList(new Object[][]{
+{INPUT_FADV_RANDOM},
+{INPUT_FADV_NORMAL},
+{INPUT_FADV_SEQUENTIAL},
+});
+  }
+
+  public ITestS3AContractSeek(final String seekPolicy) {
+this.seekPolicy = seekPolicy;
+  }
+
   /**
* Create a configuration, possibly patching in S3Guard options.
+   * The FS is set to be uncached and the readahead and seek policies
+   * of the bucket itself are removed, so as to guarantee that the
+   * parameterized and test settings are the ones actually used.
* @return a configuration
*/
   @Override
   protected Configuration createConfiguration() {
 Configuration conf = super.createConfiguration();
 // patch in S3Guard options
 maybeEnableS3Guard(conf);
+// purge any per-bucket overrides.
+try {
+  URI bucketURI = new URI(checkNotNull(conf.get("fs.contract.test.fs.s3a")));
+  S3ATestUtils.removeBucketOverrides(bucketURI.getHost(), conf,
+  READAHEAD_RANGE,
+  INPUT_FADVISE);
+} catch (URISyntaxException e) {
+  throw new RuntimeException(e);
+}
+// the FS is uncached, so will need clearing in test teardowns.
+S3ATestUtils.disableFilesystemCaching(conf);
+conf.setInt(READAHEAD_RANGE, READAHEAD);
+conf.set(INPUT_FADVISE, seekPolicy);
 return conf;
   }
 
   @Override
   protected AbstractFSContract createContract(Configuration conf) {
 return new S3AContract(conf);
   }
+
+  @Override
+  public void teardown() throws Exception {
+S3AFileSystem fs = getFileSystem();
+if (fs.getConf().getBoolean(FS_S3A_IMPL_DISABLE_CACHE, false)) {
+  fs.close();
+}
+super.teardown();
+  }
+
+  /**
+   * This override of the {@code path(path)} operation adds the seek policy
+   * to the end to guarantee uniqueness across different calls of the same
+   * method.
+   *
+   * {@inheritDoc}
+   */
+  @Override
+  protected Path path(final String filepath) throws IOException {
+return super.path(filepath + "-" + seekPolicy);
+  }
+
+  /**
+   * Go to the end, read, then seek back to the previous position to force
+   * the normal seek policy to switch to random IO.
+   * This calls readByte to trigger the second GET.
+   * @param in input stream
+   * @return the byte read
+   * @throws IOException failure.
+   */
+  private byte readAtEndAndReturn(final FSDataInputStream in)
+  throws IOException {
+long pos = in.getPos();
+in.seek(DATASET_LEN - 1);
+in.readByte();
+// go back to the previous position and force a new GET
+in.seek(pos);
+return in.readByte();
+  }
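   To make the second-GET behaviour concrete, a sketch of the same
   seek-to-end-then-seek-back sequence run against the local filesystem, so it
   executes without an S3 bucket; on S3A with fadvise=normal, the backward
   seek is what forces the stream to reopen and switch to random IO:

   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FSDataInputStream;
   import org.apache.hadoop.fs.FSDataOutputStream;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;

   public class BackSeekSketch {
     public static void main(String[] args) throws Exception {
       FileSystem fs = FileSystem.getLocal(new Configuration());
       Path p = new Path("target/back-seek-sketch.bin");
       try (FSDataOutputStream out = fs.create(p, true)) {
         out.write(new byte[2048]);          // same length as DATASET_LEN above
       }
       try (FSDataInputStream in = fs.open(p)) {
         long pos = in.getPos();             // usually 0 here
         in.seek(2048 - 1);                  // jump to the last byte
         in.readByte();                      // first read: first GET on S3A
         in.seek(pos);                       // seek backwards...
         System.out.println(in.readByte());  // ...second read: second GET
       }
     }
   }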

[GitHub] [hadoop] steveloughran commented on a change in pull request #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-06 Thread GitBox
steveloughran commented on a change in pull request #539: HADOOP-16109. Parquet 
reading S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#discussion_r263103916
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java
 ##
 @@ -18,31 +18,280 @@
 
 package org.apache.hadoop.fs.contract.s3a;
 
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.contract.AbstractContractSeekTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.S3AInputPolicy;
+import org.apache.hadoop.fs.s3a.S3ATestUtils;
 
+import static com.google.common.base.Preconditions.checkNotNull;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADVISE;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_NORMAL;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_RANDOM;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_SEQUENTIAL;
+import static org.apache.hadoop.fs.s3a.Constants.READAHEAD_RANGE;
+import static org.apache.hadoop.fs.s3a.S3ATestConstants.FS_S3A_IMPL_DISABLE_CACHE;
 import static org.apache.hadoop.fs.s3a.S3ATestUtils.maybeEnableS3Guard;
 
 /**
  * S3A contract tests covering file seek.
  */
+@RunWith(Parameterized.class)
 public class ITestS3AContractSeek extends AbstractContractSeekTest {
 
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ITestS3AContractSeek.class);
+
+  protected static final int READAHEAD = 1024;
+
+  private final String seekPolicy;
+
+  public static final int DATASET_LEN = READAHEAD * 2;
+
+  public static final byte[] DATASET = ContractTestUtils.dataset(DATASET_LEN, 'a', 32);
+
+  /**
+   * Test array for parameterized test runs.
+   * @return a list of parameter tuples.
+   */
+  @Parameterized.Parameters
+  public static Collection<Object[]> params() {
+return Arrays.asList(new Object[][]{
+{INPUT_FADV_RANDOM},
+{INPUT_FADV_NORMAL},
+{INPUT_FADV_SEQUENTIAL},
+});
+  }
+
+  public ITestS3AContractSeek(final String seekPolicy) {
+this.seekPolicy = seekPolicy;
+  }
+
   /**
* Create a configuration, possibly patching in S3Guard options.
+   * The FS is set to be uncached and the readahead and seek policies
+   * of the bucket itself are removed, so as to guarantee that the
+   * parameterized and test settings are the ones actually used.
* @return a configuration
*/
   @Override
   protected Configuration createConfiguration() {
 Configuration conf = super.createConfiguration();
 // patch in S3Guard options
 maybeEnableS3Guard(conf);
+// purge any per-bucket overrides.
+try {
+  URI bucketURI = new URI(checkNotNull(conf.get("fs.contract.test.fs.s3a")));
+  S3ATestUtils.removeBucketOverrides(bucketURI.getHost(), conf,
+  READAHEAD_RANGE,
+  INPUT_FADVISE);
+} catch (URISyntaxException e) {
+  throw new RuntimeException(e);
+}
+// the FS is uncached, so will need clearing in test teardowns.
+S3ATestUtils.disableFilesystemCaching(conf);
+conf.setInt(READAHEAD_RANGE, READAHEAD);
+conf.set(INPUT_FADVISE, seekPolicy);
 return conf;
   }
 
   @Override
   protected AbstractFSContract createContract(Configuration conf) {
 return new S3AContract(conf);
   }
+
+  @Override
+  public void teardown() throws Exception {
+S3AFileSystem fs = getFileSystem();
+if (fs.getConf().getBoolean(FS_S3A_IMPL_DISABLE_CACHE, false)) {
+  fs.close();
+}
+super.teardown();
+  }
+
+  /**
+   * This override of the {@code path(path)} operation adds the seek policy
+   * to the end to guarantee uniqueness across different calls of the same
+   * method.
+   *
+   * {@inheritDoc}
+   */
+  @Override
+  protected Path path(final String filepath) throws IOException {
+return super.path(filepath + "-" + seekPolicy);
+  }
+
+  /**
+   * Go to the end, read, then seek back to the previous position to force
+   * the normal seek policy to switch to random IO.
+   * This calls readByte to trigger the second GET.
+   * @param in input stream
+   * @return the byte read
+   * @throws IOException failure.
+   */
+  private byte readAtEndAndReturn(final FSDataInputStream in)
+  throws IOException {
+long pos = in.getPos();
+in.seek(DATASET_LEN - 1);
+in.readByte();
+// go back to the previous position and force a new GET
+in.seek(pos);
+return in.readByte();
+  }

[GitHub] [hadoop] steveloughran commented on a change in pull request #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-06 Thread GitBox
steveloughran commented on a change in pull request #539: HADOOP-16109. Parquet 
reading S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#discussion_r263102116
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java
 ##
 @@ -18,31 +18,280 @@
 
 package org.apache.hadoop.fs.contract.s3a;
 
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.contract.AbstractContractSeekTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.S3AInputPolicy;
+import org.apache.hadoop.fs.s3a.S3ATestUtils;
 
+import static com.google.common.base.Preconditions.checkNotNull;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADVISE;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_NORMAL;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_RANDOM;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_SEQUENTIAL;
+import static org.apache.hadoop.fs.s3a.Constants.READAHEAD_RANGE;
+import static org.apache.hadoop.fs.s3a.S3ATestConstants.FS_S3A_IMPL_DISABLE_CACHE;
 import static org.apache.hadoop.fs.s3a.S3ATestUtils.maybeEnableS3Guard;
 
 /**
  * S3A contract tests covering file seek.
  */
+@RunWith(Parameterized.class)
 public class ITestS3AContractSeek extends AbstractContractSeekTest {
 
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ITestS3AContractSeek.class);
+
+  protected static final int READAHEAD = 1024;
+
+  private final String seekPolicy;
 
 Review comment:
   They are defined in the Constants.INPUT_FADV_* variables.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
bharatviswa504 commented on a change in pull request #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263096318
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneServiceProvider.java
 ##
 @@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.s3;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.security.SecurityUtil;
+
+import javax.enterprise.context.ApplicationScoped;
+import javax.enterprise.inject.Produces;
+import javax.inject.Inject;
+import java.util.concurrent.atomic.AtomicReference;
+
+/**
+ * This class creates the OM service.
+ */
+@ApplicationScoped
+public class OzoneServiceProvider {
+
+  private static final AtomicReference<Text> OM_SERVICE_ADD =
+  new AtomicReference<>();
+
+  @Inject
+  private OzoneConfiguration conf;
+
+
+  @Produces
+  public Text getService() {
+if (OM_SERVICE_ADD.get() == null) {
+  OM_SERVICE_ADD.compareAndSet(null,
+  SecurityUtil.buildTokenService(OmUtils.getOmAddressForClients(conf)));
 
 Review comment:
   Now that we have HA, and getOmAddressForClients takes the OM address from 
ozone.om.address, do we need to find the leader OM and set that instead? I 
don't have complete context on the security side and how this is used, so I 
just want to understand how this works.
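   As an aside, a minimal standalone sketch of the compute-once caching idiom
   that getService() uses: AtomicReference.compareAndSet(null, v) publishes at
   most one value, and racing callers all observe the winner. The class name
   and the cached value below are illustrative only:

   import java.util.concurrent.atomic.AtomicReference;

   public class ComputeOnceSketch {

     private static final AtomicReference<String> CACHED = new AtomicReference<>();

     static String get() {
       if (CACHED.get() == null) {
         // only the first caller's value is installed; later racers are no-ops
         CACHED.compareAndSet(null, expensiveLookup());
       }
       return CACHED.get();
     }

     private static String expensiveLookup() {
       return "om-service-address";   // stands in for buildTokenService(...)
     }

     public static void main(String[] args) {
       System.out.println(get());
     }
   }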


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #549: HDDS-1213. Support plain text S3 MPU initialization request

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #549: HDDS-1213. Support plain text S3 MPU 
initialization request
URL: https://github.com/apache/hadoop/pull/549#issuecomment-470236321
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1026 | trunk passed |
   | -1 | compile | 36 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 30 | trunk passed |
   | +1 | mvnsite | 56 | trunk passed |
   | +1 | shadedclient | 733 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 40 | trunk passed |
   | +1 | javadoc | 39 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for patch |
   | -1 | mvninstall | 18 | dist in the patch failed. |
   | -1 | compile | 26 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 18 | the patch passed |
   | +1 | mvnsite | 46 | the patch passed |
   | -1 | whitespace | 0 | The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | -1 | whitespace | 1 | The patch has 19849 line(s) with tabs. |
   | +1 | shadedclient | 1000 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 46 | the patch passed |
   | +1 | javadoc | 40 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 37 | s3gateway in the patch passed. |
   | +1 | unit | 21 | dist in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 3430 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/549 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux f1b54cd51a83 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2c3ec37 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/4/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/4/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/4/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/4/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/4/artifact/out/whitespace-eol.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/4/artifact/out/whitespace-tabs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/4/testReport/ |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #527: HDDS-1093. Configuration tab in OM/SCM ui is not displaying the correct values

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #527: HDDS-1093. Configuration tab in OM/SCM ui 
is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#issuecomment-470236419
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1198 | trunk passed |
   | +1 | compile | 75 | trunk passed |
   | +1 | checkstyle | 27 | trunk passed |
   | +1 | mvnsite | 71 | trunk passed |
   | +1 | shadedclient | 745 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 107 | trunk passed |
   | +1 | javadoc | 57 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for patch |
   | +1 | mvninstall | 70 | the patch passed |
   | +1 | jshint | 385 | There were no new jshint issues. |
   | +1 | compile | 73 | the patch passed |
   | +1 | javac | 73 | the patch passed |
   | +1 | checkstyle | 28 | the patch passed |
   | +1 | mvnsite | 61 | the patch passed |
   | -1 | whitespace | 0 | The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | -1 | whitespace | 1 | The patch has 19849 line(s) with tabs. |
   | +1 | shadedclient | 888 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 112 | the patch passed |
   | +1 | javadoc | 53 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 74 | common in the patch failed. |
   | +1 | unit | 32 | framework in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 4228 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/527 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  jshint  |
   | uname | Linux 4127a35af4d8 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2c3ec37 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/5/artifact/out/whitespace-eol.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/5/artifact/out/whitespace-tabs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/5/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/5/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/framework U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/5/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
bharatviswa504 commented on a change in pull request #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263090394
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSecretManager.java
 ##
 @@ -327,6 +336,37 @@ public boolean verifySignature(OzoneTokenIdentifier 
identifier,
 }
   }
 
+  /**
+   * Validates whether an S3 token identifier is valid.
+   */
+  private byte[] validateS3Token(OzoneTokenIdentifier identifier)
+  throws InvalidToken {
+LOG.trace("Validating S3Token for identifier:{}", identifier);
+String awsSecret;
+try {
+  awsSecret = s3SecretManager.getS3UserSecretString(identifier
+  .getAwsAccessId());
+} catch (IOException e) {
+  LOG.error("Error while validating S3 identifier:{}",
+  identifier, e);
+  throw new InvalidToken("No S3 secret found for S3 identifier:"
 
 Review comment:
   Now, if InvalidToken is thrown for an invalid/malformed header, how will it 
be propagated back to the end user's S3 request? I don't see any code for it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
bharatviswa504 commented on a change in pull request #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263085452
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/AWSV4AuthValidator.java
 ##
 @@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.security;
+
+import org.apache.hadoop.util.StringUtils;
+import org.apache.kerby.util.Hex;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.crypto.Mac;
+import javax.crypto.spec.SecretKeySpec;
+import java.io.UnsupportedEncodingException;
+import java.net.URLDecoder;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.security.GeneralSecurityException;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+
+/**
+ * AWS v4 authentication payload validator. For more details refer to AWS
+ * documentation https://docs.aws.amazon.com/general/latest/gr/
+ * sigv4-create-canonical-request.html.
+ **/
+final class AWSV4AuthValidator {
+
+  private final static Logger LOG =
+  LoggerFactory.getLogger(AWSV4AuthValidator.class);
+  private static final String HMAC_SHA256_ALGORITHM = "HmacSHA256";
+  private static final Charset UTF_8 = Charset.forName("utf-8");
+
+  private AWSV4AuthValidator() {
+  }
+
+  private static String urlDecode(String str) {
+try {
+  return URLDecoder.decode(str, UTF_8.name());
+} catch (UnsupportedEncodingException e) {
+  throw new RuntimeException(e);
+}
+  }
+
+  public static String hash(String payload) throws NoSuchAlgorithmException {
+MessageDigest md = MessageDigest.getInstance("SHA-256");
+md.update(payload.getBytes(UTF_8));
+return String.format("%064x", new java.math.BigInteger(1, md.digest()));
+  }
+
+  private static byte[] sign(byte[] key, String msg) {
+try {
+  SecretKeySpec signingKey = new SecretKeySpec(key, HMAC_SHA256_ALGORITHM);
+  Mac mac = Mac.getInstance(HMAC_SHA256_ALGORITHM);
+  mac.init(signingKey);
+  return mac.doFinal(msg.getBytes(StandardCharsets.UTF_8));
+} catch (GeneralSecurityException gse) {
+  throw new RuntimeException(gse);
+}
+  }
+
+  private static byte[] getSignatureKey(String key, String strToSign) {
+String[] signData = StringUtils.split(StringUtils.split(strToSign,
+'\n')[2], '/');
+String dateStamp = signData[0];
+String regionName = signData[1];
+String serviceName = signData[2];
+byte[] kDate = sign(("AWS4" + key).getBytes(UTF_8), dateStamp);
+byte[] kRegion = sign(kDate, regionName);
+byte[] kService = sign(kRegion, serviceName);
+byte[] kSigning = sign(kService, "aws4_request");
+LOG.info(Hex.encode(kSigning));
+return kSigning;
+  }
+
+  /**
+   * Validate the request. Returns true if the AWS request signature is valid, false otherwise.
+   */
+  public static boolean validateRequest(String strToSign, String signature,
+  String userKey) {
+String expectedSignature = Hex.encode(sign(getSignatureKey(
 
 Review comment:
   Can we move this line into a method, say getSignature()?
   
   As per the doc, this is the signature.
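   For orientation while reviewing, a standalone sketch of the SigV4
   key-derivation chain that getSignatureKey() implements (kDate -> kRegion ->
   kService -> kSigning); the date, region, service, and secret below are the
   example values from the AWS documentation, not values from this patch:

   import javax.crypto.Mac;
   import javax.crypto.spec.SecretKeySpec;
   import java.nio.charset.StandardCharsets;

   public class SigV4KeyChainSketch {

     // HMAC-SHA256 of msg under key, as used at each step of the chain
     private static byte[] hmacSha256(byte[] key, String msg) throws Exception {
       Mac mac = Mac.getInstance("HmacSHA256");
       mac.init(new SecretKeySpec(key, "HmacSHA256"));
       return mac.doFinal(msg.getBytes(StandardCharsets.UTF_8));
     }

     public static void main(String[] args) throws Exception {
       // example secret key from the AWS SigV4 documentation
       String secret = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY";
       byte[] kDate    = hmacSha256(("AWS4" + secret).getBytes(StandardCharsets.UTF_8), "20150830");
       byte[] kRegion  = hmacSha256(kDate, "us-east-1");
       byte[] kService = hmacSha256(kRegion, "iam");
       byte[] kSigning = hmacSha256(kService, "aws4_request");
       System.out.printf("derived signing key: %d bytes%n", kSigning.length);
     }
   }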


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
bharatviswa504 commented on a change in pull request #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263085452
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/AWSV4AuthValidator.java
 ##
 @@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.security;
+
+import org.apache.hadoop.util.StringUtils;
+import org.apache.kerby.util.Hex;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.crypto.Mac;
+import javax.crypto.spec.SecretKeySpec;
+import java.io.UnsupportedEncodingException;
+import java.net.URLDecoder;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.security.GeneralSecurityException;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+
+/**
+ * AWS v4 authentication payload validator. For more details refer to AWS
+ * documentation https://docs.aws.amazon.com/general/latest/gr/
+ * sigv4-create-canonical-request.html.
+ **/
+final class AWSV4AuthValidator {
+
+  private final static Logger LOG =
+  LoggerFactory.getLogger(AWSV4AuthValidator.class);
+  private static final String HMAC_SHA256_ALGORITHM = "HmacSHA256";
+  private static final Charset UTF_8 = Charset.forName("utf-8");
+
+  private AWSV4AuthValidator() {
+  }
+
+  private static String urlDecode(String str) {
+try {
+  return URLDecoder.decode(str, UTF_8.name());
+} catch (UnsupportedEncodingException e) {
+  throw new RuntimeException(e);
+}
+  }
+
+  public static String hash(String payload) throws NoSuchAlgorithmException {
+MessageDigest md = MessageDigest.getInstance("SHA-256");
+md.update(payload.getBytes(UTF_8));
+return String.format("%064x", new java.math.BigInteger(1, md.digest()));
+  }
+
+  private static byte[] sign(byte[] key, String msg) {
+try {
+  SecretKeySpec signingKey = new SecretKeySpec(key, HMAC_SHA256_ALGORITHM);
+  Mac mac = Mac.getInstance(HMAC_SHA256_ALGORITHM);
+  mac.init(signingKey);
+  return mac.doFinal(msg.getBytes(StandardCharsets.UTF_8));
+} catch (GeneralSecurityException gse) {
+  throw new RuntimeException(gse);
+}
+  }
+
+  private static byte[] getSignatureKey(String key, String strToSign) {
+String[] signData = StringUtils.split(StringUtils.split(strToSign,
+'\n')[2], '/');
+String dateStamp = signData[0];
+String regionName = signData[1];
+String serviceName = signData[2];
+byte[] kDate = sign(("AWS4" + key).getBytes(UTF_8), dateStamp);
+byte[] kRegion = sign(kDate, regionName);
+byte[] kService = sign(kRegion, serviceName);
+byte[] kSigning = sign(kService, "aws4_request");
+LOG.info(Hex.encode(kSigning));
+return kSigning;
+  }
+
+  /**
+   * Validate the request. Returns true if the AWS request signature is valid, false otherwise.
+   */
+  public static boolean validateRequest(String strToSign, String signature,
+  String userKey) {
+String expectedSignature = Hex.encode(sign(getSignatureKey(
 
 Review comment:
   Can we add this into a method, say getSignature()?
   
   As per the doc, this is the signature.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
bharatviswa504 commented on a change in pull request #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263085017
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/AWSV4AuthValidator.java
 ##
 @@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.security;
+
+import org.apache.hadoop.util.StringUtils;
+import org.apache.kerby.util.Hex;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.crypto.Mac;
+import javax.crypto.spec.SecretKeySpec;
+import java.io.UnsupportedEncodingException;
+import java.net.URLDecoder;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.security.GeneralSecurityException;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+
+/**
+ * AWS v4 authentication payload validator. For more details refer to AWS
+ * documentation https://docs.aws.amazon.com/general/latest/gr/
+ * sigv4-create-canonical-request.html.
+ **/
+final class AWSV4AuthValidator {
+
+  private final static Logger LOG =
+  LoggerFactory.getLogger(AWSV4AuthValidator.class);
+  private static final String HMAC_SHA256_ALGORITHM = "HmacSHA256";
+  private static final Charset UTF_8 = Charset.forName("utf-8");
+
+  private AWSV4AuthValidator() {
+  }
+
+  private static String urlDecode(String str) {
+try {
+  return URLDecoder.decode(str, UTF_8.name());
+} catch (UnsupportedEncodingException e) {
+  throw new RuntimeException(e);
+}
+  }
+
+  public static String hash(String payload) throws NoSuchAlgorithmException {
+MessageDigest md = MessageDigest.getInstance("SHA-256");
+md.update(payload.getBytes(UTF_8));
+return String.format("%064x", new java.math.BigInteger(1, md.digest()));
+  }
+
+  private static byte[] sign(byte[] key, String msg) {
+try {
+  SecretKeySpec signingKey = new SecretKeySpec(key, HMAC_SHA256_ALGORITHM);
+  Mac mac = Mac.getInstance(HMAC_SHA256_ALGORITHM);
+  mac.init(signingKey);
+  return mac.doFinal(msg.getBytes(StandardCharsets.UTF_8));
+} catch (GeneralSecurityException gse) {
+  throw new RuntimeException(gse);
+}
+  }
+
+  private static byte[] getSignatureKey(String key, String strToSign) {
 
 Review comment:
   Can we rename this method to getSigningKey?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv commented on issue #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
ajayydv commented on issue #561: HDDS-1043. Enable token based authentication 
for S3 api.
URL: https://github.com/apache/hadoop/pull/561#issuecomment-470230590
 
 
   > * a few unit tests are failing (NPE in s3 token related tests)
   
   Could you please share the failing tests? I can't find them in the test report.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv commented on a change in pull request #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
ajayydv commented on a change in pull request #561: HDDS-1043. Enable token 
based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263085093
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/AWSV4AuthParser.java
 ##
 @@ -0,0 +1,302 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.s3;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.ozone.s3.exception.OS3Exception;
+import org.apache.hadoop.ozone.s3.header.AuthorizationHeaderV4;
+import org.apache.hadoop.ozone.s3.header.Credential;
+import org.apache.kerby.util.Hex;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.ws.rs.container.ContainerRequestContext;
+import javax.ws.rs.core.MultivaluedMap;
+import java.io.UnsupportedEncodingException;
+import java.net.InetAddress;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URLEncoder;
+import java.net.UnknownHostException;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+import java.time.LocalDate;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+import static java.time.temporal.ChronoUnit.SECONDS;
+import static 
org.apache.hadoop.ozone.s3.exception.S3ErrorTable.S3_TOKEN_CREATION_ERROR;
+import static 
org.apache.hadoop.ozone.s3.header.AWSConstants.PRESIGN_URL_MAX_EXPIRATION_SECONDS;
+import static org.apache.hadoop.ozone.s3.header.AWSConstants.TIME_FORMATTER;
+
+/**
+ * Parser to process AWS v4 auth request. Creates string to sign and auth
+ * header. For more details refer to AWS documentation https://docs.aws
+ * .amazon.com/general/latest/gr/sigv4-create-canonical-request.html.
+ **/
+public class AWSV4AuthParser implements AWSAuthParser {
 
 Review comment:
   Renamed the member field to v4Header. AuthorizationHeaderV4 parses just the 
auth header, while AWSAuthParser parses the whole request (to construct the 
"string to sign"). IMO it makes sense to use AuthorizationHeaderV4 inside 
AWSAuthParser for modularity.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
bharatviswa504 commented on a change in pull request #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263083685
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/AWSV4AuthValidator.java
 ##
 @@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.security;
+
+import org.apache.hadoop.util.StringUtils;
+import org.apache.kerby.util.Hex;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.crypto.Mac;
+import javax.crypto.spec.SecretKeySpec;
+import java.io.UnsupportedEncodingException;
+import java.net.URLDecoder;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.security.GeneralSecurityException;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+
+/**
+ * AWS v4 authentication payload validator. For more details refer to AWS
+ * documentation https://docs.aws.amazon.com/general/latest/gr/
+ * sigv4-create-canonical-request.html.
+ **/
+final class AWSV4AuthValidator {
+
+  private final static Logger LOG =
+  LoggerFactory.getLogger(AWSV4AuthValidator.class);
+  private static final String HMAC_SHA256_ALGORITHM = "HmacSHA256";
+  private static final Charset UTF_8 = Charset.forName("utf-8");
+
+  private AWSV4AuthValidator() {
+  }
+
+  private static String urlDecode(String str) {
+try {
+  return URLDecoder.decode(str, UTF_8.name());
+} catch (UnsupportedEncodingException e) {
+  throw new RuntimeException(e);
+}
+  }
+
+  public static String hash(String payload) throws NoSuchAlgorithmException {
+MessageDigest md = MessageDigest.getInstance("SHA-256");
+md.update(payload.getBytes(UTF_8));
+return String.format("%064x", new java.math.BigInteger(1, md.digest()));
+  }
+
+  private static byte[] sign(byte[] key, String msg) {
+try {
+  SecretKeySpec signingKey = new SecretKeySpec(key, HMAC_SHA256_ALGORITHM);
+  Mac mac = Mac.getInstance(HMAC_SHA256_ALGORITHM);
+  mac.init(signingKey);
+  return mac.doFinal(msg.getBytes(StandardCharsets.UTF_8));
+} catch (GeneralSecurityException gse) {
+  throw new RuntimeException(gse);
+}
+  }
+
+  private static byte[] getSignatureKey(String key, String strToSign) {
+String[] signData = StringUtils.split(StringUtils.split(strToSign,
+'\n')[2], '/');
+String dateStamp = signData[0];
+String regionName = signData[1];
+String serviceName = signData[2];
+byte[] kDate = sign(("AWS4" + key).getBytes(UTF_8), dateStamp);
+byte[] kRegion = sign(kDate, regionName);
+byte[] kService = sign(kRegion, serviceName);
+byte[] kSigning = sign(kService, "aws4_request");
+LOG.info(Hex.encode(kSigning));
+return kSigning;
+  }
+
+  /**
+   * Validate the request. Returns true if the AWS request signature is valid, false otherwise.
+   */
+  public static boolean validateRequest(String strToSign, String signature,
+  String userKey) {
+String expectedSignature = Hex.encode(sign(getSignatureKey(
+userKey, strToSign), strToSign));
+return expectedSignature.equals(signature);
+  }
 
 Review comment:
   Can we add javadoc for the methods?
   It is very difficult to follow this code when reviewing, as we need to keep 
https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
 open alongside it.
   
   We can take the relevant snippet from that doc and add it to the comments or 
javadoc. It will be very helpful when reading the code later.
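   A hedged side note on validateRequest() above: it compares the two hex
   signatures with String.equals, which is not constant-time. A timing-safe
   alternative (not part of this patch) is java.security.MessageDigest.isEqual;
   the class below is an illustrative sketch:

   import java.nio.charset.StandardCharsets;
   import java.security.MessageDigest;

   public final class SignatureCompareSketch {

     // constant-time comparison of two hex-encoded signatures
     static boolean timingSafeEquals(String expectedHex, String actualHex) {
       return MessageDigest.isEqual(
           expectedHex.getBytes(StandardCharsets.UTF_8),
           actualHex.getBytes(StandardCharsets.UTF_8));
     }

     public static void main(String[] args) {
       System.out.println(timingSafeEquals("ab12", "ab12"));   // true
       System.out.println(timingSafeEquals("ab12", "ab13"));   // false
     }
   }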


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv commented on a change in pull request #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
ajayydv commented on a change in pull request #561: HDDS-1043. Enable token 
based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263083622
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/AWSAuthParser.java
 ##
 @@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.s3;
+
+import java.nio.charset.Charset;
+
+/**
+ * Parser for the AWS auth information of an HTTP request.
+ */
+interface AWSAuthParser {
+
+  String UNSIGNED_PAYLOAD = "UNSIGNED-PAYLOAD";
+  String NEWLINE = "\n";
+  String CONTENT_TYPE = "content-type";
+  String X_AMAZ_DATE = "X-Amz-Date";
+  String CONTENT_MD5 = "content-md5";
+  String AUTHORIZATION_HEADER = "Authorization";
 
 Review comment:
   Moved all the constants to AWSAuthParser, as they are all related to AWS 
auth parsing.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv commented on a change in pull request #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
ajayydv commented on a change in pull request #561: HDDS-1043. Enable token 
based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263083045
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/S3SecretManager.java
 ##
 @@ -27,4 +27,10 @@
 public interface S3SecretManager {
 
   S3SecretValue getS3Secret(String kerberosID) throws IOException;
+
 
 Review comment:
   Renamed the new API to getS3UserSecretString; open to any better name you 
may suggest. The purpose of the two APIs is different, so consolidating them 
right now might not be a good option. We can discuss this further in a 
separate JIRA.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #561: HDDS-1043. Enable token based 
authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#issuecomment-470228233
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 8 | https://github.com/apache/hadoop/pull/561 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/561 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/5/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv commented on a change in pull request #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
ajayydv commented on a change in pull request #561: HDDS-1043. Enable token 
based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263081761
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/VirtualHostStyleFilter.java
 ##
 @@ -65,6 +65,10 @@ public void filter(ContainerRequestContext requestContext) 
throws
 
 authenticationHeaderParser.setAuthHeader(requestContext.getHeaderString(
 HttpHeaders.AUTHORIZATION));
+
 
 Review comment:
   Did you mean the blank line? It's removed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on issue #562: HDDS-1225. Provide docker-compose for OM HA.

2019-03-06 Thread GitBox
hanishakoneru commented on issue #562: HDDS-1225. Provide docker-compose for OM 
HA.
URL: https://github.com/apache/hadoop/pull/562#issuecomment-470227122
 
 
   Thanks for the review, @elek. I renamed ozoneManager to om.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #561: HDDS-1043. Enable token based authentication for S3 api.

2019-03-06 Thread GitBox
hadoop-yetus commented on issue #561: HDDS-1043. Enable token based 
authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#issuecomment-470225913
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 7 | https://github.com/apache/hadoop/pull/561 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/561 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


