[jira] [Commented] (HADOOP-15457) Add Security-Related HTTP Response Header in WEBUIs.

2019-11-15 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16975467#comment-16975467
 ] 

Bharat Viswanadham commented on HADOOP-15457:
---------------------------------------------

Hi [~kanwaljeets] [~rkanter]

Just want to understand this: in the Jira description, for the other HTTP headers, 
it says "add support for headers to be able to get added via xml config".

But in the code, I see we have a regex and we read all the values matching that 
regex from the configuration.

So, for example, to set the HSTS header, I think we would set:
{code:java}
<property>
  <name>hadoop.http.header.Strict_Transport_Security</name>
  <value>max-age=7200; includeSubDomains; preload</value>
</property>
{code}
 

So do you mean that "reading from xml config" here means reading from 
core-site.xml, and the description just gave a sample value for the HSTS header?

 <property>
   <name>hadoop.http.header.Strict_Transport_Security</name>
   <value>valHSTSFromXML</value>
 </property>

> Add Security-Related HTTP Response Header in WEBUIs.
> ----------------------------------------------------
>
> Key: HADOOP-15457
> URL: https://issues.apache.org/jira/browse/HADOOP-15457
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kanwaljeet Sachdev
>Assignee: Kanwaljeet Sachdev
>Priority: Major
>  Labels: security
> Fix For: 3.2.0
>
> Attachments: HADOOP-15457.001.patch, HADOOP-15457.002.patch, 
> HADOOP-15457.003.patch, HADOOP-15457.004.patch, HADOOP-15457.005.patch, 
> YARN-8198.001.patch, YARN-8198.002.patch, YARN-8198.003.patch, 
> YARN-8198.004.patch, YARN-8198.005.patch
>
>
> As of today, the YARN web UI lacks certain security-related HTTP response 
> headers. We are planning to add a few default ones and also add support for 
> headers to be able to get added via xml config. Planning to make the below 
> two the defaults.
>  * X-XSS-Protection: 1; mode=block
>  * X-Content-Type-Options: nosniff
>  
> Support for headers via config properties in core-site.xml will be along the 
> below lines
> {code:java}
> {code:java}
> <property>
>   <name>hadoop.http.header.Strict_Transport_Security</name>
>   <value>valHSTSFromXML</value>
> </property>
> {code}
>  
> A regex matcher will lift these properties and add them into the response 
> headers when Jetty prepares the response.
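The regex lifting described in the quoted description can be sketched roughly like this. All class and method names below are hypothetical illustrations, not the actual Hadoop code; property names cannot contain '-', so '_' stands in for it and is mapped back when the header is emitted:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Minimal sketch: scan config keys for the hadoop.http.header.* prefix
 *  and turn each match into an HTTP response header. */
public class HeaderConfigSketch {

  private static final Pattern HTTP_HEADER_REGEX =
      Pattern.compile("hadoop\\.http\\.header\\.(\\w+)");

  static Map<String, String> liftHeaders(Map<String, String> conf) {
    Map<String, String> headers = new LinkedHashMap<>();
    for (Map.Entry<String, String> e : conf.entrySet()) {
      Matcher m = HTTP_HEADER_REGEX.matcher(e.getKey());
      if (m.matches()) {
        // e.g. Strict_Transport_Security -> Strict-Transport-Security
        headers.put(m.group(1).replace('_', '-'), e.getValue());
      }
    }
    return headers;
  }

  public static void main(String[] args) {
    Map<String, String> conf = new LinkedHashMap<>();
    conf.put("hadoop.http.header.Strict_Transport_Security",
        "max-age=7200; includeSubDomains; preload");
    conf.put("hadoop.http.filter.initializers", "ignored: no header prefix");
    // Only the hadoop.http.header.* key is lifted into a response header.
    System.out.println(liftHeaders(conf));
  }
}
```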



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16700) RpcQueueTime may be negative when the response has to be sent later

2019-11-15 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16975457#comment-16975457
 ] 

Erik Krogen commented on HADOOP-16700:
--------------------------------------

Thanks for the explanation [~xuzq_zander]! It definitely seems like a valid 
issue.

I took a look at the v001 patch. By the way, please follow the [patch naming 
conventions|https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute#HowToContribute-Namingyourpatch]
 -- it should be {{HADOOP-16700.001.patch}} (period instead of hyphen before 
the version, and you don't need to specify a branch when it is trunk).

The general approach seems sound to me. I am concerned about all of the changes 
you've made to the method signatures, removing {{receiveTime}}. First off, 
{{Server}} is a public interface, so we should not make breaking changes to its 
API. To introduce a new method here, you need to keep the old one but mark it 
as {{@Deprecated}}. Second, this change seems unrelated to this JIRA; if that 
is the case, we should keep it separate.
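A minimal sketch of the deprecation pattern being asked for; the class and method names are illustrative only, not the real {{Server}} API:

```java
/** Keep the old public signature, mark it @Deprecated, and have the new
 *  signature forward to it so existing callers keep compiling. */
public abstract class RpcServerSketch {

  /** New preferred entry point: the receive time is derived internally. */
  public void call(String rpcRequest) {
    call(rpcRequest, System.currentTimeMillis());
  }

  /**
   * Old entry point, kept so existing callers do not break.
   * @deprecated use {@link #call(String)}; receiveTime is tracked internally.
   */
  @Deprecated
  public void call(String rpcRequest, long receiveTime) {
    process(rpcRequest, receiveTime);
  }

  protected abstract void process(String rpcRequest, long receiveTime);
}
```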

My only other comment is that we should update the comments here within 
{{Call}}:
{code}
long timestampNanos; // time received when response is null
 // time served when response is not null
long responseTimestampNanos;
{code}
I think that it would now be more accurate to say:
{code}
long timestampNanos; // time the call was received
long responseTimestampNanos; // time the call was served
{code}
Let me know if you think that isn't correct.
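To make the failure mode concrete, here is a toy model — emphatically not the Hadoop implementation — of how a queue-time metric can come out negative when a call's receive timestamp is overwritten after processing has been deferred:

```java
/** Toy illustration of the bug class in this JIRA's title: if the receive
 *  timestamp is reset (e.g. when the response has to be sent later),
 *  "processing start - receive time" can go negative. */
public class QueueTimeSketch {
  long timestampNanos;          // time the call was received
  long responseTimestampNanos;  // time the call was served

  long queueTimeNanos(long processingStartNanos) {
    return processingStartNanos - timestampNanos;
  }

  public static void main(String[] args) {
    QueueTimeSketch call = new QueueTimeSketch();
    call.timestampNanos = 100;   // received at t=100
    long processingStart = 150;  // dequeued at t=150
    System.out.println(call.queueTimeNanos(processingStart)); // 50: fine
    call.timestampNanos = 200;   // timestamp overwritten on deferral
    System.out.println(call.queueTimeNanos(processingStart)); // -50: negative
  }
}
```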

> RpcQueueTime may be negative when the response has to be sent later
> -------------------------------------------------------------------
>
> Key: HADOOP-16700
> URL: https://issues.apache.org/jira/browse/HADOOP-16700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Minor
> Attachments: HADOOP-16700-trunk-001.patch
>
>
> RpcQueueTime may be negative when the response has to be sent later.






[jira] [Comment Edited] (HADOOP-16700) RpcQueueTime may be negative when the response has to be sent later

2019-11-15 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16975457#comment-16975457
 ] 

Erik Krogen edited comment on HADOOP-16700 at 11/15/19 10:58 PM:
-----------------------------------------------------------------

Thanks for the explanation [~xuzq_zander]! Very helpful. It definitely seems 
like a valid issue.

I took a look at the v001 patch. By the way, please follow the [patch naming 
conventions|https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute#HowToContribute-Namingyourpatch]
 -- it should be {{HADOOP-16700.001.patch}} (period instead of hyphen before 
the version, and you don't need to specify a branch when it is trunk).

The general approach seems sound to me. I am concerned about all of the changes 
you've made to the method signatures, removing {{receiveTime}}. First off, 
{{Server}} is a public interface, so we should not make breaking changes to its 
API. To introduce a new method here, you need to keep the old one but mark it 
as {{@Deprecated}}. Second, this change seems unrelated to this JIRA; if that 
is the case, we should keep it separate.

My only other comment is that we should update the comments here within 
{{Call}}:
{code}
long timestampNanos; // time received when response is null
 // time served when response is not null
long responseTimestampNanos;
{code}
I think that it would now be more accurate to say:
{code}
long timestampNanos; // time the call was received
long responseTimestampNanos; // time the call was served
{code}
Let me know if you think that isn't correct.


was (Author: xkrogen):
Thanks for the explanation [~xuzq_zander]! It definitely seems like a valid 
issue.

I took a look at the v001 patch. By the way, please follow the [patch naming 
conventions|https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute#HowToContribute-Namingyourpatch]
 -- it should be {{HADOOP-16700.001.patch}} (period instead of hyphen before 
the version, and you don't need to specify a branch when it is trunk).

The general approach seems sound to me. I am concerned about all of the changes 
you've made to the signatures of methods, removing {{receiveTime}}. First off 
{{Server}} is a public interface, so we should not make breaking changes to its 
API. To introduce a new method here, you need to keep the old one but mark it 
as {{@Deprecated}}. Second off, this change seems unrelated to this JIRA? If 
that is the case, we should keep it separate.

My only other comment is that we should update the comments here within 
{{Call}}:
{code}
long timestampNanos; // time received when response is null
 // time served when response is not null
long responseTimestampNanos;
{code}
I think that it would now be more accurate to say:
{code}
long timestampNanos; // time the call was received
long responseTimestampNanos; // time the call was served
{code}
Let me know if you think that isn't correct.

> RpcQueueTime may be negative when the response has to be sent later
> -------------------------------------------------------------------
>
> Key: HADOOP-16700
> URL: https://issues.apache.org/jira/browse/HADOOP-16700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Minor
> Attachments: HADOOP-16700-trunk-001.patch
>
>
> RpcQueueTime may be negative when the response has to be sent later.






[GitHub] [hadoop] goiri commented on a change in pull request #1711: HADOOP-16455. ABFS: Implement FileSystem.access() method.

2019-11-15 Thread GitBox
goiri commented on a change in pull request #1711: HADOOP-16455. ABFS: 
Implement FileSystem.access() method.
URL: https://github.com/apache/hadoop/pull/1711#discussion_r347031332
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
 ##
 @@ -517,6 +517,19 @@ public AbfsRestOperation getAclStatus(final String path, 
final boolean useUPN) t
 return op;
   }
 
+  public AbfsRestOperation checkAccess(String path, String rwx)
 
 Review comment:
   Javadoc


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Assigned] (HADOOP-16700) RpcQueueTime may be negative when the response has to be sent later

2019-11-15 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-16700:


Assignee: xuzq

> RpcQueueTime may be negative when the response has to be sent later
> -------------------------------------------------------------------
>
> Key: HADOOP-16700
> URL: https://issues.apache.org/jira/browse/HADOOP-16700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Minor
> Attachments: HADOOP-16700-trunk-001.patch
>
>
> RpcQueueTime may be negative when the response has to be sent later.






[jira] [Commented] (HADOOP-16705) MBeanInfoBuilder puts unnecessary memory pressure on the system with a debug log

2019-11-15 Thread Jira


[ 
https://issues.apache.org/jira/browse/HADOOP-16705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16975414#comment-16975414
 ] 

Íñigo Goiri commented on HADOOP-16705:
----------------------------------------

Backported to branch-3.2, branch-3.1, and branch-3.0.

> MBeanInfoBuilder puts unnecessary memory pressure on the system with a debug 
> log
> --------------------------------------------------------------------------------
>
> Key: HADOOP-16705
> URL: https://issues.apache.org/jira/browse/HADOOP-16705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.9.2
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: mbeaninfobuilder.JPG
>
>
> MBeanInfoBuilder's get() method DEBUG logs all the MBeanAttributeInfo 
> attributes that it gathered. This can have a high memory churn that can be 
> easily avoided. 






[jira] [Updated] (HADOOP-16705) MBeanInfoBuilder puts unnecessary memory pressure on the system with a debug log

2019-11-15 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-16705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-16705:
---------------------------------
Fix Version/s: 3.2.2
   3.1.4
   3.0.4

> MBeanInfoBuilder puts unnecessary memory pressure on the system with a debug 
> log
> --------------------------------------------------------------------------------
>
> Key: HADOOP-16705
> URL: https://issues.apache.org/jira/browse/HADOOP-16705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.9.2
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.1.4, 3.2.2
>
> Attachments: mbeaninfobuilder.JPG
>
>
> MBeanInfoBuilder's get() method DEBUG logs all the MBeanAttributeInfo 
> attributes that it gathered. This can have a high memory churn that can be 
> easily avoided. 






[GitHub] [hadoop] dineshchitlangia commented on issue #1675: HADOOP-16654:Delete hadoop-ozone and hadoop-hdds subprojects from apa…

2019-11-15 Thread GitBox
dineshchitlangia commented on issue #1675: HADOOP-16654:Delete hadoop-ozone and 
hadoop-hdds subprojects from apa…
URL: https://github.com/apache/hadoop/pull/1675#issuecomment-554523524
 
 
   Closing this PR as this was merged using the patch.





[GitHub] [hadoop] dineshchitlangia closed pull request #1675: HADOOP-16654:Delete hadoop-ozone and hadoop-hdds subprojects from apa…

2019-11-15 Thread GitBox
dineshchitlangia closed pull request #1675: HADOOP-16654:Delete hadoop-ozone 
and hadoop-hdds subprojects from apa…
URL: https://github.com/apache/hadoop/pull/1675
 
 
   





[jira] [Created] (HADOOP-16714) Hadoop website does not mention 2.8.5 release

2019-11-15 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HADOOP-16714:


 Summary: Hadoop website does not mention 2.8.5 release
 Key: HADOOP-16714
 URL: https://issues.apache.org/jira/browse/HADOOP-16714
 Project: Hadoop Common
  Issue Type: Task
  Components: website
Affects Versions: 2.8.5
Reporter: Wei-Chiu Chuang


I'm not seeing the 2.8.5 release mentioned on the new website 
https://hadoop.apache.org/releases.html, nor is it mentioned on the old website 
https://hadoop.apache.org/old/releases.html.

The 2.8.5 doc: https://hadoop.apache.org/docs/r2.8.5/






[GitHub] [hadoop] hadoop-yetus commented on issue #1707: HADOOP-16697. Tune/audit auth mode

2019-11-15 Thread GitBox
hadoop-yetus commented on issue #1707: HADOOP-16697. Tune/audit auth mode
URL: https://github.com/apache/hadoop/pull/1707#issuecomment-554520031
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 92 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1369 | trunk passed |
   | +1 | compile | 38 | trunk passed |
   | +1 | checkstyle | 24 | trunk passed |
   | +1 | mvnsite | 44 | trunk passed |
   | +1 | shadedclient | 956 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | trunk passed |
   | 0 | spotbugs | 59 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 57 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 35 | the patch passed |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | -0 | checkstyle | 21 | hadoop-tools/hadoop-aws: The patch generated 6 new 
+ 58 unchanged - 0 fixed = 64 total (was 58) |
   | +1 | mvnsite | 32 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 916 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 24 | the patch passed |
   | -1 | findbugs | 67 | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 83 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3933 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Switch statement found in 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(String[], 
PrintStream) where one case falls through to the next case  At 
S3GuardTool.java:PrintStream) where one case falls through to the next case  At 
S3GuardTool.java:[lines 1301-1308] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1707 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b694540a3f90 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b2cc8b6 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/3/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/3/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/3/testReport/ |
   | Max. process+thread count | 340 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] jojochuang merged pull request #1669: HDFS-14802. The feature of protect directories should be used in RenameOp

2019-11-15 Thread GitBox
jojochuang merged pull request #1669: HDFS-14802. The feature of protect 
directories should be used in RenameOp
URL: https://github.com/apache/hadoop/pull/1669
 
 
   





[GitHub] [hadoop] jojochuang commented on issue #1669: HDFS-14802. The feature of protect directories should be used in RenameOp

2019-11-15 Thread GitBox
jojochuang commented on issue #1669: HDFS-14802. The feature of protect 
directories should be used in RenameOp
URL: https://github.com/apache/hadoop/pull/1669#issuecomment-554518407
 
 
   +1 merging this





[jira] [Updated] (HADOOP-16654) Delete hadoop-ozone and hadoop-hdds subprojects from apache trunk

2019-11-15 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HADOOP-16654:
---------------------------------------
Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~aengineer] for the reviews, [~elek] for filing this, and 
[~Sandeep Nemuri] for the contribution.

> Delete hadoop-ozone and hadoop-hdds subprojects from apache trunk
> -----------------------------------------------------------------
>
> Key: HADOOP-16654
> URL: https://issues.apache.org/jira/browse/HADOOP-16654
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Marton Elek
>Assignee: Sandeep Nemuri
>Priority: Major
> Fix For: 3.3.0
>
>
> As described in the HDDS-2287 ozone/hdds sources are moving to the 
> apache/hadoop-ozone git repository.
> All the remaining ozone/hdds files can be removed from trunk (including hdds 
> profile in main pom.xml)






[jira] [Commented] (HADOOP-16654) Delete hadoop-ozone and hadoop-hdds subprojects from apache trunk

2019-11-15 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16975377#comment-16975377
 ] 

Hudson commented on HADOOP-16654:
---------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17646 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17646/])
HADOOP-16654:Delete hadoop-ozone and hadoop-hdds subprojects from apache 
(dineshchitlangia: rev 9f0610fb83ae064e2e2c854fb2e9c9dc4cbc1646)
* (delete) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/GrpcReplicationService.java
* (delete) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneKeyLocation.java
* (delete) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/package-info.java
* (delete) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeStorageStatMap.java
* (delete) hadoop-hdds/docs/content/start/StartFromDockerHub.md
* (delete) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
* (delete) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientUtils.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/RDBTable.java
* (delete) 
hadoop-hdds/common/src/test/resources/networkTopologyTestFiles/no-leaf.xml
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/DBUpdatesWrapper.java
* (delete) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/PKIProfiles/package-info.java
* (delete) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventWatcher.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/RocksDBCheckpoint.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/PKIProfiles/DefaultProfile.java
* (delete) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/metadata/LongCodec.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
* (delete) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/utils/TestHddsIdFactory.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/DBStoreBuilder.java
* (delete) hadoop-hdds/common/src/main/bin/hadoop-daemons.sh
* (delete) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteContainerCommandHandler.java
* (delete) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisherFactory.java
* (delete) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/utils/CertificateCodec.java
* (delete) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/states/package-info.java
* (delete) hadoop-hdds/common/src/main/conf/hadoop-policy.xml
* (delete) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/package-info.java
* (delete) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeUsage.java
* (delete) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/placement/TestDatanodeMetrics.java
* (delete) hadoop-hdds/common/src/main/bin/hadoop-config.cmd
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/Codec.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateApprover.java
* (delete) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/DatanodeIdYaml.java
* (delete) hadoop-hdds/docs/content/concept/Overview.md
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/tracing/GrpcClientInterceptor.java
* (delete) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMContainerPlacementRackAware.java
* (delete) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/package-info.java
* (delete) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
* (delete) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/AsyncChecker.java
* (delete) hadoop-hdds/server-scm/pom.xml
* (delete) 

[GitHub] [hadoop] anuengineer commented on issue #1675: HADOOP-16654:Delete hadoop-ozone and hadoop-hdds subprojects from apa…

2019-11-15 Thread GitBox
anuengineer commented on issue #1675: HADOOP-16654:Delete hadoop-ozone and 
hadoop-hdds subprojects from apa…
URL: https://github.com/apache/hadoop/pull/1675#issuecomment-554493878
 
 
   > @anuengineer Does the tag name `removed-ozone` and description `Removed 
hadoop-hdds and hadoop-ozone from trunk`look good to you ?
   
   +1. 





[jira] [Commented] (HADOOP-16697) audit/tune s3a authoritative flag in s3guard DDB Table

2019-11-15 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16975293#comment-16975293
 ] 

Steve Loughran commented on HADOOP-16697:
-----------------------------------------

Prune of tombstones was broken here ... it was always marking the parent 
directory as non-auth. The root cause turned out to be that we'd forgotten to 
ask for the is_deleted field in the query. Easier to fix than to track down.


> audit/tune s3a authoritative flag in s3guard DDB Table
> ------------------------------------------------------
>
> Key: HADOOP-16697
> URL: https://issues.apache.org/jira/browse/HADOOP-16697
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> S3A auth mode can cause confusion in deployments, because people expect there 
> never to be any HTTP requests to S3 in a path marked as authoritative.
> This is *not* the case when S3Guard doesn't have an entry for the path in the 
> table. Which is the state it is in when the directory was populated using 
> different tools (e.g AWS s3 command).
> Proposed
> 1. HADOOP-16684 to give more diagnostics about the bucket
> 2. add an audit command to take a path and verify that it is marked in 
> dynamoDB as authoritative *all the way down*
> This command is designed to be executed from the commandline and will return 
> different error codes based on different situations
> * path isn't guarded
> * path is not authoritative in s3a settings (dir, path)
> * path not known in table: use the 404/44 response
> * path contains 1+ dir entry which is non-auth
> 3. Use this audit after some of the bulk rename, delete, import, commit 
> (soon: upload, copy) operations to verify that, where appropriate, we do 
> update the directories. Particularly for incremental rename(), where I have 
> long suspected we may have to do more.
> 4. Review documentation and make it clear what is needed (import) after 
> uploading/generating data through other tools.
> I'm going to pull in the open JIRAs on this topic as they are all related
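A rough sketch of the proposed audit's control flow, using the exit-code situations listed above. Everything here — class, method, the map standing in for the DynamoDB table, and the specific exit-code numbers other than the 404/44 case named in the proposal — is a hypothetical stand-in, not the real S3Guard CLI:

```java
import java.util.List;
import java.util.Map;

/** Illustrative sketch: audit that a directory tree is marked
 *  authoritative "all the way down", one exit code per failure mode. */
public class AuthAuditSketch {
  static final int EXIT_OK = 0;
  static final int EXIT_NOT_GUARDED = 1;     // path isn't guarded
  static final int EXIT_NOT_AUTH_CONFIG = 2; // not authoritative in s3a settings
  static final int EXIT_NOT_FOUND = 44;      // "404/44": path not known in table
  static final int EXIT_NON_AUTH_ENTRY = 3;  // 1+ dir entry is non-auth

  /** authFlags: dir -> isAuthoritative; children: dir -> child dirs.
   *  Both maps stand in for the DDB table contents. */
  static int audit(Map<String, Boolean> authFlags,
                   Map<String, List<String>> children, String path) {
    Boolean auth = authFlags.get(path);
    if (auth == null) {
      return EXIT_NOT_FOUND;
    }
    if (!auth) {
      return EXIT_NON_AUTH_ENTRY;
    }
    for (String child : children.getOrDefault(path, List.of())) {
      int rc = audit(authFlags, children, child); // recurse all the way down
      if (rc != EXIT_OK) {
        return rc;
      }
    }
    return EXIT_OK;
  }
}
```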






[GitHub] [hadoop] goiri commented on a change in pull request #1711: HADOOP-16455. ABFS: Implement FileSystem.access() method.

2019-11-15 Thread GitBox
goiri commented on a change in pull request #1711: HADOOP-16455. ABFS: 
Implement FileSystem.access() method.
URL: https://github.com/apache/hadoop/pull/1711#discussion_r346933953
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
 ##
 @@ -62,6 +62,9 @@
   public static final String FS_AZURE_DISABLE_OUTPUTSTREAM_FLUSH = 
"fs.azure.disable.outputstream.flush";
   public static final String FS_AZURE_USER_AGENT_PREFIX_KEY = 
"fs.azure.user.agent.prefix";
   public static final String FS_AZURE_SSL_CHANNEL_MODE_KEY = 
"fs.azure.ssl.channel.mode";
+  /** Provides a config to enable/disable the checkAccess API.
+   *  By default this will be false.   */
 
 Review comment:
   This comment should be in DEFAULT_ENABLE_CHECK_ACCESS.
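For illustration, the suggested placement might look like this; the constant names and the default value are assumptions based on the review thread, not a copy of the actual patch:

```java
/** Sketch: the javadoc describing the default belongs on the DEFAULT_*
 *  constant, while the key constant documents what the property controls. */
public final class ConfigurationKeysSketch {
  /** Config to enable/disable the checkAccess API. */
  public static final String FS_AZURE_ENABLE_CHECK_ACCESS =
      "fs.azure.enable.check.access";
  /** By default the checkAccess API is disabled. */
  public static final boolean DEFAULT_ENABLE_CHECK_ACCESS = false;

  private ConfigurationKeysSketch() {
  }
}
```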





[jira] [Commented] (HADOOP-16683) Disable retry of FailoverOnNetworkExceptionRetry in case of wrapped AccessControlException

2019-11-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16975260#comment-16975260
 ] 

Hadoop QA commented on HADOOP-16683:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m  
2s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
23s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
38s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
7s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:63396beab41 |
| JIRA Issue | HADOOP-16683 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12985963/HADOOP-16683.branch-3.2.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a1012163b720 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.2 / ea9c74d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16672/testReport/ |
| Max. process+thread count | 1364 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16672/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Disable retry of FailoverOnNetworkExceptionRetry in case of wrapped 
> AccessControlException
> 

[jira] [Commented] (HADOOP-16712) Config ha.failover-controller.active-standby-elector.zk.op.retries is not in core-default.xml

2019-11-15 Thread Xieming Li (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16975255#comment-16975255
 ] 

Xieming Li commented on HADOOP-16712:
-

I think no additional test is required for this change.

> Config ha.failover-controller.active-standby-elector.zk.op.retries is not in 
> core-default.xml
> -
>
> Key: HADOOP-16712
> URL: https://issues.apache.org/jira/browse/HADOOP-16712
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Xieming Li
>Priority: Trivial
> Attachments: HADOOP-16712.001.patch
>
>







[GitHub] [hadoop] dineshchitlangia commented on issue #1675: HADOOP-16654:Delete hadoop-ozone and hadoop-hdds subprojects from apa…

2019-11-15 Thread GitBox
dineshchitlangia commented on issue #1675: HADOOP-16654:Delete hadoop-ozone and 
hadoop-hdds subprojects from apa…
URL: https://github.com/apache/hadoop/pull/1675#issuecomment-55355
 
 
   @anuengineer Do the tag name `removed-ozone` and the description `Removed 
hadoop-hdds and hadoop-ozone from trunk` look good to you?
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1721: HADOOP-16709. Consider having the ability to turn off TTL in S3Guard …

2019-11-15 Thread GitBox
hadoop-yetus commented on issue #1721: HADOOP-16709. Consider having the 
ability to turn off TTL in S3Guard …
URL: https://github.com/apache/hadoop/pull/1721#issuecomment-554428261
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1097 | trunk passed |
   | +1 | compile | 36 | trunk passed |
   | +1 | checkstyle | 28 | trunk passed |
   | +1 | mvnsite | 39 | trunk passed |
   | +1 | shadedclient | 824 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 29 | trunk passed |
   | 0 | spotbugs | 59 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 58 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 34 | the patch passed |
   | +1 | compile | 29 | the patch passed |
   | +1 | javac | 29 | the patch passed |
   | -0 | checkstyle | 20 | hadoop-tools/hadoop-aws: The patch generated 1 new 
+ 35 unchanged - 0 fixed = 36 total (was 35) |
   | +1 | mvnsite | 33 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 806 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | the patch passed |
   | +1 | findbugs | 61 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 80 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 3380 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1721/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1721 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 23b8044b9e6a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 92c28c1 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1721/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1721/1/testReport/ |
   | Max. process+thread count | 469 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1721/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1707: HADOOP-16697. Tune/audit auth mode

2019-11-15 Thread GitBox
hadoop-yetus commented on issue #1707: HADOOP-16697. Tune/audit auth mode
URL: https://github.com/apache/hadoop/pull/1707#issuecomment-554409242
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 73 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1075 | trunk passed |
   | +1 | compile | 36 | trunk passed |
   | +1 | checkstyle | 29 | trunk passed |
   | +1 | mvnsite | 40 | trunk passed |
   | +1 | shadedclient | 829 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 30 | trunk passed |
   | 0 | spotbugs | 59 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 56 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 35 | the patch passed |
   | +1 | compile | 29 | the patch passed |
   | +1 | javac | 29 | the patch passed |
   | -0 | checkstyle | 22 | hadoop-tools/hadoop-aws: The patch generated 5 new 
+ 58 unchanged - 0 fixed = 63 total (was 58) |
   | +1 | mvnsite | 33 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 802 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | the patch passed |
   | -1 | findbugs | 64 | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 86 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 3405 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Switch statement found in 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(String[], 
PrintStream) where one case falls through to the next case  At 
S3GuardTool.java:PrintStream) where one case falls through to the next case  At 
S3GuardTool.java:[lines 1301-1308] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1707 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 56464a5a4b20 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 92c28c1 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/2/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/2/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bgaborg commented on a change in pull request #1721: HADOOP-16709. Consider having the ability to turn off TTL in S3Guard …

2019-11-15 Thread GitBox
bgaborg commented on a change in pull request #1721: HADOOP-16709. Consider 
having the ability to turn off TTL in S3Guard …
URL: https://github.com/apache/hadoop/pull/1721#discussion_r346877206
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -2669,9 +2671,7 @@ S3AFileStatus innerGetFileStatus(final Path f,
   // modification - compare the modTime to check if metadata is up to date
   // Skip going to s3 if the file checked is a directory. Because if the
   // dest is also a directory, there's no difference.
-  // TODO After HADOOP-16085 the modification detection can be done with
 
 Review comment:
   If we really rely on modtime, that's bad: we never store the modtime at the 
moment a new file is created, so we end up querying it twice. Something to 
review and work on. (I'm not sure about this; it's just an FYI and should be 
verified.)





[GitHub] [hadoop] bgaborg commented on issue #1721: HADOOP-16709. Consider having the ability to turn off TTL in S3Guard …

2019-11-15 Thread GitBox
bgaborg commented on issue #1721: HADOOP-16709. Consider having the ability to 
turn off TTL in S3Guard …
URL: https://github.com/apache/hadoop/pull/1721#issuecomment-554403163
 
 
   Things to do:
   - Update docs
   - Review if javadocs should be updated





[GitHub] [hadoop] bgaborg opened a new pull request #1721: HADOOP-16709. Consider having the ability to turn off TTL in S3Guard …

2019-11-15 Thread GitBox
bgaborg opened a new pull request #1721: HADOOP-16709. Consider having the 
ability to turn off TTL in S3Guard …
URL: https://github.com/apache/hadoop/pull/1721
 
 
   …+ Authoritative mode
   
   Change-Id: I042cbf3bcb33107d98b3345293fb1bcfbf0de176
   
   





[jira] [Commented] (HADOOP-16683) Disable retry of FailoverOnNetworkExceptionRetry in case of wrapped AccessControlException

2019-11-15 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16975170#comment-16975170
 ] 

Adam Antal commented on HADOOP-16683:
-

Thanks [~pbacsko]! I have reuploaded the patch for branch-3.2. Could you please 
backport these once Jenkins has finished, [~snemeth]? Thanks!

> Disable retry of FailoverOnNetworkExceptionRetry in case of wrapped 
> AccessControlException
> --
>
> Key: HADOOP-16683
> URL: https://issues.apache.org/jira/browse/HADOOP-16683
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16683.001.patch, HADOOP-16683.002.patch, 
> HADOOP-16683.003.patch, HADOOP-16683.branch-3.1.001.patch, 
> HADOOP-16683.branch-3.2.001.patch, HADOOP-16683.branch-3.2.001.patch
>
>
> Follow up patch on HADOOP-16580.
> We successfully disabled the retry in case of an AccessControlException which 
> has resolved some of the cases, but in other cases AccessControlException is 
> wrapped inside another IOException and you can only get the original 
> exception by calling getCause().
> Let's add this extra case as well.
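
The idea above can be sketched as a cause-chain walk. This is an illustrative sketch, not the actual Hadoop RetryPolicy code: the real patch checks for {{org.apache.hadoop.security.AccessControlException}}, while here the JDK's {{java.security.AccessControlException}} stands in so the example is self-contained.

{code:java}
import java.io.IOException;
import java.security.AccessControlException;

public class WrappedAceCheck {
  /**
   * Walks the getCause() chain so a wrapped AccessControlException is
   * detected the same way as a direct one; a matching exception means
   * the caller should fail fast instead of retrying.
   */
  static boolean containsAccessControlException(Throwable t) {
    for (Throwable cur = t; cur != null; cur = cur.getCause()) {
      if (cur instanceof AccessControlException) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    // Wrapped case: IOException carrying an AccessControlException cause.
    Throwable wrapped = new IOException("rpc failed",
        new AccessControlException("denied"));
    System.out.println(containsAccessControlException(wrapped));
    // Plain network failure: no AccessControlException anywhere in the chain.
    System.out.println(containsAccessControlException(new IOException("net")));
  }
}
{code}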






[jira] [Updated] (HADOOP-16683) Disable retry of FailoverOnNetworkExceptionRetry in case of wrapped AccessControlException

2019-11-15 Thread Adam Antal (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HADOOP-16683:

Attachment: HADOOP-16683.branch-3.2.001.patch

> Disable retry of FailoverOnNetworkExceptionRetry in case of wrapped 
> AccessControlException
> --
>
> Key: HADOOP-16683
> URL: https://issues.apache.org/jira/browse/HADOOP-16683
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16683.001.patch, HADOOP-16683.002.patch, 
> HADOOP-16683.003.patch, HADOOP-16683.branch-3.1.001.patch, 
> HADOOP-16683.branch-3.2.001.patch, HADOOP-16683.branch-3.2.001.patch
>
>
> Follow up patch on HADOOP-16580.
> We successfully disabled the retry in case of an AccessControlException which 
> has resolved some of the cases, but in other cases AccessControlException is 
> wrapped inside another IOException and you can only get the original 
> exception by calling getCause().
> Let's add this extra case as well.






[jira] [Commented] (HADOOP-16683) Disable retry of FailoverOnNetworkExceptionRetry in case of wrapped AccessControlException

2019-11-15 Thread Peter Bacsko (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16975157#comment-16975157
 ] 

Peter Bacsko commented on HADOOP-16683:
---

[~adam.antal] we don't have build results for branch-3.2. The trick is to 
upload a patch, wait until the build starts, then upload the next one.

> Disable retry of FailoverOnNetworkExceptionRetry in case of wrapped 
> AccessControlException
> --
>
> Key: HADOOP-16683
> URL: https://issues.apache.org/jira/browse/HADOOP-16683
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16683.001.patch, HADOOP-16683.002.patch, 
> HADOOP-16683.003.patch, HADOOP-16683.branch-3.1.001.patch, 
> HADOOP-16683.branch-3.2.001.patch
>
>
> Follow up patch on HADOOP-16580.
> We successfully disabled the retry in case of an AccessControlException which 
> has resolved some of the cases, but in other cases AccessControlException is 
> wrapped inside another IOException and you can only get the original 
> exception by calling getCause().
> Let's add this extra case as well.






[GitHub] [hadoop] steveloughran commented on issue #1707: HADOOP-16697. Tune/audit auth mode

2019-11-15 Thread GitBox
steveloughran commented on issue #1707: HADOOP-16697. Tune/audit auth mode
URL: https://github.com/apache/hadoop/pull/1707#issuecomment-554385963
 
 
   Going to add an S3GuardInstrumentation interface so that the internals of the 
S3A FS aren't so exposed in the S3Guard API. This also lets us assume that the 
instrumentation is never null, which simplifies the code.
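
   The "never null" point is the null-object pattern. A minimal sketch under stated assumptions — the interface name comes from the comment, but the method names and `orNoop` helper are made up for illustration:

```java
public class InstrumentationSketch {
  /** Hypothetical callback interface; the method name is illustrative. */
  interface S3GuardInstrumentation {
    void entryAdded();
  }

  /** No-op instance handed out instead of null. */
  static final S3GuardInstrumentation NOOP = () -> { };

  /** Normalizes a possibly-null instrumentation to the no-op instance. */
  static S3GuardInstrumentation orNoop(S3GuardInstrumentation i) {
    return i == null ? NOOP : i;
  }

  public static void main(String[] args) {
    // Even with no real instrumentation wired in, the call site needs
    // no null check and cannot throw a NullPointerException.
    orNoop(null).entryAdded();
    System.out.println("no NPE");
  }
}
```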





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1707: HADOOP-16697. Tune/audit auth mode

2019-11-15 Thread GitBox
hadoop-yetus removed a comment on issue #1707: HADOOP-16697. Tune/audit auth 
mode
URL: https://github.com/apache/hadoop/pull/1707#issuecomment-551980383
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1970 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1417 | trunk passed |
   | +1 | compile | 38 | trunk passed |
   | +1 | checkstyle | 29 | trunk passed |
   | +1 | mvnsite | 43 | trunk passed |
   | +1 | shadedclient | 980 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 32 | trunk passed |
   | 0 | spotbugs | 70 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 67 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 39 | the patch passed |
   | +1 | compile | 29 | the patch passed |
   | +1 | javac | 29 | the patch passed |
   | -0 | checkstyle | 19 | hadoop-tools/hadoop-aws: The patch generated 1 new 
+ 39 unchanged - 0 fixed = 40 total (was 39) |
   | +1 | mvnsite | 33 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 994 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | the patch passed |
   | -1 | findbugs | 82 | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 80 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 6013 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Switch statement found in 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(String[], 
PrintStream) where one case falls through to the next case  At 
S3GuardTool.java:PrintStream) where one case falls through to the next case  At 
S3GuardTool.java:[lines 1301-1308] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1707 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e8c2c4b1f247 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 42fc888 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/1/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/1/testReport/ |
   | Max. process+thread count | 454 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Created] (HADOOP-16713) Use PathCapabilities for default configuring append mode for RollingFileSystemSink

2019-11-15 Thread Adam Antal (Jira)
Adam Antal created HADOOP-16713:
---

 Summary: Use PathCapabilities for default configuring append mode 
for RollingFileSystemSink
 Key: HADOOP-16713
 URL: https://issues.apache.org/jira/browse/HADOOP-16713
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 3.3.0
Reporter: Adam Antal


{{RollingFileSystemSink}} uses a filesystem to store metrics. The key 
{{allow-append}} is disabled by default, but if enabled, new metrics can be 
appended to an existing file.

Given that we have the {{PathCapabilities}} interface, we can set the default of 
the {{allow-append}} mode based on whether the append operation is supported, as 
reported by the {{FileSystem.hasPathCapability()}} call.
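
The proposed default selection can be sketched as below. This is an illustrative sketch, not the actual sink code: the {{fs.capability.paths.append}} string mirrors Hadoop's {{CommonPathCapabilities.FS_APPEND}}, while the probe interface and helper are made up so the example stands alone.

{code:java}
public class AppendDefaultSketch {
  /** Hypothetical stand-in for FileSystem.hasPathCapability(). */
  interface CapabilityProbe {
    boolean hasPathCapability(String path, String capability);
  }

  static final String FS_APPEND = "fs.capability.paths.append";

  /**
   * Probe the capability and only default allow-append to true when the
   * target filesystem reports support for the append operation.
   */
  static boolean defaultAllowAppend(CapabilityProbe fs, String basePath) {
    return fs.hasPathCapability(basePath, FS_APPEND);
  }

  public static void main(String[] args) {
    CapabilityProbe appendCapable = (p, c) -> FS_APPEND.equals(c);
    CapabilityProbe appendless = (p, c) -> false;
    System.out.println(defaultAllowAppend(appendCapable, "/metrics"));
    System.out.println(defaultAllowAppend(appendless, "/metrics"));
  }
}
{code}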






[GitHub] [hadoop] hadoop-yetus commented on issue #763: [WIP] HADOOP-15984. Update jersey from 1.19 to 2.x

2019-11-15 Thread GitBox
hadoop-yetus commented on issue #763: [WIP] HADOOP-15984. Update jersey from 
1.19 to 2.x
URL: https://github.com/apache/hadoop/pull/763#issuecomment-554334013
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 9 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 60 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1191 | trunk passed |
   | +1 | compile | 1066 | trunk passed |
   | +1 | checkstyle | 183 | trunk passed |
   | +1 | mvnsite | 495 | trunk passed |
   | +1 | shadedclient | 1581 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 464 | trunk passed |
   | 0 | spotbugs | 57 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | 0 | findbugs | 25 | branch/hadoop-project no findbugs output file 
(findbugsXml.xml) |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 36 | Maven dependency ordering for patch |
   | -1 | mvninstall | 21 | hadoop-yarn-server-nodemanager in the patch failed. 
|
   | -1 | mvninstall | 23 | hadoop-yarn-server-resourcemanager in the patch 
failed. |
   | -1 | mvninstall | 17 | hadoop-yarn-client in the patch failed. |
   | -1 | compile | 354 | root in the patch failed. |
   | -1 | javac | 354 | root in the patch failed. |
   | -0 | checkstyle | 176 | root: The patch generated 14 new + 678 unchanged - 
30 fixed = 692 total (was 708) |
   | -1 | mvnsite | 25 | hadoop-yarn-server-nodemanager in the patch failed. |
   | -1 | mvnsite | 26 | hadoop-yarn-server-resourcemanager in the patch 
failed. |
   | -1 | mvnsite | 19 | hadoop-yarn-client in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 12 | The patch has no ill-formed XML file. |
   | -1 | shadedclient | 204 | patch has errors when building and testing our 
client artifacts. |
   | -1 | javadoc | 27 | 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) |
   | -1 | javadoc | 20 | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client 
generated 33 new + 0 unchanged - 0 fixed = 33 total (was 0) |
   | 0 | findbugs | 14 | hadoop-project has no data from findbugs |
   | -1 | findbugs | 24 | hadoop-yarn-server-nodemanager in the patch failed. |
   | -1 | findbugs | 25 | hadoop-yarn-server-resourcemanager in the patch 
failed. |
   | -1 | findbugs | 18 | hadoop-yarn-client in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 14 | hadoop-project in the patch passed. |
   | -1 | unit | 524 | hadoop-common in the patch failed. |
   | +1 | unit | 198 | hadoop-kms in the patch passed. |
   | -1 | unit | 5731 | hadoop-hdfs in the patch failed. |
   | -1 | unit | 254 | hadoop-hdfs-httpfs in the patch failed. |
   | -1 | unit | 532 | hadoop-hdfs-rbf in the patch failed. |
   | -1 | unit | 202 | hadoop-yarn-common in the patch failed. |
   | -1 | unit | 30 | hadoop-yarn-server-nodemanager in the patch failed. |
   | -1 | unit | 31 | hadoop-yarn-server-resourcemanager in the patch failed. |
   | -1 | unit | 23 | hadoop-yarn-client in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 15538 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.viewfs.TestViewFsTrash |
   |   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.web.TestWebHdfsTokens |
   |   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
   |   | hadoop.hdfs.web.TestWebHdfsUrl |
   |   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   |   | hadoop.fs.http.server.TestHttpFSServer |
   |   | hadoop.hdfs.server.federation.metrics.TestRBFMetrics |
   |   | hadoop.yarn.client.api.impl.TestTimelineClientV2Impl |
   |   | hadoop.yarn.client.api.impl.TestTimelineClientForATS1_5 |
   |   | hadoop.yarn.client.api.impl.TestTimelineClient |
   |   | hadoop.yarn.client.api.impl.TestTimelineReaderClientImpl |
   |   | hadoop.yarn.webapp.TestWebApp |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-763/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/763 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux b3d047efb7b4 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |

[jira] [Commented] (HADOOP-16548) ABFS: Config to enable/disable flush operation

2019-11-15 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16975022#comment-16975022
 ] 

Steve Loughran commented on HADOOP-16548:
-

[~mandarinamdar]: There's no fundamental reason why this shouldn't go back in

[~tmarquardt] has been fairly rigorous about backporting all abfs changes to 
branch-3.2; I think the thing to do would be to work with him to sync branch-3.2 
up, with all the patches applied in the same order; that's the best way to 
keep it easy to pull in other changes.



> ABFS: Config to enable/disable flush operation
> --
>
> Key: HADOOP-16548
> URL: https://issues.apache.org/jira/browse/HADOOP-16548
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Bilahari T H
>Assignee: Sneha Vijayarajan
>Priority: Minor
> Fix For: 3.3.0
>
>
> Make flush operation enabled/disabled through configuration. This is part of 
> performance improvements for ABFS driver.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…

2019-11-15 Thread GitBox
hadoop-yetus commented on issue #1702: HDFS-14788 Use dynamic regex filter to 
ignore copy of source files in…
URL: https://github.com/apache/hadoop/pull/1702#issuecomment-554312788
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1114 | trunk passed |
   | +1 | compile | 29 | trunk passed |
   | +1 | checkstyle | 22 | trunk passed |
   | +1 | mvnsite | 30 | trunk passed |
   | +1 | shadedclient | 769 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | trunk passed |
   | 0 | spotbugs | 43 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 41 | trunk passed |
   | -0 | patch | 66 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 27 | the patch passed |
   | +1 | compile | 22 | the patch passed |
   | +1 | javac | 22 | the patch passed |
   | -0 | checkstyle | 16 | hadoop-tools/hadoop-distcp: The patch generated 22 
new + 21 unchanged - 0 fixed = 43 total (was 21) |
   | +1 | mvnsite | 25 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 784 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 23 | the patch passed |
   | +1 | findbugs | 47 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 721 | hadoop-distcp in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 3877 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1702/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1702 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 467703f43d36 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 92c28c1 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1702/3/artifact/out/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1702/3/testReport/ |
   | Max. process+thread count | 506 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1702/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Assigned] (HADOOP-16709) Consider having the ability to turn off TTL in S3Guard + Authoritative mode

2019-11-15 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-16709:
---

Assignee: Gabor Bota

> Consider having the ability to turn off TTL in S3Guard + Authoritative mode
> ---
>
> Key: HADOOP-16709
> URL: https://issues.apache.org/jira/browse/HADOOP-16709
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Rajesh Balamohan
>Assignee: Gabor Bota
>Priority: Minor
>
> Authoritative mode has TTL which is set to 15 minutes by default. However, 
> there are cases when we know for sure that the data wouldn't be 
> changed/updated.
> In certain cases, AppMaster ends up spending good amount of time in getSplits 
> due to TTL expiry. It would be great to have an option to disable TTL (or 
> specify as -1 when TTL shouldn't be checked).






[jira] [Commented] (HADOOP-16709) Consider having the ability to turn off TTL in S3Guard + Authoritative mode

2019-11-15 Thread Gabor Bota (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16975000#comment-16975000
 ] 

Gabor Bota commented on HADOOP-16709:
-

Talking offline with [~ste...@apache.org], we concluded that the most 
straightforward approach is to handle authoritative directories without TTL 
(you can set individual directories to be authoritative, not just the whole 
store). So there will be no metadata expiry check for an authoritative path or 
when authoritative mode is on.

[~rajesh.balamohan] if you want to switch TTL off for the full bucket, you can 
set a very large expiry value so the metadata effectively never expires. This 
workaround can be used until the fix lands.
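
As a rough illustration, the workaround above would be a core-site.xml fragment along these lines (the property name is assumed from the S3Guard configuration docs; verify it against your release before relying on it):

```xml
<!-- Hypothetical sketch: effectively disable S3Guard metadata expiry by
     setting a very large TTL, until per-path authoritative handling lands.
     Property name and time-suffix support are assumptions, not confirmed
     by this thread. -->
<property>
  <name>fs.s3a.metadatastore.metadata.ttl</name>
  <value>999d</value>
</property>
```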

> Consider having the ability to turn off TTL in S3Guard + Authoritative mode
> ---
>
> Key: HADOOP-16709
> URL: https://issues.apache.org/jira/browse/HADOOP-16709
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Rajesh Balamohan
>Priority: Minor
>
> Authoritative mode has TTL which is set to 15 minutes by default. However, 
> there are cases when we know for sure that the data wouldn't be 
> changed/updated.
> In certain cases, AppMaster ends up spending good amount of time in getSplits 
> due to TTL expiry. It would be great to have an option to disable TTL (or 
> specify as -1 when TTL shouldn't be checked).






[GitHub] [hadoop] hadoop-yetus commented on issue #1711: HADOOP-16455. ABFS: Implement FileSystem.access() method.

2019-11-15 Thread GitBox
hadoop-yetus commented on issue #1711: HADOOP-16455. ABFS: Implement 
FileSystem.access() method.
URL: https://github.com/apache/hadoop/pull/1711#issuecomment-554288619
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall |  | trunk passed |
   | +1 | compile | 32 | trunk passed |
   | +1 | checkstyle | 25 | trunk passed |
   | +1 | mvnsite | 35 | trunk passed |
   | +1 | shadedclient | 851 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 28 | trunk passed |
   | 0 | spotbugs | 50 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 47 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 28 | the patch passed |
   | +1 | compile | 25 | the patch passed |
   | +1 | javac | 25 | the patch passed |
   | -0 | checkstyle | 17 | hadoop-tools/hadoop-azure: The patch generated 5 
new + 6 unchanged - 1 fixed = 11 total (was 7) |
   | +1 | mvnsite | 27 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 820 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 22 | the patch passed |
   | +1 | findbugs | 57 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 75 | hadoop-azure in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 3381 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1711/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1711 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c4806dfe5043 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 92c28c1 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1711/4/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1711/4/testReport/ |
   | Max. process+thread count | 418 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1711/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16712) Config ha.failover-controller.active-standby-elector.zk.op.retries is not in core-default.xml

2019-11-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16974956#comment-16974956
 ] 

Hadoop QA commented on HADOOP-16712:


| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 35s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 19m 42s | trunk passed |
| +1 | compile | 16m 20s | trunk passed |
| +1 | mvnsite | 1m 21s | trunk passed |
| +1 | shadedclient | 50m 8s | branch has no errors when building and testing 
our client artifacts. |
| +1 | javadoc | 1m 23s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 46s | the patch passed |
| +1 | compile | 15m 25s | the patch passed |
| +1 | javac | 15m 25s | the patch passed |
| +1 | mvnsite | 1m 12s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 12m 52s | patch has no errors when building and testing 
our client artifacts. |
| +1 | javadoc | 1m 20s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 9m 18s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 42s | The patch does not generate ASF License warnings. |
| | | 95m 28s | |
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HADOOP-16712 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12985923/HADOOP-16712.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux 27d1cb2dda28 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 92c28c1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16671/testReport/ |
| Max. process+thread count | 1369 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16671/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Config ha.failover-controller.active-standby-elector.zk.op.retries is not in 
> core-default.xml
> -
>
> Key: HADOOP-16712
> URL: https://issues.apache.org/jira/browse/HADOOP-16712
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Xieming Li
>

[GitHub] [hadoop] bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: Implement FileSystem.access() method.

2019-11-15 Thread GitBox
bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: 
Implement FileSystem.access() method.
URL: https://github.com/apache/hadoop/pull/1711#discussion_r346708030
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 ##
 @@ -399,6 +403,10 @@ public long getAzureBlockSize() {
 return this.azureBlockSize;
   }
 
+  public boolean isCheckAccessEnabled() {
 
 Review comment:
   Currently the ABFS access() is a no-op. The functionality is being added 
behind a config flag to keep it backward compatible.





[GitHub] [hadoop] bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: Implement FileSystem.access() method.

2019-11-15 Thread GitBox
bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: 
Implement FileSystem.access() method.
URL: https://github.com/apache/hadoop/pull/1711#discussion_r346707854
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
 ##
 @@ -862,8 +862,13 @@ public AclStatus getAclStatus(final Path path) throws 
IOException {
*/
   @Override
   public void access(final Path path, FsAction mode) throws IOException {
 
 Review comment:
   Done





[GitHub] [hadoop] bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: Implement FileSystem.access() method.

2019-11-15 Thread GitBox
bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: 
Implement FileSystem.access() method.
URL: https://github.com/apache/hadoop/pull/1711#discussion_r346707830
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
 ##
 @@ -862,8 +862,13 @@ public AclStatus getAclStatus(final Path path) throws 
IOException {
*/
   @Override
   public void access(final Path path, FsAction mode) throws IOException {
-// TODO: make it no-op to unblock hive permission issue for now.
-// Will add a long term fix similar to the implementation in AdlFileSystem.
+LOG.debug("AzureBlobFileSystem.access path : {}, mode : {}", path, mode);
+Path qualifiedPath = makeQualified(path);
+try {
+  this.abfsStore.access(qualifiedPath, mode);
+}catch(AzureBlobFileSystemException ex){
 
 Review comment:
   Done





[GitHub] [hadoop] bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: Implement FileSystem.access() method.

2019-11-15 Thread GitBox
bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: 
Implement FileSystem.access() method.
URL: https://github.com/apache/hadoop/pull/1711#discussion_r346707768
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 ##
 @@ -264,6 +266,17 @@ public AbfsConfiguration getAbfsConfiguration() {
 return this.abfsConfiguration;
   }
 
+  public void access(Path path, FsAction mode)
+  throws AzureBlobFileSystemException {
+LOG.debug("access for filesystem: {}, path: {}, mode: {}",
+this.client.getFileSystem(), path, mode);
+if (!this.abfsConfiguration.isCheckAccessEnabled()) {
 
 Review comment:
   Currently the ABFS access() is a no-op. The functionality is being added 
behind a config flag to keep it backward compatible.
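
The backward-compatibility pattern described in this comment can be sketched as a standalone snippet. The names (`checkAccessEnabled`, `access`) are illustrative stand-ins, not the actual ABFS fields or signatures:

```java
// Minimal sketch of config-gating a new code path so default behaviour
// is unchanged: access() stays a no-op unless the flag is enabled.
public class AccessGateSketch {
    // Stands in for a flag like fs.azure.enable.check.access; defaults
    // to false so existing deployments see no behaviour change.
    static boolean checkAccessEnabled = false;

    static String access(String path) {
        if (!checkAccessEnabled) {
            return "noop";            // legacy behaviour preserved
        }
        return "checked:" + path;     // new permission-check code path
    }

    public static void main(String[] args) {
        System.out.println(access("/data"));  // prints "noop" by default
        checkAccessEnabled = true;
        System.out.println(access("/data"));  // prints "checked:/data"
    }
}
```

The point of the gate is that callers compiled against the old no-op behaviour keep working until the flag is explicitly turned on.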





[GitHub] [hadoop] bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: Implement FileSystem.access() method.

2019-11-15 Thread GitBox
bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: 
Implement FileSystem.access() method.
URL: https://github.com/apache/hadoop/pull/1711#discussion_r346706919
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
 ##
 @@ -62,6 +62,9 @@
   public static final String FS_AZURE_DISABLE_OUTPUTSTREAM_FLUSH = 
"fs.azure.disable.outputstream.flush";
   public static final String FS_AZURE_USER_AGENT_PREFIX_KEY = 
"fs.azure.user.agent.prefix";
   public static final String FS_AZURE_SSL_CHANNEL_MODE_KEY = 
"fs.azure.ssl.channel.mode";
+  /** Provides a config to enable/disable the checkAccess API
+   *  By default this will be false   */
 
 Review comment:
   Done





[GitHub] [hadoop] bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: Implement FileSystem.access() method.

2019-11-15 Thread GitBox
bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: 
Implement FileSystem.access() method.
URL: https://github.com/apache/hadoop/pull/1711#discussion_r346706819
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCheckAccess.java
 ##
 @@ -0,0 +1,296 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.azurebfs;
+
+import com.google.common.collect.Lists;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.utils.AclTestHelpers;
+import org.apache.hadoop.fs.permission.AclEntry;
+import org.apache.hadoop.fs.permission.AclEntryScope;
+import org.apache.hadoop.fs.permission.AclEntryType;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.security.AccessControlException;
+import org.junit.Assume;
 
 Review comment:
   Done





[GitHub] [hadoop] bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: Implement FileSystem.access() method.

2019-11-15 Thread GitBox
bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: 
Implement FileSystem.access() method.
URL: https://github.com/apache/hadoop/pull/1711#discussion_r346706598
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCheckAccess.java
 ##
 @@ -0,0 +1,296 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.azurebfs;
+
+import com.google.common.collect.Lists;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.utils.AclTestHelpers;
+import org.apache.hadoop.fs.permission.AclEntry;
+import org.apache.hadoop.fs.permission.AclEntryScope;
+import org.apache.hadoop.fs.permission.AclEntryType;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.security.AccessControlException;
+import org.junit.Assume;
+import org.junit.Test;
+
+import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION;
+import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_ENABLE_CHECK_ACCESS;
+import static 
org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CHECKACCESS_TEST_CLIENT_ID;
+import static 
org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CHECKACCESS_TEST_CLIENT_SECRET;
+import static 
org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CHECKACCESS_TEST_USER_GUID;
+import static 
org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CLIENT_ID;
+import static 
org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CLIENT_SECRET;
+import static 
org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_TEST_NAMESPACE_ENABLED_ACCOUNT;
+
+public class ITestAzureBlobFileSystemCheckAccess
+extends AbstractAbfsIntegrationTest {
+
+  private static final String TEST_FOLDER_PATH = "CheckAccessTestFolder";
+  private final FileSystem superUserFs;
+  private final FileSystem testUserFs;
+  private final String testUserGuid;
+  private final boolean isCheckAccessEnabled;
+
+  public ITestAzureBlobFileSystemCheckAccess() throws Exception {
+    super();
+    super.setup();
+    this.superUserFs = getFileSystem();
+    testUserGuid = getConfiguration()
+        .get(FS_AZURE_BLOB_FS_CHECKACCESS_TEST_USER_GUID);
+    this.testUserFs = getTestUserFs();
+    isCheckAccessEnabled = getConfiguration().isCheckAccessEnabled();
+  }
+
+  private FileSystem getTestUserFs() throws Exception {
+    // Save the superuser credentials, swap in the test user's, build a new
+    // FileSystem instance, then restore the original configuration values.
+    String orgClientId = getConfiguration().get(FS_AZURE_BLOB_FS_CLIENT_ID);
+    String orgClientSecret = getConfiguration()
+        .get(FS_AZURE_BLOB_FS_CLIENT_SECRET);
+    Boolean orgCreateFileSystemDuringInit = getConfiguration()
+        .getBoolean(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION, true);
+    getRawConfiguration().set(FS_AZURE_BLOB_FS_CLIENT_ID,
+        getConfiguration().get(FS_AZURE_BLOB_FS_CHECKACCESS_TEST_CLIENT_ID));
+    getRawConfiguration().set(FS_AZURE_BLOB_FS_CLIENT_SECRET,
+        getConfiguration().get(FS_AZURE_BLOB_FS_CHECKACCESS_TEST_CLIENT_SECRET));
+    getRawConfiguration()
+        .setBoolean(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION, false);
+    FileSystem fs = FileSystem.newInstance(getRawConfiguration());
+    getRawConfiguration().set(FS_AZURE_BLOB_FS_CLIENT_ID, orgClientId);
+    getRawConfiguration().set(FS_AZURE_BLOB_FS_CLIENT_SECRET, orgClientSecret);
+    getRawConfiguration()
+        .setBoolean(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION,
+            orgCreateFileSystemDuringInit);
+    return fs;
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testCheckAccessWithNullPath() throws IOException {
+
+superUserFs.access(null, FsAction.READ);
+  }
+
+  @Test(expected = NullPointerException.class)
+  public void testCheckAccessForFileWithNullFsAction() throws Exception {
+

[GitHub] [hadoop] bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: Implement FileSystem.access() method.

2019-11-15 Thread GitBox
bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: 
Implement FileSystem.access() method.
URL: https://github.com/apache/hadoop/pull/1711#discussion_r346706784
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCheckAccess.java
 ##
 @@ -0,0 +1,296 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.azurebfs;
+
+import com.google.common.collect.Lists;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.utils.AclTestHelpers;
+import org.apache.hadoop.fs.permission.AclEntry;
+import org.apache.hadoop.fs.permission.AclEntryScope;
+import org.apache.hadoop.fs.permission.AclEntryType;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.security.AccessControlException;
+import org.junit.Assume;
+import org.junit.Test;
+
+import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION;
+import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_ENABLE_CHECK_ACCESS;
+import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CHECKACCESS_TEST_CLIENT_ID;
+import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CHECKACCESS_TEST_CLIENT_SECRET;
+import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CHECKACCESS_TEST_USER_GUID;
+import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CLIENT_ID;
+import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CLIENT_SECRET;
+import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_TEST_NAMESPACE_ENABLED_ACCOUNT;
+
+public class ITestAzureBlobFileSystemCheckAccess
+extends AbstractAbfsIntegrationTest {
+
+  private static final String TEST_FOLDER_PATH = "CheckAccessTestFolder";
+  private final FileSystem superUserFs;
+  private final FileSystem testUserFs;
+  private final String testUserGuid;
+  private final boolean isCheckAccessEnabled;
+
+  public ITestAzureBlobFileSystemCheckAccess() throws Exception {
+super();
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: Implement FileSystem.access() method.

2019-11-15 Thread GitBox
bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: 
Implement FileSystem.access() method.
URL: https://github.com/apache/hadoop/pull/1711#discussion_r346706733
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCheckAccess.java
 ##
 @@ -0,0 +1,296 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.azurebfs;
+
+import com.google.common.collect.Lists;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.utils.AclTestHelpers;
+import org.apache.hadoop.fs.permission.AclEntry;
+import org.apache.hadoop.fs.permission.AclEntryScope;
+import org.apache.hadoop.fs.permission.AclEntryType;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.security.AccessControlException;
+import org.junit.Assume;
+import org.junit.Test;
+
+import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION;
+import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_ENABLE_CHECK_ACCESS;
+import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CHECKACCESS_TEST_CLIENT_ID;
+import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CHECKACCESS_TEST_CLIENT_SECRET;
+import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CHECKACCESS_TEST_USER_GUID;
+import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CLIENT_ID;
+import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CLIENT_SECRET;
+import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_TEST_NAMESPACE_ENABLED_ACCOUNT;
+
+public class ITestAzureBlobFileSystemCheckAccess
+extends AbstractAbfsIntegrationTest {
+
+  private static final String TEST_FOLDER_PATH = "CheckAccessTestFolder";
+  private final FileSystem superUserFs;
+  private final FileSystem testUserFs;
+  private final String testUserGuid;
+  private final boolean isCheckAccessEnabled;
+
+  public ITestAzureBlobFileSystemCheckAccess() throws Exception {
+    super();
+    super.setup();
+    this.superUserFs = getFileSystem();
+    testUserGuid = getConfiguration()
+        .get(FS_AZURE_BLOB_FS_CHECKACCESS_TEST_USER_GUID);
+    this.testUserFs = getTestUserFs();
+    isCheckAccessEnabled = getConfiguration().isCheckAccessEnabled();
+  }
+
+  private FileSystem getTestUserFs() throws Exception {
+    // Save the superuser credentials, swap in the test user's, build a new
+    // FileSystem instance, then restore the original configuration values.
+    String orgClientId = getConfiguration().get(FS_AZURE_BLOB_FS_CLIENT_ID);
+    String orgClientSecret = getConfiguration()
+        .get(FS_AZURE_BLOB_FS_CLIENT_SECRET);
+    Boolean orgCreateFileSystemDuringInit = getConfiguration()
+        .getBoolean(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION, true);
+    getRawConfiguration().set(FS_AZURE_BLOB_FS_CLIENT_ID,
+        getConfiguration().get(FS_AZURE_BLOB_FS_CHECKACCESS_TEST_CLIENT_ID));
+    getRawConfiguration().set(FS_AZURE_BLOB_FS_CLIENT_SECRET,
+        getConfiguration().get(FS_AZURE_BLOB_FS_CHECKACCESS_TEST_CLIENT_SECRET));
+    getRawConfiguration()
+        .setBoolean(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION, false);
+    FileSystem fs = FileSystem.newInstance(getRawConfiguration());
+    getRawConfiguration().set(FS_AZURE_BLOB_FS_CLIENT_ID, orgClientId);
+    getRawConfiguration().set(FS_AZURE_BLOB_FS_CLIENT_SECRET, orgClientSecret);
+    getRawConfiguration()
+        .setBoolean(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION,
+            orgCreateFileSystemDuringInit);
+    return fs;
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testCheckAccessWithNullPath() throws IOException {
+
+superUserFs.access(null, FsAction.READ);
+  }
+
+  @Test(expected = NullPointerException.class)
+  public void testCheckAccessForFileWithNullFsAction() throws Exception {
+

[GitHub] [hadoop] bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: Implement FileSystem.access() method.

2019-11-15 Thread GitBox
bilaharith commented on a change in pull request #1711: HADOOP-16455. ABFS: 
Implement FileSystem.access() method.
URL: https://github.com/apache/hadoop/pull/1711#discussion_r346706660
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCheckAccess.java
 ##
 @@ -0,0 +1,296 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.azurebfs;
+
+import com.google.common.collect.Lists;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.utils.AclTestHelpers;
+import org.apache.hadoop.fs.permission.AclEntry;
+import org.apache.hadoop.fs.permission.AclEntryScope;
+import org.apache.hadoop.fs.permission.AclEntryType;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.security.AccessControlException;
+import org.junit.Assume;
+import org.junit.Test;
+
+import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION;
+import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_ENABLE_CHECK_ACCESS;
+import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CHECKACCESS_TEST_CLIENT_ID;
+import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CHECKACCESS_TEST_CLIENT_SECRET;
+import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CHECKACCESS_TEST_USER_GUID;
+import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CLIENT_ID;
+import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_BLOB_FS_CLIENT_SECRET;
+import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_TEST_NAMESPACE_ENABLED_ACCOUNT;
+
+public class ITestAzureBlobFileSystemCheckAccess
+extends AbstractAbfsIntegrationTest {
+
+  private static final String TEST_FOLDER_PATH = "CheckAccessTestFolder";
+  private final FileSystem superUserFs;
+  private final FileSystem testUserFs;
+  private final String testUserGuid;
+  private final boolean isCheckAccessEnabled;
+
+  public ITestAzureBlobFileSystemCheckAccess() throws Exception {
+    super();
+    super.setup();
+    this.superUserFs = getFileSystem();
+    testUserGuid = getConfiguration()
+        .get(FS_AZURE_BLOB_FS_CHECKACCESS_TEST_USER_GUID);
+    this.testUserFs = getTestUserFs();
+    isCheckAccessEnabled = getConfiguration().isCheckAccessEnabled();
+  }
+
+  private FileSystem getTestUserFs() throws Exception {
+    // Save the superuser credentials, swap in the test user's, build a new
+    // FileSystem instance, then restore the original configuration values.
+    String orgClientId = getConfiguration().get(FS_AZURE_BLOB_FS_CLIENT_ID);
+    String orgClientSecret = getConfiguration()
+        .get(FS_AZURE_BLOB_FS_CLIENT_SECRET);
+    Boolean orgCreateFileSystemDuringInit = getConfiguration()
+        .getBoolean(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION, true);
+    getRawConfiguration().set(FS_AZURE_BLOB_FS_CLIENT_ID,
+        getConfiguration().get(FS_AZURE_BLOB_FS_CHECKACCESS_TEST_CLIENT_ID));
+    getRawConfiguration().set(FS_AZURE_BLOB_FS_CLIENT_SECRET,
+        getConfiguration().get(FS_AZURE_BLOB_FS_CHECKACCESS_TEST_CLIENT_SECRET));
+    getRawConfiguration()
+        .setBoolean(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION, false);
+    FileSystem fs = FileSystem.newInstance(getRawConfiguration());
+    getRawConfiguration().set(FS_AZURE_BLOB_FS_CLIENT_ID, orgClientId);
+    getRawConfiguration().set(FS_AZURE_BLOB_FS_CLIENT_SECRET, orgClientSecret);
+    getRawConfiguration()
+        .setBoolean(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION,
+            orgCreateFileSystemDuringInit);
+    return fs;
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testCheckAccessWithNullPath() throws IOException {
+
+superUserFs.access(null, FsAction.READ);
+  }
+
+  @Test(expected = NullPointerException.class)
+  public void testCheckAccessForFileWithNullFsAction() throws Exception {
+