[jira] [Updated] (HADOOP-17165) Implement service-user feature in DecayRPCScheduler

2020-08-11 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-17165:
--
Attachment: after.png
before.png

> Implement service-user feature in DecayRPCScheduler
> ---
>
> Key: HADOOP-17165
> URL: https://issues.apache.org/jira/browse/HADOOP-17165
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-17165.001.patch, HADOOP-17165.002.patch, 
> after.png, before.png
>
>
> In our cluster, we want to use FairCallQueue to limit heavy users, but we do 
> not want to restrict certain users who are submitting important requests. This 
> jira proposes to implement a service-user feature so that such users are always 
> scheduled in the high-priority queue.
> According to HADOOP-9640, the initial concept of FCQ included this feature, but 
> it was never implemented.
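
A minimal configuration sketch of how this might look on the NameNode RPC port, 
assuming FairCallQueue with DecayRpcScheduler on port 8020. The service-user 
property name below is illustrative only; the actual key is defined by the 
attached patch:

{code:xml}
<!-- core-site.xml: FairCallQueue backed by DecayRpcScheduler on the 8020 RPC port -->
<property>
  <name>ipc.8020.callqueue.impl</name>
  <value>org.apache.hadoop.ipc.FairCallQueue</value>
</property>
<property>
  <name>ipc.8020.scheduler.impl</name>
  <value>org.apache.hadoop.ipc.DecayRpcScheduler</value>
</property>
<!-- Hypothetical key: users listed here would always be scheduled in the
     highest-priority queue, regardless of their recent call volume. -->
<property>
  <name>ipc.8020.decay-scheduler.service-users</name>
  <value>hbase,hive</value>
</property>
{code}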






[jira] [Commented] (HADOOP-17204) Fix typo in Hadoop KMS document

2020-08-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176076#comment-17176076
 ] 

Hadoop QA commented on HADOOP-17204:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
46m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/28/artifact/out/Dockerfile
 |
| JIRA Issue | HADOOP-17204 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13009523/HADOOP-17204.001.patch
 |
| Optional Tests | dupname asflicense mvnsite |
| uname | Linux ab74aa3615ba 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 11cec9ab940 |
| Max. process+thread count | 306 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/28/console |
| versions | git=2.17.1 maven=3.6.0 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> Fix typo in Hadoop KMS document
> ---
>
> Key: HADOOP-17204
> URL: https://issues.apache.org/jira/browse/HADOOP-17204
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, kms
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-17204.001.patch
>
>
> [https://hadoop.apache.org/docs/r3.3.0/hadoop-kms/index.html#HTTP_Kerberos_Principals_Configuration]
> bq. In order to be able to access directly a specific KMS instance, the KMS 
> instance must also have Keberos service name with its own hostname. This is 
> required for monitoring and admin purposes.
> Keberos -> Kerberos






[jira] [Commented] (HADOOP-17204) Fix typo in Hadoop KMS document

2020-08-11 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176032#comment-17176032
 ] 

Akira Ajisaka commented on HADOOP-17204:


+1

> Fix typo in Hadoop KMS document
> ---
>
> Key: HADOOP-17204
> URL: https://issues.apache.org/jira/browse/HADOOP-17204
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, kms
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-17204.001.patch
>
>
> [https://hadoop.apache.org/docs/r3.3.0/hadoop-kms/index.html#HTTP_Kerberos_Principals_Configuration]
> bq. In order to be able to access directly a specific KMS instance, the KMS 
> instance must also have Keberos service name with its own hostname. This is 
> required for monitoring and admin purposes.
> Keberos -> Kerberos






[jira] [Commented] (HADOOP-17204) Fix typo in Hadoop KMS document

2020-08-11 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176021#comment-17176021
 ] 

Fei Hui commented on HADOOP-17204:
--

+1

> Fix typo in Hadoop KMS document
> ---
>
> Key: HADOOP-17204
> URL: https://issues.apache.org/jira/browse/HADOOP-17204
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, kms
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-17204.001.patch
>
>
> [https://hadoop.apache.org/docs/r3.3.0/hadoop-kms/index.html#HTTP_Kerberos_Principals_Configuration]
> bq. In order to be able to access directly a specific KMS instance, the KMS 
> instance must also have Keberos service name with its own hostname. This is 
> required for monitoring and admin purposes.
> Keberos -> Kerberos






[jira] [Updated] (HADOOP-17204) Fix typo in Hadoop KMS document

2020-08-11 Thread Xieming Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xieming Li updated HADOOP-17204:

Attachment: HADOOP-17204.001.patch
Status: Patch Available  (was: Open)

[~aajisaka], thank you for reporting this.

I have fixed this typo.

> Fix typo in Hadoop KMS document
> ---
>
> Key: HADOOP-17204
> URL: https://issues.apache.org/jira/browse/HADOOP-17204
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, kms
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-17204.001.patch
>
>
> [https://hadoop.apache.org/docs/r3.3.0/hadoop-kms/index.html#HTTP_Kerberos_Principals_Configuration]
> bq. In order to be able to access directly a specific KMS instance, the KMS 
> instance must also have Keberos service name with its own hostname. This is 
> required for monitoring and admin purposes.
> Keberos -> Kerberos






[GitHub] [hadoop] bshashikant opened a new pull request #2219: HDFS-15524. Add edit log entry for Snapshot deletion GC thread snapshot deletion.

2020-08-11 Thread GitBox


bshashikant opened a new pull request #2219:
URL: https://github.com/apache/hadoop/pull/2219


   please check https://issues.apache.org/jira/browse/HDFS-15524
   






[jira] [Assigned] (HADOOP-17204) Fix typo in Hadoop KMS document

2020-08-11 Thread Xieming Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xieming Li reassigned HADOOP-17204:
---

Assignee: Xieming Li

> Fix typo in Hadoop KMS document
> ---
>
> Key: HADOOP-17204
> URL: https://issues.apache.org/jira/browse/HADOOP-17204
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, kms
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Trivial
>  Labels: newbie
>
> [https://hadoop.apache.org/docs/r3.3.0/hadoop-kms/index.html#HTTP_Kerberos_Principals_Configuration]
> bq. In order to be able to access directly a specific KMS instance, the KMS 
> instance must also have Keberos service name with its own hostname. This is 
> required for monitoring and admin purposes.
> Keberos -> Kerberos






[jira] [Created] (HADOOP-17204) Fix typo in Hadoop KMS document

2020-08-11 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17204:
--

 Summary: Fix typo in Hadoop KMS document
 Key: HADOOP-17204
 URL: https://issues.apache.org/jira/browse/HADOOP-17204
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation, kms
Reporter: Akira Ajisaka


[https://hadoop.apache.org/docs/r3.3.0/hadoop-kms/index.html#HTTP_Kerberos_Principals_Configuration]

bq. In order to be able to access directly a specific KMS instance, the KMS 
instance must also have Keberos service name with its own hostname. This is 
required for monitoring and admin purposes.

Keberos -> Kerberos






[jira] [Commented] (HADOOP-17165) Implement service-user feature in DecayRPCScheduler

2020-08-11 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176003#comment-17176003
 ] 

Chao Sun commented on HADOOP-17165:
---

Thanks [~tasanuma] for pinging. I like the simplicity of this patch and think 
it would be useful. Could you please create a GitHub PR for this and fix the 
failed UT?

On the patch, I'm wondering whether it would be useful to allow admins to 
dynamically refresh the list rather than having to do a full restart and 
failover. Do you have any thoughts there? Also, can you share some results of 
using this in your clusters?
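
A related note on the refresh question: Hadoop already ships a generic hook that 
rebuilds the call queue and its scheduler from the current configuration, so 
reusing it is one possible direction. Whether it would pick up an updated 
service-user list depends on how the patch reads the property, so the following 
is only a sketch of the admin workflow:

{code}
# Edit core-site.xml on the NameNode, then ask it to rebuild the call queue and scheduler:
hdfs dfsadmin -refreshCallQueue
{code}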

> Implement service-user feature in DecayRPCScheduler
> ---
>
> Key: HADOOP-17165
> URL: https://issues.apache.org/jira/browse/HADOOP-17165
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-17165.001.patch, HADOOP-17165.002.patch
>
>
> In our cluster, we want to use FairCallQueue to limit heavy users, but we do 
> not want to restrict certain users who are submitting important requests. This 
> jira proposes to implement a service-user feature so that such users are always 
> scheduled in the high-priority queue.
> According to HADOOP-9640, the initial concept of FCQ included this feature, but 
> it was never implemented.






[GitHub] [hadoop] szetszwo merged pull request #2218: HDFS-15523. Fix findbugs warnings.

2020-08-11 Thread GitBox


szetszwo merged pull request #2218:
URL: https://github.com/apache/hadoop/pull/2218


   






[GitHub] [hadoop] liusheng closed pull request #2211: HDFS-15098. Add SM4 encryption method for HDFS

2020-08-11 Thread GitBox


liusheng closed pull request #2211:
URL: https://github.com/apache/hadoop/pull/2211


   






[GitHub] [hadoop] hadoop-yetus commented on pull request #2218: HDFS-15523. Fix findbugs warnings.

2020-08-11 Thread GitBox


hadoop-yetus commented on pull request #2218:
URL: https://github.com/apache/hadoop/pull/2218#issuecomment-672426232


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 35s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  28m 36s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m  9s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 11s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  2s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   3m  0s |  hadoop-hdfs-project/hadoop-hdfs in trunk 
has 2 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 0 unchanged - 2 
fixed = 0 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 53s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 10s |  
hadoop-hdfs-project/hadoop-hdfs generated 0 new + 0 unchanged - 2 fixed = 0 
total (was 2)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  95m 56s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 178m  5s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
   |   | hadoop.hdfs.TestMultipleNNPortQOP |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2218/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2218 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f7001c2e3d56 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3fd3aeb621e |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | findbugs | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2218/1/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2218/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop

[GitHub] [hadoop] hadoop-yetus commented on pull request #2212: HDFS-15496. Add UI for deleted snapshots

2020-08-11 Thread GitBox


hadoop-yetus commented on pull request #2212:
URL: https://github.com/apache/hadoop/pull/2212#issuecomment-672413667


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  6s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 10s |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 18s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   3m 49s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  5s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 49s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   2m 26s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   3m  7s |  hadoop-hdfs-project/hadoop-hdfs in trunk 
has 2 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 54s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 10s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   4m 10s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 42s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   3m 42s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 54s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 24s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   5m 47s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 59s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 108m 40s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 220m 41s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.TestMultipleNNPortQOP |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2212/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2212 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 27fff0c16184 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3fd3aeb621e |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | findbugs | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2212/2/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2212/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   

[GitHub] [hadoop] hadoop-yetus commented on pull request #2149: HADOOP-13230. S3A to optionally retain directory markers

2020-08-11 Thread GitBox


hadoop-yetus commented on pull request #2149:
URL: https://github.com/apache/hadoop/pull/2149#issuecomment-672410173


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  33m  5s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  1s |  The patch appears to include 
24 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 22s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  29m 43s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 28s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  16m 50s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 44s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 25s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 46s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 15s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 27s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 35s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 46s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |  18m 46s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 4 new + 2045 unchanged - 
4 fixed = 2049 total (was 2049)  |
   | +1 :green_heart: |  compile  |  16m 44s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  javac  |  16m 44s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 4 new + 1939 unchanged - 
4 fixed = 1943 total (was 1943)  |
   | -0 :warning: |  checkstyle  |   3m 23s |  root: The patch generated 16 new 
+ 72 unchanged - 2 fixed = 88 total (was 74)  |
   | +1 :green_heart: |  mvnsite  |   2m 23s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 12 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch 3 line(s) with tabs.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 54s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 14s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   1m 23s |  hadoop-tools/hadoop-aws generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 17s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 39s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 211m 16s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Found reliance on default encoding in 
org.apache.hadoop.fs.s3a.tools.MarkerTool.run(String[], PrintStream):in 
org.apache.hadoop.fs.s3a.tools.MarkerTool.run(String[], PrintStream): new 
java.io.FileWriter(String)  At MarkerTool.java:[line 284] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2149/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2149 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs c

[GitHub] [hadoop] szetszwo commented on pull request #2218: HDFS-15523. Fix findbugs warnings.

2020-08-11 Thread GitBox


szetszwo commented on pull request #2218:
URL: https://github.com/apache/hadoop/pull/2218#issuecomment-672392664


   @liuml07, thanks for the review and the hint.






[GitHub] [hadoop] sguggilam commented on pull request #2197: HADOOP-17159 Ability for forceful relogin in UserGroupInformation class

2020-08-11 Thread GitBox


sguggilam commented on pull request #2197:
URL: https://github.com/apache/hadoop/pull/2197#issuecomment-672370883


   @arp7 @kihwal @xiaoyuyao Can you please review and provide your feedback?






[jira] [Commented] (HADOOP-17165) Implement service-user feature in DecayRPCScheduler

2020-08-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175880#comment-17175880
 ] 

Hadoop QA commented on HADOOP-17165:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
0s{color} | {color:blue} markdownlint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
12s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
45s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
14s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
39s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
41s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {co

[GitHub] [hadoop] szetszwo opened a new pull request #2218: HDFS-15523. Fix findbugs warnings.

2020-08-11 Thread GitBox


szetszwo opened a new pull request #2218:
URL: https://github.com/apache/hadoop/pull/2218


   See https://issues.apache.org/jira/browse/HDFS-15523






[GitHub] [hadoop] szetszwo commented on pull request #2203: HDFS-15520 Use visitor pattern to visit namespace tree

2020-08-11 Thread GitBox


szetszwo commented on pull request #2203:
URL: https://github.com/apache/hadoop/pull/2203#issuecomment-672308956


   Thanks for pointing that out. Will fix it.






[GitHub] [hadoop] sodonnel commented on pull request #2203: HDFS-15520 Use visitor pattern to visit namespace tree

2020-08-11 Thread GitBox


sodonnel commented on pull request #2203:
URL: https://github.com/apache/hadoop/pull/2203#issuecomment-672298155


   Looks like this PR introduced two new findbugs warnings. Could you please 
check and fix them?






[GitHub] [hadoop] JohnZZGithub commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree

2020-08-11 Thread GitBox


JohnZZGithub commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-672293070


   Thank you! Let me check ViewFs.java. Function-wise, this patch has worked for 
MR jobs and HDFS use cases in our internal clusters.
   
   > I will review it in a day or two, thanks.
   > BTW, you may need similar changes in ViewFs.java as well; I think nfly is 
also missing there.
   
   






[GitHub] [hadoop] steveloughran commented on pull request #2149: HADOOP-13230. S3A to optionally retain directory markers

2020-08-11 Thread GitBox


steveloughran commented on pull request #2149:
URL: https://github.com/apache/hadoop/pull/2149#issuecomment-672283597


   The latest patch tweaks the marker tool, but also adds PathCapabilities probes 
to the S3A store so you can see (a) whether an instance is marker-aware and (b) 
whether markers are being kept or deleted on a given path. Look at the docs for 
details.
   ```
   s bin/hadoop jar $CLOUDSTORE pathcapability 
fs.s3a.capability.directory.marker.keep s3a://stevel-london/tables
   Probing s3a://stevel-london/tables for capability 
fs.s3a.capability.directory.marker.keep
   2020-08-11 22:15:15,501 [main] INFO  s3a.S3AFileSystem 
(S3Guard.java:logS3GuardDisabled(1152)) - S3Guard is disabled on this bucket: 
stevel-london
   2020-08-11 22:15:15,506 [main] INFO  impl.DirectoryPolicyImpl 
(DirectoryPolicyImpl.java:getDirectoryPolicy(143)) - Directory markers will be 
kept on authoritative paths
   Using filesystem s3a://stevel-london
   Path s3a://stevel-london/tables has capability 
fs.s3a.capability.directory.marker.keep
   ```
   branch-3.2 doesn't support path capabilities, but it does support stream 
capabilities; I'll extend cloudstore to have a `streamcapabilities` command too.
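   
   The same probe can be made from code. A minimal sketch assuming a Hadoop 3.3+ 
client, where `FileSystem` implements `PathCapabilities`; the capability name is 
the one shown above and may still change before the patch lands:
   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;

   public class MarkerProbe {
     public static void main(String[] args) throws Exception {
       // Hypothetical bucket and path; replace with your own.
       Path path = new Path("s3a://example-bucket/tables");
       FileSystem fs = FileSystem.get(path.toUri(), new Configuration());
       // True if this S3A instance keeps directory markers under the given path.
       boolean keep = fs.hasPathCapability(path,
           "fs.s3a.capability.directory.marker.keep");
       System.out.println("Markers kept under " + path + "? " + keep);
     }
   }
   ```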






[GitHub] [hadoop] steveloughran commented on a change in pull request #2149: HADOOP-13230. S3A to optionally retain directory markers

2020-08-11 Thread GitBox


steveloughran commented on a change in pull request #2149:
URL: https://github.com/apache/hadoop/pull/2149#discussion_r468868036



##
File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/directory_markers.md
##
@@ -0,0 +1,416 @@
+
+
+# Controlling the S3A Directory Marker Behavior
+
+##  Critical: this is not backwards compatible!
+
+This document shows how the performance of S3 IO, especially for applications
+writing many files (such as Hive) or working with versioned S3 buckets, can be
+improved by changing the S3A directory marker retention policy.
+
+Changing the policy from the default value, `"delete"`, _is not backwards 
compatible_.
+
+Versions of Hadoop which are incompatible with other marker retention policies
+
+---
+|  Branch| Compatible Since | Future Fix Planned? |
+||--|-|
+| Hadoop 2.x |  | NO  |
+| Hadoop 3.0 |  | NO  |
+| Hadoop 3.1 |  | Yes |
+| Hadoop 3.2 |  | Yes |
+| Hadoop 3.3 |  3.3.1   | Done|
+---
+
+External Hadoop-based applications should also be assumed to be incompatible
+unless otherwise stated/known.
+
+It is only safe to change the directory marker policy if the following
+ conditions are met:
+
+1. You know exactly which applications are writing to and reading from
+   (including backing up) an S3 bucket.
+2. You know that all applications which read data from the bucket are compatible.
+
+###  Applications backing up data.
+
+It is not enough to have a compatible version of Apache Hadoop; any
+application which backs up an S3 bucket or copies it elsewhere must also have
+an S3 connector which is compatible. For the Hadoop codebase, that means that if
+distcp is used, it _must_ be from a compatible Hadoop version.
+
+###  How will incompatible applications/versions 
fail? 
+
+Applications using an incompatible version of the S3A connector will mistake
+directories containing data for empty directories. This means that
+
+* Listing directories/directory trees may exclude files which exist.
+* Queries across the data will miss data files.
+* Renaming a directory to a new location may exclude files underneath.
+
+###  If an application has updated a directory tree 
incompatibly-- what can be done?
+
+There's a tool on the hadoop command line, [marker tool](#marker-tool) which 
can audit
+a bucket/path for markers, and clean up any which were found.
+It can be used to make a bucket compatible with older applications.
+
+Now that this is all clear, let's explain the problem.
+
+
+##  Background: Directory Markers: what and why?
+
+Amazon S3 is not a filesystem, it is an object store.
+
+The S3A connector not only provides a hadoop-compatible API to interact with
+data in S3, it also tries to maintain the filesystem metaphor.
+
+One key aspect of the filesystem metaphor is "directories".
+
+ The directory concept
+
+In normal Unix-style filesystems, the "filesystem" is really a "directory and
+file tree" in which files are always stored in "directories"
+
+
+* A directory may contain 0 or more files.
+* A directory may contain 0 or more directories "subdirectories"
+* At the base of a filesystem is the "root directory"
+* All files MUST be in a directory "the parent directory"
+* All directories other than the root directory must be in another directory.
+* If a directory contains no files or directories, it is "empty"
+* When a directory is _listed_, all files and directories in it are enumerated 
and returned to the caller
+
+
+The S3A connector mocks this entire metaphor by grouping all objects which have
+the same prefix as if they are in the same directory tree.
+
+If there are two objects `a/b/file1` and `a/b/file2` then S3A pretends that 
there is a
+directory `/a/b` containing two files `file1`  and `file2`.
+
+The directory itself does not exist.
+
+There's a bit of a complication here.
+
+ What does `mkdirs()` do?
+
+1. In HDFS and other "real" filesystems, when `mkdirs()` is invoked on a path
+whose parents are all directories, then an _empty directory_ is created.
+
+1. This directory can be probed for "it exists" and listed (an empty list is
+returned)
+
+1. Files and other directories can be created in it.
+
+
+Lots of code contains a big assumption here: after you create a directory it
+exists. They also assume that after files in a directory are deleted, the
+directory still exists.
+
+Given that the filesystem mimics directories just by aggregating objects which
+share a prefix, how can you have empty directories?
+
+The original Hadoop `s3n://` connector created a Directory Marker -any path 
ending
+in `_$folder$` was considered to be a sign that a directory existed. A call to
+`mkdir(s3n://bucket/a/b)` would create a new marker 

[GitHub] [hadoop] vivekratnavel commented on pull request #2212: HDFS-15496. Add UI for deleted snapshots

2020-08-11 Thread GitBox


vivekratnavel commented on pull request #2212:
URL: https://github.com/apache/hadoop/pull/2212#issuecomment-672276399


   A screenshot of the UI with the latest patch is shown below:
   
   https://user-images.githubusercontent.com/1051198/89948363-d3251e00-dbda-11ea-8e03-19697ca6a821.png
   






[GitHub] [hadoop] vivekratnavel commented on pull request #2212: HDFS-15496. Add UI for deleted snapshots

2020-08-11 Thread GitBox


vivekratnavel commented on pull request #2212:
URL: https://github.com/apache/hadoop/pull/2212#issuecomment-672275779


   @bshashikant @bharatviswa504 Thanks for the reviews!
   
   > Since the patch modifies SnapshotInfo class, let's remove 
SnapshotStatus.Bean()
   
   Done
   
   > Having a different column for snapshotName and then the snapshot path may 
not be useful. Instead, can we just have one column for the snapshot path 
(snapshotName is implicit)?
   
   Done
   
   > Snapshot permission, owner and group are newly added to the UI page. Any 
specific reason?
   
   I added these new columns to be consistent with the display of the 
snapshottable directories table and to provide more useful information about 
snapshots to the user.
   
   Please take another look at the updated patch. Thanks!






[GitHub] [hadoop] hadoop-yetus commented on pull request #2202: HADOOP-17191. ABFS: Run tests with all AuthTypes

2020-08-11 Thread GitBox


hadoop-yetus commented on pull request #2202:
URL: https://github.com/apache/hadoop/pull/2202#issuecomment-672253589


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 41s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
42 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   1m 45s |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 10s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 29s |  hadoop-azure in trunk failed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   0m 41s |  The patch fails to run 
checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 20s |  hadoop-azure in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |  24m  2s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 15s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 16s |  hadoop-azure in trunk failed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +0 :ok: |  spotbugs  |  24m 55s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   0m 18s |  hadoop-azure in trunk failed.  |
   | -0 :warning: |  patch  |  25m 16s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 12s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 11s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |   0m 11s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 11s |  hadoop-azure in the patch failed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javac  |   0m 11s |  hadoop-azure in the patch failed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   0m  9s |  The patch fails to run 
checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 12s |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 37s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 14s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 15s |  hadoop-azure in the patch failed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  findbugs  |   0m 14s |  hadoop-azure in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 13s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 32s |  The patch generated 1 ASF License 
warnings.  |
   |  |   |  51m 25s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2202/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ed1e90aa3f38 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3fd3aeb621e |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2202/7/artifact/out/branch-mvninstall-root.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2202/7/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2202/7/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt
 |
   | checkstyle | 
https://ci-hadoop.apache

[GitHub] [hadoop] hadoop-yetus commented on pull request #2217: HDFS-15518. Fixed String operationName = ListSnapshot.

2020-08-11 Thread GitBox


hadoop-yetus commented on pull request #2217:
URL: https://github.com/apache/hadoop/pull/2217#issuecomment-672202609


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  28m 59s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 12s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 59s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 59s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   2m 57s |  hadoop-hdfs-project/hadoop-hdfs in trunk 
has 2 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  4s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 41s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m  2s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 116m 10s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 198m  1s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestAclsEndToEnd |
   |   | hadoop.hdfs.TestDFSClientRetries |
   |   | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   |   | hadoop.hdfs.TestReplication |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.TestDFSStripedOutputStream |
   |   | hadoop.hdfs.TestMiniDFSCluster |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2217/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2217 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c14640de2ba4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6c2ce3d56b1 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | findbugs | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2217/1/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2217/1/artifact/out/patch-u

[GitHub] [hadoop] hemanthboyina commented on pull request #2217: HDFS-15518. Fixed String operationName = ListSnapshot.

2020-08-11 Thread GitBox


hemanthboyina commented on pull request #2217:
URL: https://github.com/apache/hadoop/pull/2217#issuecomment-672124923


   The operation name can be common for the audit log and the lock.
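
   A minimal sketch of that suggestion, purely illustrative and not the actual FSNamesystem code (readLock/readUnlock/logAuditEvent below are placeholders for the real helpers): declare the operation name once and let both the lock and the audit-log entry use the same constant.

   ```java
   // Hypothetical sketch: a single constant feeds both the lock and the audit log.
   class ListSnapshotSketch {
     static final String OPERATION_NAME = "ListSnapshot";

     void listSnapshots(String path) {
       readLock(OPERATION_NAME);                   // lock tagged with the operation name
       try {
         // ... gather the snapshot listing for 'path' ...
       } finally {
         readUnlock(OPERATION_NAME);
       }
       logAuditEvent(true, OPERATION_NAME, path);  // audit log reuses the same name
     }

     // Placeholders standing in for the real lock/audit helpers.
     private void readLock(String op) { }
     private void readUnlock(String op) { }
     private void logAuditEvent(boolean succeeded, String op, String src) { }
   }
   ```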



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17165) Implement service-user feature in DecayRPCScheduler

2020-08-11 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175701#comment-17175701
 ] 

Takanobu Asanuma commented on HADOOP-17165:
---

Uploaded the 2nd patch for fixing the checkstyle issue.

[~csun] I think this feature can coexist with HADOOP-15016 and it will be 
useful even after implementing HADOOP-15016. Could you review it? 

> Implement service-user feature in DecayRPCScheduler
> ---
>
> Key: HADOOP-17165
> URL: https://issues.apache.org/jira/browse/HADOOP-17165
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-17165.001.patch, HADOOP-17165.002.patch
>
>
> In our cluster, we want to use FairCallQueue to limit heavy users, but we do
> not want to restrict certain users who are submitting important requests. This
> jira proposes to implement a service-user feature so that such a user is always
> scheduled into the high-priority queue.
> According to HADOOP-9640, the initial concept of FCQ included this feature, but
> it was never implemented.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17165) Implement service-user feature in DecayRPCScheduler

2020-08-11 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-17165:
--
Attachment: HADOOP-17165.002.patch

> Implement service-user feature in DecayRPCScheduler
> ---
>
> Key: HADOOP-17165
> URL: https://issues.apache.org/jira/browse/HADOOP-17165
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-17165.001.patch, HADOOP-17165.002.patch
>
>
> In our cluster, we want to use FairCallQueue to limit heavy users, but we do
> not want to restrict certain users who are submitting important requests. This
> jira proposes to implement a service-user feature so that such a user is always
> scheduled into the high-priority queue.
> According to HADOOP-9640, the initial concept of FCQ included this feature, but
> it was never implemented.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17165) Implement service-user feature in DecayRPCScheduler

2020-08-11 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175698#comment-17175698
 ] 

Takanobu Asanuma commented on HADOOP-17165:
---

Thanks for your comment, [~John Smith].

For simplicity, and to differentiate HADOOP-17165 from HADOOP-15016, I'd like
to just put service-users in the highest-priority queue.
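
As a rough illustration of that idea (not the attached patch; the parsing and names below are assumptions), the scheduler could keep the configured service-users in a set and short-circuit the decayed priority for them:

{code:java}
import java.util.HashSet;
import java.util.Set;

// Sketch only: users configured as service-users always map to the top-priority
// level (0 in DecayRpcScheduler/FairCallQueue terms); everyone else keeps the
// priority produced by the decay computation.
public class ServiceUserSketch {
  private final Set<String> serviceUsers = new HashSet<>();

  public ServiceUserSketch(String configuredUsers) {
    // configuredUsers is assumed to be a comma-separated list from the config.
    for (String user : configuredUsers.split(",")) {
      if (!user.trim().isEmpty()) {
        serviceUsers.add(user.trim());
      }
    }
  }

  public int priorityFor(String user, int decayedPriority) {
    return serviceUsers.contains(user) ? 0 : decayedPriority;
  }
}
{code}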

> Implement service-user feature in DecayRPCScheduler
> ---
>
> Key: HADOOP-17165
> URL: https://issues.apache.org/jira/browse/HADOOP-17165
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-17165.001.patch, HADOOP-17165.002.patch
>
>
> In our cluster, we want to use FairCallQueue to limit heavy users, but we do
> not want to restrict certain users who are submitting important requests. This
> jira proposes to implement a service-user feature so that such a user is always
> scheduled into the high-priority queue.
> According to HADOOP-9640, the initial concept of FCQ included this feature, but
> it was never implemented.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2216: HADOOP-17194. Adding Context class for AbfsClient in ABFS

2020-08-11 Thread GitBox


hadoop-yetus commented on pull request #2216:
URL: https://github.com/apache/hadoop/pull/2216#issuecomment-672086781


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  5s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 10s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 29s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 36s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 56s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 29s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   0m 57s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 19s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  75m 31s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2216/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2216 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 500ec22de4cc 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6c2ce3d56b1 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2216/1/testReport/ |
   | Max. process+thread count | 308 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2216/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional

[GitHub] [hadoop] smengcl merged pull request #2176: HDFS-15492. Make trash root inside each snapshottable directory

2020-08-11 Thread GitBox


smengcl merged pull request #2176:
URL: https://github.com/apache/hadoop/pull/2176


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] smengcl commented on pull request #2176: HDFS-15492. Make trash root inside each snapshottable directory

2020-08-11 Thread GitBox


smengcl commented on pull request #2176:
URL: https://github.com/apache/hadoop/pull/2176#issuecomment-672032459


   Reran the tests. Far fewer unrelated flaky test failures now.
   
   I'm merging and closing this PR in a minute.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] smengcl edited a comment on pull request #2176: HDFS-15492. Make trash root inside each snapshottable directory

2020-08-11 Thread GitBox


smengcl edited a comment on pull request #2176:
URL: https://github.com/apache/hadoop/pull/2176#issuecomment-672032459


   Reran the tests. Far fewer unrelated flaky test failures now.
   
   I'm merging this PR in a minute.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2149: HADOOP-13230. S3A to optionally retain directory markers

2020-08-11 Thread GitBox


mukund-thakur commented on a change in pull request #2149:
URL: https://github.com/apache/hadoop/pull/2149#discussion_r468621232



##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java
##
@@ -295,12 +350,14 @@ protected void renameFileToDest() throws IOException {
 callbacks.deleteObjectAtPath(sourcePath, sourceKey, true, null);
 // and update the tracker
 renameTracker.sourceObjectsDeleted(Lists.newArrayList(sourcePath));
+return copyDestinationPath;
   }
 
   /**
* Execute a full recursive rename.
-   * The source is a file: rename it to the destination.
-   * @throws IOException failure
+   * There is a special handling of directly markers here -only leaf markers
+   * are copied. This reduces incompatibility "regions" across versions.
+Are   * @throws IOException failure

Review comment:
   nit: typo in javadoc





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aryangupta1998 opened a new pull request #2217: HDFS-15518. Fixed String operationName = ListSnapshot.

2020-08-11 Thread GitBox


aryangupta1998 opened a new pull request #2217:
URL: https://github.com/apache/hadoop/pull/2217


   Fixed String operationName = ListSnapshot.
   
   Link: 
[https://issues.apache.org/jira/browse/HDFS-15518](https://issues.apache.org/jira/browse/HDFS-15518)
   
   
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2149: HADOOP-13230. S3A to optionally retain directory markers

2020-08-11 Thread GitBox


mukund-thakur commented on a change in pull request #2149:
URL: https://github.com/apache/hadoop/pull/2149#discussion_r468608626



##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DirMarkerTracker.java
##
@@ -0,0 +1,323 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.S3ALocatedFileStatus;
+
+/**
+ * Tracks directory markers which have been reported in object listings.
+ * This is needed for auditing and cleanup, including during rename
+ * operations.
+ * 
+ * Designed to be used while scanning through the results of listObject
+ * calls, where are we assume the results come in alphanumeric sort order
+ * and parent entries before children.
+ * 
+ * This lets as assume that we can identify all leaf markers as those
+ * markers which were added to set of leaf markers and not subsequently
+ * removed as a child entries were discovered.
+ * 
+ * To avoid scanning datastructures excessively, the path of the parent
+ * directory of the last file added is cached. This allows for a
+ * quick bailout when many children of the same directory are
+ * returned in a listing.
+ */
+public class DirMarkerTracker {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(DirMarkerTracker.class);
+
+  /**
+   * all leaf markers.
+   */
+  private final Map<Path, Marker> leafMarkers
+  = new TreeMap<>();
+
+  /**
+   * all surplus markers.
+   */
+  private final Map<Path, Marker> surplusMarkers
+  = new TreeMap<>();
+
+  private final Path basePath;
+
+  /**
+   * last parent directory checked.
+   */
+  private Path lastDirChecked;
+
+  /**
+   * Count of scans; used for test assertions.
+   */
+  private int scanCount;
+
+  /**
+   * How many files were found.
+   */
+  private int filesFound;
+
+  /**
+   * How many markers were found.
+   */
+  private int markersFound;
+
+  /**
+   * How many objects of any kind were found?
+   */
+  private int objectsFound;
+
+  /**
+   * Construct.
+   * Base path is currently only used for information rather than validating
+   * paths supplied in other mathods.
+   * @param basePath base path of track
+   */
+  public DirMarkerTracker(final Path basePath) {
+this.basePath = basePath;
+  }
+
+  /**
+   * Get the base path of the tracker.
+   * @return the path
+   */
+  public Path getBasePath() {
+return basePath;
+  }
+
+  /**
+   * A marker has been found; this may or may not be a leaf.
+   * Trigger a move of all markers above it into the surplus map.
+   * @param path marker path
+   * @param key object key
+   * @param source listing source
+   * @return the surplus markers found.
+   */
+  public List<Marker> markerFound(Path path,
+  final String key,
+  final S3ALocatedFileStatus source) {
+markersFound++;
+leafMarkers.put(path, new Marker(path, key, source));
+return pathFound(path, key, source);
+  }
+
+  /**
+   * A file has been found. Trigger a move of all
+   * markers above it into the surplus map.
+   * @param path marker path
+   * @param key object key
+   * @param source listing source
+   * @return the surplus markers found.
+   */
+  public List<Marker> fileFound(Path path,
+  final String key,
+  final S3ALocatedFileStatus source) {
+filesFound++;
+return pathFound(path, key, source);
+  }
+
+  /**
+   * A path has been found. Trigger a move of all
+   * markers above it into the surplus map.
+   * @param path marker path
+   * @param key object key
+   * @param source listing source
+   * @return the surplus markers found.
+   */
+  private List<Marker> pathFound(Path path,
+  final String key,
+  final S3ALocatedFileStatus source) {

Review comment:
   Even if we need all three, could we just pass the constructed Marker instance?
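
   A self-contained sketch of the suggested shape (illustrative only; `Marker` below is a simplified stand-in for the tracker's inner class):

   ```java
   import java.util.ArrayList;
   import java.util.List;

   // Sketch: construct the Marker once in the public entry point and hand the
   // single object to the private helper instead of path/key/source separately.
   final class Marker {
     final String path;
     final String key;
     Marker(String path, String key) { this.path = path; this.key = key; }
   }

   class TrackerSketch {
     List<Marker> markerFound(String path, String key) {
       return pathFound(new Marker(path, key));   // built once, passed through
     }

     private List<Marker> pathFound(Marker marker) {
       List<Marker> surplus = new ArrayList<>();
       // The real implementation would move parent markers of marker.path into
       // the surplus map and return them; omitted in this sketch.
       return surplus;
     }
   }
   ```

   This shape would also drop parameters that the helper does not actually use.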





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2201: HADOOP-17125. Using snappy-java in SnappyCodec

2020-08-11 Thread GitBox


hadoop-yetus commented on pull request #2201:
URL: https://github.com/apache/hadoop/pull/2201#issuecomment-671967655


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  25m 58s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
3 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 40s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 24s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m  0s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 56s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  20m 30s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 15s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   6m 30s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 10s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 31s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 24s |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
   | -1 :x: |  findbugs  |  31m 12s |  root in trunk has 2 extant findbugs 
warnings.  |
   | +0 :ok: |  findbugs  |   0m 31s |  branch/hadoop-project-dist no findbugs 
output file (findbugsXml.xml)  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 35s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  21m  8s |  the patch passed  |
   | -1 :x: |  compile  |   1m 16s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  cc  |   1m 16s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  golang  |   1m 16s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |   1m 16s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   1m 10s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  cc  |   1m 10s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  golang  |   1m 10s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javac  |   1m 10s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   2m 34s |  root: The patch generated 10 new 
+ 113 unchanged - 2 fixed = 123 total (was 115)  |
   | +1 :green_heart: |  mvnsite  |  17m  8s |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  2s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 19s |  There were no new shelldocs 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  5s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 55s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   6m 22s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 10s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  findbugs  |   0m 23s |  hadoop-project has no data from 
findbugs  |
   | +0 :ok: |  findbugs  |   0m 23s |  hadoop-project-dist has no data from 
findbugs  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   7m  2s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  8s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 297m 25s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2201/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2201 |
   | Optional Tests | dupname asflicense shellcheck shelldocs compile javac 
javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle cc golang |
   | uname | Linux a63449d124fa 4.15.0-58-generic #64-Ubun

[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2149: HADOOP-13230. S3A to optionally retain directory markers

2020-08-11 Thread GitBox


mukund-thakur commented on a change in pull request #2149:
URL: https://github.com/apache/hadoop/pull/2149#discussion_r468594922



##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DirMarkerTracker.java
##
@@ -0,0 +1,323 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.S3ALocatedFileStatus;
+
+/**
+ * Tracks directory markers which have been reported in object listings.
+ * This is needed for auditing and cleanup, including during rename
+ * operations.
+ * 
+ * Designed to be used while scanning through the results of listObject
+ * calls, where are we assume the results come in alphanumeric sort order
+ * and parent entries before children.
+ * 
+ * This lets as assume that we can identify all leaf markers as those
+ * markers which were added to set of leaf markers and not subsequently
+ * removed as a child entries were discovered.
+ * 
+ * To avoid scanning datastructures excessively, the path of the parent
+ * directory of the last file added is cached. This allows for a
+ * quick bailout when many children of the same directory are
+ * returned in a listing.
+ */
+public class DirMarkerTracker {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(DirMarkerTracker.class);
+
+  /**
+   * all leaf markers.
+   */
+  private final Map<Path, Marker> leafMarkers
+  = new TreeMap<>();
+
+  /**
+   * all surplus markers.
+   */
+  private final Map<Path, Marker> surplusMarkers
+  = new TreeMap<>();
+
+  private final Path basePath;
+
+  /**
+   * last parent directory checked.
+   */
+  private Path lastDirChecked;
+
+  /**
+   * Count of scans; used for test assertions.
+   */
+  private int scanCount;
+
+  /**
+   * How many files were found.
+   */
+  private int filesFound;
+
+  /**
+   * How many markers were found.
+   */
+  private int markersFound;
+
+  /**
+   * How many objects of any kind were found?
+   */
+  private int objectsFound;
+
+  /**
+   * Construct.
+   * Base path is currently only used for information rather than validating
+   * paths supplied in other mathods.
+   * @param basePath base path of track
+   */
+  public DirMarkerTracker(final Path basePath) {
+this.basePath = basePath;
+  }
+
+  /**
+   * Get the base path of the tracker.
+   * @return the path
+   */
+  public Path getBasePath() {
+return basePath;
+  }
+
+  /**
+   * A marker has been found; this may or may not be a leaf.
+   * Trigger a move of all markers above it into the surplus map.
+   * @param path marker path
+   * @param key object key
+   * @param source listing source
+   * @return the surplus markers found.
+   */
+  public List<Marker> markerFound(Path path,
+  final String key,
+  final S3ALocatedFileStatus source) {
+markersFound++;
+leafMarkers.put(path, new Marker(path, key, source));
+return pathFound(path, key, source);
+  }
+
+  /**
+   * A file has been found. Trigger a move of all
+   * markers above it into the surplus map.
+   * @param path marker path
+   * @param key object key
+   * @param source listing source
+   * @return the surplus markers found.
+   */
+  public List<Marker> fileFound(Path path,
+  final String key,
+  final S3ALocatedFileStatus source) {
+filesFound++;
+return pathFound(path, key, source);
+  }
+
+  /**
+   * A path has been found. Trigger a move of all
+   * markers above it into the surplus map.
+   * @param path marker path
+   * @param key object key
+   * @param source listing source
+   * @return the surplus markers found.
+   */
+  private List<Marker> pathFound(Path path,
+  final String key,
+  final S3ALocatedFileStatus source) {

Review comment:
   key and source are not used inside the method.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific 

[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2149: HADOOP-13230. S3A to optionally retain directory markers

2020-08-11 Thread GitBox


mukund-thakur commented on a change in pull request #2149:
URL: https://github.com/apache/hadoop/pull/2149#discussion_r468588622



##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DirMarkerTracker.java
##
@@ -0,0 +1,323 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.S3ALocatedFileStatus;
+
+/**
+ * Tracks directory markers which have been reported in object listings.
+ * This is needed for auditing and cleanup, including during rename
+ * operations.
+ * 
+ * Designed to be used while scanning through the results of listObject
+ * calls, where are we assume the results come in alphanumeric sort order
+ * and parent entries before children.
+ * 
+ * This lets as assume that we can identify all leaf markers as those

Review comment:
   nit: typo "lets us"





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2149: HADOOP-13230. S3A to optionally retain directory markers

2020-08-11 Thread GitBox


mukund-thakur commented on a change in pull request #2149:
URL: https://github.com/apache/hadoop/pull/2149#discussion_r468588135



##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DirMarkerTracker.java
##
@@ -0,0 +1,323 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.S3ALocatedFileStatus;
+
+/**
+ * Tracks directory markers which have been reported in object listings.
+ * This is needed for auditing and cleanup, including during rename
+ * operations.
+ * 
+ * Designed to be used while scanning through the results of listObject
+ * calls, where are we assume the results come in alphanumeric sort order

Review comment:
   nit: typo "where we"





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mehakmeet opened a new pull request #2216: HADOOP-17194. Adding Context class for AbfsClient in ABFS

2020-08-11 Thread GitBox


mehakmeet opened a new pull request #2216:
URL: https://github.com/apache/hadoop/pull/2216


   Tested on: mvn -T 1C -Dparallel-tests=abfs clean verify
   Region: East US
   
   ```
   [INFO] Results:
   [INFO]
   [INFO] Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
   ```
   ```
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemCheckAccess.<init>:82->setTestUserFs:104 » IllegalArgument
   [ERROR]   ITestAzureBlobFileSystemCheckAccess.<init>:82->setTestUserFs:104 » IllegalArgument
   [ERROR]   ITestAzureBlobFileSystemCheckAccess.<init>:82->setTestUserFs:104 » IllegalArgument
   [ERROR]   ITestAzureBlobFileSystemCheckAccess.<init>:82->setTestUserFs:104 » IllegalArgument
   [ERROR]   ITestAzureBlobFileSystemCheckAccess.<init>:82->setTestUserFs:104 » IllegalArgument
   [ERROR]   ITestAzureBlobFileSystemCheckAccess.<init>:82->setTestUserFs:104 » IllegalArgument
   [ERROR]   ITestAzureBlobFileSystemCheckAccess.<init>:82->setTestUserFs:104 » IllegalArgument
   [ERROR]   ITestAzureBlobFileSystemCheckAccess.<init>:82->setTestUserFs:104 » IllegalArgument
   [ERROR]   ITestAzureBlobFileSystemCheckAccess.<init>:82->setTestUserFs:104 » IllegalArgument
   [ERROR]   ITestAzureBlobFileSystemCheckAccess.<init>:82->setTestUserFs:104 » IllegalArgument
   [ERROR]   ITestAzureBlobFileSystemCheckAccess.<init>:82->setTestUserFs:104 » IllegalArgument
   [ERROR]   ITestAzureBlobFileSystemCheckAccess.<init>:82->setTestUserFs:104 » IllegalArgument
   [INFO]
   [ERROR] Tests run: 451, Failures: 0, Errors: 12, Skipped: 61
   ```
   ```
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 29
   ```
   
   The errors are being discussed in HADOOP-17203 (https://issues.apache.org/jira/browse/HADOOP-17203).



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2215: HDFS-15521. Remove INode.dumpTreeRecursively().

2020-08-11 Thread GitBox


hadoop-yetus commented on pull request #2215:
URL: https://github.com/apache/hadoop/pull/2215#issuecomment-671911467


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
10 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  28m 37s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 12s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 22s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  1s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   2m 59s |  hadoop-hdfs-project/hadoop-hdfs in trunk 
has 2 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  3s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 53s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 6 new + 607 unchanged - 14 fixed = 613 total (was 621)  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 5 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  13m 40s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 59s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  93m 25s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 175m 38s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2215/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2215 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 699f4e09b747 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 909f1e82d3e |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | findbugs | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2215/1/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2215/1/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | whitespace | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2215/1/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2215/1/artifact/out/

[jira] [Commented] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-08-11 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175493#comment-17175493
 ] 

Andras Bokor commented on HADOOP-17145:
---

With patch 007 everything went well. That changes the error message and the 
error code as well.
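
To make the distinction concrete, here is a minimal servlet-side sketch of the behaviour described above; it is illustrative only, not the content of patch 007, and the message text and status codes shown are assumptions:

{code:java}
import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch: distinguish "not authenticated" from "authenticated but not an admin".
public final class AdminAccessCheckSketch {
  private AdminAccessCheckSketch() {
  }

  public static boolean hasAdminAccess(HttpServletRequest request,
      HttpServletResponse response, boolean isAdmin) throws IOException {
    String user = request.getRemoteUser();
    if (user == null) {
      // No authenticated user at all.
      response.sendError(HttpServletResponse.SC_UNAUTHORIZED,
          "Unauthenticated users are not authorized to access this page.");
      return false;
    }
    if (!isAdmin) {
      // Authenticated, but lacking admin access: an authorization failure.
      response.sendError(HttpServletResponse.SC_FORBIDDEN,
          "User " + user + " is unauthorized to access this page.");
      return false;
    }
    return true;
  }
}
{code}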

> Unauthenticated users are not authorized to access this page message is 
> misleading in HttpServer2.java
> --
>
> Key: HADOOP-17145
> URL: https://issues.apache.org/jira/browse/HADOOP-17145
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-17145.001.patch, HADOOP-17145.002.patch, 
> HADOOP-17145.003.patch, HADOOP-17145.004.patch, HADOOP-17145.005.patch, 
> HADOOP-17145.006.patch, HADOOP-17145.007.patch
>
>
> Recently one of the users was misled by the message "Unauthenticated users
> are not authorized to access this page" when the user was not an admin user.
> At that point the user is authenticated but has no admin access, so it is
> actually not an authentication issue but an authorization issue.
> Also, 401 as error code would be better.
> Something like "User is unauthorized to access the page" would help users
> figure out what the problem is when accessing an HTTP endpoint.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17203) Test failures in ITestAzureBlobFileSystemCheckAccess in ABFS

2020-08-11 Thread Mehakmeet Singh (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175463#comment-17175463
 ] 

Mehakmeet Singh commented on HADOOP-17203:
--

I believe my bucket's auth type is shared_key.

> Test failures in ITestAzureBlobFileSystemCheckAccess in ABFS
> 
>
> Key: HADOOP-17203
> URL: https://issues.apache.org/jira/browse/HADOOP-17203
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Mehakmeet Singh
>Priority: Major
>
> ITestAzureBlobFileSystemCheckAccess is giving test failures while running 
> both in parallel as well as in stand-alone(in IDE).
> Tested by:  mvn -T 1C -Dparallel-tests=abfs clean verify
>  Region: East US



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17203) Test failures in ITestAzureBlobFileSystemCheckAccess in ABFS

2020-08-11 Thread Bilahari T H (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175451#comment-17175451
 ] 

Bilahari T H commented on HADOOP-17203:
---

Hi [~mehakmeetSingh]
I assume that you are running the tests with OAuth. The above exception means
that the config fs.azure.account.oauth.provider.type
(FS_AZURE_ACCOUNT_TOKEN_PROVIDER_TYPE_PROPERTY_NAME) is not set.
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java#L688

Please check and let me know.
Also, could you share which auth type you are using and, in the case of OAuth,
which TokenProvider type, so that I can try to reproduce it?
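
For reference, a sketch of how that setting could be supplied from test code; this is illustrative only, the provider class shown is just one example, and the account-qualified key form is an assumption about the ABFS configuration layout:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch: supply the token provider type that the stack trace reports as missing.
public final class AbfsOAuthConfigSketch {
  private AbfsOAuthConfigSketch() {
  }

  public static Configuration withTokenProvider(Configuration conf, String accountName) {
    String providerClass =
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"; // example only
    conf.set("fs.azure.account.oauth.provider.type", providerClass);
    // Account-qualified form, assumed to follow the usual <key>.<account> pattern.
    conf.set("fs.azure.account.oauth.provider.type." + accountName, providerClass);
    return conf;
  }
}
{code}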

> Test failures in ITestAzureBlobFileSystemCheckAccess in ABFS
> 
>
> Key: HADOOP-17203
> URL: https://issues.apache.org/jira/browse/HADOOP-17203
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Mehakmeet Singh
>Priority: Major
>
> ITestAzureBlobFileSystemCheckAccess is giving test failures while running 
> both in parallel as well as in stand-alone(in IDE).
> Tested by:  mvn -T 1C -Dparallel-tests=abfs clean verify
>  Region: East US



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2

2020-08-11 Thread Hemanth Boyina (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175444#comment-17175444
 ] 

Hemanth Boyina commented on HADOOP-17144:
-

Test failures were not related.

Please review.

> Update Hadoop's lz4 to v1.9.2
> -
>
> Key: HADOOP-17144
> URL: https://issues.apache.org/jira/browse/HADOOP-17144
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Major
> Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, 
> HADOOP-17144.003.patch
>
>
> Update hadoop's native lz4 to v1.9.2 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17203) Test failures in ITestAzureBlobFileSystemCheckAccess in ABFS

2020-08-11 Thread Mehakmeet Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehakmeet Singh updated HADOOP-17203:
-
Description: 
ITestAzureBlobFileSystemCheckAccess is giving test failures while running both 
in parallel as well as in stand-alone(in IDE).

Tested by:  mvn -T 1C -Dparallel-tests=abfs clean verify
 Region: East US

  was:
ITestAzureBlobFileSystemCheckAccess is giving test failures in both parallel as 
well as stand-alone tests(in IDE).

Tested by:  mvn -T 1C -Dparallel-tests=abfs clean verify
Region: East US


> Test failures in ITestAzureBlobFileSystemCheckAccess in ABFS
> 
>
> Key: HADOOP-17203
> URL: https://issues.apache.org/jira/browse/HADOOP-17203
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Mehakmeet Singh
>Priority: Major
>
> ITestAzureBlobFileSystemCheckAccess is giving test failures while running 
> both in parallel as well as in stand-alone(in IDE).
> Tested by:  mvn -T 1C -Dparallel-tests=abfs clean verify
>  Region: East US



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mehakmeet commented on pull request #2192: HADOOP-17183. ABFS: Enabling checkaccess on ABFS

2020-08-11 Thread GitBox


mehakmeet commented on pull request #2192:
URL: https://github.com/apache/hadoop/pull/2192#issuecomment-671861754


   Tests are failing after the patch.
   https://issues.apache.org/jira/browse/HADOOP-17203



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17203) Test failures in ITestAzureBlobFileSystemCheckAccess in ABFS

2020-08-11 Thread Mehakmeet Singh (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175437#comment-17175437
 ] 

Mehakmeet Singh commented on HADOOP-17203:
--

Stack trace for the error (1 of the 12 tests):
{code:java}
[ERROR] Tests run: 12, Failures: 0, Errors: 12, Skipped: 0, Time elapsed: 10.918 s <<< FAILURE! - in org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemCheckAccess
[ERROR] testFsActionWRITE(org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemCheckAccess)  Time elapsed: 0.005 s  <<< ERROR!
java.lang.IllegalArgumentException: Failed to initialize null
    at org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getTokenProvider(AbfsConfiguration.java:688)
    at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.initializeClient(AzureBlobFileSystemStore.java:1256)
    at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.<init>(AzureBlobFileSystemStore.java:195)
    at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:113)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3428)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:171)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3488)
    at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3462)
    at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:591)
    at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:603)
    at org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemCheckAccess.setTestUserFs(ITestAzureBlobFileSystemCheckAccess.java:104)
    at org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemCheckAccess.<init>(ITestAzureBlobFileSystemCheckAccess.java:82)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217)
    at org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
    at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}

> Test failures in ITestAzureBlobFileSystemCheckAccess in ABFS
> 
>
> Key: HADOOP-17203
> URL: https://issues.apache.org/jira/browse/HADOOP-17203
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Mehakmeet Singh
>Priority: Major
>
> ITestAzureBlobFileSystemCheckAccess is giving test failures in both parallel 
> as well as stand-alone tests(in IDE).
> Tested by:  mvn -T 1C -Dparallel-tests=abfs clean verify
> Region: East US



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17203) Test failures in ITestAzureBlobFileSystemCheckAccess in ABFS

2020-08-11 Thread Mehakmeet Singh (Jira)
Mehakmeet Singh created HADOOP-17203:


 Summary: Test failures in ITestAzureBlobFileSystemCheckAccess in 
ABFS
 Key: HADOOP-17203
 URL: https://issues.apache.org/jira/browse/HADOOP-17203
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Affects Versions: 3.3.0
Reporter: Mehakmeet Singh


ITestAzureBlobFileSystemCheckAccess is giving test failures in both parallel as 
well as stand-alone tests(in IDE).

Tested by:  mvn -T 1C -Dparallel-tests=abfs clean verify
Region: East US



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2

2020-08-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175433#comment-17175433
 ] 

Hadoop QA commented on HADOOP-17144:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 33m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
10s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
56s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 24m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
45m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m  
6s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
34s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 45m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 
52s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 23m 52s{color} | 
{color:red} root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 41 new + 130 unchanged - 
32 fixed = 171 total (was 162) {color} |
| {color:green}+1{color} | {color:green} golang {color} | {color:green} 23m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 23m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
18s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 21m 18s{color} | 
{color:red} root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 30 new + 141 unchanged 
- 21 fixed = 171 total (was 162) {color} |
| {color:green}+1{color} | {color:green} golang {color} | {color:green} 21m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 22m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patc

[jira] [Commented] (HADOOP-15566) Support Opentracing

2020-08-11 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175430#comment-17175430
 ] 

Brahma Reddy Battula commented on HADOOP-15566:
---

[~weichiu] now that 3.3.0 is out, can we start working on this?

FYI, HADOOP-17171 has also been raised for the CVEs in HTrace.

> Support Opentracing
> ---
>
> Key: HADOOP-15566
> URL: https://issues.apache.org/jira/browse/HADOOP-15566
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics, tracing
>Affects Versions: 3.1.0
>Reporter: Todd Lipcon
>Assignee: Siyao Meng
>Priority: Major
>  Labels: security
> Attachments: HADOOP-15566.000.WIP.patch, OpenTracing Support Scope 
> Doc.pdf, Screen Shot 2018-06-29 at 11.59.16 AM.png, ss-trace-s3a.png
>
>
> The HTrace incubator project has voted to retire itself and won't be making 
> further releases. The Hadoop project currently has various hooks with HTrace. 
> It seems in some cases (eg HDFS-13702) these hooks have had measurable 
> performance overhead. Given these two factors, I think we should consider 
> removing the HTrace integration. If there is someone willing to do the work, 
> replacing it with OpenTracing might be a better choice since there is an 
> active community.
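
As a rough illustration of what an OpenTracing-based hook could look like, here is a minimal sketch using the io.opentracing API and GlobalTracer; the operation name and tag are placeholders, not the actual Hadoop integration:
{code:java}
import io.opentracing.Scope;
import io.opentracing.Span;
import io.opentracing.Tracer;
import io.opentracing.util.GlobalTracer;

public class TracingSketch {
  public static void main(String[] args) {
    // GlobalTracer.get() returns a no-op tracer unless a concrete tracer has been registered.
    Tracer tracer = GlobalTracer.get();
    Span span = tracer.buildSpan("exampleRpcCall").start();
    try (Scope ignored = tracer.scopeManager().activate(span)) {
      span.setTag("component", "sketch");
      // ... the traced work would run here ...
    } finally {
      // Finish the span so the registered tracer can report it.
      span.finish();
    }
  }
}
{code}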



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2202: HADOOP-17191. ABFS: Run tests with all AuthTypes

2020-08-11 Thread GitBox


hadoop-yetus commented on pull request #2202:
URL: https://github.com/apache/hadoop/pull/2202#issuecomment-671854060


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
41 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  28m 29s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 58s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 57s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 13s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 17s |  hadoop-tools/hadoop-azure: The 
patch generated 1 new + 9 unchanged - 1 fixed = 10 total (was 10)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 54s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   0m 57s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   1m 20s |  hadoop-azure in the patch passed.  |
   | -1 :x: |  asflicense  |   0m 33s |  The patch generated 1 ASF License 
warnings.  |
   |  |   |  70m 12s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.azurebfs.TestAbfsNetworkStatistics |
   |   | hadoop.fs.azurebfs.services.TestAzureADAuthenticator |
   |   | hadoop.fs.azurebfs.TestAbfsOutputStreamStatistics |
   |   | hadoop.fs.azurebfs.services.TestAbfsInputStream |
   |   | hadoop.fs.azurebfs.TestAbfsInputStreamStatistics |
   |   | hadoop.fs.azurebfs.TestAbfsStatistics |
   |   | hadoop.fs.azurebfs.services.TestExponentialRetryPolicy |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2202/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 97b58918fe00 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 909f1e82d3e |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2202/6/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2202/6/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
 |
  

[GitHub] [hadoop] hadoop-yetus commented on pull request #2214: HADOOP-17202. Fix findbugs warnings in hadoop-tools on branch-2.10.

2020-08-11 Thread GitBox


hadoop-yetus commented on pull request #2214:
URL: https://github.com/apache/hadoop/pull/2214#issuecomment-671850970


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  21m 52s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ branch-2.10 Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 23s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  16m 29s |  branch-2.10 passed  |
   | +1 :green_heart: |  compile  |   2m 23s |  branch-2.10 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  branch-2.10 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  branch-2.10 passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  branch-2.10 passed  |
   | +0 :ok: |  spotbugs  |   1m  9s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   1m 18s |  hadoop-tools/hadoop-azure in branch-2.10 
has 1 extant findbugs warnings.  |
   | -1 :x: |  findbugs  |   1m  5s |  hadoop-tools/hadoop-rumen in branch-2.10 
has 1 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 27s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 27s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m 12s |  hadoop-tools/hadoop-rumen 
generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)  |
   | +1 :green_heart: |  findbugs  |   1m 24s |  hadoop-tools/hadoop-azure 
generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 32s |  hadoop-rumen in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m 18s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 32s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  64m 12s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2214/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2214 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 41fee7eec366 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | branch-2.10 / 2dffc1d |
   | Default Java | Oracle Corporation-1.7.0_95-b00 |
   | findbugs | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2214/1/artifact/out/branch-findbugs-hadoop-tools_hadoop-azure-warnings.html
 |
   | findbugs | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2214/1/artifact/out/branch-findbugs-hadoop-tools_hadoop-rumen-warnings.html
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2214/1/testReport/ |
   | Max. process+thread count | 232 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-rumen hadoop-tools/hadoop-azure U: 
hadoop-tools |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2214/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17197) Decrease size of s3a dependencies

2020-08-11 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175411#comment-17175411
 ] 

Steve Loughran commented on HADOOP-17197:
-

no. really no. really, really no. really, really, really no

A key rationale is, say, Spark, which doesn't just use the S3A bits: it has a
spark-kinesis module, and there's a Spark streaming connector which uses an SQS
queue to send notifications to Spark about monitored files.

Since we moved to a single shared shaded JAR, we have eliminated all problems
related to the AWS SDK and its transitive dependencies conflicting with Hadoop
requirements. And because we have a complete jar, we do not have to worry about
classpath/versioning issues with those external downstream components.

I seem to be the person with whom all S3A classpath issues end up. I am still
dealing with classpath inconsistencies across older versions of Hadoop, Joda-Time,
and Java 8. There is no way I want to reinstate that problem.


I understand your concerns with the size of the docker image. However, I'm
afraid you have to recognise and accept that this is the price of having a
complete and functional high-performance connector to AWS S3 and other
services. You are free to implement your own - I will point you at the Presto
one, whose wonderful minimalism appeals to me.

But for the S3A connector: it ships with the AWS SDK shaded and complete.

Of course, you can also think about doing something purely for the Impala
docker images. Let me know how that gets on.

> Decrease size of s3a dependencies
> -
>
> Key: HADOOP-17197
> URL: https://issues.apache.org/jira/browse/HADOOP-17197
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sahil Takiar
>Priority: Major
>
> S3A currently has a dependency on the aws-java-sdk-bundle, which includes the 
> SDKs for all AWS services. The jar file for the current version is about 120 
> MB, but continues to grow (the latest is about 170 MB). Organic growth is 
> expected as more and more AWS services are created.
> The aws-java-sdk-bundle jar file is shaded as well, so it includes all 
> transitive dependencies.
> It would be nice if S3A could depend on smaller jar files in order to 
> decrease the size of jar files pulled in transitively by clients. Decreasing 
> the size of dependencies is particularly important for Docker files, where 
> image pull times can be affected by image size.
> One solution here would be for S3A to publish its own shaded jar which 
> includes the SDKs for all needed AWS Services (e.g. S3, DynamoDB, etc.) along 
> with the transitive dependencies for the individual SDKs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2149: HADOOP-13230. S3A to optionally retain directory markers

2020-08-11 Thread GitBox


mukund-thakur commented on a change in pull request #2149:
URL: https://github.com/apache/hadoop/pull/2149#discussion_r468455320



##
File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/directory_markers.md
##
@@ -0,0 +1,416 @@
+
+
+# Controlling the S3A Directory Marker Behavior
+
+##  Critical: this is not backwards compatible!
+
+This document shows how the performance of S3 I/O, especially for applications
+writing many files (such as Hive) or working with versioned S3 buckets, can be
+improved by changing the S3A directory marker retention policy.
+
+Changing the policy from the default value, `"delete"` _is not backwards 
compatible_.
+
+Versions of Hadoop which are incompatible with other marker retention policies
+
+---
+|  Branch| Compatible Since | Future Fix Planned? |
+||--|-|
+| Hadoop 2.x |  | NO  |
+| Hadoop 3.0 |  | NO  |
+| Hadoop 3.1 |  | Yes |
+| Hadoop 3.2 |  | Yes |
+| Hadoop 3.3 |  3.3.1   | Done|
+---
+
+External Hadoop-based applications should also be assumed to be incompatible
+unless otherwise stated/known.
+
+It is only safe to change the directory marker policy if the following
+conditions are met:
+
+1. You know exactly which applications are writing to and reading from
+   (including backing up) an S3 bucket.
+2. You know that all applications which read data from the bucket are compatible.
+
+###  Applications backing up data.
+
+It is not enough to have a version of Apache Hadoop which is compatible; any
+application which backs up an S3 bucket or copies it elsewhere must have an S3
+connector which is compatible. For the Hadoop codebase, that means that if
+distcp is used, it _must_ be from a compatible Hadoop version.
+
+###  How will incompatible applications/versions 
fail? 
+
+Applications using an incompatible version of the S3A connector will mistake
+directories containing data for empty directories. This means that
+
+* Listing directories/directory trees may exclude files which exist.
+* Queries across the data will miss data files.
+* Renaming a directory to a new location may exclude files underneath.
+
+###  If an application has updated a directory tree 
incompatibly-- what can be done?
+
+There's a tool on the hadoop command line, [marker tool](#marker-tool) which 
can audit
+a bucket/path for markers, and clean up any which were found.
+It can be used to make a bucket compatible with older applications.
+
+Now that this is all clear, let's explain the problem.
+
+
+##  Background: Directory Markers: what and why?
+
+Amazon S3 is not a filesystem, it is an object store.
+
+The S3A connector not only provides a hadoop-compatible API to interact with
+data in S3, it tries to maintain the filesystem metaphor.
+
+One key aspect of the metaphor of a file system is "directories"
+
+ The directory concept
+
+In normal Unix-style filesystems, the "filesystem" is really a "directory and
+file tree" in which files are always stored in "directories"
+
+
+* A directory may contain 0 or more files.
+* A directory may contain 0 or more directories "subdirectories"
+* At the base of a filesystem is the "root directory"
+* All files MUST be in a directory "the parent directory"
+* All directories other than the root directory must be in another directory.
+* If a directory contains no files or directories, it is "empty"
+* When a directory is _listed_, all files and directories in it are enumerated 
and returned to the caller
+
+
+The S3A connector mocks this entire metaphor by grouping all objects which have
+the same prefix as if they are in the same directory tree.
+
+If there are two objects `a/b/file1` and `a/b/file2` then S3A pretends that 
there is a
+directory `/a/b` containing two files `file1`  and `file2`.
+
+The directory itself does not exist.
+
+There's a bit of a complication here.
+
+ What does `mkdirs()` do?
+
+1. In HDFS and other "real" filesystems, when `mkdirs()` is invoked on a path
+whose parents are all directories, then an _empty directory_ is created.
+
+1. This directory can be probed for "it exists" and listed (an empty list is
+returned)
+
+1. Files and other directories can be created in it.
+
+
+Lots of code contains a big assumption here: after you create a directory it
+exists. They also assume that after files in a directory are deleted, the
+directory still exists.
+
+Given that the filesystem mimics directories just by aggregating objects which
+share a prefix, how can you have empty directories?
+
+The original Hadoop `s3n://` connector created a Directory Marker - any path ending
+in `_$folder$` was considered to be a sign that a directory existed. A call to
+`mkdir(s3n://bucket/a/b)` would create a new marker 
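
To make the prefix-emulation description above concrete, here is a minimal sketch against the Hadoop FileSystem API (the s3a:// bucket name is a placeholder and valid S3A credentials are assumed): after two objects are written under the same prefix, listing the "directory" returns both entries even though no directory object necessarily exists in the store.
```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PrefixDirectorySketch {
  public static void main(String[] args) throws Exception {
    // Placeholder bucket; requires S3A credentials in the Configuration.
    URI bucket = URI.create("s3a://example-bucket/");
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(bucket, conf)) {
      // Two objects sharing the prefix a/b/ ...
      fs.create(new Path("/a/b/file1")).close();
      fs.create(new Path("/a/b/file2")).close();
      // ... are presented by S3A as the contents of a directory /a/b.
      for (FileStatus status : fs.listStatus(new Path("/a/b"))) {
        System.out.println(status.getPath() + " isDirectory=" + status.isDirectory());
      }
    }
  }
}
```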

[GitHub] [hadoop] mukund-thakur commented on pull request #2187: HADOOP-17167 Skipping ITestS3AEncryptionWithDefaultS3Settings.testEncryptionOverRename

2020-08-11 Thread GitBox


mukund-thakur commented on pull request #2187:
URL: https://github.com/apache/hadoop/pull/2187#issuecomment-671842499


   > Problem is probably that the test depends on the bucket being set up with 
default encryption = SSE-KMS and a different default key from the test key.
   > 
   > maybe: create a file, look at its encryption, if it's not SSE-KMS then 
skip the test
   
   Exactly the same thing I have done in the patch.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2203: HDFS-15520 Use visitor pattern to visit namespace tree

2020-08-11 Thread GitBox


hadoop-yetus commented on pull request #2203:
URL: https://github.com/apache/hadoop/pull/2203#issuecomment-671841846


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 17s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 14s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 17s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 59s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 50s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 47s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m 13s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 50s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 3 new + 137 unchanged - 0 fixed = 140 total (was 137)  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 44s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   3m 45s |  hadoop-hdfs-project/hadoop-hdfs 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 121m 32s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 217m 32s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.visitor.NamespacePrintVisitor.print(FSNamesystem,
 PrintStream):in 
org.apache.hadoop.hdfs.server.namenode.visitor.NamespacePrintVisitor.print(FSNamesystem,
 PrintStream): new java.io.PrintWriter(OutputStream)  At 
NamespacePrintVisitor.java:[line 81] |
   |  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.visitor.NamespacePrintVisitor.print2File(INode,
 File):in 
org.apache.hadoop.hdfs.server.namenode.visitor.NamespacePrintVisitor.print2File(INode,
 File): new java.io.FileWriter(File)  At NamespacePrintVisitor.java:[line 59] |
   | Failed junit tests | 
hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
   |   | hadoop.hdfs.TestStripedFileAppend |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2203/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2203 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1c511fc37b9d 4.15
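   Regarding the two "reliance on default encoding" FindBugs warnings listed above: the usual remedy is to construct the writers with an explicit charset rather than the platform default. A hedged sketch of that pattern (class and method names here are illustrative, not the actual patch):
```java
import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

public class ExplicitCharsetWriters {
  // Instead of new PrintWriter(OutputStream), wrap the stream with an explicit charset.
  static PrintWriter utf8PrintWriter(OutputStream out) {
    return new PrintWriter(new OutputStreamWriter(out, StandardCharsets.UTF_8));
  }

  // Instead of new FileWriter(File), open the file with an explicit charset.
  static Writer utf8FileWriter(File file) throws IOException {
    return Files.newBufferedWriter(file.toPath(), StandardCharsets.UTF_8);
  }
}
```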

[GitHub] [hadoop] steveloughran commented on pull request #2187: HADOOP-17167 Skipping ITestS3AEncryptionWithDefaultS3Settings.testEncryptionOverRename

2020-08-11 Thread GitBox


steveloughran commented on pull request #2187:
URL: https://github.com/apache/hadoop/pull/2187#issuecomment-671837042


   Problem is probably that the test depends on the bucket being set up with 
default encryption = SSE-KMS and a different default key from the test key.
   
   maybe: create a file, look at its encryption, if it's not SSE-KMS then skip 
the test
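   
   A hedged sketch of that probe-then-skip idea, using the AWS SDK v1 ObjectMetadata call and a JUnit assumption (the bucket, key, and helper names below are placeholders, not the actual test code):
```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import org.junit.Assume;

public class SseKmsAssumption {
  // True when the object at bucket/key reports SSE-KMS ("aws:kms") as its encryption algorithm.
  static boolean isSseKms(String bucket, String key) {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    ObjectMetadata metadata = s3.getObjectMetadata(bucket, key);
    return "aws:kms".equals(metadata.getSSEAlgorithm());
  }

  // Skip (rather than fail) a test when the bucket's default encryption is not SSE-KMS.
  static void assumeDefaultEncryptionIsSseKms(String bucket, String probeKey) {
    Assume.assumeTrue("bucket default encryption is not SSE-KMS",
        isSseKms(bucket, probeKey));
  }
}
```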



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2213: HADOOP-16915. ABFS: Ignoring the test ITestAzureBlobFileSystemRandomRead.testRandomReadPerformance

2020-08-11 Thread GitBox


hadoop-yetus commented on pull request #2213:
URL: https://github.com/apache/hadoop/pull/2213#issuecomment-671834258


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  29m  7s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 59s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 58s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 46s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   0m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 29s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  70m 48s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2213/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2213 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 56b85d0dcd98 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 909f1e82d3e |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2213/1/testReport/ |
   | Max. process+thread count | 415 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2213/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional c

[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2149: HADOOP-13230. S3A to optionally retain directory markers

2020-08-11 Thread GitBox


mukund-thakur commented on a change in pull request #2149:
URL: https://github.com/apache/hadoop/pull/2149#discussion_r468427878



##
File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/directory_markers.md
##
@@ -0,0 +1,416 @@
+
+
+# Controlling the S3A Directory Marker Behavior
+
+##  Critical: this is not backwards compatible!
+
+This document shows how the performance of S3 I/O, especially for applications
+writing many files (such as Hive) or working with versioned S3 buckets, can be
+improved by changing the S3A directory marker retention policy.
+
+Changing the policy from the default value, `"delete"` _is not backwards 
compatible_.
+
+Versions of Hadoop which are incompatible with other marker retention policies
+
+---
+|  Branch| Compatible Since | Future Fix Planned? |
+||--|-|
+| Hadoop 2.x |  | NO  |
+| Hadoop 3.0 |  | NO  |
+| Hadoop 3.1 |  | Yes |
+| Hadoop 3.2 |  | Yes |
+| Hadoop 3.3 |  3.3.1   | Done|
+---
+
+External Hadoop-based applications should also be assumed to be incompatible
+unless otherwise stated/known.
+
+It is only safe to change the directory marker policy if the following
+conditions are met:
+
+1. You know exactly which applications are writing to and reading from
+   (including backing up) an S3 bucket.
+2. You know that all applications which read data from the bucket are compatible.
+
+###  Applications backing up data.
+
+It is not enough to have a version of Apache Hadoop which is compatible; any
+application which backs up an S3 bucket or copies it elsewhere must have an S3
+connector which is compatible. For the Hadoop codebase, that means that if
+distcp is used, it _must_ be from a compatible Hadoop version.
+
+###  How will incompatible applications/versions 
fail? 
+
+Applications using an incompatible version of the S3A connector will mistake
+directories containing data for empty directories. This means that
+
+* Listing directories/directory trees may exclude files which exist.
+* Queries across the data will miss data files.
+* Renaming a directory to a new location may exclude files underneath.
+
+###  If an application has updated a directory tree 
incompatibly-- what can be done?
+
+There's a tool on the hadoop command line, [marker tool](#marker-tool) which 
can audit
+a bucket/path for markers, and clean up any which were found.

Review comment:
   nit : cleanup any files??





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2212: HDFS-15496. Add UI for deleted snapshots

2020-08-11 Thread GitBox


hadoop-yetus commented on pull request #2212:
URL: https://github.com/apache/hadoop/pull/2212#issuecomment-671820507


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  31m 54s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  29m  4s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 10s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 20s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 57s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 56s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  2s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 39s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 1 new + 11 unchanged - 0 fixed = 12 total (was 11)  |
   | +1 :green_heart: |  mvnsite  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 53s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 56s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 121m 54s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 235m 34s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.TestGetFileChecksum |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2212/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2212 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7d2205311312 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 32895f4f7ea |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2212/1/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2212/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2212/1/testReport/ |
   | Max. process+thread count | 3755 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2211: HDFS-15098. Add SM4 encryption method for HDFS

2020-08-11 Thread GitBox


hadoop-yetus commented on pull request #2211:
URL: https://github.com/apache/hadoop/pull/2211#issuecomment-671803202


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 40s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
3 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 25s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  33m 29s |  trunk passed  |
   | +1 :green_heart: |  compile  |  26m 37s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  21m 54s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 41s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 38s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m  7s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  0s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 58s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 56s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   8m 17s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 59s |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 56s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  23m 56s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 17 new + 145 unchanged - 
17 fixed = 162 total (was 162)  |
   | +1 :green_heart: |  golang  |  23m 56s |  the patch passed  |
   | -1 :x: |  javac  |  23m 56s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 4 new + 2049 unchanged - 
0 fixed = 2053 total (was 2049)  |
   | +1 :green_heart: |  compile  |  19m 17s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  19m 17s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 28 new + 134 unchanged - 
28 fixed = 162 total (was 162)  |
   | +1 :green_heart: |  golang  |  19m 17s |  the patch passed  |
   | -1 :x: |  javac  |  19m 17s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 4 new + 1944 unchanged - 
0 fixed = 1948 total (was 1944)  |
   | -0 :warning: |  checkstyle  |   2m 59s |  root: The patch generated 3 new 
+ 211 unchanged - 8 fixed = 214 total (was 219)  |
   | +1 :green_heart: |  mvnsite  |   4m  5s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 25s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  9s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 56s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   9m 24s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 34s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 28s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 127m  9s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 355m  0s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
   |   | hadoop.hdfs.tools.TestViewFileSystemOverloadSchemeWithDFSAdmin |
   |   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.TestNameNode

[jira] [Updated] (HADOOP-17196) Fix C/C++ standard warnings

2020-08-11 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17196:
---
Summary: Fix C/C++ standard warnings  (was: Compilation warnings caused by 
non-standard flags)

> Fix C/C++ standard warnings
> ---
>
> Key: HADOOP-17196
> URL: https://issues.apache.org/jira/browse/HADOOP-17196
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.1.3
> Environment: Windows 10 Pro 64-bit
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The C/C++ language standard is not specified in a cross-compiler manner. Even 
> though it's as straightforward as passing *-std* as compiler arguments, not 
> all the values are supported by all the compilers. For example, compilation 
> with the Visual C++ compiler on Windows with *-std=gnu99* flag causes the 
> following warning -
> {code:java}
> cl : command line warning D9002: ignoring unknown option '-std=gnu99' 
> [Z:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfs-examples\hdfs_read.vcxproj]
>  {code}
> Thus, we need to use the appropriate flags provided by CMake to specify the 
> C/C++ standards so that it is compiler-friendly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17196) Compilation warnings caused by non-standard flags

2020-08-11 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-17196:
--

Assignee: Gautham Banasandra

> Compilation warnings caused by non-standard flags
> -
>
> Key: HADOOP-17196
> URL: https://issues.apache.org/jira/browse/HADOOP-17196
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.1.3
> Environment: Windows 10 Pro 64-bit
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The C/C++ language standard is not specified in a cross-compiler manner. Even 
> though it's as straightforward as passing *-std* as compiler arguments, not 
> all the values are supported by all the compilers. For example, compilation 
> with the Visual C++ compiler on Windows with *-std=gnu99* flag causes the 
> following warning -
> {code:java}
> cl : command line warning D9002: ignoring unknown option '-std=gnu99' 
> [Z:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfs-examples\hdfs_read.vcxproj]
>  {code}
> Thus, we need to use the appropriate flags provided by CMake to specify the 
> C/C++ standards so that it is compiler-friendly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17196) Compilation warnings caused by non-standard flags

2020-08-11 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17196.

Fix Version/s: 3.4.0
   3.3.1
   3.2.2
 Hadoop Flags: Reviewed
   Resolution: Fixed

Merged the PR into trunk, branch-3.3, and branch-3.2. Thanks [~gautham] for 
your contribution!

> Compilation warnings caused by non-standard flags
> -
>
> Key: HADOOP-17196
> URL: https://issues.apache.org/jira/browse/HADOOP-17196
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.1.3
> Environment: Windows 10 Pro 64-bit
>Reporter: Gautham Banasandra
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The C/C++ language standard is not specified in a cross-compiler manner. Even 
> though it's as straightforward as passing *-std* as compiler arguments, not 
> all the values are supported by all the compilers. For example, compilation 
> with the Visual C++ compiler on Windows with *-std=gnu99* flag causes the 
> following warning -
> {code:java}
> cl : command line warning D9002: ignoring unknown option '-std=gnu99' 
> [Z:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfs-examples\hdfs_read.vcxproj]
>  {code}
> Thus, we need to use the appropriate flags provided by CMake to specify the 
> C/C++ standards so that it is compiler-friendly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17196) Compilation warnings caused by non-standard flags

2020-08-11 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17196:
---
Target Version/s:   (was: 3.1.3)

> Compilation warnings caused by non-standard flags
> -
>
> Key: HADOOP-17196
> URL: https://issues.apache.org/jira/browse/HADOOP-17196
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.1.3
> Environment: Windows 10 Pro 64-bit
>Reporter: Gautham Banasandra
>Priority: Major
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The C/C++ language standard is not specified in a cross-compiler manner. Even 
> though it's as straightforward as passing *-std* as compiler arguments, not 
> all the values are supported by all the compilers. For example, compilation 
> with the Visual C++ compiler on Windows with *-std=gnu99* flag causes the 
> following warning -
> {code:java}
> cl : command line warning D9002: ignoring unknown option '-std=gnu99' 
> [Z:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfs-examples\hdfs_read.vcxproj]
>  {code}
> Thus, we need to use the appropriate flags provided by CMake to specify the 
> C/C++ standards so that it is compiler-friendly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] szetszwo opened a new pull request #2215: HDFS-15521. Remove INode.dumpTreeRecursively().

2020-08-11 Thread GitBox


szetszwo opened a new pull request #2215:
URL: https://github.com/apache/hadoop/pull/2215


   https://issues.apache.org/jira/browse/HDFS-15521



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #2208: HADOOP-17196 Fix C/C++ standard warnings

2020-08-11 Thread GitBox


aajisaka commented on pull request #2208:
URL: https://github.com/apache/hadoop/pull/2208#issuecomment-671782651


   Thank you @GauthamBanasandra for your contribution!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2133: HADOOP-17122: Preserving Directory Attributes in DistCp with Atomic Copy

2020-08-11 Thread GitBox


hadoop-yetus commented on pull request #2133:
URL: https://github.com/apache/hadoop/pull/2133#issuecomment-671782420


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 22s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 14s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 25s |  trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 23s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 37s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 55s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 52s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 20s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 14s |  hadoop-tools/hadoop-distcp: The patch generated 1 new + 41 unchanged - 1 fixed = 42 total (was 42)  |
   | +1 :green_heart: |  mvnsite  |   0m 23s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  16m 36s |  patch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   0m 51s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  14m 36s |  hadoop-distcp in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  The patch does not generate ASF License warnings.  |
   |  |   |  93m 51s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2133/7/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2133 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 8f5f9a8cb8eb 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 32895f4f7ea |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2133/7/artifact/out/diff-checkstyle-hadoop-tools_hadoop-distcp.txt |
   | whitespace | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2133/7/artifact/out/whitespace-eol.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2133/7/testReport/ |
   | Max. process+thread count | 338 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2133/7/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] aajisaka merged pull request #2208: HADOOP-17196 Fix C/C++ standard warnings

2020-08-11 Thread GitBox


aajisaka merged pull request #2208:
URL: https://github.com/apache/hadoop/pull/2208


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] umamaheswararao commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree

2020-08-11 Thread GitBox


umamaheswararao commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-671776607


   I will review it in a day or two. Thanks.
   BTW, you may need similar changes in ViewFs.java as well; I think nfly was 
also missed there.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on pull request #2212: HDFS-15496. Add UI for deleted snapshots

2020-08-11 Thread GitBox


bshashikant commented on pull request #2212:
URL: https://github.com/apache/hadoop/pull/2212#issuecomment-671774640


   Thanks @vivekratnavel for putting up the patch. The patch in general looks 
good. Some comments inline:
   
   1) Since the patch modifies the SnapshotInfo class, let's remove 
SnapshotStatus.Bean().
   2) Having a separate column for snapshotName and another for the snapshot 
path may not be useful. Instead, can we just have one column for the snapshot 
path (the snapshotName is implicit)?
   3) Snapshot permission, owner, and group are newly added to the UI page. Any 
specific reason?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] umamaheswararao merged pull request #2205: HDFS-15515: mkdirs on fallback should throw IOE out instead of suppressing and returning false

2020-08-11 Thread GitBox


umamaheswararao merged pull request #2205:
URL: https://github.com/apache/hadoop/pull/2205
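
For readers skimming the digest: the change merged above alters error handling on the ViewFS fallback path so that a failed mkdirs propagates its IOException instead of being swallowed into a plain false return. Below is a minimal, hypothetical Java sketch of that before/after contrast; FallbackTarget, mkdirsSuppressing, and mkdirsThrowing are illustrative names only, not the actual ViewFileSystem API.

```java
// Illustrative sketch only -- hypothetical stand-ins, not the real
// org.apache.hadoop.fs.viewfs.ViewFileSystem code.
import java.io.IOException;

public class FallbackMkdirsSketch {

  /** Hypothetical stand-in for the fallback file system target. */
  interface FallbackTarget {
    boolean mkdirs(String path) throws IOException;
  }

  // Before: the IOException is suppressed, so the caller only ever sees
  // "false" and loses the actual reason the directory was not created.
  static boolean mkdirsSuppressing(FallbackTarget fallback, String path) {
    try {
      return fallback.mkdirs(path);
    } catch (IOException e) {
      return false;
    }
  }

  // After (the behavior HDFS-15515 asks for): the IOException propagates,
  // letting callers distinguish "not created" from "failed with an error".
  static boolean mkdirsThrowing(FallbackTarget fallback, String path)
      throws IOException {
    return fallback.mkdirs(path);
  }
}
```

The design point is simply that a bare false return conflates permission errors, connectivity failures, and genuine no-op cases, which the caller cannot tell apart; rethrowing preserves that distinction.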


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant merged pull request #2203: HDFS-15520 Use visitor pattern to visit namespace tree

2020-08-11 Thread GitBox


bshashikant merged pull request #2203:
URL: https://github.com/apache/hadoop/pull/2203


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on pull request #2203: HDFS-15520 Use visitor pattern to visit namespace tree

2020-08-11 Thread GitBox


bshashikant commented on pull request #2203:
URL: https://github.com/apache/hadoop/pull/2203#issuecomment-671767285


   Thanks @szetszwo for the contribution. I have committed this.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org