[jira] [Work logged] (HDFS-15635) ViewFileSystemOverloadScheme support specifying mount table loader imp through conf

2020-11-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15635?focusedWorklogId=513954&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-513954
 ]

ASF GitHub Bot logged work on HDFS-15635:
-

Author: ASF GitHub Bot
Created on: 19/Nov/20 06:18
Start Date: 19/Nov/20 06:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2389:
URL: https://github.com/apache/hadoop/pull/2389#issuecomment-73016


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  27m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |  22m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 54s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 50s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  26m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |  26m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |  21m 39s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 39s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  19m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 58s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  11m 58s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  asflicense  |   0m 52s | 
[/patch-asflicense-problems.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2389/6/artifact/out/patch-asflicense-problems.txt)
 |  The patch generated 2 ASF License warnings.  |
   |  |   | 216m 56s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2389/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2389 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 48a312d73366 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 34aa6137bd8 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2389/6/testReport/ |
   | Max. process+thread count | 2118 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-c

[jira] [Commented] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2020-11-18 Thread Daniel Howard (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17235106#comment-17235106
 ] 

Daniel Howard commented on HDFS-12109:
--

PS: thank you, [~luigidifraia], for documenting this issue and [~surendrasingh] 
for the suggested fix. I am setting up HA right now and made the same 
copy-paste error, copying {{dfs.client.failover.proxy.provider.mycluster}} into my 
configuration verbatim!
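
For reference, the failover proxy provider property has to be suffixed with the 
actual nameservice ID from {{dfs.nameservices}} (saccluster in this report); an 
illustrative hdfs-site.xml entry:

<property>
  <name>dfs.client.failover.proxy.provider.saccluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>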

> "fs" java.net.UnknownHostException when HA NameNode is used
> ---
>
> Key: HDFS-12109
> URL: https://issues.apache.org/jira/browse/HDFS-12109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
> Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Luigi Di Fraia
>Priority: Major
>
> After setting up an HA NameNode configuration, the following invocation of 
> "fs" fails:
> [hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
> -ls: java.net.UnknownHostException: saccluster
> It works if properties are defined as per below:
> /usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
> -Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>  -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
> -Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
> -Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /
> These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
> per below:
> <property>
>   <name>dfs.nameservices</name>
>   <value>saccluster</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.saccluster</name>
>   <value>namenode01,namenode02</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
>   <value>namenode01:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
>   <value>namenode02:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode01</name>
>   <value>namenode01:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode02</name>
>   <value>namenode02:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir</name>
>   <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
> </property>
> <property>
>   <name>dfs.client.failover.proxy.provider.mycluster</name>
>   <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as 
> per below:
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://saccluster</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:
> export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"
> Is "fs" trying to read these properties from somewhere else, such as a 
> separate client configuration file?
> Apologies if I am missing something obvious here.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15689) allow/disallowSnapshot on EZ roots shouldn't fail due to trash provisioning/emptiness check

2020-11-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15689?focusedWorklogId=513906&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-513906
 ]

ASF GitHub Bot logged work on HDFS-15689:
-

Author: ASF GitHub Bot
Created on: 19/Nov/20 01:45
Start Date: 19/Nov/20 01:45
Worklog Time Spent: 10m 
  Work Description: smengcl commented on a change in pull request #2472:
URL: https://github.com/apache/hadoop/pull/2472#discussion_r526536143



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
##
@@ -171,7 +174,15 @@ public void clearQuotaByStorageType(Path src, StorageType 
type) throws IOExcepti
   public void allowSnapshot(Path path) throws IOException {
 dfs.allowSnapshot(path);
 if (dfs.isSnapshotTrashRootEnabled()) {
-  dfs.provisionSnapshotTrash(path, TRASH_PERMISSION);
+  try {
+dfs.provisionSnapshotTrash(path, TRASH_PERMISSION);
+  } catch (FileAlreadyExistsException ex) {
+// Don't throw on FileAlreadyExistsException since it is likely due to
+// admin allowing snapshot on an EZ root.
+LOG.warn(ex.getMessage());

Review comment:
   Thanks @bshashikant for the comment.
   
   I was intending not to change the behavior of `DFS#provisionSnapshotTrash`, 
but on second thought changing it should be fine since 3.4.0 is not released yet.
   
   I'm in favor of (1). It makes sense to reuse the trash if it is configured 
correctly. I will update the PR a bit later.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 513906)
Time Spent: 1h  (was: 50m)

> allow/disallowSnapshot on EZ roots shouldn't fail due to trash 
> provisioning/emptiness check
> ---
>
> Key: HDFS-15689
> URL: https://issues.apache.org/jira/browse/HDFS-15689
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> h2. Background
> 1. HDFS-15607 added a feature that when 
> {{dfs.namenode.snapshot.trashroot.enabled=true}}, allowSnapshot will 
> automatically create a .Trash directory immediately after allowSnapshot 
> operation so files deleted will be moved into the trash root inside the 
> snapshottable directory.
> 2. HDFS-15539 prevents admins from disallowing snapshot if the trash root 
> inside is not empty
> h2. Problem
> 1. When {{dfs.namenode.snapshot.trashroot.enabled=true}}, currently if the 
> directory (to be allowed snapshot on) is an EZ root, it throws 
> {{FileAlreadyExistsException}} because the trash root already exists 
> (encryption zone has already created an internal trash root).
> 2. Similarly, at the moment if we disallow snapshot on an EZ root, it may 
> complain that the trash root is not empty (or delete it if empty, which is 
> not desired since EZ will still need it).
> h2. Solution
> 1. Let allowSnapshot succeed by not throwing {{FileAlreadyExistsException}}, 
> but inform the admin that the trash already exists.
> 2. Ignore {{checkTrashRootAndRemoveIfEmpty()}} check if path is EZ root.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15689) allow/disallowSnapshot on EZ roots shouldn't fail due to trash provisioning/emptiness check

2020-11-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15689?focusedWorklogId=513902&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-513902
 ]

ASF GitHub Bot logged work on HDFS-15689:
-

Author: ASF GitHub Bot
Created on: 19/Nov/20 01:40
Start Date: 19/Nov/20 01:40
Worklog Time Spent: 10m 
  Work Description: smengcl commented on a change in pull request #2472:
URL: https://github.com/apache/hadoop/pull/2472#discussion_r526536143



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
##
@@ -171,7 +174,15 @@ public void clearQuotaByStorageType(Path src, StorageType 
type) throws IOExcepti
   public void allowSnapshot(Path path) throws IOException {
 dfs.allowSnapshot(path);
 if (dfs.isSnapshotTrashRootEnabled()) {
-  dfs.provisionSnapshotTrash(path, TRASH_PERMISSION);
+  try {
+dfs.provisionSnapshotTrash(path, TRASH_PERMISSION);
+  } catch (FileAlreadyExistsException ex) {
+// Don't throw on FileAlreadyExistsException since it is likely due to
+// admin allowing snapshot on an EZ root.
+LOG.warn(ex.getMessage());

Review comment:
   Thanks @bshashikant for the comment.
   
   I was intending not to change the behavior of `DFS#provisionSnapshotTrash`, 
but on second thought changing it should be fine since 3.4.0 is not released yet.
   
   I will update the PR a bit later.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 513902)
Time Spent: 50m  (was: 40m)

> allow/disallowSnapshot on EZ roots shouldn't fail due to trash 
> provisioning/emptiness check
> ---
>
> Key: HDFS-15689
> URL: https://issues.apache.org/jira/browse/HDFS-15689
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> h2. Background
> 1. HDFS-15607 added a feature that when 
> {{dfs.namenode.snapshot.trashroot.enabled=true}}, allowSnapshot will 
> automatically create a .Trash directory immediately after allowSnapshot 
> operation so files deleted will be moved into the trash root inside the 
> snapshottable directory.
> 2. HDFS-15539 prevents admins from disallowing snapshot if the trash root 
> inside is not empty
> h2. Problem
> 1. When {{dfs.namenode.snapshot.trashroot.enabled=true}}, currently if the 
> directory (to be allowed snapshot on) is an EZ root, it throws 
> {{FileAlreadyExistsException}} because the trash root already exists 
> (encryption zone has already created an internal trash root).
> 2. Similarly, at the moment if we disallow snapshot on an EZ root, it may 
> complain that the trash root is not empty (or delete it if empty, which is 
> not desired since EZ will still need it).
> h2. Solution
> 1. Let allowSnapshot succeed by not throwing {{FileAlreadyExistsException}}, 
> but inform the admin that the trash already exists.
> 2. Ignore {{checkTrashRootAndRemoveIfEmpty()}} check if path is EZ root.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14904) Balancer should pick nodes based on utilization in each iteration

2020-11-18 Thread Leon Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leon Gao updated HDFS-14904:

Target Version/s:   (was: 3.4.0)

> Balancer should pick nodes based on utilization in each iteration
> -
>
> Key: HDFS-14904
> URL: https://issues.apache.org/jira/browse/HDFS-14904
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>
> In each iteration, balancer should pick nodes with the highest/lowest usage 
> first, as suggested in HDFS-14894



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15659) Set dfs.namenode.redundancy.considerLoad to false in MiniDFSCluster

2020-11-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15659?focusedWorklogId=513734&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-513734
 ]

ASF GitHub Bot logged work on HDFS-15659:
-

Author: ASF GitHub Bot
Created on: 18/Nov/20 19:43
Start Date: 18/Nov/20 19:43
Worklog Time Spent: 10m 
  Work Description: amahussein commented on pull request #2443:
URL: https://github.com/apache/hadoop/pull/2443#issuecomment-729911590


   I checked hadoop.hdfs.server.namenode.TestListCorruptFileBlocks and it is 
passing.
   I will monitor it and file a new jira if I see it failing frequently.
   @ayushtkn , I think we can go ahead and merge this.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 513734)
Time Spent: 3.5h  (was: 3h 20m)

> Set dfs.namenode.redundancy.considerLoad to false in MiniDFSCluster
> ---
>
> Key: HDFS-15659
> URL: https://issues.apache.org/jira/browse/HDFS-15659
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> dfs.namenode.redundancy.considerLoad is true by default and it is causing 
> many test failures. Let's disable it in MiniDFSCluster.
> Originally reported by [~weichiu]: 
> https://github.com/apache/hadoop/pull/2410#pullrequestreview-51612
> {quote}
> I've certainly seen this option causing test failures in the past.
> Maybe we should turn it off by default in MiniDFSCluster, and only enable it 
> for specific tests.
> {quote}
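
Until that default changes, a test can disable it explicitly when building its 
cluster. A minimal sketch, assuming a standard MiniDFSCluster-based test in 
hadoop-hdfs and the DFSConfigKeys constant for dfs.namenode.redundancy.considerLoad:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

Configuration conf = new HdfsConfiguration();
// Turn off load-based placement so lightly loaded test DataNodes are never skipped.
conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_CONSIDERLOAD_KEY, false);
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
try {
  cluster.waitActive();
  // ... test body ...
} finally {
  cluster.shutdown();
}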



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15635) ViewFileSystemOverloadScheme support specifying mount table loader imp through conf

2020-11-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15635?focusedWorklogId=513600&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-513600
 ]

ASF GitHub Bot logged work on HDFS-15635:
-

Author: ASF GitHub Bot
Created on: 18/Nov/20 15:34
Start Date: 18/Nov/20 15:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2389:
URL: https://github.com/apache/hadoop/pull/2389#issuecomment-729759143


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  31m  3s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 13s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  26m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |  22m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 57s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 54s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |  20m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m  7s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 41s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  10m 29s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2389/3/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 225m 51s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | 
hadoop.fs.viewfs.TestViewFSOverloadSchemeCentralMountTableConfig |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2389/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2389 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 399fb220a92f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 425996eb4a1 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-ha

[jira] [Work logged] (HDFS-15689) allow/disallowSnapshot on EZ roots shouldn't fail due to trash provisioning/emptiness check

2020-11-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15689?focusedWorklogId=513593&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-513593
 ]

ASF GitHub Bot logged work on HDFS-15689:
-

Author: ASF GitHub Bot
Created on: 18/Nov/20 15:18
Start Date: 18/Nov/20 15:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2472:
URL: https://github.com/apache/hadoop/pull/2472#issuecomment-729748257


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  32m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 55s |  |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |   7m 12s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2472/1/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   4m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   4m  2s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  9s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 36s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   4m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   4m  0s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  5s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2472/1/artifact/out/whitespace-eol.txt)
 |  The patch has 1 line(s) that end in whitespace. Use git apply 
--whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  15m  4s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 55s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 27s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 142m 59s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2472/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 279m 54s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestBackupNode |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.TestDistributedFileSystem |
   |   | hadoop.hdfs.TestViewDistributedFileSystem |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multi

[jira] [Work logged] (HDFS-15635) ViewFileSystemOverloadScheme support specifying mount table loader imp through conf

2020-11-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15635?focusedWorklogId=513585&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-513585
 ]

ASF GitHub Bot logged work on HDFS-15635:
-

Author: ASF GitHub Bot
Created on: 18/Nov/20 15:02
Start Date: 18/Nov/20 15:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2389:
URL: https://github.com/apache/hadoop/pull/2389#issuecomment-729737319


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  28m  9s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   6m  9s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2389/5/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |  27m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |  17m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 20s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 16s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |  19m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m  5s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 23s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   9m 36s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2389/5/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 177m 12s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | 
hadoop.fs.viewfs.TestViewFSOverloadSchemeCentralMountTableConfig |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2389/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2389 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 08b85b97ebad 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 425996eb4a1 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubun

[jira] [Work logged] (HDFS-15689) allow/disallowSnapshot on EZ roots shouldn't fail due to trash provisioning/emptiness check

2020-11-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15689?focusedWorklogId=513571&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-513571
 ]

ASF GitHub Bot logged work on HDFS-15689:
-

Author: ASF GitHub Bot
Created on: 18/Nov/20 14:20
Start Date: 18/Nov/20 14:20
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2472:
URL: https://github.com/apache/hadoop/pull/2472#issuecomment-729709249


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 56s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   3m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   2m  2s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  2s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 23s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  0s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m  0s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   4m  0s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   3m 42s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  1s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2472/2/artifact/out/whitespace-eol.txt)
 |  The patch has 1 line(s) that end in whitespace. Use git apply 
--whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  15m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 56s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 32s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 19s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  |  96m 50s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2472/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 213m 24s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.TestDistributedFileSystem |
   |   | hadoop.hdfs.TestViewDistributedFileSystem |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
   |   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2472/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2472 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |

[jira] [Work logged] (HDFS-15689) allow/disallowSnapshot on EZ roots shouldn't fail due to trash provisioning/emptiness check

2020-11-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15689?focusedWorklogId=513528&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-513528
 ]

ASF GitHub Bot logged work on HDFS-15689:
-

Author: ASF GitHub Bot
Created on: 18/Nov/20 12:08
Start Date: 18/Nov/20 12:08
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on a change in pull request #2472:
URL: https://github.com/apache/hadoop/pull/2472#discussion_r526035428



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
##
@@ -171,7 +174,15 @@ public void clearQuotaByStorageType(Path src, StorageType 
type) throws IOExcepti
   public void allowSnapshot(Path path) throws IOException {
 dfs.allowSnapshot(path);
 if (dfs.isSnapshotTrashRootEnabled()) {
-  dfs.provisionSnapshotTrash(path, TRASH_PERMISSION);
+  try {
+dfs.provisionSnapshotTrash(path, TRASH_PERMISSION);
+  } catch (FileAlreadyExistsException ex) {
+// Don't throw on FileAlreadyExistsException since it is likely due to
+// admin allowing snapshot on an EZ root.
+LOG.warn(ex.getMessage());

Review comment:
   I think there are 2 approaches instead of just ignoring the 
FileAlreadyExists exception:
   1) We can validate whether the existing Trash path is a directory, validate 
its permissions, and not throw the FileAlreadyExists exception at all in 
DistributedFileSystem.java#provisionSnapshotTrash.
   2) If the trash path already exists with the right permissions, we can also 
check whether the path is an encryption zone, and throw the FileAlreadyExists 
exception only if it is not an encryption zone. A similar change will be 
required when making a snapshottable dir an encryption zone.
   
   I am ok with either of the above approaches. I think just ignoring the 
exception here will not work in case the existing path is not a directory or 
does not have the right permissions.
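
   For illustration, a rough sketch of approach (1): validate and reuse an 
existing trash root inside something like DistributedFileSystem#provisionSnapshotTrash 
instead of throwing. TRASH_PERMISSION comes from the diff above; the rest is an 
assumption, not the actual implementation:

   Path trashRoot = new Path(path, FileSystem.TRASH_PREFIX);  // ".Trash"
   if (dfs.exists(trashRoot)) {
     FileStatus st = dfs.getFileStatus(trashRoot);
     if (!st.isDirectory()) {
       // Still fail hard if the name is taken by a regular file.
       throw new FileAlreadyExistsException(trashRoot + " exists but is not a directory");
     }
     if (!st.getPermission().equals(TRASH_PERMISSION)) {
       // Reuse the EZ-provisioned trash root, fixing up permissions instead of failing.
       dfs.setPermission(trashRoot, TRASH_PERMISSION);
     }
     return;
   }
   // Otherwise create it as before.
   dfs.mkdirs(trashRoot, TRASH_PERMISSION);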
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 513528)
Time Spent: 20m  (was: 10m)

> allow/disallowSnapshot on EZ roots shouldn't fail due to trash 
> provisioning/emptiness check
> ---
>
> Key: HDFS-15689
> URL: https://issues.apache.org/jira/browse/HDFS-15689
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> h2. Background
> 1. HDFS-15607 added a feature that when 
> {{dfs.namenode.snapshot.trashroot.enabled=true}}, allowSnapshot will 
> automatically create a .Trash directory immediately after allowSnapshot 
> operation so files deleted will be moved into the trash root inside the 
> snapshottable directory.
> 2. HDFS-15539 prevents admins from disallowing snapshot if the trash root 
> inside is not empty
> h2. Problem
> 1. When {{dfs.namenode.snapshot.trashroot.enabled=true}}, currently if the 
> directory (to be allowed snapshot on) is an EZ root, it throws 
> {{FileAlreadyExistsException}} because the trash root already exists 
> (encryption zone has already created an internal trash root).
> 2. Similarly, at the moment if we disallow snapshot on an EZ root, it may 
> complain that the trash root is not empty (or delete it if empty, which is 
> not desired since EZ will still need it).
> h2. Solution
> 1. Let allowSnapshot succeed by not throwing {{FileAlreadyExistsException}}, 
> but inform the admin that the trash already exists.
> 2. Ignore {{checkTrashRootAndRemoveIfEmpty()}} check if path is EZ root.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15635) ViewFileSystemOverloadScheme support specifying mount table loader imp through conf

2020-11-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15635?focusedWorklogId=513520&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-513520
 ]

ASF GitHub Bot logged work on HDFS-15635:
-

Author: ASF GitHub Bot
Created on: 18/Nov/20 11:59
Start Date: 18/Nov/20 11:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2389:
URL: https://github.com/apache/hadoop/pull/2389#issuecomment-729632632


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  5s |  |  
https://github.com/apache/hadoop/pull/2389 does not apply to trunk. Rebase 
required? Wrong Branch? See 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  
|
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2389 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2389/4/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 513520)
Time Spent: 1h 40m  (was: 1.5h)

> ViewFileSystemOverloadScheme support specifying mount table loader imp 
> through conf
> ---
>
> Key: HDFS-15635
> URL: https://issues.apache.org/jira/browse/HDFS-15635
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfsOverloadScheme
>Reporter: Junfan Zhang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> According to HDFS-15289, the default mount table loader is 
> {{[HCFSMountTableConfigLoader|https://github.com/apache/hadoop/blob/4734c77b4b64b7c6432da4cc32881aba85f94ea1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java#L35]}}.
> In some scenarios, users want to implement the mount table loader by 
> themselves, so it is necessary to dynamically configure the loader.
>  
> cc [~shv], [~abhishekd], [~hexiaoqiao]
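
A minimal sketch of the kind of conf-driven lookup being proposed, reusing Hadoop's 
standard Configuration.getClass/ReflectionUtils pattern and the MountTableConfigLoader 
interface added in HDFS-15289. The config key name below is hypothetical, not 
necessarily what this patch introduces:

// Resolve the loader class from a (hypothetical) config key, falling back to the
// current default loader from HDFS-15289.
Class<? extends MountTableConfigLoader> loaderClass = conf.getClass(
    "fs.viewfs.mounttable.config.loader.impl",
    HCFSMountTableConfigLoader.class,
    MountTableConfigLoader.class);
MountTableConfigLoader loader = ReflectionUtils.newInstance(loaderClass, conf);
// mountTableConfigPath points at the central mount-table configuration file.
loader.load(mountTableConfigPath, conf);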



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15689) allow/disallowSnapshot on EZ roots shouldn't fail due to trash provisioning/emptiness check

2020-11-18 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-15689:
---
Parent: HDFS-15477
Issue Type: Sub-task  (was: Bug)

> allow/disallowSnapshot on EZ roots shouldn't fail due to trash 
> provisioning/emptiness check
> ---
>
> Key: HDFS-15689
> URL: https://issues.apache.org/jira/browse/HDFS-15689
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h2. Background
> 1. HDFS-15607 added a feature that when 
> {{dfs.namenode.snapshot.trashroot.enabled=true}}, allowSnapshot will 
> automatically create a .Trash directory immediately after allowSnapshot 
> operation so files deleted will be moved into the trash root inside the 
> snapshottable directory.
> 2. HDFS-15539 prevents admins from disallowing snapshot if the trash root 
> inside is not empty
> h2. Problem
> 1. When {{dfs.namenode.snapshot.trashroot.enabled=true}}, currently if the 
> directory (to be allowed snapshot on) is an EZ root, it throws 
> {{FileAlreadyExistsException}} because the trash root already exists 
> (encryption zone has already created an internal trash root).
> 2. Similarly, at the moment if we disallow snapshot on an EZ root, it may 
> complain that the trash root is not empty (or delete it if empty, which is 
> not desired since EZ will still need it).
> h2. Solution
> 1. Let allowSnapshot succeed by not throwing {{FileAlreadyExistsException}}, 
> but inform the admin that the trash already exists.
> 2. Ignore {{checkTrashRootAndRemoveIfEmpty()}} check if path is EZ root.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15689) allow/disallowSnapshot on EZ roots shouldn't fail due to trash provisioning/emptiness check

2020-11-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15689:
--
Labels: pull-request-available  (was: )

> allow/disallowSnapshot on EZ roots shouldn't fail due to trash 
> provisioning/emptiness check
> ---
>
> Key: HDFS-15689
> URL: https://issues.apache.org/jira/browse/HDFS-15689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h2. Background
> 1. HDFS-15607 added a feature that when 
> {{dfs.namenode.snapshot.trashroot.enabled=true}}, allowSnapshot will 
> automatically create a .Trash directory immediately after allowSnapshot 
> operation so files deleted will be moved into the trash root inside the 
> snapshottable directory.
> 2. HDFS-15539 prevents admins from disallowing snapshot if the trash root 
> inside is not empty
> h2. Problem
> 1. When {{dfs.namenode.snapshot.trashroot.enabled=true}}, currently if the 
> directory (to be allowed snapshot on) is an EZ root, it throws 
> {{FileAlreadyExistsException}} because the trash root already exists 
> (encryption zone has already created an internal trash root).
> 2. Similarly, at the moment if we disallow snapshot on an EZ root, it may 
> complain that the trash root is not empty (or delete it if empty, which is 
> not desired since EZ will still need it).
> h2. Solution
> 1. Let allowSnapshot succeed by not throwing {{FileAlreadyExistsException}}, 
> but inform the admin that the trash already exists.
> 2. Ignore {{checkTrashRootAndRemoveIfEmpty()}} check if path is EZ root.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15689) allow/disallowSnapshot on EZ roots shouldn't fail due to trash provisioning/emptiness check

2020-11-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15689?focusedWorklogId=513482&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-513482
 ]

ASF GitHub Bot logged work on HDFS-15689:
-

Author: ASF GitHub Bot
Created on: 18/Nov/20 10:32
Start Date: 18/Nov/20 10:32
Worklog Time Spent: 10m 
  Work Description: smengcl opened a new pull request #2472:
URL: https://github.com/apache/hadoop/pull/2472


   https://issues.apache.org/jira/browse/HDFS-15689
   
   See the jira description for details.
   
   Will add UT in `TestDFSAdmin` later. Maybe also in `TestEncryptionZones`.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 513482)
Remaining Estimate: 0h
Time Spent: 10m

> allow/disallowSnapshot on EZ roots shouldn't fail due to trash 
> provisioning/emptiness check
> ---
>
> Key: HDFS-15689
> URL: https://issues.apache.org/jira/browse/HDFS-15689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h2. Background
> 1. HDFS-15607 added a feature that when 
> {{dfs.namenode.snapshot.trashroot.enabled=true}}, allowSnapshot will 
> automatically create a .Trash directory immediately after allowSnapshot 
> operation so files deleted will be moved into the trash root inside the 
> snapshottable directory.
> 2. HDFS-15539 prevents admins from disallowing snapshot if the trash root 
> inside is not empty
> h2. Problem
> 1. When {{dfs.namenode.snapshot.trashroot.enabled=true}}, currently if the 
> directory (to be allowed snapshot on) is an EZ root, it throws 
> {{FileAlreadyExistsException}} because the trash root already exists 
> (encryption zone has already created an internal trash root).
> 2. Similarly, at the moment if we disallow snapshot on an EZ root, it may 
> complain that the trash root is not empty (or delete it if empty, which is 
> not desired since EZ will still need it).
> h2. Solution
> 1. Let allowSnapshot succeed by not throwing {{FileAlreadyExistsException}}, 
> but inform the admin that the trash already exists.
> 2. Ignore {{checkTrashRootAndRemoveIfEmpty()}} check if path is EZ root.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15689) allow/disallowSnapshot on EZ roots shouldn't fail due to trash provisioning/emptiness check

2020-11-18 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15689:
--
Status: Patch Available  (was: Open)

> allow/disallowSnapshot on EZ roots shouldn't fail due to trash 
> provisioning/emptiness check
> ---
>
> Key: HDFS-15689
> URL: https://issues.apache.org/jira/browse/HDFS-15689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> h2. Background
> 1. HDFS-15607 added a feature that when 
> {{dfs.namenode.snapshot.trashroot.enabled=true}}, allowSnapshot will 
> automatically create a .Trash directory immediately after allowSnapshot 
> operation so files deleted will be moved into the trash root inside the 
> snapshottable directory.
> 2. HDFS-15539 prevents admins from disallowing snapshot if the trash root 
> inside is not empty
> h2. Problem
> 1. When {{dfs.namenode.snapshot.trashroot.enabled=true}}, currently if the 
> directory (to be allowed snapshot on) is an EZ root, it throws 
> {{FileAlreadyExistsException}} because the trash root already exists 
> (encryption zone has already created an internal trash root).
> 2. Similarly, at the moment if we disallow snapshot on an EZ root, it may 
> complain that the trash root is not empty (or delete it if empty, which is 
> not desired since EZ will still need it).
> h2. Solution
> 1. Let allowSnapshot succeed by not throwing {{FileAlreadyExistsException}}, 
> but inform the admin that the trash already exists.
> 2. Ignore {{checkTrashRootAndRemoveIfEmpty()}} check if path is EZ root.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15689) allow/disallowSnapshot on EZ roots shouldn't fail due to trash provisioning/emptiness check

2020-11-18 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15689:
--
Summary: allow/disallowSnapshot on EZ roots shouldn't fail due to trash 
provisioning/emptiness check  (was: allow/disallowSnapshot on EZ roots 
shouldn't fail due to trash root provisioning or emptiness check)

> allow/disallowSnapshot on EZ roots shouldn't fail due to trash 
> provisioning/emptiness check
> ---
>
> Key: HDFS-15689
> URL: https://issues.apache.org/jira/browse/HDFS-15689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> h2. Background
> 1. HDFS-15607 added a feature that when 
> {{dfs.namenode.snapshot.trashroot.enabled=true}}, allowSnapshot will 
> automatically create a .Trash directory immediately after allowSnapshot 
> operation so files deleted will be moved into the trash root inside the 
> snapshottable directory.
> 2. HDFS-15539 prevents admins from disallowing snapshot if the trash root 
> inside is not empty
> h2. Problem
> 1. When {{dfs.namenode.snapshot.trashroot.enabled=true}}, currently if the 
> directory (to be allowed snapshot on) is an EZ root, it throws 
> {{FileAlreadyExistsException}} because the trash root already exists 
> (encryption zone has already created an internal trash root).
> 2. Similarly, at the moment if we disallow snapshot on an EZ root, it may 
> complain that the trash root is not empty (or delete it if empty, which is 
> not desired since EZ will still need it).
> h2. Solution
> 1. Let allowSnapshot succeed by not throwing {{FileAlreadyExistsException}}, 
> but inform the admin that the trash already exists.
> 2. Ignore {{checkTrashRootAndRemoveIfEmpty()}} check if path is EZ root.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15689) allow/disallowSnapshot on EZ roots shouldn't fail due to trash root provisioning or emptiness check

2020-11-18 Thread Siyao Meng (Jira)
Siyao Meng created HDFS-15689:
-

 Summary: allow/disallowSnapshot on EZ roots shouldn't fail due to 
trash root provisioning or emptiness check
 Key: HDFS-15689
 URL: https://issues.apache.org/jira/browse/HDFS-15689
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.4.0
Reporter: Siyao Meng
Assignee: Siyao Meng


h2. Background

1. HDFS-15607 added a feature that when 
{{dfs.namenode.snapshot.trashroot.enabled=true}}, allowSnapshot will 
automatically create a .Trash directory immediately after allowSnapshot 
operation so files deleted will be moved into the trash root inside the 
snapshottable directory.
2. HDFS-15539 prevents admins from disallowing snapshot if the trash root 
inside is not empty

h2. Problem

1. When {{dfs.namenode.snapshot.trashroot.enabled=true}}, currently if the 
directory (to be allowed snapshot on) is an EZ root, it throws 
{{FileAlreadyExistsException}} because the trash root already exists 
(encryption zone has already created an internal trash root).
2. Similarly, at the moment if we disallow snapshot on an EZ root, it may 
complain that the trash root is not empty (or delete it if empty, which is not 
desired since EZ will still need it).

h2. Solution

1. Let allowSnapshot succeed by not throwing {{FileAlreadyExistsException}}, 
but inform the admin that the trash already exists.
2. Ignore {{checkTrashRootAndRemoveIfEmpty()}} check if path is EZ root.
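
A rough sketch of the two-part change: the allowSnapshot part mirrors the diff in 
PR #2472 shown elsewhere in this digest, while the EZ-root test via 
DistributedFileSystem#getEZForPath is an assumption, not necessarily what the final 
patch will do.

// 1. allowSnapshot: tolerate a pre-existing trash root (the EZ has already provisioned .Trash).
public void allowSnapshot(Path path) throws IOException {
  dfs.allowSnapshot(path);
  if (dfs.isSnapshotTrashRootEnabled()) {
    try {
      dfs.provisionSnapshotTrash(path, TRASH_PERMISSION);
    } catch (FileAlreadyExistsException ex) {
      // Likely an EZ root whose trash root already exists; warn instead of failing.
      LOG.warn("Trash root already exists under {}: {}", path, ex.getMessage());
    }
  }
}

// 2. disallowSnapshot: skip the trash emptiness/removal check for EZ roots.
private boolean isEncryptionZoneRoot(Path path) throws IOException {
  EncryptionZone ez = dfs.getEZForPath(path);
  return ez != null && ez.getPath().equals(path.toUri().getPath());
}
// ... and in checkTrashRootAndRemoveIfEmpty(): if (isEncryptionZoneRoot(path)) { return; }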



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15688) Improve MiniDFSCluster by adding a default constructor

2020-11-18 Thread Sayed Mohammad Hossein Torabi (Jira)
Sayed Mohammad Hossein Torabi created HDFS-15688:


 Summary: Improve MiniDFSCluster by adding a default constructor
 Key: HDFS-15688
 URL: https://issues.apache.org/jira/browse/HDFS-15688
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Sayed Mohammad Hossein Torabi






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org