[jira] [Commented] (HDFS-17089) Close child file systems in ViewFileSystem when cache is disabled.

2023-07-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743612#comment-17743612
 ] 

ASF GitHub Bot commented on HDFS-17089:
---

Hexiaoqiao commented on code in PR #5847:
URL: https://github.com/apache/hadoop/pull/5847#discussion_r1264823566


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java:
##
@@ -1926,12 +1926,41 @@ enum RenameStrategy {
     SAME_FILESYSTEM_ACROSS_MOUNTPOINT
   }
 
+  private void closeChildFileSystems(FileSystem fs) throws IOException {
+    if (fs != null) {
+      FileSystem[] childFs = fs.getChildFileSystems();
+      for (FileSystem child : childFs) {
+        if (child != null) {
+          String disableCacheName = String.format("fs.%s.impl.disable.cache",
+              child.getUri().getScheme());
+          if (config.getBoolean(disableCacheName, false)) {
+            child.close();
+          }
+        }
+      }
+    }
+  }
+
   @Override
   public void close() throws IOException {
     super.close();
     if (enableInnerCache && cache != null) {
       cache.closeAll();
       cache.clear();
     }
+
+    if (!enableInnerCache) {
+      for (InodeTree.MountPoint mountPoint :
+          fsState.getMountPoints()) {
+        FileSystem targetFs = mountPoint.target.getTargetFileSystemForClose();

Review Comment:
   How about invoking getTargetFileSystem directly?



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java:
##
@@ -413,6 +413,11 @@ public T getTargetFileSystem() throws IOException {
       }
       return targetFileSystem;
     }
+
+    T getTargetFileSystemForClose() throws IOException {

Review Comment:
   What is the difference between this method and `getTargetFileSystem`?





> Close child file systems in ViewFileSystem when cache is disabled.
> --
>
> Key: HDFS-17089
> URL: https://issues.apache.org/jira/browse/HDFS-17089
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shuyan Zhang
>Priority: Major
>  Labels: pull-request-available
>
> When the cache is disabled (namely, 
> `fs.viewfs.enable.inner.cache=false` and `fs.*.impl.disable.cache=true`), 
> even if `FileSystem.close()` is called, the client cannot truly close the 
> child file systems in a ViewFileSystem. This caused our long-running clients 
> to constantly leak resources.
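
For reference, here is a minimal sketch of the configuration under which the 
leak is observed. This is not the patch or its test; the mount table name, 
link path, and NameNode URI are made up for illustration.

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.viewfs.ConfigUtil;

public class ViewFsCloseRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Both caches disabled, as described in the issue.
    conf.setBoolean("fs.viewfs.enable.inner.cache", false);
    conf.setBoolean("fs.hdfs.impl.disable.cache", true);
    // One illustrative mount point: viewfs://ClusterX/data -> hdfs://nn1:8020/data
    ConfigUtil.addLink(conf, "ClusterX", "/data",
        URI.create("hdfs://nn1:8020/data"));

    FileSystem viewFs = FileSystem.get(URI.create("viewfs://ClusterX/"), conf);
    FileSystem[] children = viewFs.getChildFileSystems();
    viewFs.close();
    // Before the fix proposed in PR #5847, the child file systems in `children`
    // stay open here, so a long-running client leaks them on every open/close cycle.
  }
}
{code}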



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17091) Blocks on DECOMMISSIONING DNs should be sorted properly in LocatedBlocks

2023-07-16 Thread WangYuanben (Jira)
WangYuanben created HDFS-17091:
--

 Summary: Blocks on DECOMMISSIONING DNs should be sorted properly 
in LocatedBlocks
 Key: HDFS-17091
 URL: https://issues.apache.org/jira/browse/HDFS-17091
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: WangYuanben
Assignee: WangYuanben


Similar to [HDFS-16076|https://issues.apache.org/jira/browse/HDFS-16076], 
I think decommissioning DNs need to be taken into consideration. After sorting, 
the expected location list will be: live -> slow -> stale -> staleAndSlow -> 
entering_maintenance -> decommissioned -> decommissioning.
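
To make the proposed ordering concrete, a small illustrative sketch follows; 
the class, field, and method names below are hypothetical and are not the 
NameNode's actual sorting code.

{code:java}
import java.util.Comparator;
import java.util.List;

/** Hypothetical illustration of the proposed replica ordering in LocatedBlocks. */
class ReplicaLocationSorter {
  /** Illustrative per-replica DataNode status flags; not an HDFS class. */
  static class NodeStatus {
    boolean slow, stale, enteringMaintenance, decommissioned, decommissioning;
  }

  /**
   * Lower rank sorts earlier: live, slow, stale, staleAndSlow,
   * entering_maintenance, decommissioned, decommissioning.
   */
  static int rank(NodeStatus s) {
    if (s.decommissioning) return 6;
    if (s.decommissioned) return 5;
    if (s.enteringMaintenance) return 4;
    if (s.stale && s.slow) return 3;
    if (s.stale) return 2;
    if (s.slow) return 1;
    return 0; // live
  }

  static void sort(List<NodeStatus> locations) {
    locations.sort(Comparator.comparingInt(ReplicaLocationSorter::rank));
  }
}
{code}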



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17090) Decommission will be stuck for long time when restart because overlapped process Register and BlockReport.

2023-07-16 Thread Xiaoqiao He (Jira)
Xiaoqiao He created HDFS-17090:
--

 Summary: Decommission will be stuck for long time when restart 
because overlapped process Register and BlockReport.
 Key: HDFS-17090
 URL: https://issues.apache.org/jira/browse/HDFS-17090
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Xiaoqiao He
Assignee: Xiaoqiao He


I recently met a corner case in which decommissioning DataNodes impacts the 
performance of the NameNode. After digging in carefully, I have reproduced the case.
a. Add some DataNodes to the exclude file and prepare to decommission these DataNodes.
b. Execute bin/hdfs dfsadmin -refresh (this step is optional).
c. Restart the NameNode, for an upgrade or another reason, before decommissioning completes.
d. All DataNodes are triggered to register and send FBRs.
e. The load on the NameNode becomes very high; in particular, the 8040 CallQueue 
stays full for a long time because of the flood of register/heartbeat/FBR RPCs 
from the DataNodes.
f. A decommission-in-progress node will not finish decommissioning until the next 
FBR, even after all replicas of this node have been processed, because the request 
order is register-heartbeat-(blockreport, register). The second register could be 
a retried RPC request from the DataNode (there is no further DataNode log 
information to confirm this), and for (blockreport, register) the NameNode may 
process one storage, then process the register, then process the remaining 
storages in order.
g. Because of the second register RPC, the related DataNode is marked unhealthy by 
BlockManager#isNodeHealthyForDecommissionOrMaintenance, so decommissioning is 
stuck for a long time until the next FBR. The NameNode then needs to scan this 
DataNode in every round to check whether decommissioning can complete, which 
holds the global write lock and impacts NameNode performance.

To improve this, I think we could filter the repeated register RPC requests during 
the startup phase. I have not thought through whether filtering register requests 
directly involves other risks. Further discussion is welcome.
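
To make the filtering idea concrete, here is a rough sketch. The class and 
method names are hypothetical, this is not actual NameNode code, and it 
deliberately ignores legitimate re-registration (for example, a DataNode that 
really did restart).

{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical sketch: drop repeated register RPCs while the NameNode is starting up. */
class StartupRegisterFilter {
  private final Set<String> registeredUuids = ConcurrentHashMap.newKeySet();

  /**
   * Returns true if this registration should be processed, i.e. it is the first
   * register seen from this DataNode UUID during the startup phase.
   */
  boolean shouldProcess(String datanodeUuid, boolean namenodeInStartup) {
    if (!namenodeInStartup) {
      return true; // only filter duplicates during the startup phase
    }
    // Set.add returns false for a UUID that already registered, i.e. a repeated register.
    return registeredUuids.add(datanodeUuid);
  }
}
{code}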



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17069) The documentation and implementation of "dfs.blocksize" are inconsistent.

2023-07-16 Thread ECFuzz (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743610#comment-17743610
 ] 

ECFuzz commented on HDFS-17069:
---

Yes, you are right; the documentation clearly describes the dependency 
relationship between these two configuration items. So this is not a bug. 
I'm sorry for the mistake. 
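
For reference, the interaction between the two settings can be reproduced 
outside the NameNode. The real check is performed by the NameNode at file 
creation; the standalone snippet below only mirrors the arithmetic, and 
1048576 is the default of dfs.namenode.fs-limits.min-block-size.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class BlockSizeCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("dfs.blocksize", "128k"); // suffix form documented in hdfs-default.xml
    long blockSize = conf.getLongBytes("dfs.blocksize", 134217728L);  // "128k" -> 131072
    long minBlockSize =
        conf.getLongBytes("dfs.namenode.fs-limits.min-block-size", 1048576L);
    if (blockSize < minBlockSize) {
      // Same condition behind the RemoteException quoted below: 131072 < 1048576
      System.out.println("Specified block size is less than configured minimum value: "
          + blockSize + " < " + minBlockSize);
    }
  }
}
{code}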

> The documentation and implementation of "dfs.blocksize" are inconsistent.
> -
>
> Key: HDFS-17069
> URL: https://issues.apache.org/jira/browse/HDFS-17069
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfs, documentation
>Affects Versions: 3.3.6
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: ECFuzz
>Priority: Major
>  Labels: pull-request-available
>
> My Hadoop version is 3.3.6, and I use Pseudo-Distributed Operation.
> core-site.xml is shown below.
> {code:java}
> <configuration>
>     <property>
>         <name>fs.defaultFS</name>
>         <value>hdfs://localhost:9000</value>
>     </property>
>     <property>
>         <name>hadoop.tmp.dir</name>
>         <value>/home/hadoop/Mutil_Component/tmp</value>
>     </property>
> </configuration>
> {code}
> hdfs-site.xml is shown below.
> {code:java}
> <configuration>
>     <property>
>         <name>dfs.replication</name>
>         <value>1</value>
>     </property>
>     <property>
>         <name>dfs.blocksize</name>
>         <value>128k</value>
>     </property>
> </configuration>
> {code}
> Then format the NameNode and start HDFS.
> {code:java}
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> bin/hdfs namenode -format
> x(many info)
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> sbin/start-dfs.sh
> Starting namenodes on [localhost]
> Starting datanodes
> Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996]{code}
> Finally, use dfs to put a file. Then I get a message indicating that 128k is 
> less than 1M.
>  
> {code:java}
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> bin/hdfs dfs -mkdir -p /user/hadoop
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> bin/hdfs dfs -mkdir input
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> bin/hdfs dfs -put etc/hadoop/hdfs-site.xml input
> put: Specified block size is less than configured minimum value 
> (dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
> {code}
> But I find that, according to the documentation in hdfs-default.xml, 
> dfs.blocksize can be set to values like 128k:
> {code:java}
> The default block size for new files, in bytes. You can use the following 
> suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), 
> e(exa) to specify the size (such as 128k, 512m, 1g, etc.), Or provide 
> complete size in bytes (such as 134217728 for 128 MB).{code}
> So, is there an issue with the documentation here? Or should users be advised 
> to set this configuration to a value larger than 1M?
>  
> Additionally, I start YARN and run the given MapReduce job.
> {code:java}
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> sbin/start-yarn.sh 
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar 
> grep input output 'dfs[a-z.]+'{code}
> And the shell throws exceptions like the ones below.
> {code:java}
> 2023-07-12 15:12:29,964 INFO client.DefaultNoHARMFailoverProxyProvider: 
> Connecting to ResourceManager at /0.0.0.0:8032
> 2023-07-12 15:12:30,430 INFO mapreduce.JobResourceUploader: Disabling Erasure 
> Coding for path: 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1689145947338_0001
> 2023-07-12 15:12:30,542 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1689145947338_0001
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Specified block 
> size is less than configured minimum value 
> (dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2690)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2625)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:807)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:496)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at 
> 

[jira] [Commented] (HDFS-17069) The documentation and implementation of "dfs.blocksize" are inconsistent.

2023-07-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743609#comment-17743609
 ] 

ASF GitHub Bot commented on HDFS-17069:
---

MEILIDEKCL closed pull request #5808: Add documention for HDFS-17069.
URL: https://github.com/apache/hadoop/pull/5808




> The documentation and implementation of "dfs.blocksize" are inconsistent.
> -
>
> Key: HDFS-17069
> URL: https://issues.apache.org/jira/browse/HDFS-17069
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfs, documentation
>Affects Versions: 3.3.6
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: ECFuzz
>Priority: Major
>  Labels: pull-request-available
>
> My Hadoop version is 3.3.6, and I use Pseudo-Distributed Operation.
> core-site.xml is shown below.
> {code:java}
> <configuration>
>     <property>
>         <name>fs.defaultFS</name>
>         <value>hdfs://localhost:9000</value>
>     </property>
>     <property>
>         <name>hadoop.tmp.dir</name>
>         <value>/home/hadoop/Mutil_Component/tmp</value>
>     </property>
> </configuration>
> {code}
> hdfs-site.xml is shown below.
> {code:java}
> <configuration>
>     <property>
>         <name>dfs.replication</name>
>         <value>1</value>
>     </property>
>     <property>
>         <name>dfs.blocksize</name>
>         <value>128k</value>
>     </property>
> </configuration>
> {code}
> Then format the NameNode and start HDFS.
> {code:java}
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> bin/hdfs namenode -format
> x(many info)
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> sbin/start-dfs.sh
> Starting namenodes on [localhost]
> Starting datanodes
> Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996]{code}
> Finally, use dfs to put a file. Then I get a message indicating that 128k is 
> less than 1M.
>  
> {code:java}
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> bin/hdfs dfs -mkdir -p /user/hadoop
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> bin/hdfs dfs -mkdir input
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> bin/hdfs dfs -put etc/hadoop/hdfs-site.xml input
> put: Specified block size is less than configured minimum value 
> (dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
> {code}
> But I find that, according to the documentation in hdfs-default.xml, 
> dfs.blocksize can be set to values like 128k:
> {code:java}
> The default block size for new files, in bytes. You can use the following 
> suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), 
> e(exa) to specify the size (such as 128k, 512m, 1g, etc.), Or provide 
> complete size in bytes (such as 134217728 for 128 MB).{code}
> So, is there an issue with the documentation here? Or should users be advised 
> to set this configuration to a value larger than 1M?
>  
> Additionally, I start YARN and run the given MapReduce job.
> {code:java}
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> sbin/start-yarn.sh 
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar 
> grep input output 'dfs[a-z.]+'{code}
> And the shell throws exceptions like the ones below.
> {code:java}
> 2023-07-12 15:12:29,964 INFO client.DefaultNoHARMFailoverProxyProvider: 
> Connecting to ResourceManager at /0.0.0.0:8032
> 2023-07-12 15:12:30,430 INFO mapreduce.JobResourceUploader: Disabling Erasure 
> Coding for path: 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1689145947338_0001
> 2023-07-12 15:12:30,542 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1689145947338_0001
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Specified block 
> size is less than configured minimum value 
> (dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2690)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2625)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:807)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:496)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at 
> 

[jira] (HDFS-15901) Solve the problem of DN repeated block reports occupying too many RPCs during Safemode

2023-07-16 Thread Yanlei Yu (Jira)


[ https://issues.apache.org/jira/browse/HDFS-15901 ]


Yanlei Yu deleted comment on HDFS-15901:
--

was (Author: JIRAUSER294151):
This seems to be a bug; we also encountered a similar error. After restarting 
the NameNode, we found in the NameNode log that some storages in a DataNode's 
FBR could not be reported successfully because of an invalid lease, since they 
were treated as a second report. After the processReport method calls 
processFirstBlockReport, it invokes storageInfo.receivedBlockReport() 
(blockReportCount++), but processFirstBlockReport only sends the report to a 
queue rather than actually processing it. It is therefore likely that a first 
block report is mistaken for a second one: the code then enters the 
storageInfo.getBlockReportCount() > 0 branch and calls 
blockReportLeaseManager.removeLease(node), causing subsequent block report 
lease renewals from that DataNode to be rejected.

> Solve the problem of DN repeated block reports occupying too many RPCs during 
> Safemode
> --
>
> Key: HDFS-15901
> URL: https://issues.apache.org/jira/browse/HDFS-15901
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When the cluster exceeds thousands of nodes and we restart the NameNode 
> service, all DataNodes send a full block report to the NameNode. During 
> SafeMode, some DataNodes may send their block reports to the NameNode 
> multiple times, which takes up too many RPCs; in fact, this is unnecessary.
> In this case, some block report leases will fail or time out, and in extreme 
> cases the NameNode will never leave Safe Mode.
> 2021-03-14 08:16:25,873 [78438700] - INFO  [Block report 
> processor:BlockManager@2158] - BLOCK* processReport 0xe: discarded 
> non-initial block report from DatanodeRegistration(:port, 
> datanodeUuid=, infoPort=, infoSecurePort=, 
> ipcPort=, storageInfo=lv=;nsid=;c=0) because namenode 
> still in startup phase
> 2021-03-14 08:16:31,521 [78444348] - INFO  [Block report 
> processor:BlockManager@2158] - BLOCK* processReport 0xe: discarded 
> non-initial block report from DatanodeRegistration(, 
> datanodeUuid=, infoPort=, infoSecurePort=, 
> ipcPort=, storageInfo=lv=;nsid=;c=0) because namenode 
> still in startup phase
> 2021-03-13 18:35:38,200 [29191027] - WARN  [Block report 
> processor:BlockReportLeaseManager@311] - BR lease 0x is not valid for 
> DN , because the DN is not in the pending set.
> 2021-03-13 18:36:08,143 [29220970] - WARN  [Block report 
> processor:BlockReportLeaseManager@311] - BR lease 0x is not valid for 
> DN , because the DN is not in the pending set.
> 2021-03-13 18:36:08,143 [29220970] - WARN  [Block report 
> processor:BlockReportLeaseManager@317] - BR lease 0x is not valid for 
> DN , because the lease has expired.
> 2021-03-13 18:36:08,145 [29220972] - WARN  [Block report 
> processor:BlockReportLeaseManager@317] - BR lease 0x is not valid for 
> DN , because the lease has expired.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15901) Solve the problem of DN repeated block reports occupying too many RPCs during Safemode

2023-07-16 Thread Yanlei Yu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743595#comment-17743595
 ] 

Yanlei Yu commented on HDFS-15901:
--

This seems to be a bug; we also encountered a similar error. After restarting 
the NameNode, we found in the NameNode log that some storages in a DataNode's 
FBR could not be reported successfully because of an invalid lease, since they 
were treated as a second report. After the processReport method calls 
processFirstBlockReport, it invokes storageInfo.receivedBlockReport() 
(blockReportCount++), but processFirstBlockReport only sends the report to a 
queue rather than actually processing it. It is therefore likely that a first 
block report is mistaken for a second one: the code then enters the 
storageInfo.getBlockReportCount() > 0 branch and calls 
blockReportLeaseManager.removeLease(node), causing subsequent block report 
lease renewals from that DataNode to be rejected.
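
To summarize the suspected sequence as a sketch (illustration only; this is a 
simplification and not the actual BlockManager/BlockReportLeaseManager code):

{code:java}
/** Simplified illustration of the suspected race; not actual NameNode code. */
class BrLeaseRaceSketch {
  private int blockReportCount = 0;   // stands in for storageInfo.getBlockReportCount()
  private boolean leaseValid = true;  // stands in for the DataNode's block report lease

  void processReport() {
    if (blockReportCount == 0) {
      // First-report path: the counter is bumped immediately, but the actual
      // processing is only queued, as described above for processFirstBlockReport.
      blockReportCount++;
    } else {
      // A retried copy of the same first report now looks like a second report,
      // so the lease is removed (the removeLease(node) call mentioned above) and
      // later lease renewals from this DataNode are rejected until the next FBR.
      leaseValid = false;
    }
  }
}
{code}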

> Solve the problem of DN repeated block reports occupying too many RPCs during 
> Safemode
> --
>
> Key: HDFS-15901
> URL: https://issues.apache.org/jira/browse/HDFS-15901
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When the cluster exceeds thousands of nodes and we restart the NameNode 
> service, all DataNodes send a full block report to the NameNode. During 
> SafeMode, some DataNodes may send their block reports to the NameNode 
> multiple times, which takes up too many RPCs; in fact, this is unnecessary.
> In this case, some block report leases will fail or time out, and in extreme 
> cases the NameNode will never leave Safe Mode.
> 2021-03-14 08:16:25,873 [78438700] - INFO  [Block report 
> processor:BlockManager@2158] - BLOCK* processReport 0xe: discarded 
> non-initial block report from DatanodeRegistration(:port, 
> datanodeUuid=, infoPort=, infoSecurePort=, 
> ipcPort=, storageInfo=lv=;nsid=;c=0) because namenode 
> still in startup phase
> 2021-03-14 08:16:31,521 [78444348] - INFO  [Block report 
> processor:BlockManager@2158] - BLOCK* processReport 0xe: discarded 
> non-initial block report from DatanodeRegistration(, 
> datanodeUuid=, infoPort=, infoSecurePort=, 
> ipcPort=, storageInfo=lv=;nsid=;c=0) because namenode 
> still in startup phase
> 2021-03-13 18:35:38,200 [29191027] - WARN  [Block report 
> processor:BlockReportLeaseManager@311] - BR lease 0x is not valid for 
> DN , because the DN is not in the pending set.
> 2021-03-13 18:36:08,143 [29220970] - WARN  [Block report 
> processor:BlockReportLeaseManager@311] - BR lease 0x is not valid for 
> DN , because the DN is not in the pending set.
> 2021-03-13 18:36:08,143 [29220970] - WARN  [Block report 
> processor:BlockReportLeaseManager@317] - BR lease 0x is not valid for 
> DN , because the lease has expired.
> 2021-03-13 18:36:08,145 [29220972] - WARN  [Block report 
> processor:BlockReportLeaseManager@317] - BR lease 0x is not valid for 
> DN , because the lease has expired.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17089) Close child file systems in ViewFileSystem when cache is disabled.

2023-07-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743586#comment-17743586
 ] 

ASF GitHub Bot commented on HDFS-17089:
---

hadoop-yetus commented on PR #5847:
URL: https://github.com/apache/hadoop/pull/5847#issuecomment-1637218746

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 13s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  16m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   4m 39s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 17s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   6m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  41m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  17m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 56s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |  16m 56s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 36s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   6m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m  1s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 244m 45s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5847/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  asflicense  |   1m 13s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5847/1/artifact/out/results-asflicense.txt)
 |  The patch generated 1 ASF License warnings.  |
   |  |   | 514m 42s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.server.namenode.ha.TestObserverNode |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5847/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5847 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 1ffab75f8229 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 29dfac9ff61d152a9bd8c2742931cbac9a312c6e |
   | Default Java | Private 

[jira] [Commented] (HDFS-17089) Close child file systems in ViewFileSystem when cache is disabled.

2023-07-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743580#comment-17743580
 ] 

ASF GitHub Bot commented on HDFS-17089:
---

hadoop-yetus commented on PR #5847:
URL: https://github.com/apache/hadoop/pull/5847#issuecomment-1637212444

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  6s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m 59s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  16m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   4m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 49s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   6m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  16m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |  16m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   6m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 20s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 212m 17s |  |  hadoop-hdfs in the patch 
passed.  |
   | -1 :x: |  asflicense  |   1m 29s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5847/3/artifact/out/results-asflicense.txt)
 |  The patch generated 1 ASF License warnings.  |
   |  |   | 469m  9s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5847/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5847 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 13c65b47d393 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ce588235b668635f9a154660b6eb91361dbcc65c |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5847/3/testReport/ |
   | Max. process+thread count | 3649 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Commented] (HDFS-17089) Close child file systems in ViewFileSystem when cache is disabled.

2023-07-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743579#comment-17743579
 ] 

ASF GitHub Bot commented on HDFS-17089:
---

hadoop-yetus commented on PR #5847:
URL: https://github.com/apache/hadoop/pull/5847#issuecomment-1637210862

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 45s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  16m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   4m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   6m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m 50s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  16m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |  16m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 56s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   6m 36s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 12s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 217m  0s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5847/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  asflicense  |   1m 31s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5847/2/artifact/out/results-asflicense.txt)
 |  The patch generated 1 ASF License warnings.  |
   |  |   | 472m  4s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5847/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5847 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 7e3aacdb6ad1 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 29dfac9ff61d152a9bd8c2742931cbac9a312c6e |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 

[jira] [Updated] (HDFS-17089) Close child file systems in ViewFileSystem when cache is disabled.

2023-07-16 Thread Shuyan Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuyan Zhang updated HDFS-17089:

Summary: Close child file systems in ViewFileSystem when cache is disabled. 
 (was: Close child files systems in ViewFileSystem when cache is disabled.)

> Close child file systems in ViewFileSystem when cache is disabled.
> --
>
> Key: HDFS-17089
> URL: https://issues.apache.org/jira/browse/HDFS-17089
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shuyan Zhang
>Priority: Major
>  Labels: pull-request-available
>
> When the cache is disabled (namely, 
> `fs.viewfs.enable.inner.cache=false` and `fs.*.impl.disable.cache=true`), 
> even if `FileSystem.close()` is called, the client cannot truly close the 
> child file systems in a ViewFileSystem. This caused our long-running clients 
> to constantly leak resources.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17089) Close child files systems in ViewFileSystem when cache is disabled.

2023-07-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743547#comment-17743547
 ] 

ASF GitHub Bot commented on HDFS-17089:
---

zhangshuyan0 opened a new pull request, #5847:
URL: https://github.com/apache/hadoop/pull/5847

   
   
   ### Description of PR
   When the cache is disabled (namely, 
`fs.viewfs.enable.inner.cache=false` and `fs.*.impl.disable.cache=true`), even 
if `FileSystem.close()` is called, the client cannot truly close the child file 
systems in a ViewFileSystem. This caused our long-running clients to constantly 
leak resources.
   
   ### How was this patch tested?
   Add a new unit test.




> Close child files systems in ViewFileSystem when cache is disabled.
> ---
>
> Key: HDFS-17089
> URL: https://issues.apache.org/jira/browse/HDFS-17089
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shuyan Zhang
>Priority: Major
>
> When the cache is disabled (namely, 
> `fs.viewfs.enable.inner.cache=false` and `fs.*.impl.disable.cache=true`), 
> even if `FileSystem.close()` is called, the client cannot truly close the 
> child file systems in a ViewFileSystem. This caused our long-running clients 
> to constantly leak resources.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17089) Close child files systems in ViewFileSystem when cache is disabled.

2023-07-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17089:
--
Labels: pull-request-available  (was: )

> Close child files systems in ViewFileSystem when cache is disabled.
> ---
>
> Key: HDFS-17089
> URL: https://issues.apache.org/jira/browse/HDFS-17089
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shuyan Zhang
>Priority: Major
>  Labels: pull-request-available
>
> When the cache is disabled (namely, 
> `fs.viewfs.enable.inner.cache=false` and `fs.*.impl.disable.cache=true`), 
> even if `FileSystem.close()` is called, the client cannot truly close the 
> child file systems in a ViewFileSystem. This caused our long-running clients 
> to constantly leak resources.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17089) Close child files systems in ViewFileSystem when cache is disabled.

2023-07-16 Thread Shuyan Zhang (Jira)
Shuyan Zhang created HDFS-17089:
---

 Summary: Close child files systems in ViewFileSystem when cache is 
disabled.
 Key: HDFS-17089
 URL: https://issues.apache.org/jira/browse/HDFS-17089
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Shuyan Zhang


When the cache is disabled (namely, 
`fs.viewfs.enable.inner.cache=false` and `fs.*.impl.disable.cache=true`), even 
if `FileSystem.close()` is called, the client cannot truly close the child file 
systems in a ViewFileSystem. This caused our long-running clients to constantly 
leak resources.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org