[ 
https://issues.apache.org/jira/browse/HDFS-16881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17654142#comment-17654142
 ] 

ASF GitHub Bot commented on HDFS-16881:
---------------------------------------

cnauroth commented on code in PR #5268:
URL: https://github.com/apache/hadoop/pull/5268#discussion_r1060859919


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java:
##########
@@ -388,6 +392,12 @@ public enum DirOp {
         DFS_PROTECTED_SUBDIRECTORIES_ENABLE,
         DFS_PROTECTED_SUBDIRECTORIES_ENABLE_DEFAULT);
 
+    final long readLockThresholdMs = conf.getLong(
+        DFS_NAMENODE_READ_LOCK_REPORTING_THRESHOLD_MS_KEY,
+        DFS_NAMENODE_READ_LOCK_REPORTING_THRESHOLD_MS_DEFAULT);
+    // use half of read lock threshold

Review Comment:
   Is it necessary for this to use half? If so, can you describe why in this 
comment?
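
   A quick, self-contained sketch of what the halving seems intended to buy, assuming the goal is to flag the permission check before the overall read-lock warning fires; the class and variable values below are made up for illustration and are not part of the patch.

```java
// Hypothetical illustration only: derive a permission-check warning threshold
// as half of the read-lock reporting threshold, so the more specific warning
// fires before the generic "read lock held too long" warning would.
public class HalfThresholdSketch {
  public static void main(String[] args) throws InterruptedException {
    final long readLockThresholdMs = 50;                    // stands in for the configured value
    final long permissionWarnMs = readLockThresholdMs / 2;  // the "half" being asked about

    final long startNs = System.nanoTime();
    Thread.sleep(40);                                       // simulate a slow permission check
    final long elapsedMs = (System.nanoTime() - startNs) / 1_000_000;

    if (elapsedMs > permissionWarnMs) {
      System.err.println("WARN: permission check took " + elapsedMs
          + " ms, more than half of the " + readLockThresholdMs
          + " ms read lock reporting threshold");
    }
  }
}
```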



##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java:
##########
@@ -446,4 +447,20 @@ private static INodeFile createINodeFile(INodeDirectory parent, String name,
     parent.addChild(inodeFile);
     return inodeFile;
   }
+
+  @Test
+  public void testCheckAccessControlEnforcerSlowness() throws Exception {
+    final long thresholdMs = 10;
+    final String message = FSPermissionChecker.runCheckPermission(() -> {
+      try {
+        Thread.sleep(20);
+      } catch (InterruptedException e) {
+        throw new RuntimeException(e);

Review Comment:
   I suggest adding `Thread.currentThread().interrupt();` before throwing. It 
shouldn't matter much in practice, but JUnit runner threads have some strange 
behavior when interrupted status is not restored as expected.
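
   For reference, a minimal standalone sketch of the suggested pattern (illustrative class name, not the actual test code):

```java
// Illustrative only: the interrupt-restoring pattern suggested above, shown
// outside the real TestFSPermissionChecker test.
public class InterruptRestoreSketch {
  public static void main(String[] args) {
    Runnable slowCheck = () -> {
      try {
        Thread.sleep(20);                       // simulate a slow permission check
      } catch (InterruptedException e) {
        // Restore the interrupted status before rethrowing so the caller
        // (e.g. a JUnit runner thread) can still observe it.
        Thread.currentThread().interrupt();
        throw new RuntimeException(e);
      }
    };
    slowCheck.run();
  }
}
```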





> Warn if AccessControlEnforcer runs for a long time to check permission
> ----------------------------------------------------------------------
>
>                 Key: HDFS-16881
>                 URL: https://issues.apache.org/jira/browse/HDFS-16881
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Tsz-wo Sze
>            Assignee: Tsz-wo Sze
>            Priority: Major
>              Labels: pull-request-available
>
> AccessControlEnforcer is configurable.  If an external AccessControlEnforcer 
> runs for a long time to check permission while holding the FSNamesystem lock, 
> it will significantly slow down the entire NameNode.  In this JIRA, we will 
> print a WARN message when that happens.
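
As a rough sketch of the pattern described above, assuming a made-up checkPermissionWithTiming helper and a 1-second threshold (not the actual HDFS-16881 change):

```java
import java.util.concurrent.Callable;

// Rough sketch only: time an externally supplied permission check and print a
// WARN when it runs long. Helper name and threshold are assumptions for
// illustration, not the real FSPermissionChecker code.
public class SlowEnforcerWarningSketch {
  private static final long WARN_THRESHOLD_MS = 1000;

  static <T> T checkPermissionWithTiming(Callable<T> enforcerCall) throws Exception {
    final long startNs = System.nanoTime();
    try {
      return enforcerCall.call();
    } finally {
      final long elapsedMs = (System.nanoTime() - startNs) / 1_000_000;
      if (elapsedMs > WARN_THRESHOLD_MS) {
        // A slow external AccessControlEnforcer holds the FSNamesystem lock
        // for this long, so the warning points at the permission check itself.
        System.err.println("WARN: permission check took " + elapsedMs + " ms");
      }
    }
  }

  public static void main(String[] args) throws Exception {
    checkPermissionWithTiming(() -> {
      Thread.sleep(1200);  // simulate a slow external AccessControlEnforcer
      return null;
    });
  }
}
```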



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
