[ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691882#comment-17691882
 ] 

ASF GitHub Bot commented on HADOOP-18631:
-----------------------------------------

virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113758871


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/MetricsLoggerTask.java:
##########
@@ -115,8 +111,11 @@ private String trimLine(String valueStr) {
         .substring(0, maxLogLineLength) + "...");
   }
 
-  private static boolean hasAppenders(org.apache.log4j.Logger logger) {
-    return logger.getAllAppenders().hasMoreElements();
+  // TODO : hadoop-logging module to hide log4j implementation details, this method
+  //  can directly call utility from hadoop-logging.
+  private static boolean hasAppenders(Logger logger) {

Review Comment:
   I was thinking the same, but we have this check for metric logger:
   
   ```
       // Skip querying metrics if there are no known appenders.
       if (!metricsLog.isInfoEnabled() || !hasAppenders(metricsLog)
           || objectName == null) {
         return;
       }
   
   ```
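   
   For context, a minimal sketch of what such a check could look like against the slf4j `Logger` (this assumes the slf4j logger name maps to the underlying log4j1 logger; the actual helper in this PR / the future hadoop-logging utility may look different):
   
   ```
   // Hypothetical sketch: look up the log4j1 logger behind the slf4j logger by name
   // and report whether any appender is attached to it.
   private static boolean hasAppenders(org.slf4j.Logger logger) {
     return org.apache.log4j.LogManager.getLogger(logger.getName())
         .getAllAppenders().hasMoreElements();
   }
   ```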



##########
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##########
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT

Review Comment:
   As far as AsyncAppender is concerned, we can use the same ref. It's the other appenders that make things different if we want to include them.



##########
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##########
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT
+hdfs.audit.log.maxfilesize=256MB
+hdfs.audit.log.maxbackupindex=20
+log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
+log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
+log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
+log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
+log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFAAUDIT.layout.ConversionPattern=%m%n
+log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
+log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
+log4j.appender.ASYNCAPPENDER=org.apache.log4j.AsyncAppender

Review Comment:
   It turns out both log4j1 and log4j2 allows wrapping other appenders within 
AsyncAppender. However, log4j1 supports this through xml only, whereas log4j2 
supports it using both xml as well as properties.
   
   log4j1 [AsyncAppender javadoc](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/AsyncAppender.html):
   
   > **Important note**: The AsyncAppender can only be script configured using the [DOMConfigurator](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/xml/DOMConfigurator.html).
   
   
   
   
   Hence I was thinking: instead of creating a new custom appender that eventually wraps other appenders within AsyncAppender and makes this overall log4j2 migration even more painful, we can wait until we replace log4j.properties with log4j2.properties and then configure AsyncAppender to wrap RFA. I have already tested this out locally.
   
   Btw, this properties file is used for test purposes only.
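   
   For illustration, a rough sketch of what the AsyncAppender-wraps-RFA wiring could look like in a future log4j2.properties (appender and logger names mirror RFAAUDIT/ASYNCAPPENDER above; treat the exact keys as an assumption, not the final config):
   
   ```
   # Illustrative log4j2.properties sketch: RollingFile audit appender wrapped by an Async appender.
   appender.rfaaudit.type = RollingFile
   appender.rfaaudit.name = RFAAUDIT
   appender.rfaaudit.fileName = ${sys:hadoop.log.dir}/hdfs-audit.log
   appender.rfaaudit.filePattern = ${sys:hadoop.log.dir}/hdfs-audit.log.%i
   appender.rfaaudit.layout.type = PatternLayout
   appender.rfaaudit.layout.pattern = %m%n
   appender.rfaaudit.policies.type = Policies
   appender.rfaaudit.policies.size.type = SizeBasedTriggeringPolicy
   appender.rfaaudit.policies.size.size = 256MB
   appender.rfaaudit.strategy.type = DefaultRolloverStrategy
   appender.rfaaudit.strategy.max = 20
   
   appender.asyncaudit.type = Async
   appender.asyncaudit.name = ASYNCAPPENDER
   appender.asyncaudit.appenderRef.rfaaudit.ref = RFAAUDIT
   
   logger.audit.name = org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit
   logger.audit.level = info
   logger.audit.additivity = false
   logger.audit.appenderRef.async.ref = ASYNCAPPENDER
   ```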





> Migrate Async appenders to log4j properties
> -------------------------------------------
>
>                 Key: HADOOP-18631
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18631
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> Before we can upgrade to log4j2, we need to migrate the async appenders that we 
> add "dynamically in the code" to the log4j.properties file. Instead of using 
> core/hdfs site configs, log4j properties or system properties should be used 
> to determine whether the given logger should use an async appender.


