[ https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691868#comment-17691868 ]
ASF GitHub Bot commented on HADOOP-18631:
-----------------------------------------

Apache9 commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113753403


##########
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##########
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT
+hdfs.audit.log.maxfilesize=256MB
+hdfs.audit.log.maxbackupindex=20
+log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
+log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
+log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
+log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
+log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFAAUDIT.layout.ConversionPattern=%m%n
+log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
+log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
+log4j.appender.ASYNCAPPENDER=org.apache.log4j.AsyncAppender

Review Comment:
   Does AsyncAppender itself support logging to a file, or do we need to let it wrap another appender?
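[Editor's note on the reviewer's question: in log4j 1.x, org.apache.log4j.AsyncAppender does not write to files itself; it buffers events on a bounded queue and forwards them to nested appenders attached via appender-ref. The properties format has no syntax for attaching one appender to another, which is why the async wrapper is normally configured in log4j.xml or programmatically. Also note that listing both ASYNCAPPENDER and RFAAUDIT on the logger, as the diff above does, attaches them side by side, so events reach the file through the synchronous appender directly rather than through the async wrapper. A minimal log4j 1.x XML sketch of the wrapped form, reusing the appender names from the diff (the fragment is illustrative, not the final Hadoop config):]

```xml
<!-- log4j.xml fragment: the rolling file appender for the audit log. -->
<appender name="RFAAUDIT" class="org.apache.log4j.RollingFileAppender">
  <param name="File" value="${hadoop.log.dir}/hdfs-audit.log"/>
  <param name="MaxFileSize" value="256MB"/>
  <param name="MaxBackupIndex" value="20"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%m%n"/>
  </layout>
</appender>

<!-- AsyncAppender cannot log to a file on its own; it only wraps
     other appenders referenced here. -->
<appender name="ASYNCAPPENDER" class="org.apache.log4j.AsyncAppender">
  <appender-ref ref="RFAAUDIT"/>
</appender>

<!-- The logger references only the async wrapper, not the file
     appender directly, so all audit events go through the queue. -->
<logger name="org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit"
        additivity="false">
  <level value="INFO"/>
  <appender-ref ref="ASYNCAPPENDER"/>
</logger>
```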
##########
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##########
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT

Review Comment:
   I cannot recall how to configure the log4j1 AsyncAppender, but I believe we need to use different appender refs for the datanode and namenode? At least they need to log to different log files?

##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/MetricsLoggerTask.java:
##########
@@ -115,8 +111,11 @@ private String trimLine(String valueStr) {
         .substring(0, maxLogLineLength) + "...");
   }
 
-  private static boolean hasAppenders(org.apache.log4j.Logger logger) {
-    return logger.getAllAppenders().hasMoreElements();
+  // TODO : hadoop-logging module to hide log4j implementation details, this method
+  // can directly call utility from hadoop-logging.
+  private static boolean hasAppenders(Logger logger) {

Review Comment:
   Do we still need this method, since we no longer need to set up the async appender programmatically?

> Migrate Async appenders to log4j properties
> -------------------------------------------
>
>                 Key: HADOOP-18631
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18631
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> Before we can upgrade to log4j2, we need to migrate async appenders that we
> add "dynamically in the code" to the log4j.properties file.
> Instead of using core/hdfs site configs, log4j properties or system
> properties should be used to determine if the given logger should use
> async appender.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
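[Editor's note: the TODO in the diff asks for a log4j2 properties example of an Async appender wrapping other appenders. Unlike the log4j1 properties format, the log4j2 properties format does support appender references, so the eventual migration target could look roughly like the sketch below. This is a hedged illustration under the settings shown in the diff (file name, sizes, logger name), not the final Hadoop configuration:]

```properties
# log4j2 properties sketch: rolling file appender for the audit log.
appender.rfaaudit.type = RollingFile
appender.rfaaudit.name = RFAAUDIT
appender.rfaaudit.fileName = ${sys:hadoop.log.dir}/hdfs-audit.log
appender.rfaaudit.filePattern = ${sys:hadoop.log.dir}/hdfs-audit.log.%i
appender.rfaaudit.layout.type = PatternLayout
appender.rfaaudit.layout.pattern = %m%n
appender.rfaaudit.policies.type = Policies
appender.rfaaudit.policies.size.type = SizeBasedTriggeringPolicy
appender.rfaaudit.policies.size.size = 256MB
appender.rfaaudit.strategy.type = DefaultRolloverStrategy
appender.rfaaudit.strategy.max = 20

# Async appender wrapping the file appender via an appenderRef,
# which log4j2 properties syntax supports directly.
appender.asyncaudit.type = Async
appender.asyncaudit.name = ASYNCAUDIT
appender.asyncaudit.appenderRef.rfaaudit.ref = RFAAUDIT

# The audit logger references only the async wrapper.
logger.audit.name = org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit
logger.audit.level = info
logger.audit.additivity = false
logger.audit.appenderRef.async.ref = ASYNCAUDIT
```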