mimaison commented on code in PR #17373:
URL: https://github.com/apache/kafka/pull/17373#discussion_r1843599782
##########
bin/connect-distributed.sh:
##########
@@ -22,8 +22,15 @@ fi
base_dir=$(dirname $0)
-if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
+if [ -f "$base_dir/../config/connect-log4j.properties" ]; then
+ echo DEPRECATED: Using Log4j 1.x configuration file \$KAFKA_HOME/config/connect-log4j.properties >&2
+ echo To use a Log4j 2.x configuration, create a \$KAFKA_HOME/config/log4j2.xml file and remove the Log4j 1.x configuration. >&2
Review Comment:
Why do we recommend creating an XML file? Should we point to the migration guide and to the log4j2 example file Kafka will have under `config`?
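For context, Log4j 2 configurations do not have to be XML; the properties format the example files in this PR already use would work too. A minimal console-only sketch (illustration only, not the shipped example file):

```properties
# Minimal Log4j 2 properties configuration (illustrative sketch)
appender.console.type=Console
appender.console.name=STDOUT
appender.console.layout.type=PatternLayout
appender.console.layout.pattern=[%d] %p %m (%c)%n

rootLogger.level=INFO
rootLogger.appenderRef.console.ref=STDOUT
```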
##########
connect/runtime/src/test/java/org/apache/kafka/connect/runtime/LoggersTest.java:
##########
@@ -229,22 +250,17 @@ public TestLoggers(Logger rootLogger, Logger... knownLoggers) {
@Override
Logger lookupLogger(String logger) {
- return currentLoggers.computeIfAbsent(logger, l -> new Logger(logger) { });
+ return currentLoggers.computeIfAbsent(logger, LogManager::getLogger);
}
@Override
- Enumeration<Logger> currentLoggers() {
- return new Vector<>(currentLoggers.values()).elements();
+ List<Logger> currentLoggers() {
+ return new ArrayList<>(currentLoggers.values());
}
@Override
Logger rootLogger() {
return rootLogger;
}
}
-
- private Logger logger(String name) {
- return new Logger(name) { };
- }
-
-}
+}
Review Comment:
Let's keep the newline
##########
connect/runtime/src/test/resources/log4j2.properties:
##########
@@ -14,20 +14,37 @@
# See the License for the specific language governing permissions and
# limitations under the License.
##
-log4j.rootLogger=INFO, stdout
+name=ConnectRuntimeTestConfig
+appenders=console
-log4j.appender.stdout=org.apache.log4j.ConsoleAppender
-log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
+appender.console.type=Console
+appender.console.name=STDOUT
+appender.console.layout.type=PatternLayout
#
# The `%X{connector.context}` parameter in the layout includes connector-specific and task-specific information
# in the log message, where appropriate. This makes it easier to identify those log messages that apply to a
# specific connector. Simply add this parameter to the log layout configuration below to include the contextual information.
#
-log4j.appender.stdout.layout.ConversionPattern=[%d] %p %X{connector.context}%m (%c:%L)%n
-#
-# The following line includes no MDC context parameters:
-#log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n (%t)
+appender.console.layout.pattern=[%d] %p %X{connector.context}%m (%c:%L)%n
+
+loggers=kafka,stateChangeLogger,kafkaConnect,kafkaConsumer,coordinatorGroup
+
+rootLogger.level=INFO
+rootLogger.appenderRefs=console
+rootLogger.appenderRef.console.ref=STDOUT
+
+logger.kafka.name=kafka
+logger.kafka.level=WARN
+
+logger.stateChangeLogger.name=state.change.logger
+logger.stateChangeLogger.level=OFF
+
+logger.kafkaConnect.name=org.apache.kafka.connect
+logger.kafkaConnect.level=DEBUG
+
+# Troubleshooting KAFKA-17493.
+logger.kafkaConsumer.name=org.apache.kafka.consumer
+logger.kafkaConsumer.level=DEBUG
Review Comment:
We can remove this new line to make it clearer that the comment applies to both `org.apache.kafka.consumer` and `org.apache.kafka.coordinator.group`.
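With the blank line removed, the block could read as follows (sketch; the `coordinatorGroup` entries are elided from the quoted hunk, so they are assumed here to follow the same pattern as the `kafkaConsumer` ones):

```properties
# Troubleshooting KAFKA-17493.
logger.kafkaConsumer.name=org.apache.kafka.consumer
logger.kafkaConsumer.level=DEBUG
logger.coordinatorGroup.name=org.apache.kafka.coordinator.group
logger.coordinatorGroup.level=DEBUG
```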
##########
build.gradle:
##########
@@ -2386,7 +2394,11 @@ project(':tools') {
implementation libs.jacksonDataformatCsv
implementation libs.jacksonJDK8Datatypes
implementation libs.slf4jApi
- implementation libs.slf4jReload4j
+ implementation libs.slf4jLog4j2
+ implementation libs.log4j2Api
+ implementation libs.log4j2Core
+ implementation libs.log4j1Bridge2Api
+ implementation libs.spotbugs
Review Comment:
Do we really need this as `implementation`? This makes it part of our release artifact.
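If the dependency is only needed for its annotations at compile time, one option would be to declare it with a scope that keeps it off the runtime classpath. A sketch, assuming the `spotbugs` alias resolves to an annotations-style artifact:

```groovy
// compileOnly keeps the dependency out of the runtime classpath
// and out of the release artifact
compileOnly libs.spotbugs
```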
##########
README.md:
##########
@@ -54,9 +54,9 @@ Follow instructions in https://kafka.apache.org/quickstart
./gradlew clients:test --tests org.apache.kafka.clients.MetadataTest.testTimeToNextUpdate
### Running a particular unit/integration test with log4j output ###
-By default, there will be only small number of logs output while testing. You can adjust it by changing the `log4j.properties` file in the module's `src/test/resources` directory.
+By default, there will be only small number of logs output while testing. You can adjust it by changing the `log4j2.properties` file in the module's `src/test/resources` directory.
-For example, if you want to see more logs for clients project tests, you can modify [the line](https://github.com/apache/kafka/blob/trunk/clients/src/test/resources/log4j.properties#L21) in `clients/src/test/resources/log4j.properties`
+For example, if you want to see more logs for clients project tests, you can modify [the line](https://github.com/apache/kafka/blob/trunk/clients/src/test/resources/log4j.properties#L21) in `clients/src/test/resources/log4j2.properties`
Review Comment:
Why do you want to do it in a separate PR?
If we merge as is, we instruct users to go check `clients/src/test/resources/log4j2.properties` but actually link to a different file. If we update the comment, we need to update the link.
##########
core/src/test/scala/other/kafka.log4j.properties:
##########
@@ -19,4 +19,4 @@ log4j.appender.KAFKA=kafka.log4j.KafkaAppender
log4j.appender.KAFKA.Port=9092
log4j.appender.KAFKA.Host=localhost
log4j.appender.KAFKA.Topic=test-logger
-log4j.appender.KAFKA.Serializer=kafka.AppenderStringSerializer
+log4j.appender.KAFKA.Serializer=kafka.AppenderStringSerializer
Review Comment:
Is this file still used?
##########
tests/kafkatest/services/performance/templates/tools_log4j.properties:
##########
@@ -22,4 +22,4 @@ log4j.appender.FILE.ImmediateFlush=true
# Set the append to false, overwrite
log4j.appender.FILE.Append=false
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
-log4j.appender.FILE.layout.conversionPattern=[%d] %p %m (%c)%n
+log4j.appender.FILE.layout.conversionPattern=[%d] %p %m (%c)%n
Review Comment:
Let's keep the new line. Same in a few other files
##########
config/connect-log4j2.properties:
##########
@@ -0,0 +1,39 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+name=ConfigConnectConfig
Review Comment:
Would `ConnectConfig` be a better name?
##########
core/src/test/scala/unit/kafka/network/SocketServerTest.scala:
##########
@@ -88,7 +89,7 @@ class SocketServerTest {
var server: SocketServer = _
val sockets = new ArrayBuffer[Socket]
- private val kafkaLogger = org.apache.log4j.LogManager.getLogger("kafka")
+ private val kafkaLogger = org.apache.logging.log4j.LogManager.getLogger("kafka")
Review Comment:
Can we import `LogManager`?
##########
core/src/test/scala/other/kafka.log4j.properties:
##########
@@ -19,4 +19,4 @@ log4j.appender.KAFKA=kafka.log4j.KafkaAppender
log4j.appender.KAFKA.Port=9092
log4j.appender.KAFKA.Host=localhost
log4j.appender.KAFKA.Topic=test-logger
-log4j.appender.KAFKA.Serializer=kafka.AppenderStringSerializer
+log4j.appender.KAFKA.Serializer=kafka.AppenderStringSerializer
Review Comment:
Let's keep the new line.
##########
core/src/test/scala/integration/kafka/api/PlaintextAdminIntegrationTest.scala:
##########
@@ -3569,6 +3584,13 @@ object PlaintextAdminIntegrationTest {
assertEquals(LogConfig.DEFAULT_COMPRESSION_TYPE, configs.get(brokerResource).get(ServerConfigs.COMPRESSION_TYPE_CONFIG).value)
}
+ def getTestQuorumAndGroupProtocolParametersAll() : java.util.stream.Stream[Arguments] = {
Review Comment:
Why are we adding this method? This looks like a rebase issue.
##########
connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Loggers.java:
##########
@@ -45,9 +50,9 @@ public class Loggers {
private static final Logger log = LoggerFactory.getLogger(Loggers.class);
/**
- * Log4j uses "root" (case-insensitive) as name of the root logger.
+ * Log4j2 uses "" (empty string) as name of the root logger.
*/
- private static final String ROOT_LOGGER_NAME = "root";
+ private static final String ROOT_LOGGER_NAME = "";
Review Comment:
This effectively changes the behavior of the `/admin/loggers` endpoint of the Connect REST API.
The endpoints accept the logger name in the path `/admin/loggers/{name}`. If the root logger is the empty string, it's not possible to query it anymore. I wonder if we should still expose the root logger as `root` (I assume it's possible to rename it somewhere here or in `LoggingResource`). cc @gharris1727 WDYT?
##########
bin/connect-distributed.sh:
##########
@@ -22,8 +22,15 @@ fi
base_dir=$(dirname $0)
-if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
+if [ -f "$base_dir/../config/connect-log4j.properties" ]; then
+ echo DEPRECATED: Using Log4j 1.x configuration file
\$KAFKA_HOME/config/connect-log4j.properties >&2
+ echo To use a Log4j 2.x configuration, create a
\$KAFKA_HOME/config/log4j2.xml file and remove the Log4j 1.x configration. >&2
Review Comment:
Same in the other scripts
##########
gradle/dependencies.gradle:
##########
@@ -152,7 +152,9 @@ versions += [
// Also make sure the compression levels in org.apache.kafka.common.record.CompressionType are still valid
zstd: "1.5.6-6",
junitPlatform: "1.10.2",
- hdrHistogram: "2.2.2"
+ hdrHistogram: "2.2.2",
+ log4j2: "2.24.1",
Review Comment:
Nit: I know dependencies are not fully ordered, but can we insert it roughly where it should be in the list instead of appending it at the end?
##########
config/log4j2.properties:
##########
@@ -0,0 +1,163 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Unspecified loggers and loggers with additivity=true output to server.log and stdout
+# Note that INFO only applies to unspecified loggers, the log level of the child logger is used otherwise
+name=LogConfig
+appenders=stdout,kafkaAppender,stateChangeAppender,requestAppender,cleanerAppender,controllerAppender,authorizerAppender
+
+# Console appender (stdout)
+appender.stdout.type=Console
+appender.stdout.name=STDOUT
+appender.stdout.layout.type=PatternLayout
+appender.stdout.layout.pattern=[%d] %p %m (%c)%n
+
+appender.kafkaAppender.type=RollingFile
+appender.kafkaAppender.name=KafkaAppender
+appender.kafkaAppender.fileName=${kafka.logs.dir}/server.log
+appender.kafkaAppender.filePattern=${kafka.logs.dir}/server.log.%d{yyyy-MM-dd-HH}
+appender.kafkaAppender.layout.type=PatternLayout
+appender.kafkaAppender.layout.pattern=[%d] %p %m (%c)%n
+appender.kafkaAppender.policies.type=TimeBasedTriggeringPolicy
+appender.kafkaAppender.policies.interval=1
+appender.kafkaAppender.policies.modulate=true
+
+# State Change appender
+appender.stateChangeAppender.type=RollingFile
+appender.stateChangeAppender.name=StateChangeAppender
+appender.stateChangeAppender.fileName=${kafka.logs.dir}/state-change.log
+appender.stateChangeAppender.filePattern=${kafka.logs.dir}/state-change.log.%d{yyyy-MM-dd-HH}
+appender.stateChangeAppender.layout.type=PatternLayout
+appender.stateChangeAppender.layout.pattern=[%d] %p %m (%c)%n
+appender.stateChangeAppender.policies.type=TimeBasedTriggeringPolicy
+appender.stateChangeAppender.policies.interval=1
+appender.stateChangeAppender.policies.modulate=true
+
+# Request appender
+appender.requestAppender.type=RollingFile
+appender.requestAppender.name=RequestAppender
+appender.requestAppender.fileName=${kafka.logs.dir}/kafka-request.log
+appender.requestAppender.filePattern=${kafka.logs.dir}/kafka-request.log.%d{yyyy-MM-dd-HH}
+appender.requestAppender.layout.type=PatternLayout
+appender.requestAppender.layout.pattern=[%d] %p %m (%c)%n
+appender.requestAppender.policies.type=TimeBasedTriggeringPolicy
+appender.requestAppender.policies.interval=1
+appender.requestAppender.policies.modulate=true
+
+# Cleaner appender
+appender.cleanerAppender.type=RollingFile
+appender.cleanerAppender.name=CleanerAppender
+appender.cleanerAppender.fileName=${kafka.logs.dir}/log-cleaner.log
+appender.cleanerAppender.filePattern=${kafka.logs.dir}/log-cleaner.log.%d{yyyy-MM-dd-HH}
+appender.cleanerAppender.layout.type=PatternLayout
+appender.cleanerAppender.layout.pattern=[%d] %p %m (%c)%n
+appender.cleanerAppender.policies.type=TimeBasedTriggeringPolicy
+appender.cleanerAppender.policies.interval=1
+appender.cleanerAppender.policies.modulate=true
+
+# Controller appender
+appender.controllerAppender.type=RollingFile
+appender.controllerAppender.name=ControllerAppender
+appender.controllerAppender.fileName=${kafka.logs.dir}/controller.log
+appender.controllerAppender.filePattern=${kafka.logs.dir}/controller.log.%d{yyyy-MM-dd-HH}
+appender.controllerAppender.layout.type=PatternLayout
+appender.controllerAppender.layout.pattern=[%d] %p %m (%c)%n
+appender.controllerAppender.policies.type=TimeBasedTriggeringPolicy
+appender.controllerAppender.policies.interval=1
+appender.controllerAppender.policies.modulate=true
+
+# Authorizer appender
+appender.authorizerAppender.type=RollingFile
+appender.authorizerAppender.name=AuthorizerAppender
+appender.authorizerAppender.fileName=${kafka.logs.dir}/kafka-authorizer.log
+appender.authorizerAppender.filePattern=${kafka.logs.dir}/kafka-authorizer.log.%d{yyyy-MM-dd-HH}
+appender.authorizerAppender.layout.type=PatternLayout
+appender.authorizerAppender.layout.pattern=[%d] %p %m (%c)%n
+appender.authorizerAppender.policies.type=TimeBasedTriggeringPolicy
+appender.authorizerAppender.policies.interval=1
+appender.authorizerAppender.policies.modulate=true
+
+rootLogger.level=INFO
+rootLogger.appenderRefs=stdout,kafkaAppender
+rootLogger.appenderRef.stdout.ref=STDOUT
+rootLogger.appenderRef.kafkaAppender.ref=KafkaAppender
+
+loggers=zookeeper,kafka,apacheKafka,requestLogger,networkRequestChannel,apacheKafkaController,kafkaController,logCleaner,stateChangeLogger,authorizerLogger
+
+# Zookeeper logger
+logger.zookeeper.name=org.apache.zookeeper
+logger.zookeeper.level=INFO
Review Comment:
We have since removed ZooKeeper from the existing log4j properties files in trunk (https://github.com/apache/kafka/commit/085b27ec6e65565cd41336b14aed2824a6e154db), so let's not re-add ZooKeeper stuff only to remove it again later.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at: [email protected]