[GitHub] [kafka] C0urante commented on pull request #12366: KAFKA-14021: Implement new KIP-618 APIs in MirrorSourceConnector

2022-07-23 Thread GitBox


C0urante commented on PR #12366:
URL: https://github.com/apache/kafka/pull/12366#issuecomment-1193251313

   @OmniaGM You've been active recently with MirrorMaker 2 and seem quite 
familiar with it. Would you be willing to give this a review? I'd love it if 
someone could fact-check me to make sure I've got the MM2-specific details here 
correct.
   
   For reference:
   - Docs for `SourceConnector::exactlyOnceSupport`: 
https://github.com/apache/kafka/blob/679e9e0cee67e7d3d2ece204a421ea7da31d73e9/connect/api/src/main/java/org/apache/kafka/connect/source/SourceConnector.java#L34-L54
   - Docs and other useful info on the `exactly.once.support` connector 
property: 
https://github.com/apache/kafka/blob/679e9e0cee67e7d3d2ece204a421ea7da31d73e9/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/SourceConnectorConfig.java#L60-L84
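   
   As a hedged illustration (not from the PR itself), here is roughly what the `exactly.once.support` property looks like in a connector config. The connector name and cluster aliases below are made up; the property values ("requested", the default, and "required") come from the `SourceConnectorConfig` docs linked above:
   
   ```
   # Hypothetical MM2 source connector config (sketch, not from the PR).
   name=mm2-source
   connector.class=org.apache.kafka.connect.mirror.MirrorSourceConnector
   source.cluster.alias=primary
   target.cluster.alias=backup
   # "required" makes the worker perform a preflight check and fail the
   # connector unless it reports that exactly-once support is available;
   # "requested" (the default) skips that enforcement.
   exactly.once.support=required
   ```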


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] bozhao12 opened a new pull request, #12435: MINOR: Fix method javadoc and typo in comments

2022-07-23 Thread GitBox


bozhao12 opened a new pull request, #12435:
URL: https://github.com/apache/kafka/pull/12435

   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] Stephan14 commented on a diff in pull request #12308: KAFKA-14009: update rebalance timeout in memory when consumers use st…

2022-07-23 Thread GitBox


Stephan14 commented on code in PR #12308:
URL: https://github.com/apache/kafka/pull/12308#discussion_r928195459


##
core/src/main/scala/kafka/coordinator/group/GroupCoordinator.scala:
##
@@ -1300,7 +1304,9 @@ class GroupCoordinator(val brokerId: Int,
 completeAndScheduleNextHeartbeatExpiration(group, member)
 
 val knownStaticMember = group.get(newMemberId)
-group.updateMember(knownStaticMember, protocols, responseCallback)
+val oldRebalanceTimeoutMs = knownStaticMember.rebalanceTimeoutMs
+val oldSessionTimeoutMs = knownStaticMember.sessionTimeoutMs
+group.updateMember(knownStaticMember, protocols, rebalanceTimeoutMs, sessionTimeoutMs, responseCallback)

Review Comment:
   I have added a unit test in `GroupCoordinatorTest` covering the case where 
persistence fails after joining the group successfully







[jira] [Comment Edited] (KAFKA-2729) Cached zkVersion not equal to that in zookeeper, broker not recovering.

2022-07-23 Thread Haifeng Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17570326#comment-17570326
 ] 

Haifeng Chen edited comment on KAFKA-2729 at 7/23/22 4:14 PM:
--

We saw this issue on 1.1 when Kafka reconnected to ZooKeeper. It caused 
partitions to fall below min ISR and recovered within 2 minutes.


was (Author: chenhaifeng):
We saw this issue in 1.1 during kafka reconnects to zookeeper. It caused under 
minISR and got recovered in 2 minutes.

 

> Cached zkVersion not equal to that in zookeeper, broker not recovering.
> ---
>
> Key: KAFKA-2729
> URL: https://issues.apache.org/jira/browse/KAFKA-2729
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1, 0.9.0.0, 0.10.0.0, 0.10.1.0, 0.11.0.0, 2.4.1
>Reporter: Danil Serdyuchenko
>Assignee: Onur Karaman
>Priority: Critical
> Fix For: 1.1.0
>
>
> After a small network wobble where zookeeper nodes couldn't reach each other, 
> we started seeing a large number of undereplicated partitions. The zookeeper 
> cluster recovered, however we continued to see a large number of 
> undereplicated partitions. Two brokers in the kafka cluster were showing this 
> in the logs:
> {code}
> [2015-10-27 11:36:00,888] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Shrinking ISR for 
> partition [__samza_checkpoint_event-creation_1,3] from 6,5 to 5 
> (kafka.cluster.Partition)
> [2015-10-27 11:36:00,891] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Cached zkVersion [66] 
> not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
> {code}
> For all of the topics on the affected brokers. Both brokers only recovered 
> after a restart. Our own investigation yielded nothing; I was hoping you 
> could shed some light on this issue. Possibly it's related to: 
> https://issues.apache.org/jira/browse/KAFKA-1382 , however we're using 
> 0.8.2.1.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-2729) Cached zkVersion not equal to that in zookeeper, broker not recovering.

2022-07-23 Thread Haifeng Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17570326#comment-17570326
 ] 

Haifeng Chen commented on KAFKA-2729:
-

We saw this issue on 1.1 when Kafka reconnected to ZooKeeper. It caused 
partitions to fall below min ISR and recovered within 2 minutes.

 






[jira] [Commented] (KAFKA-14083) Check if we don't need to refresh time in RecordAccumulator.append

2022-07-23 Thread Kvicii.Yu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17570311#comment-17570311
 ] 

Kvicii.Yu commented on KAFKA-14083:
---

PR merged, so we can close this.

> Check if we don't need to refresh time in RecordAccumulator.append
> --
>
> Key: KAFKA-14083
> URL: https://issues.apache.org/jira/browse/KAFKA-14083
> Project: Kafka
>  Issue Type: Task
>  Components: producer 
>Reporter: Artem Livshits
>Priority: Minor
>
> See https://github.com/apache/kafka/pull/12365/files#r912836877.





[jira] [Commented] (KAFKA-14085) Clean up usage of asserts in KafkaProducer

2022-07-23 Thread Kvicii.Yu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17570310#comment-17570310
 ] 

Kvicii.Yu commented on KAFKA-14085:
---

PR merged, so we can close this.

> Clean up usage of asserts in KafkaProducer
> --
>
> Key: KAFKA-14085
> URL: https://issues.apache.org/jira/browse/KAFKA-14085
> Project: Kafka
>  Issue Type: Task
>  Components: producer 
>Reporter: Artem Livshits
>Priority: Minor
>
> See https://github.com/apache/kafka/pull/12365/files#r919749970





[jira] [Commented] (KAFKA-14086) Cleanup PlaintextConsumerTest.testInterceptors to not pass null record

2022-07-23 Thread Kvicii.Yu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17570309#comment-17570309
 ] 

Kvicii.Yu commented on KAFKA-14086:
---

PR merged, so we can close this.

> Cleanup PlaintextConsumerTest.testInterceptors to not pass null record
> --
>
> Key: KAFKA-14086
> URL: https://issues.apache.org/jira/browse/KAFKA-14086
> Project: Kafka
>  Issue Type: Task
>Reporter: Artem Livshits
>Priority: Minor
>
> See https://github.com/apache/kafka/pull/12365/files#r919746298





[jira] [Commented] (KAFKA-14019) removeMembersFromConsumerGroup can't delete all members when there is no members already

2022-07-23 Thread Kvicii.Yu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17570307#comment-17570307
 ] 

Kvicii.Yu commented on KAFKA-14019:
---

[~chia7712] hi, do you mean we need to change the 
RemoveMembersFromConsumerGroupOptions#removeAll method? If we do that, what 
would the correct behavior be?

> removeMembersFromConsumerGroup can't delete all members when there is no 
> members already
> 
>
> Key: KAFKA-14019
> URL: https://issues.apache.org/jira/browse/KAFKA-14019
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Priority: Minor
>
> The root cause is that the method fetches no members from the server, so it 
> fails to construct RemoveMembersFromConsumerGroupOptions (which can't accept 
> an empty list).
> It seems to me that deleting all members via an "empty" list is valid.
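The failure mode described above can be sketched with a small stand-in class. Note this is NOT the real admin-client `RemoveMembersFromConsumerGroupOptions`, just a self-contained model of the documented behavior (the collection constructor rejects an empty list, while the no-argument constructor means "remove all"); the `forFetchedMembers` helper is a hypothetical fix, not code from Kafka:

```java
import java.util.Collection;
import java.util.List;

// Stand-in modeling RemoveMembersFromConsumerGroupOptions' documented behavior.
class RemoveOptionsSketch {
    final boolean removeAll;

    // No-arg constructor: remove every member of the group.
    RemoveOptionsSketch() {
        this.removeAll = true;
    }

    // Collection constructor: rejects an empty list, which is the bug's
    // trigger when the caller first fetches an empty member list from the
    // server and then tries to construct the options from it.
    RemoveOptionsSketch(Collection<String> members) {
        if (members.isEmpty())
            throw new IllegalArgumentException("Invalid empty members list");
        this.removeAll = false;
    }

    // Hypothetical fix: fall back to remove-all semantics when the fetched
    // member list is empty instead of failing.
    static RemoveOptionsSketch forFetchedMembers(Collection<String> fetched) {
        return fetched.isEmpty() ? new RemoveOptionsSketch()
                                 : new RemoveOptionsSketch(fetched);
    }

    // Confirms the collection constructor rejects an empty list.
    static boolean rejectsEmpty() {
        try {
            new RemoveOptionsSketch(List.of());
            return false;
        } catch (IllegalArgumentException e) {
            return true;
        }
    }
}
```

With this model, the reported bug is simply that the caller always uses the collection constructor, even when the fetch returned nothing.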





[GitHub] [kafka] showuon commented on a diff in pull request #12413: MINOR: Upgrade to Gradle 7.5

2022-07-23 Thread GitBox


showuon commented on code in PR #12413:
URL: https://github.com/apache/kafka/pull/12413#discussion_r928126295


##
build.gradle:
##
@@ -66,8 +66,10 @@ ext {
   if (JavaVersion.current().isCompatibleWith(JavaVersion.VERSION_16))
 defaultJvmArgs.addAll(
   "--add-opens=java.base/java.io=ALL-UNNAMED",
+  "--add-opens=java.base/java.lang=ALL-UNNAMED",
   "--add-opens=java.base/java.nio=ALL-UNNAMED",
   "--add-opens=java.base/java.nio.file=ALL-UNNAMED",
+  "--add-opens=java.base/java.util=ALL-UNNAMED",

Review Comment:
   Nice fix for Gradle 7.5!






[GitHub] [kafka] ijuma commented on pull request #12413: MINOR: Upgrade to Gradle 7.5

2022-07-23 Thread GitBox


ijuma commented on PR #12413:
URL: https://github.com/apache/kafka/pull/12413#issuecomment-1193115461

   @omkreddy @showuon Any of you have a second to review this?





[GitHub] [kafka] ijuma commented on pull request #12413: MINOR: Upgrade to Gradle 7.5

2022-07-23 Thread GitBox


ijuma commented on PR #12413:
URL: https://github.com/apache/kafka/pull/12413#issuecomment-1193115390

   JDK 17 build passed; the others have unrelated failures.





[GitHub] [kafka] mdedetrich commented on a diff in pull request #11478: KAFKA-13299: Accept duplicate listener on port for IPv4/IPv6

2022-07-23 Thread GitBox


mdedetrich commented on code in PR #11478:
URL: https://github.com/apache/kafka/pull/11478#discussion_r928099336


##
core/src/main/scala/kafka/utils/CoreUtils.scala:
##
@@ -252,16 +255,63 @@ object CoreUtils {
 listenerListToEndPoints(listeners, securityProtocolMap, true)
   }
 
+  def validateOneIsIpv4AndOtherIpv6(first: String, second: String): Boolean =
+    (inetAddressValidator.isValidInet4Address(first) && inetAddressValidator.isValidInet6Address(second)) ||
+      (inetAddressValidator.isValidInet6Address(first) && inetAddressValidator.isValidInet4Address(second))
+
+  def checkDuplicateListenerNames(endpoints: Seq[EndPoint], listeners: String): Unit = {
+    val distinctListenerNames = endpoints.map(_.listenerName).distinct
+    require(distinctListenerNames.size == endpoints.size, s"Each listener must have a different name unless you have exactly " +
+      s"one listener on IPv4 and the other IPv6 on the same port, listeners: $listeners")
+  }
+
+  def checkDuplicateListenerPorts(endpoints: Seq[EndPoint], listeners: String): Unit = {
+    val distinctPorts = endpoints.map(_.port).distinct
+    require(distinctPorts.size == endpoints.map(_.port).size, s"Each listener must have a different port, listeners: $listeners")
+  }
+
   def listenerListToEndPoints(listeners: String, securityProtocolMap: Map[ListenerName, SecurityProtocol], requireDistinctPorts: Boolean): Seq[EndPoint] = {
     def validate(endPoints: Seq[EndPoint]): Unit = {
-      // filter port 0 for unit tests
-      val portsExcludingZero = endPoints.map(_.port).filter(_ != 0)
-      val distinctListenerNames = endPoints.map(_.listenerName).distinct
-
-      require(distinctListenerNames.size == endPoints.size, s"Each listener must have a different name, listeners: $listeners")

Review Comment:
   So yeah I completely forgot about the fact that each listener needs its own 
unique listener name regardless of duplicate ports or not, I have just pushed a 
commit called `Restore unique listener name validation` which reverts the 
unique name validation back to what it was. The commit also updates the tests 
and the documentation in `KafkaConfig`.
   
   I didn't add the `props.put(KafkaConfig.ListenersProp, 
"SSL://[::1]:9096,PLAINTEXT://127.0.0.1:9096,SSL://127.0.0.1:9097")` test as 
you specified because there is an already existing test which is testing the 
exact same exception, i.e.
   ```
   props.put(KafkaConfig.ListenersProp, 
"PLAINTEXT://127.0.0.1:9092,PLAINTEXT://127.0.0.1:9092")
   caught = assertThrows(classOf[IllegalArgumentException], () => 
KafkaConfig.fromProps(props))
   assertTrue(caught.getMessage.contains("Each listener must have a different 
name"))
   ```
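For reviewers skimming the thread, the IPv4-vs-IPv6 check being discussed can be sketched standalone. This sketch uses crude string heuristics in place of commons-validator's `InetAddressValidator` (which the actual patch uses), so treat it as illustrative only:

```java
// Sketch of the "one listener is IPv4 and the other IPv6" check.
// The real code uses org.apache.commons.validator.routines.InetAddressValidator;
// here we approximate address literals with simple heuristics for illustration.
class ListenerCheck {
    static boolean isIpv4Literal(String host) {
        // Four dot-separated decimal octets, each in 0..255.
        String[] parts = host.split("\\.", -1);
        if (parts.length != 4) return false;
        for (String p : parts) {
            try {
                int v = Integer.parseInt(p);
                if (v < 0 || v > 255) return false;
            } catch (NumberFormatException e) {
                return false;
            }
        }
        return true;
    }

    static boolean isIpv6Literal(String host) {
        // Heuristic: IPv6 literals contain ':'; IPv4 literals and hostnames do not.
        return host.contains(":");
    }

    // Duplicate ports are tolerated only when exactly one side is IPv4 and
    // the other IPv6, so each IP stack can bind the same port once.
    static boolean oneIsIpv4AndOtherIpv6(String first, String second) {
        return (isIpv4Literal(first) && isIpv6Literal(second))
            || (isIpv6Literal(first) && isIpv4Literal(second));
    }
}
```

Under this rule, `PLAINTEXT://127.0.0.1:9092` plus `SSL://[::1]:9092` is accepted, while two IPv4 listeners on port 9092 are rejected.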









[GitHub] [kafka] qingwei91 commented on a diff in pull request #12166: KAFKA-13817 Always sync nextTimeToEmit with wall clock

2022-07-23 Thread GitBox


qingwei91 commented on code in PR #12166:
URL: https://github.com/apache/kafka/pull/12166#discussion_r928095637


##
streams/src/test/java/org/apache/kafka/streams/kstream/internals/KStreamKStreamJoinTest.java:
##
@@ -333,6 +352,87 @@ public void shouldJoinWithCustomStoreSuppliers() {
 runJoin(streamJoined.withOtherStoreSupplier(otherStoreSupplier), joinWindows);
 }
 
+@Test
+public void shouldThrottleEmitNonJoinedOuterRecordsEvenWhenClockDrift() {

Review Comment:
   Thanks for the advice, I will try to mimic that.






[jira] [Updated] (KAFKA-14099) No REST API request logs in Kafka connect

2022-07-23 Thread Alexandre Garnier (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Garnier updated KAFKA-14099:
--
Labels: pull-request-available  (was: )

> No REST API request logs in Kafka connect
> -
>
> Key: KAFKA-14099
> URL: https://issues.apache.org/jira/browse/KAFKA-14099
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 3.2.0
>Reporter: Alexandre Garnier
>Priority: Minor
>  Labels: pull-request-available
>
> Prior to 2.2.1, when a REST API request was performed, there was a request 
> log in the log file:
> {code:java}
> [2022-07-23 07:18:16,128] INFO 172.18.0.1 - - [23/Jul/2022:07:18:16 +] 
> "GET /connectors HTTP/1.1" 200 2 "-" "curl/7.81.0" 66 
> (org.apache.kafka.connect.runtime.rest.RestServer:62)
> {code}
> Since 2.2.1, no more request logs.
>  
> With a bisect, I found the problem comes from [PR 
> 6651|https://github.com/apache/kafka/pull/6651] to fix KAFKA-8304
> From what I understand of the problem, the ContextHandlerCollection is added 
> in the Server 
> ([https://github.com/dongjinleekr/kafka/blob/63a6130af30536d67fca5802005695a84c875b5e/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java#L195])
>  before handlers are really added in the ContextHandlerCollection 
> ([https://github.com/dongjinleekr/kafka/blob/63a6130af30536d67fca5802005695a84c875b5e/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java#L296]).
> I don't know the impact on other handlers, but clearly it doesn't work for 
> the RequestLogHandler.
>  
> A solution I found for the logging issue is to set the RequestLog directly in 
> the server without using a handler:
> {code:java}
> diff --git 
> i/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
>  
> w/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
> index ab18419efc..4d09cc0e6c 100644
> --- 
> i/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
> +++ 
> w/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
> @@ -187,6 +187,11 @@ public class RestServer {
>  public void initializeServer() {
>  log.info("Initializing REST server");
>  
> +Slf4jRequestLogWriter slf4jRequestLogWriter = new 
> Slf4jRequestLogWriter();
> +
> slf4jRequestLogWriter.setLoggerName(RestServer.class.getCanonicalName());
> +CustomRequestLog requestLog = new 
> CustomRequestLog(slf4jRequestLogWriter, CustomRequestLog.EXTENDED_NCSA_FORMAT 
> + " %{ms}T");
> +jettyServer.setRequestLog(requestLog);
> +
>  /* Needed for graceful shutdown as per `setStopTimeout` 
> documentation */
>  StatisticsHandler statsHandler = new StatisticsHandler();
>  statsHandler.setHandler(handlers);
> @@ -275,14 +280,7 @@ public class RestServer {
>  configureHttpResponsHeaderFilter(context);
>  }
>  
> -RequestLogHandler requestLogHandler = new RequestLogHandler();
> -Slf4jRequestLogWriter slf4jRequestLogWriter = new 
> Slf4jRequestLogWriter();
> -
> slf4jRequestLogWriter.setLoggerName(RestServer.class.getCanonicalName());
> -CustomRequestLog requestLog = new 
> CustomRequestLog(slf4jRequestLogWriter, CustomRequestLog.EXTENDED_NCSA_FORMAT 
> + " %{ms}T");
> -requestLogHandler.setRequestLog(requestLog);
> -
>  contextHandlers.add(new DefaultHandler());
> -contextHandlers.add(requestLogHandler);
>  
>  handlers.setHandlers(contextHandlers.toArray(new Handler[0]));
>  try {
> {code}
> Same issue raised on StackOverflow: 
> [https://stackoverflow.com/questions/67699702/no-rest-api-logs-in-kafka-connect]





[GitHub] [kafka] zigarn opened a new pull request, #12434: KAFKA-14099 - Fix request logs in connect

2022-07-23 Thread GitBox


zigarn opened a new pull request, #12434:
URL: https://github.com/apache/kafka/pull/12434

   [KAFKA-14099](https://issues.apache.org/jira/browse/KAFKA-14099)
   
   Restore request logs in Connect; `RequestLogHandler` seems not to work.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[jira] [Updated] (KAFKA-14099) No REST API request logs in Kafka connect

2022-07-23 Thread Alexandre Garnier (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Garnier updated KAFKA-14099:
--
Description: 
Prior to 2.2.1, when a REST API request was performed, there was a request log 
in the log file:
{code:java}
[2022-07-23 07:18:16,128] INFO 172.18.0.1 - - [23/Jul/2022:07:18:16 +] "GET 
/connectors HTTP/1.1" 200 2 "-" "curl/7.81.0" 66 
(org.apache.kafka.connect.runtime.rest.RestServer:62)
{code}
Since 2.2.1, no more request logs.

 

With a bisect, I found the problem comes from [PR 
6651|https://github.com/apache/kafka/pull/6651] to fix KAFKA-8304

From what I understand of the problem, the ContextHandlerCollection is added 
in the Server 
([https://github.com/dongjinleekr/kafka/blob/63a6130af30536d67fca5802005695a84c875b5e/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java#L195]) 
before handlers are really added in the ContextHandlerCollection 
([https://github.com/dongjinleekr/kafka/blob/63a6130af30536d67fca5802005695a84c875b5e/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java#L296]).
I don't know the impact on other handlers, but clearly it doesn't work for the 
RequestLogHandler.

 

A solution I found for the logging issue is to set the RequestLog directly in 
the server without using a handler:
{code:java}
diff --git 
i/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
 
w/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
index ab18419efc..4d09cc0e6c 100644
--- 
i/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
+++ 
w/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
@@ -187,6 +187,11 @@ public class RestServer {
 public void initializeServer() {
 log.info("Initializing REST server");
 
+Slf4jRequestLogWriter slf4jRequestLogWriter = new 
Slf4jRequestLogWriter();
+
slf4jRequestLogWriter.setLoggerName(RestServer.class.getCanonicalName());
+CustomRequestLog requestLog = new 
CustomRequestLog(slf4jRequestLogWriter, CustomRequestLog.EXTENDED_NCSA_FORMAT + 
" %{ms}T");
+jettyServer.setRequestLog(requestLog);
+
 /* Needed for graceful shutdown as per `setStopTimeout` documentation 
*/
 StatisticsHandler statsHandler = new StatisticsHandler();
 statsHandler.setHandler(handlers);
@@ -275,14 +280,7 @@ public class RestServer {
 configureHttpResponsHeaderFilter(context);
 }
 
-RequestLogHandler requestLogHandler = new RequestLogHandler();
-Slf4jRequestLogWriter slf4jRequestLogWriter = new 
Slf4jRequestLogWriter();
-
slf4jRequestLogWriter.setLoggerName(RestServer.class.getCanonicalName());
-CustomRequestLog requestLog = new 
CustomRequestLog(slf4jRequestLogWriter, CustomRequestLog.EXTENDED_NCSA_FORMAT + 
" %{ms}T");
-requestLogHandler.setRequestLog(requestLog);
-
 contextHandlers.add(new DefaultHandler());
-contextHandlers.add(requestLogHandler);
 
 handlers.setHandlers(contextHandlers.toArray(new Handler[0]));
 try {
{code}
Same issue raised on StackOverflow: 
[https://stackoverflow.com/questions/67699702/no-rest-api-logs-in-kafka-connect]

  was:
Prior to 2.2.1, when an REST API request was performed, there was a request log 
in the log file:
{code:java}
[2022-07-23 07:18:16,128] INFO 172.18.0.1 - - [23/Jul/2022:07:18:16 +] "GET 
/connectors HTTP/1.1" 200 2 "-" "curl/7.81.0" 66 
(org.apache.kafka.connect.runtime.rest.RestServer:62)
{code}
Since 2.2.1, no more request logs

 

With a bisect, I found the problem comes from [PR 
6651|https://github.com/apache/kafka/pull/6651] to fix KAFKA-8304

From what I understand of the problem, the ContextHandlerCollection is added 
in the Server 
([https://github.com/dongjinleekr/kafka/blob/63a6130af30536d67fca5802005695a84c875b5e/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java#L195]) 
before handlers are really added in the ContextHandlerCollection 
([https://github.com/dongjinleekr/kafka/blob/63a6130af30536d67fca5802005695a84c875b5e/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java#L296]).
I don't know the impact on other handlers, but clearly it doesn't work for the 
RequestLogHandler.

 

A solution I found for the logging issue is to set the RequestLog directly in 
the server without using an handlers:
{code:java}
diff --git 
i/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
 
w/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
index ab18419efc..4d09cc0e6c 100644
--- 
i/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
+++ 
w/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/Res

[jira] [Updated] (KAFKA-14099) No REST API request logs in Kafka connect

2022-07-23 Thread Alexandre Garnier (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Garnier updated KAFKA-14099:
--
Description: 
Prior to 2.2.1, when a REST API request was performed, there was a request log 
in the log file:
{code:java}
[2022-07-23 07:18:16,128] INFO 172.18.0.1 - - [23/Jul/2022:07:18:16 +] "GET 
/connectors HTTP/1.1" 200 2 "-" "curl/7.81.0" 66 
(org.apache.kafka.connect.runtime.rest.RestServer:62)
{code}
Since 2.2.1, no more request logs

 

With a bisect, I found the problem comes from [PR 
6651|https://github.com/apache/kafka/pull/6651] to fix KAFKA-8304

From what I understand of the problem, the ContextHandlerCollection is added 
in the Server 
([https://github.com/dongjinleekr/kafka/blob/63a6130af30536d67fca5802005695a84c875b5e/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java#L195]) 
before handlers are really added in the ContextHandlerCollection 
([https://github.com/dongjinleekr/kafka/blob/63a6130af30536d67fca5802005695a84c875b5e/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java#L296]).
I don't know the impact on other handlers, but clearly it doesn't work for the 
RequestLogHandler.

 

A solution I found for the logging issue is to set the RequestLog directly on 
the server without using a handler:
{code:java}
diff --git i/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java w/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
index ab18419efc..4d09cc0e6c 100644
--- i/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
+++ w/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
@@ -187,6 +187,11 @@ public class RestServer {
     public void initializeServer() {
         log.info("Initializing REST server");
 
+        Slf4jRequestLogWriter slf4jRequestLogWriter = new Slf4jRequestLogWriter();
+        slf4jRequestLogWriter.setLoggerName(RestServer.class.getCanonicalName());
+        CustomRequestLog requestLog = new CustomRequestLog(slf4jRequestLogWriter, CustomRequestLog.EXTENDED_NCSA_FORMAT + " %{ms}T");
+        jettyServer.setRequestLog(requestLog);
+
         /* Needed for graceful shutdown as per `setStopTimeout` documentation */
         StatisticsHandler statsHandler = new StatisticsHandler();
         statsHandler.setHandler(handlers);
@@ -275,14 +280,7 @@ public class RestServer {
             configureHttpResponsHeaderFilter(context);
         }
 
-        RequestLogHandler requestLogHandler = new RequestLogHandler();
-        Slf4jRequestLogWriter slf4jRequestLogWriter = new Slf4jRequestLogWriter();
-        slf4jRequestLogWriter.setLoggerName(RestServer.class.getCanonicalName());
-        CustomRequestLog requestLog = new CustomRequestLog(slf4jRequestLogWriter, CustomRequestLog.EXTENDED_NCSA_FORMAT + " %{ms}T");
-        requestLogHandler.setRequestLog(requestLog);
-
         contextHandlers.add(new DefaultHandler());
-        contextHandlers.add(requestLogHandler);
 
         handlers.setHandlers(contextHandlers.toArray(new Handler[0]));
         try {
{code}
Same issue raised on StackOverflow: 
[https://stackoverflow.com/questions/67699702/no-rest-api-logs-in-kafka-connect]

  was:
Prior to 2.2.1, when a REST API request was performed, a request log line appeared 
in the log file:
{code}
[2022-07-23 07:18:16,128] INFO 172.18.0.1 - - [23/Jul/2022:07:18:16 +] "GET 
/connectors HTTP/1.1" 200 2 "-" "curl/7.81.0" 66 
(org.apache.kafka.connect.runtime.rest.RestServer:62)
{code}
 

With a bisect, I found that the problem comes from [PR 
6651|https://github.com/apache/kafka/pull/6651], which fixed KAFKA-8304.

From what I understand of the problem, the ContextHandlerCollection is added 
to the Server 
(https://github.com/dongjinleekr/kafka/blob/63a6130af30536d67fca5802005695a84c875b5e/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java#L195) 
before the handlers are actually added to the ContextHandlerCollection 
(https://github.com/dongjinleekr/kafka/blob/63a6130af30536d67fca5802005695a84c875b5e/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java#L296).
I don't know the impact on other handlers, but clearly it doesn't work for the 
RequestLogHandler.

A solution I found for the logging issue is to set the RequestLog directly on 
the server without using a handler:

{code}
diff --git i/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java w/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
index ab18419efc..4d09cc0e6c 100644
--- i/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
+++ w/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
@@ -187,6 +187,11 @@ public class RestSe

[jira] [Created] (KAFKA-14099) No REST API request logs in Kafka connect

2022-07-23 Thread Alexandre Garnier (Jira)
Alexandre Garnier created KAFKA-14099:
-

 Summary: No REST API request logs in Kafka connect
 Key: KAFKA-14099
 URL: https://issues.apache.org/jira/browse/KAFKA-14099
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 3.2.0
Reporter: Alexandre Garnier


Prior to 2.2.1, when a REST API request was performed, a request log line appeared 
in the log file:
{code}
[2022-07-23 07:18:16,128] INFO 172.18.0.1 - - [23/Jul/2022:07:18:16 +] "GET 
/connectors HTTP/1.1" 200 2 "-" "curl/7.81.0" 66 
(org.apache.kafka.connect.runtime.rest.RestServer:62)
{code}
 

With a bisect, I found that the problem comes from [PR 
6651|https://github.com/apache/kafka/pull/6651], which fixed KAFKA-8304.

From what I understand of the problem, the ContextHandlerCollection is added 
to the Server 
(https://github.com/dongjinleekr/kafka/blob/63a6130af30536d67fca5802005695a84c875b5e/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java#L195) 
before the handlers are actually added to the ContextHandlerCollection 
(https://github.com/dongjinleekr/kafka/blob/63a6130af30536d67fca5802005695a84c875b5e/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java#L296).
I don't know the impact on other handlers, but clearly it doesn't work for the 
RequestLogHandler.

A solution I found for the logging issue is to set the RequestLog directly on 
the server without using a handler:

{code}
diff --git i/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java w/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
index ab18419efc..4d09cc0e6c 100644
--- i/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
+++ w/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java
@@ -187,6 +187,11 @@ public class RestServer {
     public void initializeServer() {
         log.info("Initializing REST server");
 
+        Slf4jRequestLogWriter slf4jRequestLogWriter = new Slf4jRequestLogWriter();
+        slf4jRequestLogWriter.setLoggerName(RestServer.class.getCanonicalName());
+        CustomRequestLog requestLog = new CustomRequestLog(slf4jRequestLogWriter, CustomRequestLog.EXTENDED_NCSA_FORMAT + " %{ms}T");
+        jettyServer.setRequestLog(requestLog);
+
         /* Needed for graceful shutdown as per `setStopTimeout` documentation */
         StatisticsHandler statsHandler = new StatisticsHandler();
         statsHandler.setHandler(handlers);
@@ -275,14 +280,7 @@ public class RestServer {
             configureHttpResponsHeaderFilter(context);
         }
 
-        RequestLogHandler requestLogHandler = new RequestLogHandler();
-        Slf4jRequestLogWriter slf4jRequestLogWriter = new Slf4jRequestLogWriter();
-        slf4jRequestLogWriter.setLoggerName(RestServer.class.getCanonicalName());
-        CustomRequestLog requestLog = new CustomRequestLog(slf4jRequestLogWriter, CustomRequestLog.EXTENDED_NCSA_FORMAT + " %{ms}T");
-        requestLogHandler.setRequestLog(requestLog);
-
         contextHandlers.add(new DefaultHandler());
-        contextHandlers.add(requestLogHandler);
 
         handlers.setHandlers(contextHandlers.toArray(new Handler[0]));
         try {
{code}

Same issue raised on StackOverflow: 
https://stackoverflow.com/questions/67699702/no-rest-api-logs-in-kafka-connect



--
This message was sent by Atlassian Jira
(v8.20.10#820010)