[GitHub] [kafka] showuon opened a new pull request #11646: KAFKA-13566: producer exponential backoff implementation for KIP-580

2022-01-03 Thread GitBox


showuon opened a new pull request #11646:
URL: https://github.com/apache/kafka/pull/11646


   Follow-up to https://github.com/apache/kafka/pull/8846, completing the implementation of [KIP-580](https://cwiki.apache.org/confluence/display/KAFKA/KIP-580%3A+Exponential+Backoff+for+Kafka+Clients): add producer exponential backoff.
   
   Updated classes:
   1. RecordAccumulator
   2. TransactionManager
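For readers unfamiliar with KIP-580: it replaces the fixed retry delay with an exponentially growing, jittered one, capped at a maximum. A minimal sketch of that computation (the constants and method names here are illustrative, not Kafka's actual `ExponentialBackoff` API):

```java
import java.util.concurrent.ThreadLocalRandom;

public class BackoffSketch {
    // Illustrative constants; KIP-580 makes the multiplier and jitter configurable.
    static final double MULTIPLIER = 2.0;
    static final double JITTER = 0.2;

    // Deterministic term: initial * multiplier^attempts, capped at max.
    public static long backoff(long initialMs, long maxMs, int attempts) {
        double term = initialMs * Math.pow(MULTIPLIER, attempts);
        return (long) Math.min(term, maxMs);
    }

    // Randomized delay actually waited, within +/- JITTER of the term.
    public static long backoffWithJitter(long initialMs, long maxMs, int attempts) {
        long term = backoff(initialMs, maxMs, attempts);
        double factor = 1.0 + JITTER * (2 * ThreadLocalRandom.current().nextDouble() - 1);
        return (long) (term * factor);
    }
}
```

With an initial interval of 100ms and a 1000ms cap, successive attempts yield 100, 200, 400, 800, then 1000ms thereafter.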
   
   Co-Authored-By: Cheng Tan <31675100+ctan...@users.noreply.github.com>
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] dengziming opened a new pull request #11645: KAFKA-13005: Support jbod in KRaft

2022-01-03 Thread GitBox


dengziming opened a new pull request #11645:
URL: https://github.com/apache/kafka/pull/11645


   *More detailed description of your change*
   This is based on #9577
   
   *Summary of testing strategy (including rationale)*
   1. Add unit tests for the related logic
   2. Change LogDirFailureTest to support KRaft
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   






[GitHub] [kafka] showuon commented on pull request #11641: MINOR: update the comment for Utils.atomicMoveWithFallback

2022-01-03 Thread GitBox


showuon commented on pull request #11641:
URL: https://github.com/apache/kafka/pull/11641#issuecomment-1004574629


   > @showuon Thanks for the review. I am kind of confused by the build failure here. It's not related to my change?
   
   Don't worry, it happens sometimes. Your change is Javadoc only; it won't impact the tests.






[GitHub] [kafka] GOODBOY008 opened a new pull request #11644: KAFKA-13433: JsonConverter's method convertToJson when field is optional with default value and value is null, return default value.

2022-01-03 Thread GitBox


GOODBOY008 opened a new pull request #11644:
URL: https://github.com/apache/kafka/pull/11644


   Issue is [here](https://issues.apache.org/jira/browse/KAFKA-13433) .
   
   Problem:
   JsonConverter's convertToJson method returns the default value when a field is optional with a default value and the value is null.
   It should return `null` instead. I found this scenario when using Debezium as a CDC source: I created a table with a field that is optional with a default value, inserted a row with a `null` value, but got the default value from Debezium. Debugging the source code, I traced it to the Kafka Connect class `JsonConverter`.
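The distinction at issue can be modeled without any Connect dependency: a default should apply only when a field is absent, not when its value is explicitly null. A plain-Java sketch of the two behaviors (the names and default value below are hypothetical stand-ins, not Connect's API):

```java
public class DefaultValueSketch {
    // Hypothetical default for an optional field.
    public static final String DEFAULT_VALUE = "fallback";

    // Reported (buggy) behavior: an explicit null is replaced by the default.
    public static String convertBuggy(String value) {
        return value != null ? value : DEFAULT_VALUE;
    }

    // Expected behavior: the default applies only when the field is absent
    // from the record; an explicitly null value stays null.
    public static String convertExpected(boolean fieldPresent, String value) {
        return fieldPresent ? value : DEFAULT_VALUE;
    }
}
```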
   






[GitHub] [kafka] liuchang0520 commented on pull request #11641: MINOR: update the comment for Utils.atomicMoveWithFallback

2022-01-03 Thread GitBox


liuchang0520 commented on pull request #11641:
URL: https://github.com/apache/kafka/pull/11641#issuecomment-1004565010


   @showuon Thanks for the review. I am kind of confused by the build failure here. It's not related to my change?






[GitHub] [kafka] showuon commented on a change in pull request #11642: MINOR: Improve Connect docs

2022-01-03 Thread GitBox


showuon commented on a change in pull request #11642:
URL: https://github.com/apache/kafka/pull/11642#discussion_r777841645



##
File path: docs/connect.html
##
@@ -41,7 +41,7 @@ Running Kafka Connect
In standalone mode all work is performed in a single process. This configuration is simpler to setup and get started with and may be useful in situations where only one worker makes sense (e.g. collecting log files), but it does not benefit from some of the features of Kafka Connect such as fault tolerance. You can start a standalone process with the following command:


- bin/connect-standalone.sh config/connect-standalone.properties connector1.properties [connector2.properties ...]
+ bin/connect-standalone.sh config/connect-standalone.properties connector1.properties [connector2.properties ...]


Review comment:
   I found that we put an additional blank line after each command.
   
![image](https://user-images.githubusercontent.com/43372967/148015459-d08d7609-7e19-4c7e-840b-6e37fb49578b.png)
   
   
   We should put the ending `` at the end of the command, e.g.:
   ```
   bin/connect-standalone.sh config/connect-standalone.properties connector1.properties [connector2.properties ...]
   ```
   
   Same for the other commands.
   








[jira] [Commented] (KAFKA-9366) Upgrade log4j to log4j2

2022-01-03 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17468340#comment-17468340
 ] 

Dongjin Lee commented on KAFKA-9366:


[~Ashoking] No, I can't say for certain. Please have a look at the preview based on 3.0.0.

> Upgrade log4j to log4j2
> ---
>
> Key: KAFKA-9366
> URL: https://issues.apache.org/jira/browse/KAFKA-9366
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0, 2.1.1, 2.3.0, 2.4.0
>Reporter: leibo
>Assignee: Dongjin Lee
>Priority: Critical
>  Labels: needs-kip
> Fix For: 3.2.0
>
>
> h2. CVE-2019-17571 Detail
> Included in Log4j 1.2 is a SocketServer class that is vulnerable to 
> deserialization of untrusted data which can be exploited to remotely execute 
> arbitrary code when combined with a deserialization gadget when listening to 
> untrusted network traffic for log data. This affects Log4j versions up to 1.2 
> up to 1.2.17.
>  
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17571]
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [kafka] showuon commented on a change in pull request #11564: MINOR: improve consoleProducer option description

2022-01-03 Thread GitBox


showuon commented on a change in pull request #11564:
URL: https://github.com/apache/kafka/pull/11564#discussion_r93126



##
File path: core/src/main/scala/kafka/tools/ConsoleProducer.scala
##
@@ -146,62 +146,71 @@ object ConsoleProducer {
   .describedAs("size")
   .ofType(classOf[java.lang.Integer])
   .defaultsTo(200)
-val messageSendMaxRetriesOpt = parser.accepts("message-send-max-retries", "Brokers can fail receiving the message for multiple reasons, and being unavailable transiently is just one of them. This property specifies the number of retries before the producer give up and drop this message.")
+val messageSendMaxRetriesOpt = parser.accepts("message-send-max-retries", "Brokers can fail receiving the message for multiple reasons, " +
+  "and being unavailable transiently is just one of them. This property specifies the number of retries before the producer give up and drop this message. " +
+  "This is the option to control the `retries` in producer configs.")
   .withRequiredArg
   .ofType(classOf[java.lang.Integer])
-  .defaultsTo(3)
-val retryBackoffMsOpt = parser.accepts("retry-backoff-ms", "Before each retry, the producer refreshes the metadata of relevant topics. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata.")
+  .defaultsTo(Integer.MAX_VALUE)

Review comment:
   Good point! Although we have `delivery.timeout.ms` to fail the send request, it'd be good to keep the same behavior as before. I reverted it back to 3 now. Thanks.








[jira] [Commented] (KAFKA-13574) NotLeaderOrFollowerException thrown for a successful send

2022-01-03 Thread Jun Rao (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17468275#comment-17468275
 ] 

Jun Rao commented on KAFKA-13574:
-

[~aphyr] : Thanks for reporting this issue. [~hachikuji] : Thanks for the 
investigation.

NOT_LEADER_OR_FOLLOWER is considered an indefinite error, which means that the 
produce request may or may not have succeeded. We could wait a bit longer to 
return a more precise response. However, we need to make sure that the record 
committed at offset N is indeed the one from the producer to be acknowledged. 
To do that, we could save the leader epoch after the produce records are 
appended to the leader's log. When checking the purgatory for completion, we 
could compare the records' committed leader epoch (probably through leader 
epoch cache) with the expected one.

> NotLeaderOrFollowerException thrown for a successful send
> -
>
> Key: KAFKA-13574
> URL: https://issues.apache.org/jira/browse/KAFKA-13574
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.0.0
> Environment: openjdk version "11.0.13" 2021-10-19
>Reporter: Kyle Kingsbury
>Priority: Minor
>  Labels: error-handling
>
> With org.apache.kafka/kafka-clients 3.0.0, under rare circumstances involving 
> multiple node and network failures, I've observed a call to `producer.send()` 
> throw `NotLeaderOrFollowerException` for a message which later appears in 
> `consumer.poll()` return values.
> I don't have a reliable repro case for this yet, but the case I hit involved 
> retries=1000, acks=all, and idempotence enabled. I suspect what might be 
> happening here is that an initial attempt to send the message makes it to the 
> server and is committed, but the acknowledgement is lost e.g. due to timeout; 
> the Kafka producer then automatically retries the send attempt, and on that 
> retry hits a NotLeaderOrFollowerException, which is thrown back to the 
> caller. If we interpret NotLeaderOrFollowerException as a definite failure, 
> then this would constitute an aborted read.
> I've seen issues like this in a number of databases around client or 
> server-internal retry mechanisms, and I think the thing to do is: rather than 
> throwing the most *recent* error, throw the {*}most indefinite{*}. That way 
> clients know that their request may have actually succeeded, and they won't 
> (e.g.) attempt to re-submit a non-idempotent request again.
> As a side note: is there... perhaps documentation on which errors in Kafka 
> are supposed to be definite vs indefinite? NotLeaderOrFollowerException is a 
> subclass of RetriableException, but it looks like RetriableException is more 
> about transient vs permanent errors than whether it's safe to retry.
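For reference, the client settings described in the report (retries=1000, acks=all, idempotence enabled) correspond to standard producer config keys; a sketch of such a configuration, with a placeholder bootstrap address:

```java
import java.util.Properties;

public class ProducerConfigSketch {
    public static Properties reproConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("acks", "all");                   // require full ISR acknowledgement
        props.put("retries", "1000");               // aggressive client-side retry
        props.put("enable.idempotence", "true");    // broker-side dedup of retried sends
        return props;
    }
}
```

With idempotence enabled, the broker deduplicates retried batches, which is how a retried write can still land exactly once even though the caller saw an exception.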





[GitHub] [kafka] kehuum closed pull request #11643: Fix flakiness in SaslClientsWithInvalidCredentialsTest#testProducerWithAuthenticationFailure

2022-01-03 Thread GitBox


kehuum closed pull request #11643:
URL: https://github.com/apache/kafka/pull/11643


   






[GitHub] [kafka] kehuum opened a new pull request #11643: Fix flakiness in SaslClientsWithInvalidCredentialsTest#testProducerWithAuthenticationFailure

2022-01-03 Thread GitBox


kehuum opened a new pull request #11643:
URL: https://github.com/apache/kafka/pull/11643


   Test SaslClientsWithInvalidCredentialsTest#testProducerWithAuthenticationFailure constantly fails in both the 2.11 and 2.12 builds, exceeding the timeout limit of 5000ms when expecting certain exceptions; one example run is https://github.com/linkedin/kafka/runs/4695386759?check_suite_focus=true. But there are places where the timeout is defined as 1ms or 6ms by default in the partitionsFor call, and a local run of this test also constantly fails with a 30s timeout. Thus, increase the timeout check to the default maxBlockMs to avoid the flaky test.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   






[jira] [Commented] (KAFKA-13574) NotLeaderOrFollowerException thrown for a successful send

2022-01-03 Thread Jason Gustafson (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17468160#comment-17468160
 ] 

Jason Gustafson commented on KAFKA-13574:
-

[~aphyr] Thanks for the bug report. My guess is that you might be hitting a 
race condition such as the following during a leader change:

1. Leader takes the write at offset N and parks the request in purgatory to 
await replication.
2. Follower acks offset N-1 and writes the data up to offset N.
3. Follower becomes new leader before it is able to ack offset N.
4. Old leader sees the leader change and returns NOT_LEADER to the still-parked 
produce request.

In this scenario, the new leader has the data and it may become committed even 
though the produce request returned NOT_LEADER. There might be a better error 
to return in this scenario, but we do want the producer to realize that it 
needs to find the new leader. I guess we could also hold the request a bit 
longer until we know whether or not it was committed. Failing that, we should 
make it clear that NOT_LEADER does not necessarily indicate a failure to write 
the data.

[~junrao] Any thoughts?



> NotLeaderOrFollowerException thrown for a successful send
> -
>
> Key: KAFKA-13574
> URL: https://issues.apache.org/jira/browse/KAFKA-13574
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.0.0
> Environment: openjdk version "11.0.13" 2021-10-19
>Reporter: Kyle Kingsbury
>Priority: Minor
>  Labels: error-handling
>
> With org.apache.kafka/kafka-clients 3.0.0, under rare circumstances involving 
> multiple node and network failures, I've observed a call to `producer.send()` 
> throw `NotLeaderOrFollowerException` for a message which later appears in 
> `consumer.poll()` return values.
> I don't have a reliable repro case for this yet, but the case I hit involved 
> retries=1000, acks=all, and idempotence enabled. I suspect what might be 
> happening here is that an initial attempt to send the message makes it to the 
> server and is committed, but the acknowledgement is lost e.g. due to timeout; 
> the Kafka producer then automatically retries the send attempt, and on that 
> retry hits a NotLeaderOrFollowerException, which is thrown back to the 
> caller. If we interpret NotLeaderOrFollowerException as a definite failure, 
> then this would constitute an aborted read.
> I've seen issues like this in a number of databases around client or 
> server-internal retry mechanisms, and I think the thing to do is: rather than 
> throwing the most *recent* error, throw the {*}most indefinite{*}. That way 
> clients know that their request may have actually succeeded, and they won't 
> (e.g.) attempt to re-submit a non-idempotent request again.
> As a side note: is there... perhaps documentation on which errors in Kafka 
> are supposed to be definite vs indefinite? NotLeaderOrFollowerException is a 
> subclass of RetriableException, but it looks like RetriableException is more 
> about transient vs permanent errors than whether it's safe to retry.





[GitHub] [kafka] mimaison opened a new pull request #11642: MINOR: Improve Connect docs

2022-01-03 Thread GitBox


mimaison opened a new pull request #11642:
URL: https://github.com/apache/kafka/pull/11642


   - Fix indentation of code blocks
   - Add links to all SMTs and Predicates
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   






[GitHub] [kafka] Migueljfs commented on pull request #6295: KIP-382: MirrorMaker 2.0

2022-01-03 Thread GitBox


Migueljfs commented on pull request #6295:
URL: https://github.com/apache/kafka/pull/6295#issuecomment-1004211004


   I can't seem to find the new metrics mentioned in the KIP.
   Like @michael-trelinski said above, I've tried both kafka.connect.mirror / kafka.mirror.connect AND MirrorSourceConnector / MirrorSourceConnect and all combinations of them, but I still don't see the mirror domain in the respective MBeans.
   
   I installed JMXTERM on my mirror-maker pod and this is what I got:
   
   Domains:
   ```
   $>domains
   #following domains are available
   JMImplementation
   com.sun.management
   java.lang
   java.nio
   java.util.logging
   jdk.management.jfr
   kafka.connect
   kafka.consumer
   kafka.producer
   ```
   
   Beans for connect:
   ```
   #domain = kafka.connect:
   kafka.connect:client-id=connect-1,node-id=node--1,type=connect-node-metrics
   kafka.connect:client-id=connect-1,node-id=node-7,type=connect-node-metrics
   kafka.connect:client-id=connect-1,type=app-info
   kafka.connect:client-id=connect-1,type=connect-coordinator-metrics
   kafka.connect:client-id=connect-1,type=connect-metrics
   kafka.connect:client-id=connect-1,type=kafka-metrics-count
   kafka.connect:id="west->central",type=app-info
   kafka.connect:id=connect-1,type=app-info
   kafka.connect:type=app-info
   kafka.connect:type=connect-worker-metrics
   kafka.connect:type=connect-worker-rebalance-metrics
   kafka.connect:type=kafka-metrics-count
   #domain = kafka.consumer:
   ```
   
   As you can see, no mentions of "mirror" of any kind anywhere.
   
   Anyone able to help me understand what's going on?






[GitHub] [kafka] mimaison commented on a change in pull request #11564: MINOR: improve consoleProducer option description

2022-01-03 Thread GitBox


mimaison commented on a change in pull request #11564:
URL: https://github.com/apache/kafka/pull/11564#discussion_r777573963



##
File path: core/src/main/scala/kafka/tools/ConsoleProducer.scala
##
@@ -146,62 +146,71 @@ object ConsoleProducer {
   .describedAs("size")
   .ofType(classOf[java.lang.Integer])
   .defaultsTo(200)
-val messageSendMaxRetriesOpt = parser.accepts("message-send-max-retries", "Brokers can fail receiving the message for multiple reasons, and being unavailable transiently is just one of them. This property specifies the number of retries before the producer give up and drop this message.")
+val messageSendMaxRetriesOpt = parser.accepts("message-send-max-retries", "Brokers can fail receiving the message for multiple reasons, " +
+  "and being unavailable transiently is just one of them. This property specifies the number of retries before the producer give up and drop this message. " +
+  "This is the option to control the `retries` in producer configs.")
   .withRequiredArg
   .ofType(classOf[java.lang.Integer])
-  .defaultsTo(3)
-val retryBackoffMsOpt = parser.accepts("retry-backoff-ms", "Before each retry, the producer refreshes the metadata of relevant topics. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata.")
+  .defaultsTo(Integer.MAX_VALUE)

Review comment:
   Isn't this going to change the behavior in case there's an issue?








[GitHub] [kafka] jlprat commented on pull request #11518: MINOR: Upgrade to Gradle 7.3.3

2022-01-03 Thread GitBox


jlprat commented on pull request #11518:
URL: https://github.com/apache/kafka/pull/11518#issuecomment-1004142156


   Ping @ijuma: changes are applied and it should be good to merge






[jira] [Commented] (KAFKA-13374) [Docs] - All reads from the leader of the partition even after KIP-392?

2022-01-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17468033#comment-17468033
 ] 

ASF GitHub Bot commented on KAFKA-13374:


mimaison merged pull request #391:
URL: https://github.com/apache/kafka-site/pull/391


   




> [Docs] - All reads from the leader of the partition even after KIP-392?
> ---
>
> Key: KAFKA-13374
> URL: https://issues.apache.org/jira/browse/KAFKA-13374
> Project: Kafka
>  Issue Type: Bug
>Reporter: Robin Moffatt
>Assignee: Luke Chen
>Priority: Trivial
> Fix For: 3.2.0
>
>
> On `https://kafka.apache.org/documentation/#design_replicatedlog` it says
> > All reads and writes go to the leader of the partition.
> However with KIP-392 I didn't think this was the case any more. If so, the 
> doc should be updated to clarify. 





[GitHub] [kafka] mimaison commented on pull request #11456: KAFKA-13351: Add possibility to write kafka headers in Kafka Console Producer

2022-01-03 Thread GitBox


mimaison commented on pull request #11456:
URL: https://github.com/apache/kafka/pull/11456#issuecomment-1004135258


   @florin-akermann I don't think you need to update the KIP. Thanks for 
pushing an update






[jira] [Comment Edited] (KAFKA-9366) Upgrade log4j to log4j2

2022-01-03 Thread ashok talari (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17468014#comment-17468014
 ] 

ashok talari edited comment on KAFKA-9366 at 1/3/22, 2:21 PM:
--

Hi Dongjin Lee,

Could you please let us know if there are any tentative dates for an official release with the latest log4j build?


was (Author: JIRAUSER282960):
Hi Dongjin Lee,

 

Could you please let us know if any tentative dates for official release for 
latest log4j build.

> Upgrade log4j to log4j2
> ---
>
> Key: KAFKA-9366
> URL: https://issues.apache.org/jira/browse/KAFKA-9366
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0, 2.1.1, 2.3.0, 2.4.0
>Reporter: leibo
>Assignee: Dongjin Lee
>Priority: Critical
>  Labels: needs-kip
> Fix For: 3.2.0
>
>
> h2. CVE-2019-17571 Detail
> Included in Log4j 1.2 is a SocketServer class that is vulnerable to 
> deserialization of untrusted data which can be exploited to remotely execute 
> arbitrary code when combined with a deserialization gadget when listening to 
> untrusted network traffic for log data. This affects Log4j versions up to 1.2 
> up to 1.2.17.
>  
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17571]
>  





[jira] [Commented] (KAFKA-9366) Upgrade log4j to log4j2

2022-01-03 Thread ashok talari (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17468014#comment-17468014
 ] 

ashok talari commented on KAFKA-9366:
-

Hi Dongjin Lee,

 

Could you please let us know if there are any tentative dates for an official release with the latest log4j build?

> Upgrade log4j to log4j2
> ---
>
> Key: KAFKA-9366
> URL: https://issues.apache.org/jira/browse/KAFKA-9366
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0, 2.1.1, 2.3.0, 2.4.0
>Reporter: leibo
>Assignee: Dongjin Lee
>Priority: Critical
>  Labels: needs-kip
> Fix For: 3.2.0
>
>
> h2. CVE-2019-17571 Detail
> Included in Log4j 1.2 is a SocketServer class that is vulnerable to 
> deserialization of untrusted data which can be exploited to remotely execute 
> arbitrary code when combined with a deserialization gadget when listening to 
> untrusted network traffic for log data. This affects Log4j versions up to 1.2 
> up to 1.2.17.
>  
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17571]
>  





[GitHub] [kafka] mimaison commented on a change in pull request #11456: KAFKA-13351: Add possibility to write kafka headers in Kafka Console Producer

2022-01-03 Thread GitBox


mimaison commented on a change in pull request #11456:
URL: https://github.com/apache/kafka/pull/11456#discussion_r777502237



##
File path: core/src/main/scala/kafka/tools/ConsoleProducer.scala
##
@@ -29,8 +25,16 @@ import kafka.utils.{CommandDefaultOptions, CommandLineUtils, 
Exit, ToolsUtils}
 import org.apache.kafka.clients.producer.internals.ErrorLoggingCallback
 import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, 
ProducerRecord}
 import org.apache.kafka.common.KafkaException
+import org.apache.kafka.common.header.Header
+import org.apache.kafka.common.header.internals.RecordHeader

Review comment:
   The alternative is to create a `ProducerRecord` and then do 
`record.headers.add(String, byte[])`








[GitHub] [kafka] showuon commented on pull request #11627: KAFKA-13565: add consumer exponential backoff for KIP-580

2022-01-03 Thread GitBox


showuon commented on pull request #11627:
URL: https://github.com/apache/kafka/pull/11627#issuecomment-1004004071


   @skaundinya15 @abbccdda @guozhangwang , please help review when available. 
Thank you.






[GitHub] [kafka] showuon commented on a change in pull request #11627: KAFKA-13565: add consumer exponential backoff for KIP-580

2022-01-03 Thread GitBox


showuon commented on a change in pull request #11627:
URL: https://github.com/apache/kafka/pull/11627#discussion_r777398999



##
File path: 
clients/src/test/java/org/apache/kafka/clients/consumer/KafkaConsumerTest.java
##
@@ -191,6 +192,21 @@

 private final Collection singleTopicPartition = Collections.singleton(new TopicPartition(topic, 0));

+private SubscriptionState subscription;
+private Time time;
+
+@BeforeEach
+public void setup() {
+    this.time = new MockTime();
+    // default to reset to the earliest offset
+    this.subscription = createSubscriptionState(OffsetResetStrategy.EARLIEST);
+}
+
+private SubscriptionState createSubscriptionState(OffsetResetStrategy offsetResetStrategy) {
+    // use static backoff time for testing
+    return new SubscriptionState(new LogContext(), offsetResetStrategy, 100, 100);
+}
+
+

Review comment:
   Refactor the tests








[GitHub] [kafka] showuon commented on a change in pull request #11627: KAFKA-13565: add consumer exponential backoff for KIP-580

2022-01-03 Thread GitBox


showuon commented on a change in pull request #11627:
URL: https://github.com/apache/kafka/pull/11627#discussion_r777398361



##
File path: 
clients/src/main/java/org/apache/kafka/common/utils/ExponentialBackoff.java
##
@@ -32,7 +32,16 @@
 private final double expMax;
 private final long initialInterval;
 private final double jitter;
+private long attemptedCount = 0;

Review comment:
   Add an `attemptedCount` field to the `ExponentialBackoff` class. The caller can use this internal `attemptedCount` and doesn't need to maintain the attempt count on its side. This is useful when the `ExponentialBackoff` has only a single place to back off, or when the caller is inside a lambda expression.
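   As a minimal sketch of that idea (a hypothetical class, not the actual Kafka `ExponentialBackoff`; the doubling-with-jitter formula is an assumption based on the javadocs quoted elsewhere in this thread), a backoff helper that owns its attempt counter could look like:

```java
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch: a backoff helper that maintains its own attempt
// counter, so callers (including lambdas) need not track attempts themselves.
class BackoffSketch {
    private final long initialIntervalMs;
    private final int multiplier;
    private final long maxIntervalMs;
    private final double jitter;       // e.g. 0.2 for +/- 20% randomization
    private long attemptedCount = 0;   // owned by the helper, as in the PR

    BackoffSketch(long initialIntervalMs, int multiplier, long maxIntervalMs, double jitter) {
        this.initialIntervalMs = initialIntervalMs;
        this.multiplier = multiplier;
        this.maxIntervalMs = maxIntervalMs;
        this.jitter = jitter;
    }

    // Compute the backoff for the current attempt, then advance the counter.
    long nextBackoffMs() {
        double exp = Math.min(attemptedCount++, 32);  // bound the exponent to avoid overflow
        double term = Math.min(initialIntervalMs * Math.pow(multiplier, exp), maxIntervalMs);
        double randomFactor = 1 + jitter * (2 * ThreadLocalRandom.current().nextDouble() - 1);
        return (long) (term * randomFactor);
    }
}
```

   With jitter 0, base 100ms, multiplier 2, and cap 1000ms, successive calls return 100, 200, 400, 800, 1000, 1000, ...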








[GitHub] [kafka] showuon commented on a change in pull request #11627: KAFKA-13565: add consumer exponential backoff for KIP-580

2022-01-03 Thread GitBox


showuon commented on a change in pull request #11627:
URL: https://github.com/apache/kafka/pull/11627#discussion_r777395790



##
File path: clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java
##
@@ -192,14 +214,39 @@
  * @return  The new values which have been set as described in postProcessParsedConfig.
  */
 public static Map<String, Object> postProcessReconnectBackoffConfigs(AbstractConfig config,
-                                                                Map<String, Object> parsedValues) {
+                                                                     Map<String, Object> parsedValues) {
     HashMap<String, Object> rval = new HashMap<>();
     if ((!config.originals().containsKey(RECONNECT_BACKOFF_MAX_MS_CONFIG)) &&
             config.originals().containsKey(RECONNECT_BACKOFF_MS_CONFIG)) {
-        log.debug("Disabling exponential reconnect backoff because {} is set, but {} is not.",
+        log.info("Disabling exponential reconnect backoff because {} is set, but {} is not.",

Review comment:
   Since we log at `warn` when the exponential `RETRY_BACKOFF_MS` and `SOCKET_CONNECTION_SETUP_TIMEOUT_MS` backoffs are disabled, I think we should at least log at `info` here.
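   A rough sketch of the behaviour this code implements (a hypothetical helper with hard-coded config names, not the real `postProcessReconnectBackoffConfigs` signature): when only the base backoff is configured, the max is pinned to the base, which disables the exponential growth:

```java
import java.util.HashMap;
import java.util.Map;

class BackoffConfigPostProcessor {
    // If reconnect.backoff.ms is set but reconnect.backoff.max.ms is not,
    // pin the max to the base so the reconnect backoff stays constant.
    static Map<String, Object> postProcess(Map<String, Object> originals) {
        Map<String, Object> rval = new HashMap<>();
        if (!originals.containsKey("reconnect.backoff.max.ms")
                && originals.containsKey("reconnect.backoff.ms")) {
            System.out.println("INFO Disabling exponential reconnect backoff because "
                    + "reconnect.backoff.ms is set, but reconnect.backoff.max.ms is not.");
            rval.put("reconnect.backoff.max.ms", originals.get("reconnect.backoff.ms"));
        }
        return rval;
    }
}
```

   The override map is then merged back into the parsed config by the caller.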








[GitHub] [kafka] showuon commented on a change in pull request #11627: KAFKA-13565: add consumer exponential backoff for KIP-580

2022-01-03 Thread GitBox


showuon commented on a change in pull request #11627:
URL: https://github.com/apache/kafka/pull/11627#discussion_r777395274



##
File path: clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java
##
@@ -102,11 +119,16 @@
 public static final String DEFAULT_SECURITY_PROTOCOL = "PLAINTEXT";
 
 public static final String SOCKET_CONNECTION_SETUP_TIMEOUT_MS_CONFIG = "socket.connection.setup.timeout.ms";
-public static final String SOCKET_CONNECTION_SETUP_TIMEOUT_MS_DOC = "The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel.";
+public static final String SOCKET_CONNECTION_SETUP_TIMEOUT_MS_DOC = "The amount of time the client will wait for the socket connection to be established. " +
+    "If the connection is not built before the timeout elapses, clients will close the socket channel. " +
+    "This value is the initial backoff value and will increase exponentially for each consecutive connection failure, " +
+    "up to the socket.connection.setup.timeout.max.ms value.";
 public static final Long DEFAULT_SOCKET_CONNECTION_SETUP_TIMEOUT_MS = 10 * 1000L;
 
 public static final String SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_CONFIG = "socket.connection.setup.timeout.max.ms";
-public static final String SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_DOC = "The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.";
+public static final String SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_DOC = "The maximum amount of time the client will wait for the socket connection to be established. " +
+    "The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, " +
+    "a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.";

Review comment:
   Only split it into 3 lines, for better readability.








[GitHub] [kafka] showuon commented on a change in pull request #11627: KAFKA-13565: add consumer exponential backoff for KIP-580

2022-01-03 Thread GitBox


showuon commented on a change in pull request #11627:
URL: https://github.com/apache/kafka/pull/11627#discussion_r777394827



##
File path: clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java
##
@@ -70,17 +70,34 @@
 public static final String CLIENT_RACK_DOC = "A rack identifier for this client. This can be any string value which indicates where this client is physically located. It corresponds with the broker config 'broker.rack'";
 
 public static final String RECONNECT_BACKOFF_MS_CONFIG = "reconnect.backoff.ms";
-public static final String RECONNECT_BACKOFF_MS_DOC = "The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.";
+public static final String RECONNECT_BACKOFF_MS_DOC = "The base amount of time to wait before attempting to reconnect to a given host. " +
+    "This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. " +
+    "This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the reconnect.backoff.max.ms value.";

Review comment:
   Added the last sentence to mention this is the initial backoff value and will increase exponentially up to `reconnect.backoff.max.ms`.
   
   The same applies to `retry.backoff.ms` and `socket.connection.setup.timeout.ms` below.
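   For illustration, this is how the two settings pair up in a client configuration (a hedged sketch; the broker address is a placeholder, and the 50ms/1000ms values are arbitrary examples):

```java
import java.util.Properties;

class ReconnectBackoffConfig {
    // Build a client config with exponential reconnect backoff enabled:
    // the backoff starts at 50ms and grows per consecutive failure up to 1s.
    static Properties clientProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("reconnect.backoff.ms", "50");          // initial backoff
        props.put("reconnect.backoff.max.ms", "1000");    // cap for the exponential growth
        return props;
    }
}
```

   Omitting `reconnect.backoff.max.ms` would disable the exponential growth and keep the backoff constant at 50ms.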








[GitHub] [kafka] showuon commented on a change in pull request #11627: KAFKA-13565: add consumer exponential backoff for KIP-580

2022-01-03 Thread GitBox


showuon commented on a change in pull request #11627:
URL: https://github.com/apache/kafka/pull/11627#discussion_r777394154



##
File path: clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java
##
@@ -536,7 +535,12 @@ private void clearAddresses() {
 }
 
 public String toString() {
-    return "NodeState(" + state + ", " + lastConnectAttemptMs + ", " + failedAttempts + ", " + throttleUntilTimeMs + ")";
+    return "NodeState(" +
+        "state=" + state + ", " +
+        "lastConnectAttemptMs=" + lastConnectAttemptMs + ", " +
+        "failedAttempts=" + failedAttempts + ", " +
+        "failedConnectAttempts=" + failedConnectAttempts + ", " +
+        "throttleUntilTimeMs=" + throttleUntilTimeMs + ")";

Review comment:
   The original `toString` output is hard to read because the values have no leading variable names. Updated it, and added `failedConnectAttempts`.








[GitHub] [kafka] showuon commented on a change in pull request #11627: KAFKA-13565: add consumer exponential backoff for KIP-580

2022-01-03 Thread GitBox


showuon commented on a change in pull request #11627:
URL: https://github.com/apache/kafka/pull/11627#discussion_r777393726



##
File path: clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java
##
@@ -357,8 +357,7 @@ private void resetConnectionSetupTimeout(NodeConnectionState nodeState) {
 }
 
 /**
- * Increment the failure counter, update the node reconnect backoff exponentially,
- * and record the current timestamp.
+ * Increment the failure counter, update the node reconnect backoff exponentially.

Review comment:
   This method doesn't actually `record the current timestamp`, so I removed that from the javadoc.








[GitHub] [kafka] showuon commented on a change in pull request #11627: KAFKA-13565: add consumer exponential backoff for KIP-580

2022-01-03 Thread GitBox


showuon commented on a change in pull request #11627:
URL: https://github.com/apache/kafka/pull/11627#discussion_r777393490



##
File path: clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java
##
@@ -372,7 +371,7 @@ private void updateReconnectBackoff(NodeConnectionState nodeState) {
 /**
  * Increment the failure counter and update the node connection setup timeout exponentially.
  * The delay is socket.connection.setup.timeout.ms * 2**(failures) * (+/- 20% random jitter)
- * Up to a (pre-jitter) maximum of reconnect.backoff.max.ms
+ * Up to a (pre-jitter) maximum of socket.connection.setup.timeout.max.ms

Review comment:
   Side fix: this javadoc describes `socket.connection.setup.timeout.ms`, so the pre-jitter maximum is `socket.connection.setup.timeout.max.ms`, not `reconnect.backoff.max.ms`.
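   The delay formula in that javadoc can be sketched as follows (a hedged illustration, not the Kafka code; `baseMs` and `maxMs` stand in for the two configs):

```java
import java.util.concurrent.ThreadLocalRandom;

class ConnectionSetupTimeoutSketch {
    // delay = base * 2^failures, capped (pre-jitter) at max,
    // then scaled by a random factor in [1 - jitterFraction, 1 + jitterFraction]
    static long timeoutMs(long baseMs, long maxMs, int failures, double jitterFraction) {
        double preJitter = Math.min(baseMs * Math.pow(2, failures), maxMs);
        double randomFactor = 1 + jitterFraction * (2 * ThreadLocalRandom.current().nextDouble() - 1);
        return (long) (preJitter * randomFactor);
    }
}
```

   For example, with a 10s base, a 30s cap, and jitter disabled, three consecutive failures give 10000, 20000, 30000 ms; with the 0.2 jitter the second value lands somewhere between 16000 and 24000 ms.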








[GitHub] [kafka] hartmut-co-uk commented on pull request #7082: KAFKA-8659: SetSchemaMetadata SMT fails on records with null value and schema

2022-01-03 Thread GitBox


hartmut-co-uk commented on pull request #7082:
URL: https://github.com/apache/kafka/pull/7082#issuecomment-1003967121


   I also ran into this. @bfncs, are you able to pick this up again, or can I help?

