[jira] [Resolved] (KAFKA-12899) Support --bootstrap-server in ReplicaVerificationTool

2024-07-07 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee resolved KAFKA-12899.
-
Resolution: Won't Fix

Replaced by KAFKA-17073

> Support --bootstrap-server in ReplicaVerificationTool
> -
>
> Key: KAFKA-12899
> URL: https://issues.apache.org/jira/browse/KAFKA-12899
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Minor
>  Labels: needs-kip
>
> kafka.tools.ReplicaVerificationTool still uses --broker-list, breaking 
> consistency with other (already migrated) tools.
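The migration the report asks for can be sketched in Python (illustrative only; the real tool is part of Kafka's Scala/Java codebase, and the flag wiring here is an assumption): the tool would accept the new --bootstrap-server flag while keeping --broker-list as a deprecated alias for the same destination.

```python
import argparse

def parse_connection_flag(argv):
    # Sketch of migrating a CLI from --broker-list to --bootstrap-server:
    # both flags write to the same destination, so old scripts keep working
    # while new ones can use the consistent flag name.
    parser = argparse.ArgumentParser(prog="replica-verification-tool")
    parser.add_argument("--bootstrap-server", dest="bootstrap_server",
                        help="broker(s) to connect to, host:port")
    parser.add_argument("--broker-list", dest="bootstrap_server",
                        help="DEPRECATED: use --bootstrap-server instead")
    args = parser.parse_args(argv)
    if args.bootstrap_server is None:
        parser.error("--bootstrap-server is required")
    return args.bootstrap_server
```

Either spelling then resolves to the same value, which is what keeps existing scripts working during the deprecation window.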



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-17073) Deprecate ReplicaVerificationTool in 3.9

2024-07-07 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17863568#comment-17863568
 ] 

Dongjin Lee commented on KAFKA-17073:
-

KIP: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=311627623

> Deprecate ReplicaVerificationTool in 3.9
> 
>
> Key: KAFKA-17073
> URL: https://issues.apache.org/jira/browse/KAFKA-17073
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Dongjin Lee
>Priority: Minor
>  Labels: need-kip
> Fix For: 3.9.0
>
>
> see discussion 
> https://lists.apache.org/thread/6zz7xwps8lq2lxfo5bhyl4cggh64c5py
> In short, the tool is useless, so it is a good time to deprecate it in 3.9. 
> That enables us to remove it in 4.0.





[jira] [Commented] (KAFKA-12359) Update Jetty to 11

2022-07-17 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17567589#comment-17567589
 ] 

Dongjin Lee commented on KAFKA-12359:
-

[~ijuma] Totally agree. I have already pivoted the focus from a security fix to 
the Java 11 upgrade in 4.0. The rebase is nearly complete and will be tested 
with our in-house fork.

> Update Jetty to 11
> --
>
> Key: KAFKA-12359
> URL: https://issues.apache.org/jira/browse/KAFKA-12359
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect, tools
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Major
>
> I found this problem when I was working on 
> [KAFKA-12324|https://issues.apache.org/jira/browse/KAFKA-12324].
> At present, Kafka Connect and Trogdor use Jetty 9. Although Jetty's 
> stable release is 9.4, the Jetty community is now moving its focus to Jetty 
> 10 and 11, which require Java 11 as a prerequisite. To minimize potential 
> security vulnerabilities, Kafka should migrate to Java 11 + Jetty 11 as soon 
> as Jetty 9.4 reaches end of life. As a note, [Jetty 9.2 reached End of 
> Life in March 
> 2018|https://www.eclipse.org/lists/jetty-announce/msg00116.html], and 9.3 
> did so in [February 
> 2020|https://www.eclipse.org/lists/jetty-announce/msg00140.html].
> In other words, the necessity of moving to Java 11 is heavily affected by 
> Jetty's maintenance plan. Jetty 9.4 will likely still be supported for some 
> time, but it is worth being aware of these relationships and having a 
> migration plan.
> Updating Jetty to 11 is not just a matter of changing the version number. 
> Along with its API changes, we have to cope with additional dependencies, 
> [Java EE class name 
> changes|https://webtide.com/renaming-from-javax-to-jakarta/], 
> making Jackson compatible with the changes, etc.
> As a note: for the difference between Jetty 10 and 11, see 
> [here|https://webtide.com/jetty-10-and-11-have-arrived/] - in short, "Jetty 
> 11 is identical to Jetty 10 except that the javax.* packages now conform to 
> the new jakarta.* namespace."
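The javax-to-jakarta rename mentioned above is mechanical for the affected EE packages; a rough sketch in Python (the package list is illustrative, not exhaustive):

```python
# Sketch of the Jakarta EE 9 rename that Jetty 11 requires: EE packages
# move from javax.* to jakarta.*, while JDK-owned javax.* packages stay.
EE_PREFIXES = ("javax.servlet", "javax.websocket", "javax.annotation")

def to_jakarta(class_name):
    if class_name.startswith(EE_PREFIXES):
        # only the leading "javax." is rewritten
        return class_name.replace("javax.", "jakarta.", 1)
    return class_name  # e.g. javax.crypto.* ships with the JDK and is unchanged
```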





[jira] [Resolved] (KAFKA-13969) CVE-2022-24823 in netty 4.1.76.Final

2022-07-09 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee resolved KAFKA-13969.
-
Resolution: Duplicate

> CVE-2022-24823 in netty 4.1.76.Final
> 
>
> Key: KAFKA-13969
> URL: https://issues.apache.org/jira/browse/KAFKA-13969
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Dominique Mongelli
>Priority: Minor
>
> Netty reported a new MEDIUM CVE: 
> [https://github.com/netty/netty/security/advisories/GHSA-269q-hmxg-m83q]
> NVD: [https://nvd.nist.gov/vuln/detail/CVE-2022-24823]
> It is fixed in netty 4.1.77.Final.
> It should be noted that this CVE impacts applications running on Java 6 or 
> lower.
>  
>  
>  





[jira] [Commented] (KAFKA-13761) KafkaLog4jAppender deadlocks when idempotence is enabled

2022-03-23 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17511283#comment-17511283
 ] 

Dongjin Lee commented on KAFKA-13761:
-

Hi [~yyu1993] [~showuon],

It seems like this issue is the counterpart of LOG4J2-3256, which disables 
logging from the {{org.apache.kafka.common}} and {{org.apache.kafka.clients}} 
packages. How about adding similar logic to log4j-appender?
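That suppression logic can be illustrated with Python's stdlib logging (an analogy only, not log4j code): a filter on the Kafka-bound handler drops records originating from the client packages, so the appender's own producer can never log back into itself.

```python
import logging

class DropKafkaClientRecords(logging.Filter):
    # Analogy to the LOG4J2-3256-style fix: records emitted by the Kafka
    # client packages are filtered out before they reach the Kafka-topic
    # handler, breaking the recursion that can deadlock the appender.
    PREFIXES = ("org.apache.kafka.common", "org.apache.kafka.clients")

    def filter(self, record):
        return not record.name.startswith(self.PREFIXES)
```

A handler standing in for the Kafka appender would get `handler.addFilter(DropKafkaClientRecords())`, letting application records through while silencing the producer's own logging.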

> KafkaLog4jAppender deadlocks when idempotence is enabled
> 
>
> Key: KAFKA-13761
> URL: https://issues.apache.org/jira/browse/KAFKA-13761
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 3.1.0, 3.0.0, 3.0.1
>Reporter: Yang Yu
>Priority: Major
>
> KafkaLog4jAppender instantiates a KafkaProducer to append log entries to a 
> Kafka topic. The producer.send operation may need to acquire locks during its 
> execution. This can result in deadlocks when a log entry from the producer 
> network thread is also at a log level that results in the entry being 
> appended to a Kafka topic (KAFKA-6415).
> [https://github.com/apache/kafka/pull/11691] enables idempotence by default, 
> and it introduced another place where the producer network thread can hit a 
> deadlock. When calling TransactionManger#maybeAddPartition, the producer 
> network thread will wait on the TransactionManager lock, and a deadlock can 
> happen if TransactionManager also logs at INFO level. This is causing system 
> test failures in log4j_appender_test.py
> Similar to KAFKA-6415, a workaround will be setting log level to WARN for 
> TransactionManager in VerifiableLog4jAppender.
>  
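The workaround above would look roughly like this in a log4j 1.x properties file (the logger name is an assumption based on the Kafka client package layout, not taken from the actual system-test configuration):

```properties
# Hypothetical workaround: raise the TransactionManager logger to WARN so
# its INFO logs never re-enter the Kafka appender on the network thread
log4j.logger.org.apache.kafka.clients.producer.internals.TransactionManager=WARN
```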



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (KAFKA-9366) Upgrade log4j to log4j2

2022-03-17 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17508199#comment-17508199
 ] 

Dongjin Lee commented on KAFKA-9366:


[~roncraig] Sorry for being late. The PR is now updated with log4j2 2.17.2.

> Upgrade log4j to log4j2
> ---
>
> Key: KAFKA-9366
> URL: https://issues.apache.org/jira/browse/KAFKA-9366
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0, 2.1.1, 2.3.0, 2.4.0
>Reporter: leibo
>Assignee: Dongjin Lee
>Priority: Critical
>  Labels: needs-kip
> Fix For: 3.2.0
>
>
> h2. CVE-2019-17571 Detail
> Included in Log4j 1.2 is a SocketServer class that is vulnerable to 
> deserialization of untrusted data which can be exploited to remotely execute 
> arbitrary code when combined with a deserialization gadget when listening to 
> untrusted network traffic for log data. This affects Log4j versions up to 1.2 
> up to 1.2.17.
>  
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17571]
>  





[jira] [Commented] (KAFKA-13660) Replace log4j with reload4j

2022-02-09 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17489905#comment-17489905
 ] 

Dongjin Lee commented on KAFKA-13660:
-

Hi [~FireBurn],

Thanks for your interest in this issue. I think reload4j is a promising 
project, but it does not seem proven yet. Also, the log4j issue is already in 
progress with KAFKA-9366.

Plus, these kinds of issues need a process named Kafka Improvement Proposal. 
Please have a look at [this 
page|https://cwiki.apache.org/confluence/display/kafka/kafka+improvement+proposals].

> Replace log4j with reload4j
> ---
>
> Key: KAFKA-13660
> URL: https://issues.apache.org/jira/browse/KAFKA-13660
> Project: Kafka
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Mike Lothian
>Priority: Major
>
> Kafka is using a known vulnerable version of log4j; the reload4j project was 
> created by the code's original authors to address those issues. It is 
> designed as a drop-in replacement without any API changes.
>  
> https://reload4j.qos.ch/
>  
> I've raised a merge request, replacing log4j with reload4j and slf4j-log4j12 
> with slf4j-reload4j, and bumping the slf4j version.
>  
> This is my first time contributing to the Kafka project and I'm not too 
> familiar with the process; I'll go back and amend my PR with this issue number.





[jira] [Resolved] (KAFKA-13616) Log4j 1.X CVE-2022-23302/5/7 vulnerabilities

2022-02-09 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee resolved KAFKA-13616.
-
Resolution: Duplicate

> Log4j 1.X CVE-2022-23302/5/7 vulnerabilities
> 
>
> Key: KAFKA-13616
> URL: https://issues.apache.org/jira/browse/KAFKA-13616
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dominique Mongelli
>Priority: Major
>
> Some log4j 1.x vulnerabilities have been disclosed recently:
>  * CVE-2022-23302: https://nvd.nist.gov/vuln/detail/CVE-2022-23302
>  * CVE-2022-23305: https://nvd.nist.gov/vuln/detail/CVE-2022-23305
>  * CVE-2022-23307: [https://nvd.nist.gov/vuln/detail/CVE-2022-23307]
> We would like to know whether Kafka is affected by these vulnerabilities.
>  





[jira] [Commented] (KAFKA-9366) Upgrade log4j to log4j2

2022-02-03 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17486321#comment-17486321
 ] 

Dongjin Lee commented on KAFKA-9366:


Hi [~noonbs], this issue will be resolved with KAFKA-12399. I expect it will be 
done in 3.2.0.

> Upgrade log4j to log4j2
> ---
>
> Key: KAFKA-9366
> URL: https://issues.apache.org/jira/browse/KAFKA-9366
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0, 2.1.1, 2.3.0, 2.4.0
>Reporter: leibo
>Assignee: Dongjin Lee
>Priority: Critical
>  Labels: needs-kip
> Fix For: 3.2.0
>
>
> h2. CVE-2019-17571 Detail
> Included in Log4j 1.2 is a SocketServer class that is vulnerable to 
> deserialization of untrusted data which can be exploited to remotely execute 
> arbitrary code when combined with a deserialization gadget when listening to 
> untrusted network traffic for log data. This affects Log4j versions up to 1.2 
> up to 1.2.17.
>  
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17571]
>  





[jira] [Updated] (KAFKA-13625) Fix inconsistency in dynamic application log levels

2022-01-28 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee updated KAFKA-13625:

Summary: Fix inconsistency in dynamic application log levels  (was: 
KIP-817: Fix inconsistency in dynamic application log levels)

> Fix inconsistency in dynamic application log levels
> ---
>
> Key: KAFKA-13625
> URL: https://issues.apache.org/jira/browse/KAFKA-13625
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Minor
>
> At present, there are two ways of changing log levels at runtime:
>  # JMX API
>  # Admin API
> However, there are some inconsistencies between these two ways:
>  # The JMX API allows the OFF level, but the Admin API does not. If the user tries 
> to set a logger's level to OFF, Kafka responds with an INVALID_CONFIG (40) error.
>  # The JMX API converts unsupported log levels to DEBUG, but the Admin API throws 
> an error. The documentation does not state this difference in semantics.
> To fix these inconsistencies, we have to:
>  # Add OFF level to LogLevelConfig.
>  # Add documentation on different semantics between JMX and Admin API.





[jira] [Commented] (KAFKA-13625) KIP-817: Fix inconsistency in dynamic application log levels

2022-01-28 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17483682#comment-17483682
 ] 

Dongjin Lee commented on KAFKA-13625:
-

This issue is a derivative issue of KAFKA-7800; Found while working on 
KAFKA-9366.

> KIP-817: Fix inconsistency in dynamic application log levels
> 
>
> Key: KAFKA-13625
> URL: https://issues.apache.org/jira/browse/KAFKA-13625
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Minor
>
> At present, there are two ways of changing log levels at runtime:
>  # JMX API
>  # Admin API
> However, there are some inconsistencies between these two ways:
>  # The JMX API allows the OFF level, but the Admin API does not. If the user tries 
> to set a logger's level to OFF, Kafka responds with an INVALID_CONFIG (40) error.
>  # The JMX API converts unsupported log levels to DEBUG, but the Admin API throws 
> an error. The documentation does not state this difference in semantics.
> To fix these inconsistencies, we have to:
>  # Add OFF level to LogLevelConfig.
>  # Add documentation on different semantics between JMX and Admin API.





[jira] [Created] (KAFKA-13625) KIP-817: Fix inconsistency in dynamic application log levels

2022-01-28 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-13625:
---

 Summary: KIP-817: Fix inconsistency in dynamic application log 
levels
 Key: KAFKA-13625
 URL: https://issues.apache.org/jira/browse/KAFKA-13625
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Dongjin Lee
Assignee: Dongjin Lee


At present, there are two ways of changing log levels at runtime:
 # JMX API
 # Admin API

However, there are some inconsistencies between these two ways:
 # The JMX API allows the OFF level, but the Admin API does not. If the user 
tries to set a logger's level to OFF, Kafka responds with an INVALID_CONFIG 
(40) error.
 # The JMX API converts unsupported log levels to DEBUG, but the Admin API 
throws an error. The documentation does not state this difference in semantics.

To fix these inconsistencies, we have to:
 # Add OFF level to LogLevelConfig.
 # Add documentation on different semantics between JMX and Admin API.
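The two semantics above can be sketched in Python (hypothetical helper names; the actual checks live in Kafka's LogLevelConfig and the JMX layer):

```python
VALID_LEVELS = {"FATAL", "ERROR", "WARN", "INFO", "DEBUG", "TRACE"}

def jmx_set_level(level):
    # JMX-style semantics: OFF is accepted, and any unsupported
    # level name is silently converted to DEBUG.
    if level == "OFF" or level in VALID_LEVELS:
        return level
    return "DEBUG"

def admin_set_level(level):
    # Admin-API-style semantics: OFF and unsupported levels are both
    # rejected (Kafka responds with an INVALID_CONFIG (40) error).
    if level not in VALID_LEVELS:
        raise ValueError("INVALID_CONFIG (40): unsupported level " + level)
    return level
```

So setting OFF succeeds on the JMX path but raises on the Admin path, which is exactly the inconsistency this issue proposes to fix.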





[jira] [Created] (KAFKA-13604) Add pluggable logging framework support

2022-01-19 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-13604:
---

 Summary: Add pluggable logging framework support
 Key: KAFKA-13604
 URL: https://issues.apache.org/jira/browse/KAFKA-13604
 Project: Kafka
  Issue Type: Improvement
  Components: core, KafkaConnect
Reporter: Dongjin Lee
Assignee: Dongjin Lee


[discussion 
thread|https://lists.apache.org/thread/xwgt8ydvnvnqwvhpq2cobgb5wk5mg52t]

At present, Apache Kafka is using log4j 1.x and planning to migrate to log4j 
2.x. Unlike Kafka Streams, it calls log4j's API directly, making it hard 
for users to replace the logging framework - and also making Kafka vulnerable 
to log4j's security vulnerabilities.

Apache Kafka (with Connect) calls log4j's API directly to support the dynamic 
logger level change feature; SLF4J does not support this feature yet, 
but [they are planning to support it|https://jira.qos.ch/browse/SLF4J-124] in 
the near future.

Supporting a pluggable logging framework by using SLF4J as a facade will 
allow users to freely change the underlying logging framework, reducing 
exposure to such security problems.
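The facade approach can be sketched in Python (illustrative pattern only; in Kafka the facade would be SLF4J and the backend log4j2, logback, etc.): call sites depend on a minimal interface, and the concrete backend is swapped without touching them.

```python
class LoggerFacade:
    # Call sites depend only on this interface (the SLF4J role), so the
    # backing framework can be replaced without code changes.
    def __init__(self, backend):
        self._backend = backend  # anything with a handle(level, msg) method

    def info(self, msg):
        self._backend.handle("INFO", msg)

class ListBackend:
    # Stand-in backend; a real deployment would plug in log4j2, logback, ...
    def __init__(self):
        self.records = []

    def handle(self, level, msg):
        self.records.append((level, msg))

backend = ListBackend()
log = LoggerFacade(backend)
log.info("broker started")
```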





[jira] [Commented] (KAFKA-9366) Upgrade log4j to log4j2

2022-01-03 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17468340#comment-17468340
 ] 

Dongjin Lee commented on KAFKA-9366:


[~Ashoking] No, I can't say for certain. Please have a look at the preview 
based on 3.0.0.

> Upgrade log4j to log4j2
> ---
>
> Key: KAFKA-9366
> URL: https://issues.apache.org/jira/browse/KAFKA-9366
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0, 2.1.1, 2.3.0, 2.4.0
>Reporter: leibo
>Assignee: Dongjin Lee
>Priority: Critical
>  Labels: needs-kip
> Fix For: 3.2.0
>
>
> h2. CVE-2019-17571 Detail
> Included in Log4j 1.2 is a SocketServer class that is vulnerable to 
> deserialization of untrusted data which can be exploited to remotely execute 
> arbitrary code when combined with a deserialization gadget when listening to 
> untrusted network traffic for log data. This affects Log4j versions up to 1.2 
> up to 1.2.17.
>  
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17571]
>  





[jira] [Updated] (KAFKA-13518) Update gson dependency

2021-12-24 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee updated KAFKA-13518:

Summary: Update gson dependency  (was: Update gson and netty-codec in 3.0.0)

> Update gson dependency
> --
>
> Key: KAFKA-13518
> URL: https://issues.apache.org/jira/browse/KAFKA-13518
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 3.0.0
>Reporter: Pavel Kuznetsov
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: security
>
> *Describe the bug*
> I checked the kafka_2.13-3.0.0.tgz distribution with WhiteSource and found 
> that some libraries have vulnerabilities.
> Here they are:
> * gson-2.8.6.jar has WS-2021-0419 vulnerability. The way to fix it is to 
> upgrade to com.google.code.gson:gson:2.8.9
> * netty-codec-4.1.65.Final.jar has CVE-2021-37136 and CVE-2021-37137 
> vulnerabilities. The way to fix it is to upgrade to 
> io.netty:netty-codec:4.1.68.Final
> *To Reproduce*
> Download kafka_2.13-3.0.0.tgz and find the jars listed above.
> Check that these jars, with their corresponding versions, are mentioned in 
> the corresponding vulnerability descriptions.
> *Expected behavior*
> * gson upgraded to 2.8.9 or higher
> * netty-codec upgraded to 4.1.68.Final or higher
> *Actual behaviour*
> * gson is 2.8.6
> * netty-codec is 4.1.65.Final
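In a Gradle build, the requested upgrades would look roughly like this (a sketch of dependency constraints, not Kafka's actual build script):

```groovy
dependencies {
    constraints {
        // hypothetical pins for the versions requested above
        implementation('com.google.code.gson:gson:2.8.9')
        implementation('io.netty:netty-codec:4.1.68.Final')
    }
}
```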





[jira] [Commented] (KAFKA-9366) Upgrade log4j to log4j2

2021-12-18 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17461807#comment-17461807
 ] 

Dongjin Lee commented on KAFKA-9366:


[~tarungoswami] I can't give you any guarantee, but 
[here|http://home.apache.org/~dongjin/post/apache-kafka-log4j2-support/] is a 
preview based on Apache Kafka 3.0.0.

> Upgrade log4j to log4j2
> ---
>
> Key: KAFKA-9366
> URL: https://issues.apache.org/jira/browse/KAFKA-9366
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0, 2.1.1, 2.3.0, 2.4.0
>Reporter: leibo
>Assignee: Dongjin Lee
>Priority: Critical
>  Labels: needs-kip
> Fix For: 3.2.0
>
>
> h2. CVE-2019-17571 Detail
> Included in Log4j 1.2 is a SocketServer class that is vulnerable to 
> deserialization of untrusted data which can be exploited to remotely execute 
> arbitrary code when combined with a deserialization gadget when listening to 
> untrusted network traffic for log data. This affects Log4j versions up to 1.2 
> up to 1.2.17.
>  
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17571]
>  





[jira] [Updated] (KAFKA-12399) Deprecate Log4J Appender

2021-12-17 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee updated KAFKA-12399:

Description: As a follow-up to KAFKA-9366, we have to entirely remove 
the log4j 1.2.7 dependency from the classpath by removing dependencies on 
log4j-appender.  (was: As a following job of KAFKA-9366, we have to provide a 
log4j2 counterpart to log4j-appender.)

> Deprecate Log4J Appender
> 
>
> Key: KAFKA-12399
> URL: https://issues.apache.org/jira/browse/KAFKA-12399
> Project: Kafka
>  Issue Type: Improvement
>  Components: logging
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: needs-kip
>
> As a follow-up to KAFKA-9366, we have to entirely remove the log4j 1.2.7 
> dependency from the classpath by removing dependencies on log4j-appender.





[jira] [Updated] (KAFKA-12399) Deprecate Log4J Appender

2021-12-17 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee updated KAFKA-12399:

Summary: Deprecate Log4J Appender  (was: Add log4j2 Appender)

> Deprecate Log4J Appender
> 
>
> Key: KAFKA-12399
> URL: https://issues.apache.org/jira/browse/KAFKA-12399
> Project: Kafka
>  Issue Type: Improvement
>  Components: logging
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: needs-kip
>
> As a follow-up to KAFKA-9366, we have to provide a log4j2 counterpart to 
> log4j-appender.





[jira] [Commented] (KAFKA-13547) Kafka - 1.0.0 | Remove log4j.jar

2021-12-17 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17461472#comment-17461472
 ] 

Dongjin Lee commented on KAFKA-13547:
-

[~masood31] I am currently working on a preview version based on AK 2.8.1 and 
3.0.0, and I will complete it this weekend. It will replace all log4j 1.x 
dependencies with 2.x, with backward compatibility for the logger configuration.

Plus: I can't guarantee when it will be merged into an official release.

> Kafka - 1.0.0 | Remove log4j.jar
> 
>
> Key: KAFKA-13547
> URL: https://issues.apache.org/jira/browse/KAFKA-13547
> Project: Kafka
>  Issue Type: Bug
>Reporter: masood
>Priority: Blocker
>
> We wanted to remove the log4j.jar but ended up with a dependency on the 
> kafka.producer.ProducerConfig.
> Caused by: java.lang.NoClassDefFoundError: org/apache/log4j/Logger
>     at kafka.utils.Logging.logger(Logging.scala:24)
>     at kafka.utils.Logging.logger$(Logging.scala:24)
>     at 
> kafka.utils.VerifiableProperties.logger$lzycompute(VerifiableProperties.scala:27)
>     at kafka.utils.VerifiableProperties.logger(VerifiableProperties.scala:27)
>     at kafka.utils.Logging.info(Logging.scala:71)
>     at kafka.utils.Logging.info$(Logging.scala:70)
>     at kafka.utils.VerifiableProperties.info(VerifiableProperties.scala:27)
>     at kafka.utils.VerifiableProperties.verify(VerifiableProperties.scala:218)
>     at kafka.producer.ProducerConfig.(ProducerConfig.scala:61)
> Is there any configuration available that can resolve this error?
> Please note we are not using log4j.properties or any other log4j logging 
> mechanism for the Kafka connection in the application.





[jira] [Commented] (KAFKA-13551) kafka-log4j-appender-2.1.1.jar Is cve-2021-44228 involved?

2021-12-16 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17460651#comment-17460651
 ] 

Dongjin Lee commented on KAFKA-13551:
-

In short, NO. CVE-2021-44228 is problematic only when you are using the JMS 
appender.

disclaimer: I am currently working on log4j2 migration, KAFKA-9366 and 
KAFKA-12399.

>  kafka-log4j-appender-2.1.1.jar Is cve-2021-44228 involved? 
> 
>
> Key: KAFKA-13551
> URL: https://issues.apache.org/jira/browse/KAFKA-13551
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: xiansheng fu
>Priority: Major
>






[jira] [Commented] (KAFKA-13537) Will kafka_2.12-2.3.0 version be impacted by new zero-day exploit going on since last friday?

2021-12-15 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17460381#comment-17460381
 ] 

Dongjin Lee commented on KAFKA-13537:
-

[~rajnaik] [~showuon] Plus: for the trogdor and tools modules, please refer to 
[KAFKA-12399|https://issues.apache.org/jira/browse/KAFKA-12399].

> Will kafka_2.12-2.3.0 version be impacted by new zero-day exploit going on 
> since last friday?
> -
>
> Key: KAFKA-13537
> URL: https://issues.apache.org/jira/browse/KAFKA-13537
> Project: Kafka
>  Issue Type: Bug
> Environment: All
>Reporter: Rajendra
>Priority: Major
>
> h3. A new zero-day exploit has been reported against the popular Log4J2 
> library which can allow an attacker to remotely execute code.
> h3. Affected Software
> A significant number of Java-based applications are using log4j as their 
> logging utility and are vulnerable to this CVE. To the best of our knowledge, 
> at least the following software may be impacted:
>  * Apache Struts
>  * Apache Solr
>  * Apache Druid
>  * Apache Flink
>  * ElasticSearch
>  * Flume
>  * Apache Dubbo
>  * Logstash
>  * Kafka
>  * Spring-Boot-starter-log4j2
> Wondering if kafka_2.12-2.3.0 is impacted. I see the libraries below.
> kafka-log4j-appender-2.3.0.jar  log4j-1.2.17.jar  
> scala-logging_2.12-3.9.0.jar  slf4j-log4j12-1.7.26.jar
>  
>  
>  
>  





[jira] [Resolved] (KAFKA-13547) Kafka - 1.0.0 | Remove log4j.jar

2021-12-15 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee resolved KAFKA-13547.
-
Resolution: Duplicate

It seems like you are trying VerifiableProducer in the tools module, right? 
Removing log4j 1.x from the tools module is work in progress in KAFKA-12399.

> Kafka - 1.0.0 | Remove log4j.jar
> 
>
> Key: KAFKA-13547
> URL: https://issues.apache.org/jira/browse/KAFKA-13547
> Project: Kafka
>  Issue Type: Bug
>Reporter: masood
>Priority: Blocker
>
> We wanted to remove the log4j.jar but ended up with a dependency on the 
> kafka.producer.ProducerConfig.
> Caused by: java.lang.NoClassDefFoundError: org/apache/log4j/Logger
>     at kafka.utils.Logging.logger(Logging.scala:24)
>     at kafka.utils.Logging.logger$(Logging.scala:24)
>     at 
> kafka.utils.VerifiableProperties.logger$lzycompute(VerifiableProperties.scala:27)
>     at kafka.utils.VerifiableProperties.logger(VerifiableProperties.scala:27)
>     at kafka.utils.Logging.info(Logging.scala:71)
>     at kafka.utils.Logging.info$(Logging.scala:70)
>     at kafka.utils.VerifiableProperties.info(VerifiableProperties.scala:27)
>     at kafka.utils.VerifiableProperties.verify(VerifiableProperties.scala:218)
>     at kafka.producer.ProducerConfig.(ProducerConfig.scala:61)
> Is there any configuration available that can resolve this error?
> Please note we are not using log4j.properties or any other log4j logging 
> mechanism for the Kafka connection in the application.





[jira] [Commented] (KAFKA-9366) Upgrade log4j to log4j2

2021-12-15 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17460377#comment-17460377
 ] 

Dongjin Lee commented on KAFKA-9366:


[~showuon] Sure. The PR is now updated to handle log4j2 2.16.0.

> Upgrade log4j to log4j2
> ---
>
> Key: KAFKA-9366
> URL: https://issues.apache.org/jira/browse/KAFKA-9366
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0, 2.1.1, 2.3.0, 2.4.0
>Reporter: leibo
>Assignee: Dongjin Lee
>Priority: Critical
>  Labels: needs-kip
> Fix For: 3.2.0
>
>
> h2. CVE-2019-17571 Detail
> Included in Log4j 1.2 is a SocketServer class that is vulnerable to 
> deserialization of untrusted data which can be exploited to remotely execute 
> arbitrary code when combined with a deserialization gadget when listening to 
> untrusted network traffic for log data. This affects Log4j versions up to 1.2 
> up to 1.2.17.
>  
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17571]
>  





[jira] [Assigned] (KAFKA-13516) Connection level metrics are not closed

2021-12-09 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee reassigned KAFKA-13516:
---

Assignee: Dongjin Lee

> Connection level metrics are not closed
> ---
>
> Key: KAFKA-13516
> URL: https://issues.apache.org/jira/browse/KAFKA-13516
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.0.0
>Reporter: Aman Agarwal
>Assignee: Dongjin Lee
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Connection level metrics are not closed by the Selector on connection close, 
> hence leaking the sensors.





[jira] [Commented] (KAFKA-13247) Adding functionality for loading private key entry by alias from the keystore

2021-12-08 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17455241#comment-17455241
 ] 

Dongjin Lee commented on KAFKA-13247:
-

[~tmargaryan] I see. But then, I think using a separate keystore would be 
better. What do you think?

> Adding functionality for loading private key entry by alias from the keystore
> -
>
> Key: KAFKA-13247
> URL: https://issues.apache.org/jira/browse/KAFKA-13247
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tigran Margaryan
>Priority: Major
>  Labels: kip-required
>
> Hello team,
> While configuring SSL for Kafka connectivity, I found out that there is no 
> way to choose/load a private key entry by alias from the keystore 
> defined via 
> org.apache.kafka.common.config.SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG. It 
> turns out that the keystore cannot have multiple private key entries.
> I kindly ask you to add such a config (something like SSL_KEY_ALIAS_CONFIG) 
> to SslConfigs, with the corresponding functionality that loads only the 
> private key entry with the defined alias.
>  
> Thanks in advance. 





[jira] [Assigned] (KAFKA-13518) Update gson and netty-codec in 3.0.0

2021-12-08 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee reassigned KAFKA-13518:
---

Assignee: Dongjin Lee

> Update gson and netty-codec in 3.0.0
> 
>
> Key: KAFKA-13518
> URL: https://issues.apache.org/jira/browse/KAFKA-13518
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 3.0.0
>Reporter: Pavel Kuznetsov
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: security
>
> *Describe the bug*
> I checked kafka_2.13-3.0.0.tgz distribution with WhiteSource and find out 
> that some libraries have vulnerabilities.
> Here they are:
> * gson-2.8.6.jar has WS-2021-0419 vulnerability. The way to fix it is to 
> upgrade to com.google.code.gson:gson:2.8.9
> * netty-codec-4.1.65.Final.jar has CVE-2021-37136 and CVE-2021-37137 
> vulnerabilities. The way to fix it is to upgrade to 
> io.netty:netty-codec:4.1.68.Final
> *To Reproduce*
> Download kafka_2.13-3.0.0.tgz and find jars, listed above.
> Check that these jars with corresponding versions are mentioned in 
> corresponding vulnerability description.
> *Expected behavior*
> * gson upgraded to 2.8.9 or higher
> * netty-codec upgraded to 4.1.68.Final or higher
> *Actual behaviour*
> * gson is 2.8.6
> * netty-codec is 4.1.65.Final





[jira] [Commented] (KAFKA-13518) Update gson and netty-codec in 3.0.0

2021-12-08 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17455192#comment-17455192
 ] 

Dongjin Lee commented on KAFKA-13518:
-

The security problems on netty-codec were already fixed with KAFKA-13294. It 
will be shipped with AK 3.1.0 and 3.0.1.

In the case of gson, this problem is introduced by spotbugs 4.2.2. [spotbugs 
4.5.0|https://mvnrepository.com/artifact/com.github.spotbugs/spotbugs/4.5.0] 
uses [gson 
2.8.9|https://github.com/google/gson/releases/tag/gson-parent-2.8.9], which 
[resolves|https://github.com/google/gson/pull/1991] WS-2021-0419 vulnerability.

> Update gson and netty-codec in 3.0.0
> 
>
> Key: KAFKA-13518
> URL: https://issues.apache.org/jira/browse/KAFKA-13518
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 3.0.0
>Reporter: Pavel Kuznetsov
>Priority: Major
>  Labels: security
>
> *Describe the bug*
> I checked kafka_2.13-3.0.0.tgz distribution with WhiteSource and find out 
> that some libraries have vulnerabilities.
> Here they are:
> * gson-2.8.6.jar has WS-2021-0419 vulnerability. The way to fix it is to 
> upgrade to com.google.code.gson:gson:2.8.9
> * netty-codec-4.1.65.Final.jar has CVE-2021-37136 and CVE-2021-37137 
> vulnerabilities. The way to fix it is to upgrade to 
> io.netty:netty-codec:4.1.68.Final
> *To Reproduce*
> Download kafka_2.13-3.0.0.tgz and find jars, listed above.
> Check that these jars with corresponding versions are mentioned in 
> corresponding vulnerability description.
> *Expected behavior*
> * gson upgraded to 2.8.9 or higher
> * netty-codec upgraded to 4.1.68.Final or higher
> *Actual behaviour*
> * gson is 2.8.6
> * netty-codec is 4.1.65.Final





[jira] [Commented] (KAFKA-13465) when auto create topics enable,server create inner topic of MirrorMaker unexpectedly

2021-11-23 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17448316#comment-17448316
 ] 

Dongjin Lee commented on KAFKA-13465:
-

It seems like you created the topic 'mm2-configs.pr.internal' with 
'cleanup.policy=delete' (the default) by producing a record; it does not seem 
to have been created by MM2. Could you check?

>  when auto create topics enable,server create inner topic of MirrorMaker 
> unexpectedly
> -
>
> Key: KAFKA-13465
> URL: https://issues.apache.org/jira/browse/KAFKA-13465
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.7.0, 2.8.0, 2.7.1, 3.0.0
>Reporter: ZhenChun Pan
>Priority: Major
>
> Hi Team
> Mirror Maker: 2.7.0
> When I enable auto topic creation on both sides:
> auto.create.topics.enable=true
>  
> sometimes the MirrorMaker internal topic is created by the broker with 
> unexpected settings, and MirrorMaker fails to start.
> ```
> [2021-11-19 18:03:56,707] ERROR [Worker clientId=connect-2, groupId=pr-mm2] 
> Uncaught exception in herder work thread, exiting: 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> org.apache.kafka.common.config.ConfigException: Topic 
> 'mm2-configs.pr.internal' supplied via the 'config.storage.topic' property is 
> required to have 'cleanup.policy=compact' to guarantee 
> consistency and durability of connector configurations, but found the topic 
> currently has 'cleanup.policy=delete'. Continuing would likely result in 
> eventually losing connector configurations and problems restarting this 
> Connect cluster in the future. Change the 'config.storage.topic' property in 
> the Connect worker configurations to use a topic with 
> 'cleanup.policy=compact'.
> at 
> org.apache.kafka.connect.util.TopicAdmin.verifyTopicCleanupPolicyOnlyCompact(TopicAdmin.java:420)
> at 
> org.apache.kafka.connect.storage.KafkaConfigBackingStore$1.run(KafkaConfigBackingStore.java:501)
> at org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:133)
> at 
> org.apache.kafka.connect.storage.KafkaConfigBackingStore.start(KafkaConfigBackingStore.java:268)
> at 
> org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:130)
> at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:288)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> ```
> I think the solution is to exclude MirrorMaker internal topics when 
> auto-creating topics (AutoTopicCreationManager.scala).
> With this change, the problem is resolved for me.





[jira] [Commented] (KAFKA-13294) Upgrade Netty to 4.1.68 for CVE fixes

2021-11-15 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17444139#comment-17444139
 ] 

Dongjin Lee commented on KAFKA-13294:
-

[~mimaison] +1.

> Upgrade Netty to 4.1.68 for CVE fixes
> -
>
> Key: KAFKA-13294
> URL: https://issues.apache.org/jira/browse/KAFKA-13294
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.8.0
>Reporter: Utkarsh Khare
>Assignee: Dongjin Lee
>Priority: Minor
> Fix For: 3.1.0
>
>
> netty has reported a couple of CVEs regarding the usage of Bzip2Decoder and 
> SnappyFrameDecoder. 
> Reference :
> [CVE-2021-37136 - 
> https://github.com/netty/netty/security/advisories/GHSA-grg4-wf29-r9vv|https://github.com/netty/netty/security/advisories/GHSA-grg4-wf29-r9vv]
> [CVE-2021-37137 - 
> https://github.com/netty/netty/security/advisories/GHSA-9vjp-v76f-g363|https://github.com/netty/netty/security/advisories/GHSA-9vjp-v76f-g363]
>  
> Can we upgrade Netty to version 4.1.68.Final to fix this ? 





[jira] [Assigned] (KAFKA-13354) Topic metrics count request rate inconsistently with other request metrics

2021-11-07 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee reassigned KAFKA-13354:
---

Assignee: Dongjin Lee

> Topic metrics count request rate inconsistently with other request metrics
> --
>
> Key: KAFKA-13354
> URL: https://issues.apache.org/jira/browse/KAFKA-13354
> Project: Kafka
>  Issue Type: Bug
>  Components: core, metrics
>Reporter: David Mao
>Assignee: Dongjin Lee
>Priority: Minor
>
> The request rate metrics in BrokerTopicMetrics are incremented per partition 
> in a Produce request. If a produce requests has multiple partitions for the 
> same topic in the request, then the request will get counted multiple times.
> This is inconsistent with how we count request rate metrics elsewhere.
> The same applies to the TotalFetchRequest rate metric
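
A minimal sketch of the inconsistency (hypothetical counters, not the actual BrokerTopicMetrics code): a produce request carrying three partitions of one topic bumps a per-partition counter three times, while the per-request convention would bump it once:

```java
import java.util.HashSet;
import java.util.List;

public class RequestRateDemo {
    // Counting once per partition: a produce request with 3 partitions of the
    // same topic increments that topic's request counter 3 times.
    static long perPartitionCount(List<String> topicOfPartition, String topic) {
        return topicOfPartition.stream().filter(topic::equals).count();
    }

    // Counting once per request: each distinct topic in the request increments
    // its counter exactly once, consistent with the other request metrics.
    static long perRequestCount(List<String> topicOfPartition, String topic) {
        return new HashSet<>(topicOfPartition).contains(topic) ? 1 : 0;
    }

    public static void main(String[] args) {
        // One produce request touching three partitions of topic "orders".
        List<String> partitions = List.of("orders", "orders", "orders");
        System.out.println(perPartitionCount(partitions, "orders")); // 3
        System.out.println(perRequestCount(partitions, "orders"));   // 1
    }
}
```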





[jira] [Created] (KAFKA-13436) Omitted BrokerTopicMetrics metrics in the documentation

2021-11-06 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-13436:
---

 Summary: Omitted BrokerTopicMetrics metrics in the documentation
 Key: KAFKA-13436
 URL: https://issues.apache.org/jira/browse/KAFKA-13436
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Reporter: Dongjin Lee
Assignee: Dongjin Lee


At present, there are 18 'kafka.server:type=BrokerTopicMetrics' metrics, but 
only 13 of them are described in the documentation.

The omitted metrics are:
 * kafka.server:type=BrokerTopicMetrics,name=TotalProduceRequestsPerSec
 * kafka.server:type=BrokerTopicMetrics,name=TotalFetchRequestsPerSec
 * kafka.server:type=BrokerTopicMetrics,name=FailedProduceRequestsPerSec
 * kafka.server:type=BrokerTopicMetrics,name=FailedFetchRequestsPerSec
 * kafka.server:type=BrokerTopicMetrics,name=BytesRejectedPerSec





[jira] [Commented] (KAFKA-13430) Remove broker-wide quota properties from the documentation

2021-11-03 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17438118#comment-17438118
 ] 

Dongjin Lee commented on KAFKA-13430:
-

[~dajac] Could you kindly have a look? It seems like we should also address 
this issue in 3.1.0.

> Remove broker-wide quota properties from the documentation
> --
>
> Key: KAFKA-13430
> URL: https://issues.apache.org/jira/browse/KAFKA-13430
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Major
>
> I found this problem while working on 
> [KAFKA-13341|https://issues.apache.org/jira/browse/KAFKA-13341].
> Broker-wide quota properties ({{quota.producer.default}}, 
> {{quota.consumer.default}}) are [removed in 
> 3.0|https://issues.apache.org/jira/browse/KAFKA-12591], but it is not applied 
> to the documentation yet.





[jira] [Updated] (KAFKA-13430) Remove broker-wide quota properties from the documentation

2021-11-03 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee updated KAFKA-13430:

Description: 
I found this problem while working on 
[KAFKA-13341|https://issues.apache.org/jira/browse/KAFKA-13341].

Broker-wide quota properties ({{quota.producer.default}}, 
{{quota.consumer.default}}) are [removed in 
3.0|https://issues.apache.org/jira/browse/KAFKA-12591], but it is not applied 
to the documentation yet.

  was:
I found this problem while working on 
[KAFKA-13341|https://issues.apache.org/jira/browse/KAFKA-13341].

Broker-wide quota properties ({{quota.producer.default}}, 
{{quota.consumer.default}}) are removed in 3.0, but it is not applied to the 
documentation yet.


> Remove broker-wide quota properties from the documentation
> --
>
> Key: KAFKA-13430
> URL: https://issues.apache.org/jira/browse/KAFKA-13430
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Major
>
> I found this problem while working on 
> [KAFKA-13341|https://issues.apache.org/jira/browse/KAFKA-13341].
> Broker-wide quota properties ({{quota.producer.default}}, 
> {{quota.consumer.default}}) are [removed in 
> 3.0|https://issues.apache.org/jira/browse/KAFKA-12591], but it is not applied 
> to the documentation yet.





[jira] [Created] (KAFKA-13430) Remove broker-wide quota properties from the documentation

2021-11-03 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-13430:
---

 Summary: Remove broker-wide quota properties from the documentation
 Key: KAFKA-13430
 URL: https://issues.apache.org/jira/browse/KAFKA-13430
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Dongjin Lee
Assignee: Dongjin Lee


I found this problem while working on 
[KAFKA-13341|https://issues.apache.org/jira/browse/KAFKA-13341].

Broker-wide quota properties ({{quota.producer.default}}, 
{{quota.consumer.default}}) are removed in 3.0, but it is not applied to the 
documentation yet.





[jira] [Commented] (KAFKA-13247) Adding functionality for loading private key entry by alias from the keystore

2021-11-02 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17437398#comment-17437398
 ] 

Dongjin Lee commented on KAFKA-13247:
-

Hi [~tmargaryan] 

Thank you for reporting this issue. I have been working on the security 
features recently, and it seems like I can take this issue; but before that, I 
have one question: could you clarify a case where Kafka needs this feature? 
AFAIK, the Kafka broker uses only one private key in the keystore in most 
cases. Is there any case I have not encountered yet?

> Adding functionality for loading private key entry by alias from the keystore
> -
>
> Key: KAFKA-13247
> URL: https://issues.apache.org/jira/browse/KAFKA-13247
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tigran Margaryan
>Priority: Major
>  Labels: kip-required
>
> Hello team,
> While configuring SSL for Kafka connectivity , I found out that there is no 
> possibility to choose/load the private key entry by alias from the keystore 
> defined via 
> org.apache.kafka.common.config.SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG. It 
> turns out that the keystore cannot have multiple private key entries.
> Kindly ask you to add that config (something like SSL_KEY_ALIAS_CONFIG) into 
> SslConfigs with the corresponding functionality, which should load only the 
> private key entry for the defined alias.
>  
> Thanks in advance. 





[jira] [Assigned] (KAFKA-13341) Quotas are not applied to requests with null clientId

2021-10-30 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee reassigned KAFKA-13341:
---

Assignee: Dongjin Lee

> Quotas are not applied to requests with null clientId
> -
>
> Key: KAFKA-13341
> URL: https://issues.apache.org/jira/browse/KAFKA-13341
> Project: Kafka
>  Issue Type: Bug
>Reporter: David Mao
>Assignee: Dongjin Lee
>Priority: Major
>
> ClientQuotaManager.DefaultQuotaCallback will not check for the existence of a 
> default quota if a request's clientId is null. This results in null clientIds 
> bypassing quotas.
> Null clientIds are permitted in the protocol, so this seems like a bug.
> This looks like it may be a regression introduced by 
> https://github.com/apache/kafka/pull/7372
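
The reported bypass can be sketched as follows (an illustrative lookup only; the real logic lives in ClientQuotaManager.DefaultQuotaCallback):

```java
import java.util.HashMap;
import java.util.Map;

public class QuotaLookupDemo {
    static final String DEFAULT_KEY = "<default>";

    // Buggy pattern resembling the report: a null clientId short-circuits
    // before the default-quota lookup, so no quota is applied at all.
    static Double quotaBuggy(Map<String, Double> quotas, String clientId) {
        if (clientId == null) {
            return null; // quota check skipped entirely
        }
        return quotas.getOrDefault(clientId, quotas.get(DEFAULT_KEY));
    }

    // Fixed: fall back to the default quota even when clientId is null,
    // since null clientIds are permitted by the protocol.
    static Double quotaFixed(Map<String, Double> quotas, String clientId) {
        Double specific = (clientId == null) ? null : quotas.get(clientId);
        return (specific != null) ? specific : quotas.get(DEFAULT_KEY);
    }

    public static void main(String[] args) {
        Map<String, Double> quotas = new HashMap<>();
        quotas.put(DEFAULT_KEY, 1024.0);
        System.out.println(quotaBuggy(quotas, null)); // null: quota bypassed
        System.out.println(quotaFixed(quotas, null)); // 1024.0: default applied
    }
}
```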





[jira] [Commented] (KAFKA-7632) Support Compression Level

2021-10-29 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17435884#comment-17435884
 ] 

Dongjin Lee commented on KAFKA-7632:


[~benedikt] Oh yes, the issue description is obsolete; Thanks for reminding me. 
I updated the description accordingly.

> Support Compression Level
> -
>
> Key: KAFKA-7632
> URL: https://issues.apache.org/jira/browse/KAFKA-7632
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.1.0
> Environment: all
>Reporter: Dave Waters
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.1.0
>
>
> The compression level for ZSTD is currently set to use the default level (3), 
> which is a conservative setting that in some use cases eliminates the value 
> that ZSTD provides with improved compression. Each use case will vary, so 
> exposing the level as a producer, broker, and topic configuration setting 
> will allow the user to adjust the level.
> Since it applies to the other compression codecs, we should add the same 
> functionalities to them.





[jira] [Updated] (KAFKA-7632) Support Compression Level

2021-10-29 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee updated KAFKA-7632:
---
Description: 
The compression level for ZSTD is currently set to use the default level (3), 
which is a conservative setting that in some use cases eliminates the value 
that ZSTD provides with improved compression. Each use case will vary, so 
exposing the level as a producer, broker, and topic configuration setting will 
allow the user to adjust the level.

Since it applies to the other compression codecs, we should add the same 
functionalities to them.

  was:
The compression level for ZSTD is currently set to use the default level (3), 
which is a conservative setting that in some use cases eliminates the value 
that ZSTD provides with improved compression. Each use case will vary, so 
exposing the level as a broker configuration setting will allow the user to 
adjust the level.

Since it applies to the other compression codecs, we should add the same 
functionalities to them.


> Support Compression Level
> -
>
> Key: KAFKA-7632
> URL: https://issues.apache.org/jira/browse/KAFKA-7632
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.1.0
> Environment: all
>Reporter: Dave Waters
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.1.0
>
>
> The compression level for ZSTD is currently set to use the default level (3), 
> which is a conservative setting that in some use cases eliminates the value 
> that ZSTD provides with improved compression. Each use case will vary, so 
> exposing the level as a producer, broker, and topic configuration setting 
> will allow the user to adjust the level.
> Since it applies to the other compression codecs, we should add the same 
> functionalities to them.
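
The size/speed trade-off behind a configurable level can be demonstrated with the JDK's built-in DEFLATE implementation, which exposes the same kind of knob. This is only an analogy: Kafka's zstd level would be set through the zstd-jni library, not java.util.zip.

```java
import java.util.zip.Deflater;

public class CompressionLevelDemo {
    // Compress the input at the given level and return the compressed size.
    static int compressedSize(byte[] input, int level) {
        Deflater deflater = new Deflater(level);
        deflater.setInput(input);
        deflater.finish();
        byte[] buf = new byte[input.length + 64];
        int total = 0;
        while (!deflater.finished()) {
            total += deflater.deflate(buf); // we only count bytes, output is discarded
        }
        deflater.end();
        return total;
    }

    public static void main(String[] args) {
        // Moderately repetitive payload, roughly like log or metrics data.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 2000; i++) {
            sb.append("kafka-compression-level-demo ").append(i % 7);
        }
        byte[] data = sb.toString().getBytes();

        int fast = compressedSize(data, Deflater.BEST_SPEED);       // level 1
        int best = compressedSize(data, Deflater.BEST_COMPRESSION); // level 9
        System.out.println("level 1 size: " + fast);
        System.out.println("level 9 size: " + best);
        System.out.println("level 9 <= level 1: " + (best <= fast));
    }
}
```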





[jira] [Assigned] (KAFKA-13294) Upgrade Netty to 4.1.68 for CVE fixes

2021-10-28 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee reassigned KAFKA-13294:
---

Assignee: Dongjin Lee  (was: Utkarsh Khare)

> Upgrade Netty to 4.1.68 for CVE fixes
> -
>
> Key: KAFKA-13294
> URL: https://issues.apache.org/jira/browse/KAFKA-13294
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.8.0
>Reporter: Utkarsh Khare
>Assignee: Dongjin Lee
>Priority: Minor
>
> netty has reported a couple of CVEs regarding the usage of Bzip2Decoder and 
> SnappyFrameDecoder. 
> Reference :
> [CVE-2021-37136 - 
> https://github.com/netty/netty/security/advisories/GHSA-grg4-wf29-r9vv|https://github.com/netty/netty/security/advisories/GHSA-grg4-wf29-r9vv]
> [CVE-2021-37137 - 
> https://github.com/netty/netty/security/advisories/GHSA-9vjp-v76f-g363|https://github.com/netty/netty/security/advisories/GHSA-9vjp-v76f-g363]
>  
> Can we upgrade Netty to version 4.1.68.Final to fix this ? 





[jira] [Resolved] (KAFKA-13294) Upgrade Netty to 4.1.68 for CVE fixes

2021-10-28 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee resolved KAFKA-13294.
-
Resolution: Fixed

[~mimaison] Here it is.

> Upgrade Netty to 4.1.68 for CVE fixes
> -
>
> Key: KAFKA-13294
> URL: https://issues.apache.org/jira/browse/KAFKA-13294
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.8.0
>Reporter: Utkarsh Khare
>Assignee: Dongjin Lee
>Priority: Minor
>
> netty has reported a couple of CVEs regarding the usage of Bzip2Decoder and 
> SnappyFrameDecoder. 
> Reference :
> [CVE-2021-37136 - 
> https://github.com/netty/netty/security/advisories/GHSA-grg4-wf29-r9vv|https://github.com/netty/netty/security/advisories/GHSA-grg4-wf29-r9vv]
> [CVE-2021-37137 - 
> https://github.com/netty/netty/security/advisories/GHSA-9vjp-v76f-g363|https://github.com/netty/netty/security/advisories/GHSA-9vjp-v76f-g363]
>  
> Can we upgrade Netty to version 4.1.68.Final to fix this ? 





[jira] [Commented] (KAFKA-13392) Timeout Exception triggering reassign partitions with --bootstrap-server option

2021-10-24 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17433522#comment-17433522
 ] 

Dongjin Lee commented on KAFKA-13392:
-

[~korjek] It seems like you are contacting the brokers with '--bootstrap-server 
{the-third-broker}'. Aren't you?

> Timeout Exception triggering reassign partitions with --bootstrap-server 
> option
> ---
>
> Key: KAFKA-13392
> URL: https://issues.apache.org/jira/browse/KAFKA-13392
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.8.0
>Reporter: Yevgeniy Korin
>Priority: Minor
>
> *Scenario in which we faced this issue:*
>  One of three brokers is down. We add another (fourth) broker and try to 
> reassign partitions using the '--bootstrap-server' option.
> *What's failed:*
> {code:java}
> /opt/kafka/bin/kafka-reassign-partitions.sh --bootstrap-server 
> xxx.xxx.xxx.xxx:9092 --reassignment-json-file 
> /tmp/reassignment-20211021130718.json --throttle 1 --execute{code}
> failed with
> {code:java}
> Error: org.apache.kafka.common.errors.TimeoutException: 
> Call(callName=incrementalAlterConfigs, deadlineMs=1634811369255, tries=1, 
> nextAllowedTryMs=1634811369356) timed out at 1634811369256 after 1 attempt(s)
>  java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: 
> Call(callName=incrementalAlterConfigs, deadlineMs=1634811369255, tries=1, 
> nextAllowedTryMs=1634811369356) timed out at 1634811369256 after 1 attempt(s)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> kafka.admin.ReassignPartitionsCommand$.modifyInterBrokerThrottle(ReassignPartitionsCommand.scala:1435)
>  at 
> kafka.admin.ReassignPartitionsCommand$.modifyReassignmentThrottle(ReassignPartitionsCommand.scala:1412)
>  at 
> kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:974)
>  at 
> kafka.admin.ReassignPartitionsCommand$.handleAction(ReassignPartitionsCommand.scala:255)
>  at 
> kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:216)
>  at 
> kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)
>  Caused by: org.apache.kafka.common.errors.TimeoutException: 
> Call(callName=incrementalAlterConfigs, deadlineMs=1634811369255, tries=1, 
> nextAllowedTryMs=1634811369356) timed out at 1634811369256 after 1 attempt(s)
>  Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out 
> waiting for a node assignment. Call: incrementalAlterConfigs{code}
>  *Expected behavior:*
>  partition reassignment process started.
> *Workaround:*
>  Trigger partition reassignment process using '--zookeeper' option:
> {code:java}
> /opt/kafka/bin/kafka-reassign-partitions.sh --zookeeper 
> zookeeper.my.company:2181/kafka-cluster --reassignment-json-file 
> /tmp/reassignment-20211021130718.json --throttle 1 --execute{code}
>  *Additional info:*
>  We are able to trigger partition reassignment using the '--bootstrap-server' 
> option with no exceptions when all four brokers are alive.





[jira] [Updated] (KAFKA-13397) Honor 'replication.policy.separator' configuration when creating MirrorMaker2 internal topics

2021-10-24 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee updated KAFKA-13397:

Description: 
This issue is a follow-up of KAFKA-10777 
([KIP-690|https://cwiki.apache.org/confluence/display/KAFKA/KIP-690%3A+Add+additional+configuration+to+control+MirrorMaker+2+internal+topics+naming+convention]).

As of present, a user can set custom 'replication.policy.separator' 
configuration in MirrorMaker 2. It determines the topic name of internal topics 
like heartbeats, checkpoints, and offset-syncs 
([KIP-690|https://cwiki.apache.org/confluence/display/KAFKA/KIP-690%3A+Add+additional+configuration+to+control+MirrorMaker+2+internal+topics+naming+convention]).

However, there are some glitches here:
 # MirrorMaker2 creates internal topics to track the offsets, configs, and 
status of the MM2 tasks. But, these topics are not affected by a custom 
'replication.policy.separator' setting - that is, these topics may be 
replicated against the user's intention.
 # The internal topic names include a dash in their name (e.g., 
'mm2-offsets.\{source}.internal') so, a single '-' should be disallowed when 
configuring 'replication.policy.separator'.

  was:
This issue is a follow-up of KAFKA-10777 
([KIP-690|https://cwiki.apache.org/confluence/display/KAFKA/KIP-690%3A+Add+additional+configuration+to+control+MirrorMaker+2+internal+topics+naming+convention])
 and KAFKA-12379 
([KIP-716|https://cwiki.apache.org/confluence/display/KAFKA/KIP-716%3A+Allow+configuring+the+location+of+the+offset-syncs+topic+with+MirrorMaker2]).

As of present, a user can set custom 'replication.policy.separator' 
configuration in MirrorMaker 2. It determines the topic name of internal topics 
like heartbeats, checkpoints, and offset-syncs 
([KIP-690|https://cwiki.apache.org/confluence/display/KAFKA/KIP-690%3A+Add+additional+configuration+to+control+MirrorMaker+2+internal+topics+naming+convention]).
 Also, the user can configure on which side the offset-syncs topic will be 
created 
([KIP-716|https://cwiki.apache.org/confluence/display/KAFKA/KIP-716%3A+Allow+configuring+the+location+of+the+offset-syncs+topic+with+MirrorMaker2])
 with 'offset-syncs.topic.location'.

However, there are some glitches here:
 # MirrorMaker2 creates internal topics to track the offsets, configs, and 
status of the MM2 tasks. But, these topics are not affected by a custom 
'replication.policy.separator' setting - that is, these topics may be 
replicated against the user's intention.
 # The internal topic names include a dash in their name (e.g., 
'mm2-offsets.\{source}.internal') so, a single '-' should be disallowed when 
configuring 'replication.policy.separator'. 
 # By default, the offset-syncs topic is created in the source cluster and also 
follows the 'replication.policy.separator' configuration of the source side. 
But if the user changes the location of the offset-syncs topic to target, it 
still follows the source's 'replication.policy.separator' configuration, not 
target one.

These issues seem like a corner case between the two above KIPs.


> Honor 'replication.policy.separator' configuration when creating MirrorMaker2 
> internal topics
> -
>
> Key: KAFKA-13397
> URL: https://issues.apache.org/jira/browse/KAFKA-13397
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Minor
>
> This issue is a follow-up of KAFKA-10777 
> ([KIP-690|https://cwiki.apache.org/confluence/display/KAFKA/KIP-690%3A+Add+additional+configuration+to+control+MirrorMaker+2+internal+topics+naming+convention]).
> As of present, a user can set custom 'replication.policy.separator' 
> configuration in MirrorMaker 2. It determines the topic name of internal 
> topics like heartbeats, checkpoints, and offset-syncs 
> ([KIP-690|https://cwiki.apache.org/confluence/display/KAFKA/KIP-690%3A+Add+additional+configuration+to+control+MirrorMaker+2+internal+topics+naming+convention]).
> However, there are some glitches here:
>  # MirrorMaker2 creates internal topics to track the offsets, configs, and 
> status of the MM2 tasks. But, these topics are not affected by a custom 
> 'replication.policy.separator' setting - that is, these topics may be 
> replicated against the user's intention.
>  # The internal topic names include a dash in their name (e.g., 
> 'mm2-offsets.\{source}.internal') so, a single '-' should be disallowed when 
> configuring 'replication.policy.separator'.





[jira] [Created] (KAFKA-13397) Honor 'replication.policy.separator' configuration when creating MirrorMaker2 internal topics

2021-10-24 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-13397:
---

 Summary: Honor 'replication.policy.separator' configuration when 
creating MirrorMaker2 internal topics
 Key: KAFKA-13397
 URL: https://issues.apache.org/jira/browse/KAFKA-13397
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Reporter: Dongjin Lee
Assignee: Dongjin Lee


This issue is a follow-up of KAFKA-10777 
([KIP-690|https://cwiki.apache.org/confluence/display/KAFKA/KIP-690%3A+Add+additional+configuration+to+control+MirrorMaker+2+internal+topics+naming+convention])
 and KAFKA-12379 
([KIP-716|https://cwiki.apache.org/confluence/display/KAFKA/KIP-716%3A+Allow+configuring+the+location+of+the+offset-syncs+topic+with+MirrorMaker2]).

As of present, a user can set custom 'replication.policy.separator' 
configuration in MirrorMaker 2. It determines the topic name of internal topics 
like heartbeats, checkpoints, and offset-syncs 
([KIP-690|https://cwiki.apache.org/confluence/display/KAFKA/KIP-690%3A+Add+additional+configuration+to+control+MirrorMaker+2+internal+topics+naming+convention]).
 Also, the user can configure on which side the offset-syncs topic will be 
created 
([KIP-716|https://cwiki.apache.org/confluence/display/KAFKA/KIP-716%3A+Allow+configuring+the+location+of+the+offset-syncs+topic+with+MirrorMaker2])
 with 'offset-syncs.topic.location'.

However, there are some glitches here:
 # MirrorMaker2 creates internal topics to track the offsets, configs, and 
status of the MM2 tasks. But, these topics are not affected by a custom 
'replication.policy.separator' setting - that is, these topics may be 
replicated against the user's intention.
 # The internal topic names include a dash in their name (e.g., 
'mm2-offsets.\{source}.internal') so, a single '-' should be disallowed when 
configuring 'replication.policy.separator'. 
 # By default, the offset-syncs topic is created in the source cluster and also 
follows the 'replication.policy.separator' configuration of the source side. 
But if the user changes the location of the offset-syncs topic to target, it 
still follows the source's 'replication.policy.separator' configuration, not 
target one.

These issues seem like a corner case between the two above KIPs.
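
Glitch #2 can be made concrete with a small sketch (assuming DefaultReplicationPolicy-style naming, where a replicated topic becomes "<sourceCluster><separator><topic>"):

```java
public class SeparatorDemo {
    // Sketch of MM2-style remote topic naming; illustrative only.
    static String remoteTopic(String sourceCluster, String separator, String topic) {
        return sourceCluster + separator + topic;
    }

    public static void main(String[] args) {
        // Default '.' separator:
        System.out.println(remoteTopic("pr", ".", "orders")); // pr.orders

        // With a bare '-' separator, a replicated topic name becomes
        // indistinguishable from internal names such as
        // "mm2-offsets.pr.internal", which already contain a dash:
        System.out.println(remoteTopic("mm2", "-", "offsets.pr.internal"));
    }
}
```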





[jira] [Commented] (KAFKA-13352) Kafka Client does not support passwords starting with number in jaas config

2021-10-24 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17433415#comment-17433415
 ] 

Dongjin Lee commented on KAFKA-13352:
-

Hi [~bvn13], I looked into this issue a little more. In short, the problem is 
caused by "a string value that contains numbers, '-', '_', or '$' symbols," 
rather than by "beginning with a number". Please refer to the PR I made.

> Kafka Client does not support passwords starting with number in jaas config
> ---
>
> Key: KAFKA-13352
> URL: https://issues.apache.org/jira/browse/KAFKA-13352
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.7.1
>Reporter: Vyacheslav Boyko
>Assignee: Dongjin Lee
>Priority: Trivial
>
> I'm trying to connect to Kafka with Apache Camel's component.
> I have SASL JAAS CONFIG param as:
> {code:java}
> "org.apache.kafka.common.security.plain.PlainLoginModule required 
> username=pf_kafka_card-products password=8GMf0yWkLHrI4cNYYoyHGxclkXCLSCGJ;" 
> {code}
> And I faced an issue during my application starts:
> {code:java}
> Caused by: java.lang.IllegalArgumentException: Value not specified for key 
> 'password' in JAAS config {code}
> I have tried to inspect this issue. I prepared a block of code to reproduce 
> it (Original code is in JaasConfig.java in kafka-clients-...jar). Here it is:
> {code:java}
> public static void main(String[] args) {
> String test = "org.apache.kafka.common.security.plain.PlainLoginModule 
> required username=pf_kafka_card-products 
> password=8GMf0yWkLHrI4cNYYoyHGxclkXCLSCGJ;";
> testJaasConfig(test);
> //SpringApplication.run(CardApplication.class, args);
> }
> private static void testJaasConfig(String config) {
> StreamTokenizer tokenizer = new StreamTokenizer(new StringReader(config));
> tokenizer.slashSlashComments(true);
> tokenizer.slashStarComments(true);
> tokenizer.wordChars('-', '-');
> tokenizer.wordChars('_', '_');
> tokenizer.wordChars('$', '$');
> tokenizer.wordChars('0', '9');
> List<AppConfigurationEntry> configEntries;
> try {
> configEntries = new ArrayList<>();
> while (tokenizer.nextToken() != StreamTokenizer.TT_EOF) {
> configEntries.add(parseAppConfigurationEntry(tokenizer));
> }
> if (configEntries.isEmpty())
> throw new IllegalArgumentException("Login module not specified in 
> JAAS config");
> } catch (IOException e) {
> throw new KafkaException("Unexpected exception while parsing JAAS 
> config");
> }
> }
> private static AppConfigurationEntry 
> parseAppConfigurationEntry(StreamTokenizer tokenizer) throws IOException {
> String loginModule = tokenizer.sval;
> if (tokenizer.nextToken() == StreamTokenizer.TT_EOF)
> throw new IllegalArgumentException("Login module control flag not 
> specified in JAAS config");
> AppConfigurationEntry.LoginModuleControlFlag controlFlag = 
> loginModuleControlFlag(tokenizer.sval);
> Map<String, Object> options = new HashMap<>();
> while (tokenizer.nextToken() != StreamTokenizer.TT_EOF && tokenizer.ttype 
> != ';') {
> String key = tokenizer.sval;
> if (tokenizer.nextToken() != '=' || tokenizer.nextToken() == 
> StreamTokenizer.TT_EOF || tokenizer.sval == null)
> throw new IllegalArgumentException("Value not specified for key 
> '" + key + "' in JAAS config");
> String value = tokenizer.sval;
> options.put(key, value);
> }
> if (tokenizer.ttype != ';')
> throw new IllegalArgumentException("JAAS config entry not terminated 
> by semi-colon");
> return new AppConfigurationEntry(loginModule, controlFlag, options);
> }
> private static AppConfigurationEntry.LoginModuleControlFlag 
> loginModuleControlFlag(String flag) {
> if (flag == null)
> throw new IllegalArgumentException("Login module control flag is not 
> available in the JAAS config");
> AppConfigurationEntry.LoginModuleControlFlag controlFlag;
> switch (flag.toUpperCase(Locale.ROOT)) {
> case "REQUIRED":
> controlFlag = 
> AppConfigurationEntry.LoginModuleControlFlag.REQUIRED;
> break;
> case "REQUISITE":
> controlFlag = 
> AppConfigurationEntry.LoginModuleControlFlag.REQUISITE;
> break;
> case "SUFFICIENT":
> controlFlag = 
> AppConfigurationEntry.LoginModuleControlFlag.SUFFICIENT;
> break;
> case "OPTIONAL":
> controlFlag = 
> AppConfigurationEntry.LoginModuleControlFlag.OPTIONAL;
> break;
> default:
> throw new IllegalArgumentException("Invalid login module control 
> flag '" + flag + "' in JAAS config");
> }
> return controlFlag;
> }
>  {code}
> I have solved this

[jira] [Assigned] (KAFKA-13352) Kafka Client does not support passwords starting with number in jaas config

2021-10-24 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee reassigned KAFKA-13352:
---

Assignee: Dongjin Lee

> Kafka Client does not support passwords starting with number in jaas config
> ---
>
> Key: KAFKA-13352
> URL: https://issues.apache.org/jira/browse/KAFKA-13352
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.7.1
>Reporter: Vyacheslav Boyko
>Assignee: Dongjin Lee
>Priority: Trivial
>
> I'm trying to connect to Kafka with Apache Camel's component.
> I have SASL JAAS CONFIG param as:
> {code:java}
> "org.apache.kafka.common.security.plain.PlainLoginModule required 
> username=pf_kafka_card-products password=8GMf0yWkLHrI4cNYYoyHGxclkXCLSCGJ;" 
> {code}
> And I faced an issue during my application starts:
> {code:java}
> Caused by: java.lang.IllegalArgumentException: Value not specified for key 
> 'password' in JAAS config {code}
> I have tried to inspect this issue. I prepared a block of code to reproduce 
> it (Original code is in JaasConfig.java in kafka-clients-...jar). Here it is:
> {code:java}
> public static void main(String[] args) {
> String test = "org.apache.kafka.common.security.plain.PlainLoginModule 
> required username=pf_kafka_card-products 
> password=8GMf0yWkLHrI4cNYYoyHGxclkXCLSCGJ;";
> testJaasConfig(test);
> //SpringApplication.run(CardApplication.class, args);
> }
> private static void testJaasConfig(String config) {
> StreamTokenizer tokenizer = new StreamTokenizer(new StringReader(config));
> tokenizer.slashSlashComments(true);
> tokenizer.slashStarComments(true);
> tokenizer.wordChars('-', '-');
> tokenizer.wordChars('_', '_');
> tokenizer.wordChars('$', '$');
> tokenizer.wordChars('0', '9');
> List<AppConfigurationEntry> configEntries;
> try {
> configEntries = new ArrayList<>();
> while (tokenizer.nextToken() != StreamTokenizer.TT_EOF) {
> configEntries.add(parseAppConfigurationEntry(tokenizer));
> }
> if (configEntries.isEmpty())
> throw new IllegalArgumentException("Login module not specified in 
> JAAS config");
> } catch (IOException e) {
> throw new KafkaException("Unexpected exception while parsing JAAS 
> config");
> }
> }
> private static AppConfigurationEntry 
> parseAppConfigurationEntry(StreamTokenizer tokenizer) throws IOException {
> String loginModule = tokenizer.sval;
> if (tokenizer.nextToken() == StreamTokenizer.TT_EOF)
> throw new IllegalArgumentException("Login module control flag not 
> specified in JAAS config");
> AppConfigurationEntry.LoginModuleControlFlag controlFlag = 
> loginModuleControlFlag(tokenizer.sval);
> Map<String, Object> options = new HashMap<>();
> while (tokenizer.nextToken() != StreamTokenizer.TT_EOF && tokenizer.ttype 
> != ';') {
> String key = tokenizer.sval;
> if (tokenizer.nextToken() != '=' || tokenizer.nextToken() == 
> StreamTokenizer.TT_EOF || tokenizer.sval == null)
> throw new IllegalArgumentException("Value not specified for key 
> '" + key + "' in JAAS config");
> String value = tokenizer.sval;
> options.put(key, value);
> }
> if (tokenizer.ttype != ';')
> throw new IllegalArgumentException("JAAS config entry not terminated 
> by semi-colon");
> return new AppConfigurationEntry(loginModule, controlFlag, options);
> }
> private static AppConfigurationEntry.LoginModuleControlFlag 
> loginModuleControlFlag(String flag) {
> if (flag == null)
> throw new IllegalArgumentException("Login module control flag is not 
> available in the JAAS config");
> AppConfigurationEntry.LoginModuleControlFlag controlFlag;
> switch (flag.toUpperCase(Locale.ROOT)) {
> case "REQUIRED":
> controlFlag = 
> AppConfigurationEntry.LoginModuleControlFlag.REQUIRED;
> break;
> case "REQUISITE":
> controlFlag = 
> AppConfigurationEntry.LoginModuleControlFlag.REQUISITE;
> break;
> case "SUFFICIENT":
> controlFlag = 
> AppConfigurationEntry.LoginModuleControlFlag.SUFFICIENT;
> break;
> case "OPTIONAL":
> controlFlag = 
> AppConfigurationEntry.LoginModuleControlFlag.OPTIONAL;
> break;
> default:
> throw new IllegalArgumentException("Invalid login module control 
> flag '" + flag + "' in JAAS config");
> }
> return controlFlag;
> }
>  {code}
> I have solved this issue by changing my password from
> {code:java}
> 8GMf0yWkLHrI4cNYYoyHGxclkXCLSCGJ {code}
> to
> {code:java}
> aaa {code}
> This leads me to suggestion that Tokenizer interprets any leading digit as 
> 'bad' symbol and it breaks to parse the

[jira] [Commented] (KAFKA-13376) Allow MirrorMaker 2 producer and consumer customization per replication flow

2021-10-22 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17432946#comment-17432946
 ] 

Dongjin Lee commented on KAFKA-13376:
-

Hi [~ivanyu],

As of present, the replication-level client configuration is not working 
correctly (although the MirrorMakerConfig's Javadoc states that it works); and 
[this 
KIP|https://cwiki.apache.org/confluence/display/KAFKA/KIP-781%3A+Improve+MirrorMaker2%27s+client+configuration]
 (KAFKA-13365) states the rule for replication-level client configuration like 
the following:
{quote}{{A->B.producer.\{property-name}}} and {{A->B.admin.\{property-name}}} 
are applied to target cluster clients and {{A->B.consumer.\{property-name}}} is 
applied to source cluster clients.
{quote}
If this rule does not fit your case, please leave a message on the mailing 
list 
[here|https://lists.apache.org/thread.html/r2ab9df6e620eb3ac8a0e1dd7a2f833b1978452aa6bec21c38c1dde18%40%3Cdev.kafka.apache.org%3E].

> Allow MirrorMaker 2 producer and consumer customization per replication flow
> 
>
> Key: KAFKA-13376
> URL: https://issues.apache.org/jira/browse/KAFKA-13376
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Reporter: Ivan Yurchenko
>Priority: Minor
>
> Currently, it's possible to set producer and consumer configurations for a 
> cluster in MirrorMaker 2, like this:
> {noformat}
> {source}.consumer.{consumer_config_name}
> {target}.producer.{producer_config_name}
> {noformat}
> However, in some cases it makes sense to set these configs differently for 
> different replication flows (e.g. when they have different latency/throughput 
> trade-offs), something like:
> {noformat}
> {source}->{target}.{source}.consumer.{consumer_config_name}
> {source}->{target}.{target}.producer.{producer_config_name}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-13365) Improve MirrorMaker2's client configuration

2021-10-22 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee updated KAFKA-13365:

Description: 
At present, MirrorMaker 2 (aka MM2)'s client configuration feature has several 
problems:
 # The replication-level client configuration applies only to the common 
properties like {{bootstrap.servers}}, {{security.protocol}}, ssl, sasl, etc.; 
that is, a configuration like {{'A→B.producer.batch.size'}} is ignored.
 ## Also, it is unclear which admin client is affected by a replication-level 
configuration like A→B.admin.retry.backoff.ms; MM2 uses two admin clients, one 
for the upstream and one for the downstream cluster.
 # MM2 is based on the Kafka Connect framework; since the MM2 connectors 
({{MirrorSourceConnector}}, {{MirrorCheckpointConnector}}, and 
{{MirrorHeartbeatConnector}}) are source connectors, they use the producer 
instance created by Kafka Connect, which honors 
{{'producer.override.\{property-name}'}} in the connector configuration. But 
{{'target.producer.\{property-name}'}} settings are not automatically mapped 
to {{'producer.override.\{property-name}'}}, so they are not actually applied 
to the producer instance.
 # MM2 requires the {{'bootstrap.servers'}} of the clusters to be defined at 
cluster level, like {{'A.bootstrap.servers'}} or {{'B.bootstrap.servers'}}; 
but it also allows overriding them in cluster-level and replication-level 
configs, like {{'A.producer.bootstrap.servers'}} or 
{{'A→B.consumer.bootstrap.servers'}}. These overrides are not actually used, 
so it would be better to ignore them and emit a warning.
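As a hedged illustration of item 2, the missing mapping would take a 
user-supplied target-side producer property and translate it into Kafka 
Connect's per-connector override form (the property and values here are 
illustrative):

```properties
# What an MM2 user writes today (cluster-level producer config):
B.producer.batch.size = 65536
# What the Connect framework actually honors for a source connector's
# producer, and what the line above would need to be translated into:
producer.override.batch.size = 65536
```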

  was:
At present, producer-specific options and consumer-only settings are not 
configurable in MirrorMaker 2, unlike what the documentation states.

{{us-east.producer.acks = all // ignored}}
{{us-west.consumer.max.poll.interval.ms = 12 // also ignored}}


> Improve MirrorMaker2's client configuration
> ---
>
> Key: KAFKA-13365
> URL: https://issues.apache.org/jira/browse/KAFKA-13365
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Critical
>  Labels: needs-kip
>
> At present, MirrorMaker 2 (aka MM2)'s client configuration feature has 
> several problems:
>  # The replication-level client configuration applies only to the common 
> properties like {{bootstrap.servers}}, {{security.protocol}}, ssl, sasl, etc.; 
> that is, a configuration like {{'A→B.producer.batch.size'}} is ignored.
>  ## Also, it is unclear which admin client is affected by a replication-level 
> configuration like A→B.admin.retry.backoff.ms; MM2 uses two admin clients, 
> one for the upstream and one for the downstream cluster.
>  # MM2 is based on the Kafka Connect framework; since the MM2 connectors 
> ({{MirrorSourceConnector}}, {{MirrorCheckpointConnector}}, and 
> {{MirrorHeartbeatConnector}}) are source connectors, they use the producer 
> instance created by Kafka Connect, which honors 
> {{'producer.override.\{property-name}'}} in the connector configuration. But 
> {{'target.producer.\{property-name}'}} settings are not automatically mapped 
> to {{'producer.override.\{property-name}'}}, so they are not actually applied 
> to the producer instance.
>  # MM2 requires the {{'bootstrap.servers'}} of the clusters to be defined at 
> cluster level, like {{'A.bootstrap.servers'}} or {{'B.bootstrap.servers'}}; 
> but it also allows overriding them in cluster-level and replication-level 
> configs, like {{'A.producer.bootstrap.servers'}} or 
> {{'A→B.consumer.bootstrap.servers'}}. These overrides are not actually used, 
> so it would be better to ignore them and emit a warning.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-13365) Improve MirrorMaker2's client configuration

2021-10-22 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee updated KAFKA-13365:

Summary: Improve MirrorMaker2's client configuration  (was: Allow 
MirrorMaker 2 to override the client configurations)

> Improve MirrorMaker2's client configuration
> ---
>
> Key: KAFKA-13365
> URL: https://issues.apache.org/jira/browse/KAFKA-13365
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Critical
>  Labels: needs-kip
>
> At present, producer-specific options and consumer-only settings are not 
> configurable in MirrorMaker 2, unlike what the documentation states.
> {{us-east.producer.acks = all // ignored}}
> {{us-west.consumer.max.poll.interval.ms = 12 // also ignored}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7632) Support Compression Level

2021-10-18 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17430082#comment-17430082
 ] 

Dongjin Lee commented on KAFKA-7632:


[~dajac] Agree. It seems like it was mistakenly marked as a blocker while 
updating the target version.

> Support Compression Level
> -
>
> Key: KAFKA-7632
> URL: https://issues.apache.org/jira/browse/KAFKA-7632
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.1.0
> Environment: all
>Reporter: Dave Waters
>Assignee: Dongjin Lee
>Priority: Blocker
>  Labels: needs-kip
> Fix For: 3.1.0
>
>
> The compression level for ZSTD is currently set to use the default level (3), 
> which is a conservative setting that in some use cases eliminates the value 
> that ZSTD provides with improved compression. Each use case will vary, so 
> exposing the level as a broker configuration setting will allow the user to 
> adjust the level.
> Since it applies to the other compression codecs, we should add the same 
> functionalities to them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-13365) Allow MirrorMaker 2 to override the client configurations

2021-10-14 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17428934#comment-17428934
 ] 

Dongjin Lee commented on KAFKA-13365:
-

I found it in 3.0.0 (see 
[here|https://github.com/dongjinleekr/kafka/blob/etc/mm2-client-config-bug/connect/mirror/src/test/java/org/apache/kafka/connect/mirror/MirrorMakerConfigTest.java#L77]),
 but it seems to affect all versions since 2.4.0. It seems like there was some 
mistake in the code review process - you know, MM2 is one of the most complex 
features in Apache Kafka.

As I stated in the 
[KIP|https://cwiki.apache.org/confluence/display/KAFKA/KIP-781%3A+Allow+MirrorMaker+2+to+override+the+client+configurations],
 this problem breaks down into the following three issues:
 - MM2's client settings do not work correctly: they do not accept 
client-specific properties like acks, max.partition.fetch.bytes, etc.
 - MM2 does not support Kafka Connect's client configuration override 
functionality.
 - There is no preference or priority defined between 1 and 2, neither in the 
original KIP nor in the code.

I already resolved problem 1, but found some glitches in 2 while testing. I am 
now completing it, and it seems like I can finalize it this weekend.

The problem you raised in KAFKA-13376 may be part of issues 1 and 3, which 
concern the precedence of MM2-specific client configuration.

Currently, MM2 supports client configuration at the cluster level only, but 
you think MM2 also needs to support flow-specific configuration, is that 
right? I will update the KIP with this problem as well. Thanks for reporting 
this issue; I have linked it to this one.

> Allow MirrorMaker 2 to override the client configurations
> -
>
> Key: KAFKA-13365
> URL: https://issues.apache.org/jira/browse/KAFKA-13365
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Critical
>  Labels: needs-kip
>
> At present, producer-specific options and consumer-only settings are not 
> configurable in MirrorMaker 2, unlike what the documentation states.
> {{us-east.producer.acks = all // ignored}}
> {{us-west.consumer.max.poll.interval.ms = 12 // also ignored}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13365) Allow MirrorMaker 2 to override the client configurations

2021-10-11 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-13365:
---

 Summary: Allow MirrorMaker 2 to override the client configurations
 Key: KAFKA-13365
 URL: https://issues.apache.org/jira/browse/KAFKA-13365
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Reporter: Dongjin Lee
Assignee: Dongjin Lee


At present, producer-specific options and consumer-only settings are not 
configurable in MirrorMaker 2, unlike what the documentation states.

{{us-east.producer.acks = all // ignored}}
{{us-west.consumer.max.poll.interval.ms = 12 // also ignored}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-13365) Allow MirrorMaker 2 to override the client configurations

2021-10-11 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee updated KAFKA-13365:

Labels: needs-kip  (was: )

> Allow MirrorMaker 2 to override the client configurations
> -
>
> Key: KAFKA-13365
> URL: https://issues.apache.org/jira/browse/KAFKA-13365
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Critical
>  Labels: needs-kip
>
> At present, producer-specific options and consumer-only settings are not 
> configurable in MirrorMaker 2, unlike what the documentation states.
> {{us-east.producer.acks = all // ignored}}
> {{us-west.consumer.max.poll.interval.ms = 12 // also ignored}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-13361) Support fine-grained compression options

2021-10-10 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee updated KAFKA-13361:

Labels: needs-kip  (was: )

> Support fine-grained compression options
> 
>
> Key: KAFKA-13361
> URL: https://issues.apache.org/jira/browse/KAFKA-13361
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, core
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: needs-kip
>
> Adds the following options into the Producer, Broker, and Topic 
> configurations:
> * compression.gzip.buffer: the buffer size that feeds raw input into the 
> Deflater or is fed by the uncompressed output from the Deflater. (available: 
> [512, ), default: 8192 (= 8 KB).)
> * compression.snappy.block: the block size that snappy uses. (available: 
> [1024, ), default: 32768 (= 32 KB).)
> * compression.lz4.block: the block size that lz4 uses. (available: [4, 7], 
> meaning 64 KB, 256 KB, 1 MB, and 4 MB respectively, default: 4.)
> * compression.zstd.window: enables long mode; the base-2 log of the window 
> size that zstd uses to reference previously seen data. (available: [10, 22], 
> default: 0 (disables long mode).)
> All of the above differ in detail but have one thing in common: each 
> controls how much memory the codec uses during compression.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13361) Support fine-grained compression options

2021-10-10 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-13361:
---

 Summary: Support fine-grained compression options
 Key: KAFKA-13361
 URL: https://issues.apache.org/jira/browse/KAFKA-13361
 Project: Kafka
  Issue Type: Improvement
  Components: clients, core
Reporter: Dongjin Lee
Assignee: Dongjin Lee


Adds the following options into the Producer, Broker, and Topic configurations:
 * compression.gzip.buffer: the buffer size that feeds raw input into the 
Deflater or is fed by the uncompressed output from the Deflater. (available: 
[512, ), default: 8192 (= 8 KB).)
 * compression.snappy.block: the block size that snappy uses. (available: 
[1024, ), default: 32768 (= 32 KB).)
 * compression.lz4.block: the block size that lz4 uses. (available: [4, 7], 
meaning 64 KB, 256 KB, 1 MB, and 4 MB respectively, default: 4.)
 * compression.zstd.window: enables long mode; the base-2 log of the window 
size that zstd uses to reference previously seen data. (available: [10, 22], 
default: 0 (disables long mode).)

All of the above differ in detail but have one thing in common: each controls 
how much memory the codec uses during compression.
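As a sketch only (these keys are the proposal in this ticket, not 
configuration that exists in released Kafka), a producer might then be tuned 
like:

```properties
compression.type = zstd
# Proposed: enable zstd long mode with a 2^20-byte (1 MiB) window
compression.zstd.window = 20
# Proposed equivalents for other codecs:
# compression.gzip.buffer = 16384
# compression.lz4.block = 7   # 4 MiB blocks
```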



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9366) Upgrade log4j to log4j2

2021-09-05 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17410283#comment-17410283
 ] 

Dongjin Lee commented on KAFKA-9366:


ALL // Sorry for being late. This KIP was originally accepted for AK 3.0 but 
dropped from the release due to a lack of review. I think it will be included 
in 3.1.

If you need this feature urgently, please have a look at the custom build of 
2.7.0 [here|http://home.apache.org/~dongjin/post/apache-kafka-log4j2-support/]. 
I could not complete it for 2.8.0 and 3.0 due to medical issues but will 
resume the work now.

If you are using log4j-appender, please have a look at 
[KIP-719|https://cwiki.apache.org/confluence/display/KAFKA/KIP-719%3A+Add+Log4J2+Appender].
 This KIP proposes a log4j2 equivalent for log4j-appender. I am also working on 
a custom release of it.

Thank you again for your interest in my work.

> Upgrade log4j to log4j2
> ---
>
> Key: KAFKA-9366
> URL: https://issues.apache.org/jira/browse/KAFKA-9366
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0, 2.1.1, 2.3.0, 2.4.0
>Reporter: leibo
>Assignee: Dongjin Lee
>Priority: Critical
>  Labels: needs-kip
> Fix For: 3.1.0
>
>
> h2. CVE-2019-17571 Detail
> Included in Log4j 1.2 is a SocketServer class that is vulnerable to 
> deserialization of untrusted data which can be exploited to remotely execute 
> arbitrary code when combined with a deserialization gadget when listening to 
> untrusted network traffic for log data. This affects Log4j versions up to 1.2 
> up to 1.2.17.
>  
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17571]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9366) Upgrade log4j to log4j2

2021-07-14 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17380951#comment-17380951
 ] 

Dongjin Lee commented on KAFKA-9366:


[~kkonstantine] Got it.

> Upgrade log4j to log4j2
> ---
>
> Key: KAFKA-9366
> URL: https://issues.apache.org/jira/browse/KAFKA-9366
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0, 2.1.1, 2.3.0, 2.4.0
>Reporter: leibo
>Assignee: Dongjin Lee
>Priority: Critical
>  Labels: needs-kip
> Fix For: 3.1.0
>
>
> h2. CVE-2019-17571 Detail
> Included in Log4j 1.2 is a SocketServer class that is vulnerable to 
> deserialization of untrusted data which can be exploited to remotely execute 
> arbitrary code when combined with a deserialization gadget when listening to 
> untrusted network traffic for log data. This affects Log4j versions up to 1.2 
> up to 1.2.17.
>  
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17571]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-13048) Update vulnerable dependencies in 2.8.0

2021-07-08 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee resolved KAFKA-13048.
-
Resolution: Duplicate

> Update vulnerable dependencies in 2.8.0
> ---
>
> Key: KAFKA-13048
> URL: https://issues.apache.org/jira/browse/KAFKA-13048
> Project: Kafka
>  Issue Type: Bug
>  Components: core, KafkaConnect
>Affects Versions: 2.8.0
>Reporter: Pavel Kuznetsov
>Priority: Major
>  Labels: security
>
> **Describe the bug**
> I checked the kafka_2.13-2.8.0.tgz distribution with WhiteSource and found 
> that some libraries have vulnerabilities.
> Here they are:
> - jetty-http-9.4.40.v20210413.jar has CVE-2021-28169 vulnerability. The way 
> to fix it is to upgrade to org.eclipse.jetty:jetty-http:9.4.41.v20210516
> - jetty-server-9.4.40.v20210413.jar has CVE-2021-28169 and CVE-2021-34428 
> vulnerabilities. The way to fix it is to upgrade to 
> org.eclipse.jetty:jetty-server:9.4.41.v20210516
> - jetty-servlets-9.4.40.v20210413.jar has CVE-2021-28169 vulnerability. The 
> way to fix it is to upgrade to 
> org.eclipse.jetty:jetty-servlets:9.4.41.v20210516
> **To Reproduce**
> Download kafka_2.13-2.8.0.tgz and find jars, listed above.
> Check that these jars with corresponding versions are mentioned in 
> corresponding vulnerability description.
> **Expected behavior**
> - jetty-http upgraded to 9.4.41.v20210516 or higher
> - jetty-server upgraded to 9.4.41.v20210516 or higher
> - jetty-servlets upgraded to 9.4.41.v20210516 or higher
> **Actual behaviour**
> - jetty-http is 9.4.40.v20210413
> - jetty-server is 9.4.40.v20210413
> - jetty-servlets is 9.4.40.v20210413



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-13027) Support for Jakarta EE 9.x to allow applications to migrate

2021-07-02 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17373488#comment-17373488
 ] 

Dongjin Lee commented on KAFKA-13027:
-

Hi [~frocarz],

Thank you for raising this issue. I have worked with several Jetty-related 
issues in Kafka Connect and now testing a Java 11/Jetty 11 migration of Kafka 
Connect. (note: I linked the issue.)

I totally agree that we need to provide a way for Connect users to prepare for 
the migration to the Jakarta namespace. However, although Jakarta EE 9.0 
supports Java 8, Jetty 9.4.x (the version currently used) does not support 
Jakarta EE 9; Jetty 11 (which requires Java 11 and Servlet 5.0) will be the 
first version to support the Jakarta namespace. 
([#1|https://www.eclipse.org/jetty/download.php] 
[#2|https://www.eclipse.org/jetty/]) So I think we can't do this right now.

+1. In fact, I am working on a preview version with Java 11/Jetty 11 now. As 
you can see 
[here|https://github.com/dongjinleekr/kafka/tree/feature/KAFKA-12359], it does 
not include any `javax.*` packages in `connect:api`. Could you have a look 
when the preview is released?

> Support for Jakarta EE 9.x to allow applications to migrate
> ---
>
> Key: KAFKA-13027
> URL: https://issues.apache.org/jira/browse/KAFKA-13027
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Frode Carlsen
>Priority: Major
>
> Some of the kafka libraries (such as connect-api) have direct dependencies on 
> older Java EE 8 specifications (e.g. javax.ws.rs:javax.ws.rs-api:2.1.1).
> This creates issues in environments upgrading to Jakarta 9.0 and beyond (9.1 
> requires minimum Java 11).  For example upgrading web application servers 
> such as migrating to Jetty 11.
> The  main thing preventing backwards compatibility is that the package 
> namespace has moved from "*javax.**" to "*jakarta.**", along with a few 
> namespace changes in XML configuration files. (new specifications are 
> published here [https://jakarta.ee/specifications/,] along with references to 
> official artifacts and compliant implementations).
> From KAFKA-12894 (KIP-705) it appears dropping support for java 8 won't 
> happen till Q4 2022, which makes it harder to migrate to Jakarta 9.1, but 9.0 
> is still Java 8 compatible.
> Therefore, to allow projects that use Kafka client libraries to migrate prior 
> to the full work being completed in a future Kafka version, would it be 
> possible to generate Jakarta 9 compatible artifacts and dual publish these 
> for libraries that now depend on javax.ws.rs / javax.servlet and similar? 
> This is done by a number of open source libraries, as an alternative to 
> having different release branches for the time being.   Other than the 
> namespace change in 9.0 and minimum java LTS version in 9.1, the apis are 
> fully compatible with Java EE 8.
> As a suggestion, this is fairly easy to do automatically using the 
> [https://github.com/eclipse/transformer/] for migration (most projects end up 
> publishing artifacts with either a "-jakarta" suffix on the artifactId or as 
> a classifier)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12985) CVE-2021-28169 - Upgrade jetty to 9.4.41

2021-06-22 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-12985:
---

 Summary: CVE-2021-28169 - Upgrade jetty to 9.4.41
 Key: KAFKA-12985
 URL: https://issues.apache.org/jira/browse/KAFKA-12985
 Project: Kafka
  Issue Type: Task
  Components: security
Reporter: Dongjin Lee
Assignee: Dongjin Lee


CVE-2021-28169 vulnerability affects Jetty versions up to 9.4.40. For more 
information see https://nvd.nist.gov/vuln/detail/CVE-2021-28169

Upgrading to Jetty version 9.4.41 should address this issue 
(https://github.com/eclipse/jetty.project/security/advisories/GHSA-gwcr-j4wh-j3cq).





[jira] [Commented] (KAFKA-10269) AdminClient ListOffsetsResultInfo/timestamp is always -1

2021-06-11 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17361464#comment-17361464
 ] 

Dongjin Lee commented on KAFKA-10269:
-

[~d-t-w]

> is it difficult or expensive to return the timestamp in the case of Earliest 
> or Latest Spec?

I can't be certain. AFAIK this is for a historical reason or backward 
compatibility.

> In terms of API, is there any other way to identify the timestamp of the 
> first or last offset?

1. Seek to the first or last position with a Consumer. 2. Fetch a record and 
check its timestamp.
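
A minimal sketch of that two-step workaround (the broker address, topic name, and partition are placeholders; it assumes a reachable broker and the kafka-clients dependency):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class FirstRecordTimestamp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder
            consumer.assign(Collections.singletonList(tp));
            // 1. Seek to the first position (use seekToEnd for the last).
            consumer.seekToBeginning(Collections.singletonList(tp));
            // 2. Fetch a record and check its timestamp.
            for (ConsumerRecord<byte[], byte[]> record : consumer.poll(Duration.ofSeconds(5))) {
                System.out.println(record.timestamp());
                break;
            }
        }
    }
}
```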

> AdminClient ListOffsetsResultInfo/timestamp is always -1
> 
>
> Key: KAFKA-10269
> URL: https://issues.apache.org/jira/browse/KAFKA-10269
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.5.0
>Reporter: Derek Troy-West
>Priority: Minor
>
> When using AdminClient/listOffsets the resulting ListOffsetResultInfos appear 
> to always have a timestamp of -1.
> I've run listOffsets against live clusters with multiple Kafka versions (from 
> 1.0 to 2.5) with both CreateTime and LogAppendTime for 
> message.timestamp.type; every result has a -1 timestamp.
> e.g. 
> {{org.apache.kafka.clients.admin.ListOffsetsResult$ListOffsetsResultInfo#0x5c3a771}}
> {{ListOffsetsResultInfo(offset=23016, timestamp=-1, leaderEpoch=Optional[0])}}
>  
>  





[jira] [Created] (KAFKA-12928) Add a check whether the Task's statestore is actually a directory

2021-06-10 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-12928:
---

 Summary: Add a check whether the Task's statestore is actually a 
directory
 Key: KAFKA-12928
 URL: https://issues.apache.org/jira/browse/KAFKA-12928
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Dongjin Lee
Assignee: Dongjin Lee


I found this problem while working on 
[KAFKA-10585|https://issues.apache.org/jira/browse/KAFKA-10585].

At present, StateDirectory checks whether the Task's statestore directory 
exists and, if not, creates it. However, it does not check whether the path is 
actually a directory; if a regular file occupies the Task's statestore path, 
the validation logic can be bypassed.





[jira] [Created] (KAFKA-12911) Configure automatic formatter for org.apache.kafka.streams.processor

2021-06-07 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-12911:
---

 Summary: Configure automatic formatter for 
org.apache.kafka.streams.processor
 Key: KAFKA-12911
 URL: https://issues.apache.org/jira/browse/KAFKA-12911
 Project: Kafka
  Issue Type: Sub-task
Reporter: Dongjin Lee
Assignee: Dongjin Lee


As an incremental approach to introduce the automatic code formatter, we will 
configure the automatic formatter for the org.apache.kafka.streams.processor 
package, covering 127 (main) + 78 (test) = 205 files.





[jira] [Created] (KAFKA-12910) Configure automatic formatter for org.apache.kafka.streams.state

2021-06-07 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-12910:
---

 Summary: Configure automatic formatter for 
org.apache.kafka.streams.state
 Key: KAFKA-12910
 URL: https://issues.apache.org/jira/browse/KAFKA-12910
 Project: Kafka
  Issue Type: Sub-task
Reporter: Dongjin Lee
Assignee: Dongjin Lee


As of 48379bd6e5, there are 893 Java files in the streams module.

As an incremental approach to introduce the automatic code formatter, we will 
configure the automatic formatter for the org.apache.kafka.streams.state 
package, covering 147 (main) + 91 (test) = 238 files.





[jira] [Commented] (KAFKA-12899) Support --bootstrap-server in ReplicaVerificationTool

2021-06-05 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17357947#comment-17357947
 ] 

Dongjin Lee commented on KAFKA-12899:
-

Hi [~guozhang],

Please don't drop this issue. I checked all the remaining tools; along with 
kafka-streams-application-reset (linked), this is the last piece of KIP-499. I 
just filed a small KIP on this.

> Support --bootstrap-server in ReplicaVerificationTool
> -
>
> Key: KAFKA-12899
> URL: https://issues.apache.org/jira/browse/KAFKA-12899
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Minor
>  Labels: needs-kip
> Fix For: 3.0.0
>
>
> kafka.tools.ReplicaVerificationTool still uses --broker-list, breaking 
> consistency with other (already migrated) tools.





[jira] [Created] (KAFKA-12899) Support --bootstrap-server in ReplicaVerificationTool

2021-06-05 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-12899:
---

 Summary: Support --bootstrap-server in ReplicaVerificationTool
 Key: KAFKA-12899
 URL: https://issues.apache.org/jira/browse/KAFKA-12899
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Dongjin Lee
Assignee: Dongjin Lee
 Fix For: 3.0.0


kafka.tools.ReplicaVerificationTool still uses --broker-list, breaking 
consistency with other (already migrated) tools.





[jira] [Commented] (KAFKA-10269) AdminClient ListOffsetsResultInfo/timestamp is always -1

2021-06-05 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17357938#comment-17357938
 ] 

Dongjin Lee commented on KAFKA-10269:
-

[~huxi_2b] It seems like this is not a bug. Reviewing the code, I found the 
following:

1. AdminClient#listOffsets can take OffsetSpec parameter per TopicPartition, 
which can be one of EarliestSpec, LatestSpec, TimestampSpec.
2. For EarliestSpec and LatestSpec, a timestamp is not returned; -1 is returned 
instead. See: 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/log/Log.scala#L1310
3. For TimestampSpec, the designated timestamp is returned: 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/cluster/Partition.scala#L1152

I think this issue is rather a lack of documentation than a bug.
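
A minimal sketch of the difference described above (the broker address and topic name are placeholders; it assumes a running cluster, the kafka-clients dependency, and Java 9+ for `Map.of`):

```java
import java.util.Map;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;

public class ListOffsetsTimestamps {
    public static void main(String[] args) throws Exception {
        try (Admin admin = Admin.create(
                Map.of(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"))) {
            TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder

            // LatestSpec: the broker does not look up a record timestamp,
            // so timestamp() comes back as -1.
            ListOffsetsResultInfo latest =
                admin.listOffsets(Map.of(tp, OffsetSpec.latest())).partitionResult(tp).get();
            System.out.println(latest.offset() + " / " + latest.timestamp());

            // TimestampSpec: the timestamp of the matching record is returned.
            ListOffsetsResultInfo byTime =
                admin.listOffsets(Map.of(tp, OffsetSpec.forTimestamp(0L))).partitionResult(tp).get();
            System.out.println(byTime.offset() + " / " + byTime.timestamp());
        }
    }
}
```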

> AdminClient ListOffsetsResultInfo/timestamp is always -1
> 
>
> Key: KAFKA-10269
> URL: https://issues.apache.org/jira/browse/KAFKA-10269
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.5.0
>Reporter: Derek Troy-West
>Priority: Minor
>
> When using AdminClient/listOffsets the resulting ListOffsetResultInfos appear 
> to always have a timestamp of -1.
> I've run listOffsets against live clusters with multiple Kafka versions (from 
> 1.0 to 2.5) with both CreateTime and LogAppendTime for 
> message.timestamp.type; every result has a -1 timestamp.
> e.g. 
> {{org.apache.kafka.clients.admin.ListOffsetsResult$ListOffsetsResultInfo#0x5c3a771}}
> {{ListOffsetsResultInfo(offset=23016, timestamp=-1, leaderEpoch=Optional[0])}}
>  
>  





[jira] [Updated] (KAFKA-7632) Support Compression Level

2021-06-05 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee updated KAFKA-7632:
---
Summary: Support Compression Level  (was: Allow fine-grained configuration 
for compression)

> Support Compression Level
> -
>
> Key: KAFKA-7632
> URL: https://issues.apache.org/jira/browse/KAFKA-7632
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.1.0
> Environment: all
>Reporter: Dave Waters
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: needs-kip
>
> The compression level for ZSTD is currently set to use the default level (3), 
> which is a conservative setting that in some use cases eliminates the value 
> that ZSTD provides with improved compression. Each use case will vary, so 
> exposing the level as a broker configuration setting will allow the user to 
> adjust the level.
> Since the same applies to the other compression codecs, we should add the same 
> functionality to them.
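
For illustration, such a setting on the producer side might eventually look like this (the `compression.zstd.level` property name is assumed from the follow-up KIP work and may not exist in the reader's client version; treat it as an assumption):

```properties
# Producer configuration sketch - 'compression.zstd.level' is assumed from the
# follow-up KIP and may not be available in older clients:
compression.type=zstd
compression.zstd.level=10
```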





[jira] [Commented] (KAFKA-12869) Update vulnerable dependencies

2021-05-31 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17354800#comment-17354800
 ] 

Dongjin Lee commented on KAFKA-12869:
-

[~ijuma] I checked the versions. All of them are already fixed in trunk, but 
not included in 2.7.1.

> Update vulnerable dependencies
> --
>
> Key: KAFKA-12869
> URL: https://issues.apache.org/jira/browse/KAFKA-12869
> Project: Kafka
>  Issue Type: Bug
>  Components: core, KafkaConnect
>Affects Versions: 2.7.1
>Reporter: Pavel Kuznetsov
>Priority: Major
>  Labels: security
>
> *Description*
> I checked the kafka_2.13-2.7.1.tgz distribution with WhiteSource and found out 
> that some libraries have vulnerabilities.
> Here they are:
> * jetty-io-9.4.38.v20210224.jar has CVE-2021-28165 vulnerability. The way to 
> fix it is to upgrade to org.eclipse.jetty:jetty-io:9.4.39 or 
> org.eclipse.jetty:jetty-io:10.0.2 or org.eclipse.jetty:jetty-io:11.0.2
> * jersey-common-2.31.jar has CVE-2021-28168 vulnerability. The way to fix it 
> is to upgrade to org.glassfish.jersey.core:jersey-common:2.34
> * jetty-server-9.4.38.v20210224.jar has CVE-2021-28164 vulnerability. The way 
> to fix it is to upgrade to org.eclipse.jetty:jetty-webapp:9.4.39
> *To Reproduce*
> Download kafka_2.13-2.7.1.tgz and find jars, listed above.
> Check that these jars with corresponding versions are mentioned in 
> corresponding vulnerability description.
> *Expected*
> * jetty-io upgraded to 9.4.39 or higher
> * jersey-common upgraded to 2.34 or higher
> * jetty-server upgraded to jetty-webapp:9.4.39 or higher
> *Actual*
> * jetty-io is 9.4.38
> * jersey-common is 2.31
> * jetty-server is 9.4.38





[jira] [Assigned] (KAFKA-12820) Upgrade maven-artifact dependency to resolve CVE-2021-26291

2021-05-20 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee reassigned KAFKA-12820:
---

Assignee: Dongjin Lee

> Upgrade maven-artifact dependency to resolve CVE-2021-26291
> ---
>
> Key: KAFKA-12820
> URL: https://issues.apache.org/jira/browse/KAFKA-12820
> Project: Kafka
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.6.1, 2.8.0, 2.7.1
>Reporter: Boojapho
>Assignee: Dongjin Lee
>Priority: Major
>
> Current Gradle builds of Kafka contain a dependency of `maven-artifact` 
> version 3.6.3, which contains CVE-2021-26291 
> ([https://nvd.nist.gov/vuln/detail/CVE-2021-26291]). This vulnerability has 
> been fixed in Maven 3.8.1 
> ([https://maven.apache.org/docs/3.8.1/release-notes.html]).  Apache Kafka 
> should update `dependencies.gradle` to use the latest `maven-artifact` 
> library to eliminate this vulnerability.





[jira] [Commented] (KAFKA-12768) Mirrormaker2 consumer config not using newly assigned client id

2021-05-13 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17344283#comment-17344283
 ] 

Dongjin Lee commented on KAFKA-12768:
-

I reviewed this problem and found the following:

MirrorMaker 1 has been included in the Kafka distribution as a standalone 
application for a long time; MirrorMaker 2 was added in 2.4.0 with 
[KIP-382|https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0]
 by [~ryannedolan], with Kafka Connect support.

MirrorMaker 1 supported multi-cluster mirroring with the 'clusters' config 
property (which is what you are using here). However, since the Kafka connector 
supports point-to-point mirroring only, this property is not supported there - in 
short, *the configuration differs a bit between the standalone-application way 
and the connector-plugin way, especially around 'clusters'.* In this 
case, you should use 'source.cluster.consumer.client.id', not 
'source.consumer.client.id' nor 'source.client.id'.

I guess [~ryannedolan] had no choice here, given the inherent difference between 
a standalone application and a connector. But I think this glitch may confuse 
those without the historical context, so documenting it would be worthwhile.

[~ryannedolan] [~tombentley] What do you think? Do we need some documentation 
here?
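
For illustration, a minimal sketch of the two styles (the cluster aliases US and EUROPE come from the reporter's setup; the bootstrap addresses and client ids are placeholders):

```properties
# MirrorMaker 2 standalone (connect-mirror-maker.properties):
clusters = US, EUROPE
US.bootstrap.servers = us-kafka:9092
EUROPE.bootstrap.servers = eu-kafka:9092
US->EUROPE.enabled = true
# Per-flow consumer override, as the reporter found working:
US->EUROPE.consumer.client.id = mm2-us-consumer

# Connector-plugin style (MirrorSourceConnector on a Connect cluster) instead
# uses the 'source.cluster.' prefix:
# source.cluster.consumer.client.id = mm2-us-consumer
```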

> Mirrormaker2 consumer config not using newly assigned client id
> ---
>
> Key: KAFKA-12768
> URL: https://issues.apache.org/jira/browse/KAFKA-12768
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.6.0
>Reporter: Vincent
>Priority: Major
>
> Component: MirrorMaker2 from the 2.6.0 distribution.
> We tried to set quotas based on client.id in mirrormaker2. We tried setting 
> the source.consumer.client.id and source.client.id properties with no luck.
> I was able to update the consumer client id using the 
> US->EUROPE.consumer.client.id config (from the customer) with a single 
> instance of MM2. With a single instance, everything works fine without any 
> issue. However, we are running 2 instances of MirrorMaker 2 with tasks.max 
> set to 2 and it doesn't work with multiple MM2 processes. We also tried 
> stopping all mirrormaker2 instances and starting them again, but it didn't help.
> Currently, the only workaround is to recreate (or rename) the Connect 
> internal topics.





[jira] [Commented] (KAFKA-12771) CheckStyle attempted upgrade (8.36.2 -->> 8.41.1) summons a pack of 'Indentation' errors

2021-05-10 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17342288#comment-17342288
 ] 

Dongjin Lee commented on KAFKA-12771:
-

Hi [~dejan2609], This issue is addressed in KAFKA-12572. Could you have [a 
look|https://github.com/apache/kafka/pull/10428]? It seems like we can upgrade 
the checkstyle version after it is merged.

> CheckStyle attempted upgrade (8.36.2 -->> 8.41.1) summons a pack of 
> 'Indentation' errors
> 
>
> Key: KAFKA-12771
> URL: https://issues.apache.org/jira/browse/KAFKA-12771
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Dejan Stojadinović
>Assignee: Dejan Stojadinović
>Priority: Minor
>
> ^*Prologue*: 
> [https://github.com/apache/kafka/pull/10656#issuecomment-836071563]^
> *Scenario:*
>  * bump CheckStyle to a more recent version (8.36.2 -->> 8.41.1)
>  * introduce (temporarily !) maxErrors CheckStyle property (in order to count 
> errors)
>  * execute gradle command: *_./gradlew checkstyleMain checkstyleTest_*
>  * around 50 'Indentation' CheckStyle errors (across 18 source code files) 
> are shown
> *Note:* there were some changes related to indentation between CheckStyle 
> *8.36.2* and *8.41.1*: 
> [https://checkstyle.sourceforge.io/releasenotes.html#Release_8.41.1]
> *What can be done (options):*
>  # relax CheckStyle 'Indentation' rules (if possible)
>  # comply with new CheckStyle 'Indentation' rules (and change/fix indentation 
> for these source code files)
>  # there are some slim chances that this is some kind of CheckStyle 
> regression (maybe similar to this one: 
> [https://github.com/checkstyle/checkstyle/issues/9341]). This should be 
> checked with CheckStyle team.





[jira] [Commented] (KAFKA-12756) Update Zookeeper to 3.6.3 or higher

2021-05-06 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17340233#comment-17340233
 ] 

Dongjin Lee commented on KAFKA-12756:
-

https://github.com/apache/kafka/pull/10642

> Update Zookeeper to 3.6.3 or higher
> ---
>
> Key: KAFKA-12756
> URL: https://issues.apache.org/jira/browse/KAFKA-12756
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 2.7.0, 2.8.0
>Reporter: Boojapho
>Assignee: Dongjin Lee
>Priority: Major
>
> Zookeeper 3.6.3 or higher provides a security fix for 
> [CVE-2021-21409|https://nvd.nist.gov/vuln/detail/CVE-2021-21409], which should 
> be included in Apache Kafka to eliminate the vulnerability.





[jira] [Assigned] (KAFKA-12756) Update Zookeeper to 3.6.3 or higher

2021-05-06 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee reassigned KAFKA-12756:
---

Assignee: Dongjin Lee

> Update Zookeeper to 3.6.3 or higher
> ---
>
> Key: KAFKA-12756
> URL: https://issues.apache.org/jira/browse/KAFKA-12756
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 2.7.0, 2.8.0
>Reporter: Boojapho
>Assignee: Dongjin Lee
>Priority: Major
>
> Zookeeper 3.6.3 or higher provides a security fix for 
> [CVE-2021-21409|https://nvd.nist.gov/vuln/detail/CVE-2021-21409], which should 
> be included in Apache Kafka to eliminate the vulnerability.





[jira] [Commented] (KAFKA-12752) CVE-2021-28168 upgrade jersey to 2.34 or 3.02

2021-05-06 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17340231#comment-17340231
 ] 

Dongjin Lee commented on KAFKA-12752:
-

https://github.com/apache/kafka/pull/10641

> CVE-2021-28168 upgrade jersey to 2.34 or 3.02
> -
>
> Key: KAFKA-12752
> URL: https://issues.apache.org/jira/browse/KAFKA-12752
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: John Stacy
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: CVE, security
>
> [https://nvd.nist.gov/vuln/detail/CVE-2021-28168]
> CVE-2021-28168 affects jersey versions <=2.33, <=3.0.1. Upgrading to 2.34 or 
> 3.02 should resolve the issue.





[jira] [Assigned] (KAFKA-12752) CVE-2021-28168 upgrade jersey to 2.34 or 3.02

2021-05-06 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee reassigned KAFKA-12752:
---

Assignee: Dongjin Lee

> CVE-2021-28168 upgrade jersey to 2.34 or 3.02
> -
>
> Key: KAFKA-12752
> URL: https://issues.apache.org/jira/browse/KAFKA-12752
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: John Stacy
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: CVE, security
>
> [https://nvd.nist.gov/vuln/detail/CVE-2021-28168]
> CVE-2021-28168 affects jersey versions <=2.33, <=3.0.1. Upgrading to 2.34 or 
> 3.02 should resolve the issue.





[jira] [Commented] (KAFKA-12703) Unencrypted PEM files can't be loaded

2021-05-02 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17338000#comment-17338000
 ] 

Dongjin Lee commented on KAFKA-12703:
-

[~trobador] It seems like you are right. According to 
[KIP-651|https://cwiki.apache.org/confluence/display/KAFKA/KIP-651+-+Support+PEM+format+for+SSL+certificates+and+private+key]
 which introduced 'ssl.key.password', it states "If the key is encrypted, key 
password must be specified using 'ssl.key.password'." In other words, it implies 
the key password may be omitted for an unencrypted key.

[~rsivaram] [~omkreddy] Could you have a look? I think you would be the best 
reviewers, since you wrote or reviewed the KIP.

> Unencrypted PEM files can't be loaded
> -
>
> Key: KAFKA-12703
> URL: https://issues.apache.org/jira/browse/KAFKA-12703
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.8.0
>Reporter: Brian Bascoy
>Priority: Major
>
> Unencrypted PEM files seem to be internally [supported in the 
> codebase|https://github.com/apache/kafka/blob/a46beb9d29781e0709baf596601122f770a5fa31/clients/src/main/java/org/apache/kafka/common/security/ssl/DefaultSslEngineFactory.java#L509]
>  but setting an ssl.key.password is currently enforced by createKeystore (on 
> DefaultSslEngineFactory). I was unable to find a reason for this, so I wonder 
> if this limitation could simply be removed:
>  
> [https://github.com/pera/kafka/commit/8df2feab5fc6955cf8c89a7d132f05d8f562e16b]
>  
> Thanks





[jira] [Commented] (KAFKA-12669) Add deleteRange to WindowStore / KeyValueStore interfaces

2021-04-14 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17321923#comment-17321923
 ] 

Dongjin Lee commented on KAFKA-12669:
-

[~guozhang] You mean, like this one? 
https://github.com/dongjinleekr/kafka/blob/feature/KAFKA-12669/streams/src/main/java/org/apache/kafka/streams/state/KeyValueStore.java

If it is okay, may I take it? I have a custom implementation of 
`KeyValueStore#deleteRange(K, K)`. I think I can make use of it for this issue.
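
As a standalone illustration (not Kafka's API - a TreeMap stands in for the store), the naive get-and-delete baseline that an optimized `deleteRange` implementation would improve on looks like this:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

public class DeleteRangeSketch {
    // Naive deleteRange: range-scan, then delete keys one by one. An optimized
    // store (e.g. one backed by RocksDB's DeleteRange) would do this in bulk.
    static <K, V> List<K> deleteRange(NavigableMap<K, V> store, K from, K to) {
        // Collect the keys first to avoid mutating the map while iterating.
        List<K> deleted = new ArrayList<>(store.subMap(from, true, to, true).keySet());
        deleted.forEach(store::remove);
        return deleted;
    }

    public static void main(String[] args) {
        NavigableMap<String, Integer> store = new TreeMap<>();
        store.put("a", 1); store.put("b", 2); store.put("c", 3); store.put("d", 4);
        System.out.println(deleteRange(store, "b", "c")); // [b, c]
        System.out.println(store.keySet());               // [a, d]
    }
}
```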

> Add deleteRange to WindowStore / KeyValueStore interfaces
> -
>
> Key: KAFKA-12669
> URL: https://issues.apache.org/jira/browse/KAFKA-12669
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>  Labels: needs-kip
>
> We can consider adding such APIs where the underlying implementation classes 
> have better optimizations than deleting the keys as get-and-delete one by one.





[jira] [Assigned] (KAFKA-12655) CVE-2021-28165 - Upgrade jetty to 9.4.39

2021-04-12 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee reassigned KAFKA-12655:
---

Assignee: Dongjin Lee

> CVE-2021-28165 - Upgrade jetty to 9.4.39
> 
>
> Key: KAFKA-12655
> URL: https://issues.apache.org/jira/browse/KAFKA-12655
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.7.0, 2.6.1
>Reporter: Edwin Hobor
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: CVE, security
>
> *CVE-2021-28165* vulnerability affects Jetty versions up to *9.4.38*. For 
> more information see 
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28165] 
> Upgrading to Jetty version *9.4.39* should address this issue 
> ([https://github.com/eclipse/jetty.project/releases/tag/jetty-9.4.39.v20210325]).





[jira] [Commented] (KAFKA-8926) Log Cleaner thread dies when log.cleaner.min.cleanable.ratio is set to 0

2021-04-04 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17314499#comment-17314499
 ] 

Dongjin Lee commented on KAFKA-8926:


Hi [~jmcmillan],

I reviewed the code related to the log cleaner, but I could not find the 
relevant code nor reproduce the issue. Is it also reproducible in recent 
releases like 2.7?

> Log Cleaner thread dies when log.cleaner.min.cleanable.ratio is set to 0
> 
>
> Key: KAFKA-8926
> URL: https://issues.apache.org/jira/browse/KAFKA-8926
> Project: Kafka
>  Issue Type: Bug
>  Components: log cleaner
>Affects Versions: 2.3.0
>Reporter: Jordan Mcmillan
>Priority: Major
>
> I applied a dynamic configuration change using the following command:
> {code:bash}
> kafka-configs --bootstrap-server `hostname`:9093 --command-config 
> /etc/kafka/consumer.properties --entity-type brokers --entity-name 1 --alter 
> --add-config log.cleaner.min.cleanable.ratio=0.00
> {code}
> I applied the change to each node in a 6 node cluster. This caused the log 
> cleaner thread to die on each node. I reverted the config and restarted the 
> broker to restore functionality.
> Also interesting (although a bit of a digression) is that I was not able to 
> restart the log cleaning thread without restarting the broker. I tried 
> running:
> {code:bash}
> kafka-configs --bootstrap-server `hostname`:9093 --command-config 
> /etc/kafka/consumer.properties --entity-type brokers --entity-name 1 --alter 
> --add-config log.cleaner.threads=3
> {code}
> The log-cleaner log file showed the threads starting, but they never actually 
> started cleaning topic partitions.





[jira] [Updated] (KAFKA-12613) Inconsistencies between Kafka Config and Log Config

2021-04-04 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee updated KAFKA-12613:

Description: 
I found this problem while investigating KAFKA-8926.

Some broker-wide configurations (defined in KafkaConfig) are mapped with 
log-wide configurations (defined in LogConfig), providing a default value. You 
can find the complete mapping list in `LogConfig.TopicConfigSynonyms`.

The problem is, *some configuration properties' validation is different between 
KafkaConfig and LogConfig*:

!20210404-161832.png!

These inconsistencies cause some problems with the dynamic configuration 
feature. When a user dynamically configures the broker configuration with 
`AdminClient#alterConfigs`, the submitted config is validated with KafkaConfig, 
which lacks some validation logic - as a result, it bypasses the correct 
validation.

For example, a user can set `log.cleaner.min.cleanable.ratio` to -0.5 - which 
is obviously prohibited in LogConfig.
 * I could not reproduce the situation KAFKA-8926 describes, but fixing this 
problem also resolves KAFKA-8926.

  was:
I found this problem while investigating KAFKA-8926.

Some broker-wide configurations (defined in KafkaConfig) are mapped with 
log-wide configurations (defined in LogConfig), providing a default value. You 
can find the complete mapping list in `LogConfig.TopicConfigSynonyms`.

The problem is, *some configuration properties' validation is different between 
KafkaConfig and LogConfig*:

!20210404-161832.png!

These inconsistencies cause some problems with the dynamic configuration 
feature. When a user dynamically configures the broker configuration with 
`AdminClient#alterConfigs`, the submitted config is validated with KafkaConfig, 
which lacks some validation logic - as a result, it bypasses the correct 
validation.

For example, the user can set `log.cleaner.min.cleanable.ratio` to -0.5 - which 
is obviously prohibited in LogConfig.
 * I could not reproduce the situation KAFKA-8926 describes, but fixing this 
problem also resolves KAFKA-8926.


> Inconsistencies between Kafka Config and Log Config
> ---
>
> Key: KAFKA-12613
> URL: https://issues.apache.org/jira/browse/KAFKA-12613
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Major
> Attachments: 20210404-161832.png
>
>
> I found this problem while investigating KAFKA-8926.
> Some broker-wide configurations (defined in KafkaConfig) are mapped with 
> log-wide configurations (defined in LogConfig), providing a default value. 
> You can find the complete mapping list in `LogConfig.TopicConfigSynonyms`.
> The problem is, *some configuration properties' validation is different 
> between KafkaConfig and LogConfig*:
> !20210404-161832.png!
> These inconsistencies cause some problems with the dynamic configuration 
> feature. When a user dynamically configures the broker configuration with 
> `AdminClient#alterConfigs`, the submitted config is validated with 
> KafkaConfig, which lacks some validation logic - as a result, it bypasses 
> the correct validation.
> For example, a user can set `log.cleaner.min.cleanable.ratio` to -0.5 - which 
> is obviously prohibited in LogConfig.
>  * I could not reproduce the situation KAFKA-8926 describes, but fixing this 
> problem also resolves KAFKA-8926.





[jira] [Updated] (KAFKA-12613) Inconsistencies between Kafka Config and Log Config

2021-04-04 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee updated KAFKA-12613:

Description: 
I found this problem while investigating KAFKA-8926.

Some broker-wide configurations (defined in KafkaConfig) are mapped with 
log-wide configurations (defined in LogConfig), providing a default value. You 
can find the complete mapping list in `LogConfig.TopicConfigSynonyms`.

The problem is, *some configuration properties' validation is different between 
KafkaConfig and LogConfig*:

!20210404-161832.png!

These inconsistencies cause some problems with the dynamic configuration 
feature. When a user dynamically configures the broker configuration with 
`AdminClient#alterConfigs`, the submitted config is validated with KafkaConfig, 
which lacks some validation logic - as a result, it bypasses the correct 
validation.

For example, the user can set `log.cleaner.min.cleanable.ratio` to -0.5 - which 
is obviously prohibited in LogConfig.
 * I could not reproduce the situation KAFKA-8926 describes, but fixing this 
problem also resolves KAFKA-8926.

  was:
I found this problem while investigating KAFKA-8926.

Some broker-wide configurations (defined in KafkaConfig) are mapped with 
log-wide configurations (defined in LogConfig), providing a default value. You 
can find the complete mapping list in `LogConfig.TopicConfigSynonyms`.

The problem is, *some configuration properties' validation is different between 
KafkaConfig and LogConfig*:

!20210404-161832.png!

These inconsistencies cause some problems with the dynamic configuration 
feature. When a user dynamically configures the broker configuration with 
`AdminClient#alterConfigs`, the submitted config is validated with KafkaConfig, 
which lacks some validation logic - as a result, it bypasses the correct 
validation.

For example, the user can set `log.cleaner.min.cleanable.ratio` to 0.0 or -0.5 
- which is obviously meaningless.

* I could not reproduce the situation KAFKA-8926 describes, but fixing this 
problem also resolves KAFKA-8926.


> Inconsistencies between Kafka Config and Log Config
> ---
>
> Key: KAFKA-12613
> URL: https://issues.apache.org/jira/browse/KAFKA-12613
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Major
> Attachments: 20210404-161832.png
>
>
> I found this problem while investigating KAFKA-8926.
> Some broker-wide configurations (defined in KafkaConfig) are mapped with 
> log-wide configurations (defined in LogConfig), providing a default value. 
> You can find the complete mapping list in `LogConfig.TopicConfigSynonyms`.
> The problem is, *some configuration properties' validation is different 
> between KafkaConfig and LogConfig*:
> !20210404-161832.png!
> These inconsistencies cause some problems with the dynamic configuration 
> feature. When a user dynamically configures the broker configuration with 
> `AdminClient#alterConfigs`, the submitted config is validated with 
> KafkaConfig, which lacks some validation logic - as a result, it bypasses 
> the correct validation.
> For example, the user can set `log.cleaner.min.cleanable.ratio` to -0.5 - 
> which is obviously prohibited in LogConfig.
>  * I could not reproduce the situation KAFKA-8926 describes, but fixing this 
> problem also resolves KAFKA-8926.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12613) Inconsistencies between Kafka Config and Log Config

2021-04-04 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-12613:
---

 Summary: Inconsistencies between Kafka Config and Log Config
 Key: KAFKA-12613
 URL: https://issues.apache.org/jira/browse/KAFKA-12613
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Dongjin Lee
Assignee: Dongjin Lee
 Attachments: 20210404-161832.png

I found this problem while investigating KAFKA-8926.

Some broker-wide configurations (defined in KafkaConfig) are mapped with 
log-wide configurations (defined in LogConfig), providing a default value. You 
can find the complete mapping list in `LogConfig.TopicConfigSynonyms`.

The problem is that *the validation of some configuration properties differs 
between KafkaConfig and LogConfig*:

!20210404-161832.png!

These inconsistencies cause some problems with the dynamic configuration 
feature. When a user dynamically configures the broker configuration with 
`AdminClient#alterConfigs`, the submitted config is validated with KafkaConfig, 
which lacks some validation logic - as a result, it bypasses the correct 
validation.

For example, the user can set `log.cleaner.min.cleanable.ratio` to 0.0 or -0.5 
- which is obviously meaningless.

* I could not reproduce the situation KAFKA-8926 describes, but fixing this 
problem also resolves KAFKA-8926.
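The mismatch above can be sketched with a toy validator. This is illustrative only: the real checks live in Kafka's ConfigDef definitions, and the class, method, and property names below are hypothetical stand-ins, not Kafka code.

```java
// Illustrative sketch only -- not Kafka's actual KafkaConfig/LogConfig code.
// A LogConfig-style range check rejects -0.5, while a broker-wide definition
// that lacks the validator silently accepts the same value.
public class ConfigValidationSketch {

    interface Validator {
        void ensureValid(String name, Object value);
    }

    // Analogous in spirit to a range validator like ConfigDef.Range.between(0, 1).
    static Validator between(double min, double max) {
        return (name, value) -> {
            double d = (Double) value;
            if (d < min || d > max)
                throw new IllegalArgumentException(
                        name + " must be in [" + min + ", " + max + "], got " + d);
        };
    }

    // Returns whether the value survives validation; a null validator
    // models the broker-wide config that has no range check attached.
    static boolean accepts(Validator v, String name, Object value) {
        if (v == null) return true;
        try {
            v.ensureValid(name, value);
            return true;
        } catch (IllegalArgumentException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        Validator logConfigCheck = between(0.0, 1.0);   // topic-level definition
        Validator kafkaConfigCheck = null;              // broker-wide synonym, no check

        System.out.println(accepts(logConfigCheck, "min.cleanable.dirty.ratio", -0.5));        // false
        System.out.println(accepts(kafkaConfigCheck, "log.cleaner.min.cleanable.ratio", -0.5)); // true
    }
}
```

The fix described in this issue amounts to making both paths behave like the first call: validate the broker-wide synonym with the same range check the topic-level config uses.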



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10787) Introduce an import order in Java sources

2021-03-29 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17310608#comment-17310608
 ] 

Dongjin Lee commented on KAFKA-10787:
-

Converted into an umbrella issue in favor of an incremental approach. See: 
https://github.com/apache/kafka/pull/10428

> Introduce an import order in Java sources
> -
>
> Key: KAFKA-10787
> URL: https://issues.apache.org/jira/browse/KAFKA-10787
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Major
>
> At present, Kafka uses a relatively strict code style for Java code, except 
> for import order. For this reason, the code formatting settings of every 
> local dev environment differ from person to person, resulting in countless 
> meaningless import-order changes in PRs.
> This issue aims to define and apply a 3-group import order, like the 
> following:
> 1. Project packages: {{kafka.*}}, {{org.apache.kafka.*}} 
> 2. Third Party packages: {{com.*}}, {{net.*}}, {{org.*}}
> 3. Java packages: {{java.*}}, {{javax.*}}
> Discussion Thread: 
> https://lists.apache.org/thread.html/rf6f49c845a3d48efe8a91916c8fbaddb76da17742eef06798fc5b24d%40%3Cdev.kafka.apache.org%3E



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12572) Add import ordering checkstyle rule and configure an automatic formatter

2021-03-29 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-12572:
---

 Summary: Add import ordering checkstyle rule and configure an 
automatic formatter
 Key: KAFKA-12572
 URL: https://issues.apache.org/jira/browse/KAFKA-12572
 Project: Kafka
  Issue Type: Sub-task
  Components: build
Reporter: Dongjin Lee
Assignee: Dongjin Lee


# Add import ordering checkstyle rules.
# Configure an automatic formatter which satisfies 1.
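A checkstyle fragment along these lines could express the 3-group order from KAFKA-10787. This is a sketch using checkstyle's CustomImportOrder module; the regexes and rule split are assumptions for illustration, not Kafka's adopted configuration.

```xml
<!-- Hypothetical fragment: project packages first, then third-party, then java. -->
<module name="CustomImportOrder">
  <property name="customImportOrderRules"
            value="SPECIAL_IMPORTS###THIRD_PARTY_PACKAGE###STANDARD_JAVA_PACKAGE"/>
  <!-- Group 1: kafka.* and org.apache.kafka.* -->
  <property name="specialImportsRegExp" value="^(kafka|org\.apache\.kafka)\."/>
  <!-- Group 3: java.* and javax.* -->
  <property name="standardPackageRegExp" value="^javax?\."/>
  <property name="sortImportsInGroupAlphabetically" value="true"/>
  <property name="separateLineBetweenGroups" value="true"/>
</module>
```

An automatic formatter (item 2) would then be configured to emit the same grouping, so local IDE settings stop producing spurious import diffs.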



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10830) Kafka Producer API should throw unwrapped exceptions

2021-03-22 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17306695#comment-17306695
 ] 

Dongjin Lee commented on KAFKA-10830:
-

Hi [~guozhang], I see. It would be much better.

> Kafka Producer API should throw unwrapped exceptions
> 
>
> Key: KAFKA-10830
> URL: https://issues.apache.org/jira/browse/KAFKA-10830
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Reporter: Guozhang Wang
>Priority: Major
>
> Today in various KafkaProducer APIs (especially send and transaction related) 
> we wrap many of the underlying exceptions with a KafkaException. In some 
> nested calls we may even wrap it more than once. Although the initial goal is 
> to not expose the root cause directly to users, it also brings confusion to 
> advanced user's error handling that some KafkaException wrapped root cause 
> may be handled differently.
> Since all of those exceptions are public classes anyways (since one can still 
> get them via exception.root()) and they are all inheriting KafkaException, 
> I'd suggest we do not do any wrapping any more and throw the exception 
> directly. For those users who just capture all KafkaException and handle them 
> universally it is still compatible; but for those users who want to handle 
> exceptions differently it would introduce an easier way.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10830) Kafka Producer API should throw unwrapped exceptions

2021-03-17 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17303865#comment-17303865
 ] 

Dongjin Lee commented on KAFKA-10830:
-

[~guozhang] [~bchen225242] I inspected this problem (yes, I have also suffered 
from these 'turtles all the way down' exceptions) and have an idea:
 # If KafkaException is directly instantiated and the given cause is also (a 
kind of) KafkaException, skip the instantiation. (i.e., return the cause 
instance.)
 # If a child class is instantiated and the given cause is (a kind of) 
KafkaException, skip the direct cause. (i.e., instantiate with the cause of the 
cause instance.)

Do you think this strategy is reasonable? If so, I will open a PR.
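The two rules could be sketched as below, using stand-in classes rather than Kafka's real exception hierarchy (names reused for illustration only). Note that a Java constructor cannot return a different instance, so the sketch models rule 1 as a factory method:

```java
// Stand-in hierarchy demonstrating the two proposed unwrapping rules.
public class UnwrapSketch {

    static class KafkaException extends RuntimeException {
        KafkaException(Throwable cause) { super(cause); }

        // Rule 1: wrapping something that is already a KafkaException is a
        // no-op -- return the existing instance instead of nesting it.
        static KafkaException wrap(Throwable cause) {
            if (cause instanceof KafkaException)
                return (KafkaException) cause;
            return new KafkaException(cause);
        }
    }

    // A stand-in child class, e.g. a serialization error.
    static class SerializationException extends KafkaException {
        SerializationException(Throwable cause) {
            // Rule 2: skip an intermediate KafkaException, keep its root cause.
            super(cause instanceof KafkaException ? cause.getCause() : cause);
        }
    }

    public static void main(String[] args) {
        Throwable root = new IllegalStateException("root");

        // No 'turtles all the way down': double wrapping yields one layer.
        System.out.println(KafkaException.wrap(KafkaException.wrap(root)).getCause() == root);

        // The child class drops the intermediate wrapper.
        System.out.println(new SerializationException(new KafkaException(root)).getCause() == root);
    }
}
```

With both rules in place, `getCause()` on any thrown exception reaches the root cause in one step, regardless of how many wrapping call sites the exception passed through.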

> Kafka Producer API should throw unwrapped exceptions
> 
>
> Key: KAFKA-10830
> URL: https://issues.apache.org/jira/browse/KAFKA-10830
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Reporter: Guozhang Wang
>Priority: Major
>
> Today in various KafkaProducer APIs (especially send and transaction related) 
> we wrap many of the underlying exceptions with a KafkaException. In some 
> nested calls we may even wrap it more than once. Although the initial goal is 
> to not expose the root cause directly to users, it also brings confusion to 
> advanced user's error handling that some KafkaException wrapped root cause 
> may be handled differently.
> Since all of those exceptions are public classes anyways (since one can still 
> get them via exception.root()) and they are all inheriting KafkaException, 
> I'd suggest we do not do any wrapping any more and throw the exception 
> directly. For those users who just capture all KafkaException and handle them 
> universally it is still compatible; but for those users who want to handle 
> exceptions differently it would introduce an easier way.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12409) Leaking gauge in ReplicaManager

2021-03-03 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-12409:
---

 Summary: Leaking gauge in ReplicaManager
 Key: KAFKA-12409
 URL: https://issues.apache.org/jira/browse/KAFKA-12409
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Dongjin Lee
Assignee: Dongjin Lee


The 'ReassigningPartitions' metric is not removed while shutting down 
ReplicaManager; all other gauges are removed correctly.
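The leak pattern can be sketched with a toy registry (the real code registers gauges in a metrics library; the registry, names, and methods below are stand-ins for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

// Stand-in for a metrics registry: gauges registered at startup must all be
// removed in the shutdown path, or they linger after the component is gone.
public class GaugeLeakSketch {

    static final Map<String, Supplier<Integer>> REGISTRY = new LinkedHashMap<>();

    static void startup() {
        REGISTRY.put("LeaderCount", () -> 0);
        REGISTRY.put("PartitionCount", () -> 0);
        REGISTRY.put("ReassigningPartitions", () -> 0);
    }

    // Buggy shutdown: forgets one gauge, leaving it leaked in the registry.
    static void shutdownBuggy() {
        REGISTRY.remove("LeaderCount");
        REGISTRY.remove("PartitionCount");
    }

    // Fixed shutdown: every gauge registered at startup is removed.
    static void shutdownFixed() {
        REGISTRY.remove("LeaderCount");
        REGISTRY.remove("PartitionCount");
        REGISTRY.remove("ReassigningPartitions");
    }

    public static void main(String[] args) {
        startup();
        shutdownBuggy();
        System.out.println(REGISTRY.keySet()); // [ReassigningPartitions] leaked

        REGISTRY.clear();
        startup();
        shutdownFixed();
        System.out.println(REGISTRY.isEmpty()); // true
    }
}
```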



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12408) Document omitted ReplicaManager metrics

2021-03-03 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-12408:
---

 Summary: Document omitted ReplicaManager metrics
 Key: KAFKA-12408
 URL: https://issues.apache.org/jira/browse/KAFKA-12408
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Dongjin Lee
Assignee: Dongjin Lee


There are several problems in ReplicaManager metrics documentation:
 * kafka.server:type=ReplicaManager,name=OfflineReplicaCount is omitted.
 * kafka.server:type=ReplicaManager,name=FailedIsrUpdatesPerSec is omitted.
 * kafka.server:type=ReplicaManager,name=[PartitionCount|LeaderCount]'s 
descriptions are omitted: 'mostly even across brokers'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12407) Document omitted Controller Health Metrics

2021-03-03 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-12407:
---

 Summary: Document omitted Controller Health Metrics
 Key: KAFKA-12407
 URL: https://issues.apache.org/jira/browse/KAFKA-12407
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Dongjin Lee
Assignee: Dongjin Lee


[KIP-237|https://cwiki.apache.org/confluence/display/KAFKA/KIP-237%3A+More+Controller+Health+Metrics]
 introduced 3 controller health metrics like the following, but none of them are 
documented.
 * kafka.controller:type=ControllerEventManager,name=EventQueueSize
 * kafka.controller:type=ControllerEventManager,name=EventQueueTimeMs
 * 
kafka.controller:type=ControllerChannelManager,name=RequestRateAndQueueTimeMs,brokerId=\{broker-id}

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12359) Update Jetty to 11

2021-03-02 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17293489#comment-17293489
 ] 

Dongjin Lee commented on KAFKA-12359:
-

[~jrstacy] Thanks for reporting. Since this issue is about upgrading Jetty to 
11.x, I opened another issue for the vulnerability: KAFKA-12400.

> Update Jetty to 11
> --
>
> Key: KAFKA-12359
> URL: https://issues.apache.org/jira/browse/KAFKA-12359
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect, tools
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Major
>
> I found this problem when I was working on 
> [KAFKA-12324|https://issues.apache.org/jira/browse/KAFKA-12324].
> At present, Kafka Connect and Trogdor are using Jetty 9. Although Jetty's 
> stable release is 9.4, the Jetty community is now moving their focus to Jetty 
> 10 and 11, which requires Java 11 as a prerequisite. To minimize potential 
> security vulnerability, Kafka should migrate to Java 11 + Jetty 11 as soon 
> as Jetty 9.4 reaches the end of life. As a note, [Jetty 9.2 reached End of 
> Life in March 
> 2018|https://www.eclipse.org/lists/jetty-announce/msg00116.html], and 9.3 
> also did in [February 
> 2020|https://www.eclipse.org/lists/jetty-announce/msg00140.html].
> In other words, the necessity of moving to Java 11 is heavily affected by 
> Jetty's maintenance plan. Jetty 9.4 seems likely to remain supported for a 
> certain period of time, but it is worth being aware of these relationships 
> and having a migration plan.
> Updating Jetty to 11 is not resolved by simply changing the version. Along 
> with its API changes, we have to cope with additional dependencies, [Java EE 
> class name changes|https://webtide.com/renaming-from-javax-to-jakarta/], 
> making Jackson compatible with the changes, etc.
> As a note: for the difference between Jetty 10 and 11, see 
> [here|https://webtide.com/jetty-10-and-11-have-arrived/] - in short, "Jetty 
> 11 is identical to Jetty 10 except that the javax.* packages now conform to 
> the new jakarta.* namespace.".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12400) Upgrade jetty to fix CVE-2020-27223

2021-03-02 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-12400:
---

 Summary: Upgrade jetty to fix CVE-2020-27223
 Key: KAFKA-12400
 URL: https://issues.apache.org/jira/browse/KAFKA-12400
 Project: Kafka
  Issue Type: Improvement
Reporter: Dongjin Lee
Assignee: Dongjin Lee
 Fix For: 2.8.0, 2.7.1, 2.6.2


h3. CVE-2020-27223 Detail

In Eclipse Jetty 9.4.6.v20170531 to 9.4.36.v20210114 (inclusive), 10.0.0, and 
11.0.0 when Jetty handles a request containing multiple Accept headers with a 
large number of quality (i.e. q) parameters, the server may enter a denial of 
service (DoS) state due to high CPU usage processing those quality values, 
resulting in minutes of CPU time exhausted processing those quality values.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12399) Add log4j2 Appender

2021-03-01 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-12399:
---

 Summary: Add log4j2 Appender
 Key: KAFKA-12399
 URL: https://issues.apache.org/jira/browse/KAFKA-12399
 Project: Kafka
  Issue Type: Improvement
  Components: logging
Reporter: Dongjin Lee
Assignee: Dongjin Lee


As a follow-up to KAFKA-9366, we have to provide a log4j2 counterpart to 
log4j-appender.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-12389) Upgrade of netty-codec due to CVE-2021-21290

2021-03-01 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee reassigned KAFKA-12389:
---

Assignee: Dongjin Lee

> Upgrade of netty-codec due to CVE-2021-21290
> 
>
> Key: KAFKA-12389
> URL: https://issues.apache.org/jira/browse/KAFKA-12389
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Dominique Mongelli
>Assignee: Dongjin Lee
>Priority: Major
>
> Our security tool raised the following security flaw on kafka 2.7: 
> [https://nvd.nist.gov/vuln/detail/CVE-2021-21290]
> It is a vulnerability related to jar *netty-codec-4.1.51.Final.jar*.
> Looking at source code, the netty-codec in trunk and 2.7.0 branches are still 
> vulnerable.
> Based on netty issue tracker, the vulnerability is fixed in 4.1.59.Final: 
> https://github.com/netty/netty/security/advisories/GHSA-5mcr-gq6c-3hq2



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12358) Migrate to Java 11

2021-02-22 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17288436#comment-17288436
 ] 

Dongjin Lee commented on KAFKA-12358:
-

Hi Ismael,

Thanks for your kind advice. Of course, I don't expect we can do this migration 
in the near future. The reason I created this issue is to call attention 
to the relationship with the Jetty project. Let's wait for the 
community's feedback. Thanks again!

> Migrate to Java 11
> --
>
> Key: KAFKA-12358
> URL: https://issues.apache.org/jira/browse/KAFKA-12358
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: needs-kip
>
> I found this problem when I was working on 
> [KAFKA-12324|https://issues.apache.org/jira/browse/KAFKA-12324].
> At present, Kafka Connect and Trogdor are using Jetty 9. Although Jetty's 
> stable release is 9.4, the Jetty community is now moving their focus to Jetty 
> 10 and 11, which requires Java 11 as a prerequisite. To minimize potential 
> security vulnerability, Kafka should migrate to Java 11 + Jetty 11 as soon 
> as Jetty 9.4 reaches the end of life. As a note, [Jetty 9.2 reached End of 
> Life in March 
> 2018|https://www.eclipse.org/lists/jetty-announce/msg00116.html] and 9.3 also 
> did in [February 
> 2020|https://www.eclipse.org/lists/jetty-announce/msg00140.html].
> In other words, the necessity of moving to Java 11 is heavily affected by 
> Jetty's maintenance plan. Jetty 9.4 seems likely to remain supported for a 
> certain period of time, but it is worth being aware of these relationships 
> and having a migration plan beforehand.
> For the java-scala compatibility, we have no issue. The recommended Scala 
> versions to Java 11 are 2.13.0 and 2.12.4, and we are already using the latter 
> version. See: 
> https://docs.scala-lang.org/overviews/jdk-compatibility/overview.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12358) Migrate to Java 11

2021-02-22 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17288430#comment-17288430
 ] 

Dongjin Lee commented on KAFKA-12358:
-

https://github.com/apache/kafka/pull/10176

Here is a draft implementation of Java 11 migration. If anyone encounters this 
issue, please notify me - then I will submit the KIP. Thanks in advance.

> Migrate to Java 11
> --
>
> Key: KAFKA-12358
> URL: https://issues.apache.org/jira/browse/KAFKA-12358
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: needs-kip
>
> I found this problem when I was working on 
> [KAFKA-12324|https://issues.apache.org/jira/browse/KAFKA-12324].
> At present, Kafka Connect and Trogdor are using Jetty 9. Although Jetty's 
> stable release is 9.4, the Jetty community is now moving their focus to Jetty 
> 10 and 11, which requires Java 11 as a prerequisite. To minimize potential 
> security vulnerability, Kafka should migrate to Java 11 + Jetty 11 as soon 
> as Jetty 9.4 reaches the end of life. As a note, [Jetty 9.2 reached End of 
> Life in March 
> 2018|https://www.eclipse.org/lists/jetty-announce/msg00116.html] and 9.3 also 
> did in [February 
> 2020|https://www.eclipse.org/lists/jetty-announce/msg00140.html].
> In other words, the necessity of moving to Java 11 is heavily affected by 
> Jetty's maintenance plan. Jetty 9.4 seems likely to remain supported for a 
> certain period of time, but it is worth being aware of these relationships 
> and having a migration plan beforehand.
> For the java-scala compatibility, we have no issue. The recommended Scala 
> versions to Java 11 are 2.13.0 and 2.12.4, and we are already using the latter 
> version. See: 
> https://docs.scala-lang.org/overviews/jdk-compatibility/overview.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12359) Update Jetty to 11

2021-02-22 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-12359:
---

 Summary: Update Jetty to 11
 Key: KAFKA-12359
 URL: https://issues.apache.org/jira/browse/KAFKA-12359
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect, tools
Reporter: Dongjin Lee
Assignee: Dongjin Lee


I found this problem when I was working on 
[KAFKA-12324|https://issues.apache.org/jira/browse/KAFKA-12324].

At present, Kafka Connect and Trogdor are using Jetty 9. Although Jetty's 
stable release is 9.4, the Jetty community is now moving their focus to Jetty 
10 and 11, which requires Java 11 as a prerequisite. To minimize potential 
security vulnerability, Kafka should migrate to Java 11 + Jetty 11 as soon as 
Jetty 9.4 reaches the end of life. As a note, [Jetty 9.2 reached End of Life in 
March 2018|https://www.eclipse.org/lists/jetty-announce/msg00116.html], and 9.3 
also did in [February 
2020|https://www.eclipse.org/lists/jetty-announce/msg00140.html].

In other words, the necessity of moving to Java 11 is heavily affected by 
Jetty's maintenance plan. Jetty 9.4 seems likely to remain supported for a certain 
period of time, but it is worth being aware of these relationships and having a 
migration plan.

Updating Jetty to 11 is not resolved by simply changing the version. Along with 
its API changes, we have to cope with additional dependencies, [Java EE class 
name changes|https://webtide.com/renaming-from-javax-to-jakarta/], making 
Jackson compatible with the changes, etc.

As a note: for the difference between Jetty 10 and 11, see 
[here|https://webtide.com/jetty-10-and-11-have-arrived/] - in short, "Jetty 11 
is identical to Jetty 10 except that the javax.* packages now conform to the 
new jakarta.* namespace.".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12358) Migrate to Java 11

2021-02-22 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-12358:
---

 Summary: Migrate to Java 11
 Key: KAFKA-12358
 URL: https://issues.apache.org/jira/browse/KAFKA-12358
 Project: Kafka
  Issue Type: Improvement
Reporter: Dongjin Lee
Assignee: Dongjin Lee


I found this problem when I was working on 
[KAFKA-12324|https://issues.apache.org/jira/browse/KAFKA-12324].

At present, Kafka Connect and Trogdor are using Jetty 9. Although Jetty's 
stable release is 9.4, the Jetty community is now moving their focus to Jetty 
10 and 11, which requires Java 11 as a prerequisite. To minimize potential 
security vulnerability, Kafka should migrate to Java 11 + Jetty 11 as soon as 
Jetty 9.4 reaches the end of life. As a note, [Jetty 9.2 reached End of Life in 
March 2018|https://www.eclipse.org/lists/jetty-announce/msg00116.html] and 9.3 
also did in [February 
2020|https://www.eclipse.org/lists/jetty-announce/msg00140.html].

In other words, the necessity of moving to Java 11 is heavily affected by 
Jetty's maintenance plan. Jetty 9.4 seems likely to remain supported for a certain 
period of time, but it is worth being aware of these relationships and having a 
migration plan beforehand.

For Java-Scala compatibility, we have no issue. The recommended Scala 
versions for Java 11 are 2.13.0 and 2.12.4, and we are already using the latter 
version. See: 
https://docs.scala-lang.org/overviews/jdk-compatibility/overview.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12324) Upgrade jetty to fix CVE-2020-27218

2021-02-12 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283584#comment-17283584
 ] 

Dongjin Lee commented on KAFKA-12324:
-

I am now working on this issue. But the Jetty upgrade is a little more 
complicated than expected due to API changes.

> Upgrade jetty to fix CVE-2020-27218
> ---
>
> Key: KAFKA-12324
> URL: https://issues.apache.org/jira/browse/KAFKA-12324
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: John Stacy
>Assignee: Dongjin Lee
>Priority: Major
>
> h3. CVE-2020-27218 Detail
> In Eclipse Jetty version 9.4.0.RC0 to 9.4.34.v20201102, 10.0.0.alpha0 to 
> 10.0.0.beta2, and 11.0.0.alpha0 to 11.0.0.beta2, if GZIP request body 
> inflation is enabled and requests from different clients are multiplexed onto 
> a single connection, and if an attacker can send a request with a body that 
> is received entirely but not consumed by the application, then a subsequent 
> request on the same connection will see that body prepended to its body. The 
> attacker will not see any data but may inject data into the body of the 
> subsequent request.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-12324) Upgrade jetty to fix CVE-2020-27218

2021-02-12 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee reassigned KAFKA-12324:
---

Assignee: Dongjin Lee

> Upgrade jetty to fix CVE-2020-27218
> ---
>
> Key: KAFKA-12324
> URL: https://issues.apache.org/jira/browse/KAFKA-12324
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: John Stacy
>Assignee: Dongjin Lee
>Priority: Major
>
> h3. CVE-2020-27218 Detail
> In Eclipse Jetty version 9.4.0.RC0 to 9.4.34.v20201102, 10.0.0.alpha0 to 
> 10.0.0.beta2, and 11.0.0.alpha0 to 11.0.0.beta2, if GZIP request body 
> inflation is enabled and requests from different clients are multiplexed onto 
> a single connection, and if an attacker can send a request with a body that 
> is received entirely but not consumed by the application, then a subsequent 
> request on the same connection will see that body prepended to its body. The 
> attacker will not see any data but may inject data into the body of the 
> subsequent request.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12325) Is Kafka affected by Scala security vulnerability (CVE-2017-15288)?

2021-02-12 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283582#comment-17283582
 ] 

Dongjin Lee commented on KAFKA-12325:
-

At present, the default version for Scala 2.12 is 2.12.13, so it is not affected by 
CVE-2017-15288. It seems it would be better to close this issue.

> Is Kafka affected by Scala security vulnerability (CVE-2017-15288)?
> ---
>
> Key: KAFKA-12325
> URL: https://issues.apache.org/jira/browse/KAFKA-12325
> Project: Kafka
>  Issue Type: Bug
>Reporter: John Stacy
>Priority: Major
>
> h3. CVE-2017-15288 Detail
> The compilation daemon in Scala before 2.10.7, 2.11.x before 2.11.12, and 
> 2.12.x before 2.12.4 uses weak permissions for private files in 
> /tmp/scala-devel/${USER:shared}/scalac-compile-server-port, which allows 
> local users to write to arbitrary class files and consequently gain 
> privileges.
> h3. Scala security update
> https://www.scala-lang.org/news/security-update-nov17.html
> h3. Libraries Bundled with Kafka 2.7.0 with Scala 2.12
> kafka_2.12-2.7.0/libs/jackson-module-scala_2.12-2.10.5.jar
> kafka_2.12-2.7.0/libs/scala-collection-compat_2.12-2.2.0.jar
> kafka_2.12-2.7.0/libs/scala-java8-compat_2.12-0.9.1.jar
> kafka_2.12-2.7.0/libs/scala-logging_2.12-3.9.2.jar
> kafka_2.12-2.7.0/libs/scala-reflect-2.12.12.jar
> kafka_2.12-2.7.0/libs/scala-library-2.12.12.jar
> kafka_2.12-2.7.0/libs/kafka-streams-scala_2.12-2.7.0.jar
> It is unclear, but it appears that some of the 2.12 jars that Kafka is using 
> are not at the recommended version per the Scala security update. Perhaps the 
> ones that are not yet at 2.12.4 are not affected by the vulnerability? If 
> that is the case, please disregard, but if not, then the minimum version 
> should include the patch.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

