[jira] [Commented] (KAFKA-12847) Dockerfile needed for kafka system tests needs changes

2021-06-10 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17360767#comment-17360767
 ] 

Chia-Ping Tsai commented on KAFKA-12847:


>  It works as non-root, but as root user running the sysTests, they fail as 
> shown above. So, using a non-conflicting name such as UID_DUCKER 
> should allow it to work for root as well, which

Please take a look at my first comment: "Not only system tests but also UT/IT 
expect to be run by non-root." I'm not convinced that running the tests as root 
is a normal use case.

>  I believe is valid or maybe a note in README asking to run only as non-root 
> is also suitable.

this is a good idea.





> Dockerfile needed for kafka system tests needs changes
> --
>
> Key: KAFKA-12847
> URL: https://issues.apache.org/jira/browse/KAFKA-12847
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 2.8.0, 2.7.1
> Environment: Issue tested in environments below but is independent of 
> h/w arch. or Linux flavor: -
> 1.) RHEL-8.3 on x86_64 
> 2.) RHEL-8.3 on IBM Power (ppc64le)
> 3.) apache/kafka branch tested: trunk (master)
>Reporter: Abhijit Mane
>Assignee: Abhijit Mane
>Priority: Major
>  Labels: easyfix
> Attachments: Dockerfile.upstream, 截圖 2021-06-05 上午1.53.17.png
>
>
> Hello,
> I tried apache/kafka system tests as per documentation: -
> ([https://github.com/apache/kafka/tree/trunk/tests#readme|https://github.com/apache/kafka/tree/trunk/tests#readme_])
> =
>  PROBLEM
>  ~~
> 1.) As root user, clone kafka github repo and start "kafka system tests"
>  # git clone [https://github.com/apache/kafka.git]
>  # cd kafka
>  # ./gradlew clean systemTestLibs
>  # bash tests/docker/run_tests.sh
> 2.) Dockerfile issue - 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile]
> This file has an *UID* entry as shown below: -
>  ---
>  ARG *UID*="1000"
>  RUN useradd -u $*UID* ducker
> // {color:#de350b}*Error during docker build*{color} => useradd: UID 0 is not 
> unique, root user id is 0
>  ---
>  I ran everything as root which means the built-in bash environment variable 
> 'UID' always
> resolves to 0 and can't be changed. Hence, the docker build fails. The issue 
> should be seen even if run as non-root.
> 3.) Next, as root, as per README, I ran: -
> server:/kafka> *bash tests/docker/run_tests.sh*
> The ducker tool builds the container images & switches to user '*ducker*' 
> inside the container
> & maps kafka root dir ('kafka') from host to '/opt/kafka-dev' in the 
> container.
> Ref: 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak|https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak]
> Ex:  docker run -d *-v "${kafka_dir}:/opt/kafka-dev"* 
> This fails as the 'ducker' user has *no write permissions* to create files 
> under 'kafka' root dir. Hence, it needs to be made writeable.
> // *chmod -R a+w kafka* 
>  – needed as container is run as 'ducker' and needs write access since kafka 
> root volume from host is mapped to container as "/opt/kafka-dev" where the 
> 'ducker' user writes logs
>  =
> =
>  *FIXES needed*
>  ~
>  1.) Dockerfile - 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile]
>  Change 'UID' to '*UID_DUCKER*'.
> This won't conflict with built in bash env. var UID and the docker image 
> build should succeed.
>  ---
>  ARG *UID_DUCKER*="1000"
>  RUN useradd -u $*UID_DUCKER* ducker
> // *{color:#57d9a3}No Error{color}* => No conflict with built-in UID
>  ---
> 2.) README needs an update where we must ensure the kafka root dir from where 
> the tests 
>  are launched is writeable to allow the 'ducker' user to create results/logs.
>  # chmod -R a+w kafka
> With this, I was able to get the docker images built and system tests started 
> successfully.
>  =
> Also, I wonder whether or not upstream Dockerfile & System tests are part of 
> CI/CD and get tested for every PR. If so, this issue should have been caught.
>  
> *Question to kafka SME*
>  -
>  Do you believe this is a valid problem with the Dockerfile and the fix is 
> acceptable? 
>  Please let me know and I am happy to submit a PR with this fix.
> Thanks,
>  Abhijit



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12847) Dockerfile needed for kafka system tests needs changes

2021-06-13 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17362522#comment-17362522
 ] 

Chia-Ping Tsai commented on KAFKA-12847:


> Do we just close this out with maybe a note you want to make in README when 
> you get a chance requiring to run all tests as non-root (if that sounds 
> logical to you)?

Sounds good to me

> Dockerfile needed for kafka system tests needs changes
> --
>
> Key: KAFKA-12847
> URL: https://issues.apache.org/jira/browse/KAFKA-12847
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 2.8.0, 2.7.1
> Environment: Issue tested in environments below but is independent of 
> h/w arch. or Linux flavor: -
> 1.) RHEL-8.3 on x86_64 
> 2.) RHEL-8.3 on IBM Power (ppc64le)
> 3.) apache/kafka branch tested: trunk (master)
>Reporter: Abhijit Mane
>Assignee: Abhijit Mane
>Priority: Major
>  Labels: easyfix
> Attachments: Dockerfile.upstream, 截圖 2021-06-05 上午1.53.17.png
>
>
> Hello,
> I tried apache/kafka system tests as per documentation: -
> ([https://github.com/apache/kafka/tree/trunk/tests#readme|https://github.com/apache/kafka/tree/trunk/tests#readme_])
> =
>  PROBLEM
>  ~~
> 1.) As root user, clone kafka github repo and start "kafka system tests"
>  # git clone [https://github.com/apache/kafka.git]
>  # cd kafka
>  # ./gradlew clean systemTestLibs
>  # bash tests/docker/run_tests.sh
> 2.) Dockerfile issue - 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile]
> This file has an *UID* entry as shown below: -
>  ---
>  ARG *UID*="1000"
>  RUN useradd -u $*UID* ducker
> // {color:#de350b}*Error during docker build*{color} => useradd: UID 0 is not 
> unique, root user id is 0
>  ---
>  I ran everything as root which means the built-in bash environment variable 
> 'UID' always
> resolves to 0 and can't be changed. Hence, the docker build fails. The issue 
> should be seen even if run as non-root.
> 3.) Next, as root, as per README, I ran: -
> server:/kafka> *bash tests/docker/run_tests.sh*
> The ducker tool builds the container images & switches to user '*ducker*' 
> inside the container
> & maps kafka root dir ('kafka') from host to '/opt/kafka-dev' in the 
> container.
> Ref: 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak|https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak]
> Ex:  docker run -d *-v "${kafka_dir}:/opt/kafka-dev"* 
> This fails as the 'ducker' user has *no write permissions* to create files 
> under 'kafka' root dir. Hence, it needs to be made writeable.
> // *chmod -R a+w kafka* 
>  – needed as container is run as 'ducker' and needs write access since kafka 
> root volume from host is mapped to container as "/opt/kafka-dev" where the 
> 'ducker' user writes logs
>  =
> =
>  *FIXES needed*
>  ~
>  1.) Dockerfile - 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile]
>  Change 'UID' to '*UID_DUCKER*'.
> This won't conflict with built in bash env. var UID and the docker image 
> build should succeed.
>  ---
>  ARG *UID_DUCKER*="1000"
>  RUN useradd -u $*UID_DUCKER* ducker
> // *{color:#57d9a3}No Error{color}* => No conflict with built-in UID
>  ---
> 2.) README needs an update where we must ensure the kafka root dir from where 
> the tests 
>  are launched is writeable to allow the 'ducker' user to create results/logs.
>  # chmod -R a+w kafka
> With this, I was able to get the docker images built and system tests started 
> successfully.
>  =
> Also, I wonder whether or not upstream Dockerfile & System tests are part of 
> CI/CD and get tested for every PR. If so, this issue should have been caught.
>  
> *Question to kafka SME*
>  -
>  Do you believe this is a valid problem with the Dockerfile and the fix is 
> acceptable? 
>  Please let me know and I am happy to submit a PR with this fix.
> Thanks,
>  Abhijit



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12662) add unit test for ProducerPerformance

2021-06-17 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12662.

Fix Version/s: 3.0.0
   Resolution: Fixed

> add unit test for ProducerPerformance
> -
>
> Key: KAFKA-12662
> URL: https://issues.apache.org/jira/browse/KAFKA-12662
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chun-Hao Tang
>Priority: Major
> Fix For: 3.0.0
>
>
> ProducerPerformance is a useful tool which offers an official way to test 
> producer performance. Hence, it would be better to add enough tests for it. 
> (In fact, it has no unit tests currently.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12847) Dockerfile needed for kafka system tests needs changes

2021-06-02 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17355801#comment-17355801
 ] 

Chia-Ping Tsai commented on KAFKA-12847:


{quote}
ARG UID="1000" ==> Has no effect as UID value is read-only in bash
{quote}

The UID can be overridden by 
https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak#L231

{quote}
// Error during docker build => useradd: UID 0 is not unique, root user id is 0
{quote}

Could you check the UID on your machine? For example: `echo $UID`. It seems 
your local machine does not have the UID variable set.



> Dockerfile needed for kafka system tests needs changes
> --
>
> Key: KAFKA-12847
> URL: https://issues.apache.org/jira/browse/KAFKA-12847
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 2.8.0, 2.7.1
> Environment: Issue tested in environments below but is independent of 
> h/w arch. or Linux flavor: -
> 1.) RHEL-8.3 on x86_64 
> 2.) RHEL-8.3 on IBM Power (ppc64le)
> 3.) apache/kafka branch tested: trunk (master)
>Reporter: Abhijit Mane
>Assignee: Abhijit Mane
>Priority: Major
>  Labels: easyfix
> Attachments: Dockerfile.upstream
>
>
> Hello,
> I tried apache/kafka system tests as per documentation: -
> ([https://github.com/apache/kafka/tree/trunk/tests#readme|https://github.com/apache/kafka/tree/trunk/tests#readme_])
> =
>  PROBLEM
>  ~~
> 1.) As root user, clone kafka github repo and start "kafka system tests"
>  # git clone [https://github.com/apache/kafka.git]
>  # cd kafka
>  # ./gradlew clean systemTestLibs
>  # bash tests/docker/run_tests.sh
> 2.) Dockerfile issue - 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile]
> This file has an *UID* entry as shown below: -
>  ---
>  ARG *UID*="1000"
>  RUN useradd -u $*UID* ducker
> // {color:#de350b}*Error during docker build*{color} => useradd: UID 0 is not 
> unique, root user id is 0
>  ---
>  I ran everything as root which means the built-in bash environment variable 
> 'UID' always
> resolves to 0 and can't be changed. Hence, the docker build fails. The issue 
> should be seen even if run as non-root.
> 3.) Next, as root, as per README, I ran: -
> server:/kafka> *bash tests/docker/run_tests.sh*
> The ducker tool builds the container images & switches to user '*ducker*' 
> inside the container
> & maps kafka root dir ('kafka') from host to '/opt/kafka-dev' in the 
> container.
> Ref: 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak|https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak]
> Ex:  docker run -d *-v "${kafka_dir}:/opt/kafka-dev"* 
> This fails as the 'ducker' user has *no write permissions* to create files 
> under 'kafka' root dir. Hence, it needs to be made writeable.
> // *chmod -R a+w kafka* 
>  – needed as container is run as 'ducker' and needs write access since kafka 
> root volume from host is mapped to container as "/opt/kafka-dev" where the 
> 'ducker' user writes logs
>  =
> =
>  *FIXES needed*
>  ~
>  1.) Dockerfile - 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile]
>  Change 'UID' to '*UID_DUCKER*'.
> This won't conflict with built in bash env. var UID and the docker image 
> build should succeed.
>  ---
>  ARG *UID_DUCKER*="1000"
>  RUN useradd -u $*UID_DUCKER* ducker
> // *{color:#57d9a3}No Error{color}* => No conflict with built-in UID
>  ---
> 2.) README needs an update where we must ensure the kafka root dir from where 
> the tests 
>  are launched is writeable to allow the 'ducker' user to create results/logs.
>  # chmod -R a+w kafka
> With this, I was able to get the docker images built and system tests started 
> successfully.
>  =
> Also, I wonder whether or not upstream Dockerfile & System tests are part of 
> CI/CD and get tested for every PR. If so, this issue should have been caught.
>  
> *Question to kafka SME*
>  -
>  Do you believe this is a valid problem with the Dockerfile and the fix is 
> acceptable? 
>  Please let me know and I am happy to submit a PR with this fix.
> Thanks,
>  Abhijit



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12847) Dockerfile needed for kafka system tests needs changes

2021-06-07 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358384#comment-17358384
 ] 

Chia-Ping Tsai commented on KAFKA-12847:


> // Error during docker build => useradd: UID 0 is not unique, root user id is 0

Not sure why your UID is 0 when you run `bash tests/docker/run_tests.sh`. The 
UID variable should exist in the bash environment (see 
https://www.gnu.org/software/bash/manual/bash.html#Bash-Variables). 

> Just curious to know if you did not face this issue and what I may be missing.

Our CI runs the script (run_tests.sh) on Ubuntu 20.04, Ubuntu 21.04, CentOS 7, 
and macOS M1. All work well. 

> Dockerfile needed for kafka system tests needs changes
> --
>
> Key: KAFKA-12847
> URL: https://issues.apache.org/jira/browse/KAFKA-12847
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 2.8.0, 2.7.1
> Environment: Issue tested in environments below but is independent of 
> h/w arch. or Linux flavor: -
> 1.) RHEL-8.3 on x86_64 
> 2.) RHEL-8.3 on IBM Power (ppc64le)
> 3.) apache/kafka branch tested: trunk (master)
>Reporter: Abhijit Mane
>Assignee: Abhijit Mane
>Priority: Major
>  Labels: easyfix
> Attachments: Dockerfile.upstream, 截圖 2021-06-05 上午1.53.17.png
>
>
> Hello,
> I tried apache/kafka system tests as per documentation: -
> ([https://github.com/apache/kafka/tree/trunk/tests#readme|https://github.com/apache/kafka/tree/trunk/tests#readme_])
> =
>  PROBLEM
>  ~~
> 1.) As root user, clone kafka github repo and start "kafka system tests"
>  # git clone [https://github.com/apache/kafka.git]
>  # cd kafka
>  # ./gradlew clean systemTestLibs
>  # bash tests/docker/run_tests.sh
> 2.) Dockerfile issue - 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile]
> This file has an *UID* entry as shown below: -
>  ---
>  ARG *UID*="1000"
>  RUN useradd -u $*UID* ducker
> // {color:#de350b}*Error during docker build*{color} => useradd: UID 0 is not 
> unique, root user id is 0
>  ---
>  I ran everything as root which means the built-in bash environment variable 
> 'UID' always
> resolves to 0 and can't be changed. Hence, the docker build fails. The issue 
> should be seen even if run as non-root.
> 3.) Next, as root, as per README, I ran: -
> server:/kafka> *bash tests/docker/run_tests.sh*
> The ducker tool builds the container images & switches to user '*ducker*' 
> inside the container
> & maps kafka root dir ('kafka') from host to '/opt/kafka-dev' in the 
> container.
> Ref: 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak|https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak]
> Ex:  docker run -d *-v "${kafka_dir}:/opt/kafka-dev"* 
> This fails as the 'ducker' user has *no write permissions* to create files 
> under 'kafka' root dir. Hence, it needs to be made writeable.
> // *chmod -R a+w kafka* 
>  – needed as container is run as 'ducker' and needs write access since kafka 
> root volume from host is mapped to container as "/opt/kafka-dev" where the 
> 'ducker' user writes logs
>  =
> =
>  *FIXES needed*
>  ~
>  1.) Dockerfile - 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile]
>  Change 'UID' to '*UID_DUCKER*'.
> This won't conflict with built in bash env. var UID and the docker image 
> build should succeed.
>  ---
>  ARG *UID_DUCKER*="1000"
>  RUN useradd -u $*UID_DUCKER* ducker
> // *{color:#57d9a3}No Error{color}* => No conflict with built-in UID
>  ---
> 2.) README needs an update where we must ensure the kafka root dir from where 
> the tests 
>  are launched is writeable to allow the 'ducker' user to create results/logs.
>  # chmod -R a+w kafka
> With this, I was able to get the docker images built and system tests started 
> successfully.
>  =
> Also, I wonder whether or not upstream Dockerfile & System tests are part of 
> CI/CD and get tested for every PR. If so, this issue should have been caught.
>  
> *Question to kafka SME*
>  -
>  Do you believe this is a valid problem with the Dockerfile and the fix is 
> acceptable? 
>  Please let me know and I am happy to submit a PR with this fix.
> Thanks,
>  Abhijit



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12847) Dockerfile needed for kafka system tests needs changes

2021-06-04 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17357529#comment-17357529
 ] 

Chia-Ping Tsai commented on KAFKA-12847:


{quote}
Are you able to run sysTests as is after cloning and without making any changes 
as the README states?
{quote}

Yep, we have a CI job that runs the system tests periodically (please see the attachment)


> Dockerfile needed for kafka system tests needs changes
> --
>
> Key: KAFKA-12847
> URL: https://issues.apache.org/jira/browse/KAFKA-12847
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 2.8.0, 2.7.1
> Environment: Issue tested in environments below but is independent of 
> h/w arch. or Linux flavor: -
> 1.) RHEL-8.3 on x86_64 
> 2.) RHEL-8.3 on IBM Power (ppc64le)
> 3.) apache/kafka branch tested: trunk (master)
>Reporter: Abhijit Mane
>Assignee: Abhijit Mane
>Priority: Major
>  Labels: easyfix
> Attachments: Dockerfile.upstream, 截圖 2021-06-05 上午1.53.17.png
>
>
> Hello,
> I tried apache/kafka system tests as per documentation: -
> ([https://github.com/apache/kafka/tree/trunk/tests#readme|https://github.com/apache/kafka/tree/trunk/tests#readme_])
> =
>  PROBLEM
>  ~~
> 1.) As root user, clone kafka github repo and start "kafka system tests"
>  # git clone [https://github.com/apache/kafka.git]
>  # cd kafka
>  # ./gradlew clean systemTestLibs
>  # bash tests/docker/run_tests.sh
> 2.) Dockerfile issue - 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile]
> This file has an *UID* entry as shown below: -
>  ---
>  ARG *UID*="1000"
>  RUN useradd -u $*UID* ducker
> // {color:#de350b}*Error during docker build*{color} => useradd: UID 0 is not 
> unique, root user id is 0
>  ---
>  I ran everything as root which means the built-in bash environment variable 
> 'UID' always
> resolves to 0 and can't be changed. Hence, the docker build fails. The issue 
> should be seen even if run as non-root.
> 3.) Next, as root, as per README, I ran: -
> server:/kafka> *bash tests/docker/run_tests.sh*
> The ducker tool builds the container images & switches to user '*ducker*' 
> inside the container
> & maps kafka root dir ('kafka') from host to '/opt/kafka-dev' in the 
> container.
> Ref: 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak|https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak]
> Ex:  docker run -d *-v "${kafka_dir}:/opt/kafka-dev"* 
> This fails as the 'ducker' user has *no write permissions* to create files 
> under 'kafka' root dir. Hence, it needs to be made writeable.
> // *chmod -R a+w kafka* 
>  – needed as container is run as 'ducker' and needs write access since kafka 
> root volume from host is mapped to container as "/opt/kafka-dev" where the 
> 'ducker' user writes logs
>  =
> =
>  *FIXES needed*
>  ~
>  1.) Dockerfile - 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile]
>  Change 'UID' to '*UID_DUCKER*'.
> This won't conflict with built in bash env. var UID and the docker image 
> build should succeed.
>  ---
>  ARG *UID_DUCKER*="1000"
>  RUN useradd -u $*UID_DUCKER* ducker
> // *{color:#57d9a3}No Error{color}* => No conflict with built-in UID
>  ---
> 2.) README needs an update where we must ensure the kafka root dir from where 
> the tests 
>  are launched is writeable to allow the 'ducker' user to create results/logs.
>  # chmod -R a+w kafka
> With this, I was able to get the docker images built and system tests started 
> successfully.
>  =
> Also, I wonder whether or not upstream Dockerfile & System tests are part of 
> CI/CD and get tested for every PR. If so, this issue should have been caught.
>  
> *Question to kafka SME*
>  -
>  Do you believe this is a valid problem with the Dockerfile and the fix is 
> acceptable? 
>  Please let me know and I am happy to submit a PR with this fix.
> Thanks,
>  Abhijit



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12847) Dockerfile needed for kafka system tests needs changes

2021-06-04 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-12847:
---
Attachment: 截圖 2021-06-05 上午1.53.17.png

> Dockerfile needed for kafka system tests needs changes
> --
>
> Key: KAFKA-12847
> URL: https://issues.apache.org/jira/browse/KAFKA-12847
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 2.8.0, 2.7.1
> Environment: Issue tested in environments below but is independent of 
> h/w arch. or Linux flavor: -
> 1.) RHEL-8.3 on x86_64 
> 2.) RHEL-8.3 on IBM Power (ppc64le)
> 3.) apache/kafka branch tested: trunk (master)
>Reporter: Abhijit Mane
>Assignee: Abhijit Mane
>Priority: Major
>  Labels: easyfix
> Attachments: Dockerfile.upstream, 截圖 2021-06-05 上午1.53.17.png
>
>
> Hello,
> I tried apache/kafka system tests as per documentation: -
> ([https://github.com/apache/kafka/tree/trunk/tests#readme|https://github.com/apache/kafka/tree/trunk/tests#readme_])
> =
>  PROBLEM
>  ~~
> 1.) As root user, clone kafka github repo and start "kafka system tests"
>  # git clone [https://github.com/apache/kafka.git]
>  # cd kafka
>  # ./gradlew clean systemTestLibs
>  # bash tests/docker/run_tests.sh
> 2.) Dockerfile issue - 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile]
> This file has an *UID* entry as shown below: -
>  ---
>  ARG *UID*="1000"
>  RUN useradd -u $*UID* ducker
> // {color:#de350b}*Error during docker build*{color} => useradd: UID 0 is not 
> unique, root user id is 0
>  ---
>  I ran everything as root which means the built-in bash environment variable 
> 'UID' always
> resolves to 0 and can't be changed. Hence, the docker build fails. The issue 
> should be seen even if run as non-root.
> 3.) Next, as root, as per README, I ran: -
> server:/kafka> *bash tests/docker/run_tests.sh*
> The ducker tool builds the container images & switches to user '*ducker*' 
> inside the container
> & maps kafka root dir ('kafka') from host to '/opt/kafka-dev' in the 
> container.
> Ref: 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak|https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak]
> Ex:  docker run -d *-v "${kafka_dir}:/opt/kafka-dev"* 
> This fails as the 'ducker' user has *no write permissions* to create files 
> under 'kafka' root dir. Hence, it needs to be made writeable.
> // *chmod -R a+w kafka* 
>  – needed as container is run as 'ducker' and needs write access since kafka 
> root volume from host is mapped to container as "/opt/kafka-dev" where the 
> 'ducker' user writes logs
>  =
> =
>  *FIXES needed*
>  ~
>  1.) Dockerfile - 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile]
>  Change 'UID' to '*UID_DUCKER*'.
> This won't conflict with built in bash env. var UID and the docker image 
> build should succeed.
>  ---
>  ARG *UID_DUCKER*="1000"
>  RUN useradd -u $*UID_DUCKER* ducker
> // *{color:#57d9a3}No Error{color}* => No conflict with built-in UID
>  ---
> 2.) README needs an update where we must ensure the kafka root dir from where 
> the tests 
>  are launched is writeable to allow the 'ducker' user to create results/logs.
>  # chmod -R a+w kafka
> With this, I was able to get the docker images built and system tests started 
> successfully.
>  =
> Also, I wonder whether or not upstream Dockerfile & System tests are part of 
> CI/CD and get tested for every PR. If so, this issue should have been caught.
>  
> *Question to kafka SME*
>  -
>  Do you believe this is a valid problem with the Dockerfile and the fix is 
> acceptable? 
>  Please let me know and I am happy to submit a PR with this fix.
> Thanks,
>  Abhijit



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12847) Dockerfile needed for kafka system tests needs changes

2021-06-04 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17357532#comment-17357532
 ] 

Chia-Ping Tsai commented on KAFKA-12847:


{quote}
string literal "UID" itself whose value can't be altered in bash.
{quote}

Pardon me, I failed to catch your point. The UID set by ducker-ak 
(https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak#L231) is used 
to change the UID used by the container. We align the UID inside and outside the 
container so that the container (used to run the system tests) can modify the 
mounted folder.

 

> Dockerfile needed for kafka system tests needs changes
> --
>
> Key: KAFKA-12847
> URL: https://issues.apache.org/jira/browse/KAFKA-12847
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 2.8.0, 2.7.1
> Environment: Issue tested in environments below but is independent of 
> h/w arch. or Linux flavor: -
> 1.) RHEL-8.3 on x86_64 
> 2.) RHEL-8.3 on IBM Power (ppc64le)
> 3.) apache/kafka branch tested: trunk (master)
>Reporter: Abhijit Mane
>Assignee: Abhijit Mane
>Priority: Major
>  Labels: easyfix
> Attachments: Dockerfile.upstream, 截圖 2021-06-05 上午1.53.17.png
>
>
> Hello,
> I tried apache/kafka system tests as per documentation: -
> ([https://github.com/apache/kafka/tree/trunk/tests#readme|https://github.com/apache/kafka/tree/trunk/tests#readme_])
> =
>  PROBLEM
>  ~~
> 1.) As root user, clone kafka github repo and start "kafka system tests"
>  # git clone [https://github.com/apache/kafka.git]
>  # cd kafka
>  # ./gradlew clean systemTestLibs
>  # bash tests/docker/run_tests.sh
> 2.) Dockerfile issue - 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile]
> This file has an *UID* entry as shown below: -
>  ---
>  ARG *UID*="1000"
>  RUN useradd -u $*UID* ducker
> // {color:#de350b}*Error during docker build*{color} => useradd: UID 0 is not 
> unique, root user id is 0
>  ---
>  I ran everything as root which means the built-in bash environment variable 
> 'UID' always
> resolves to 0 and can't be changed. Hence, the docker build fails. The issue 
> should be seen even if run as non-root.
> 3.) Next, as root, as per README, I ran: -
> server:/kafka> *bash tests/docker/run_tests.sh*
> The ducker tool builds the container images & switches to user '*ducker*' 
> inside the container
> & maps kafka root dir ('kafka') from host to '/opt/kafka-dev' in the 
> container.
> Ref: 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak|https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak]
> Ex:  docker run -d *-v "${kafka_dir}:/opt/kafka-dev"* 
> This fails as the 'ducker' user has *no write permissions* to create files 
> under 'kafka' root dir. Hence, it needs to be made writeable.
> // *chmod -R a+w kafka* 
>  – needed as container is run as 'ducker' and needs write access since kafka 
> root volume from host is mapped to container as "/opt/kafka-dev" where the 
> 'ducker' user writes logs
>  =
> =
>  *FIXES needed*
>  ~
>  1.) Dockerfile - 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile]
>  Change 'UID' to '*UID_DUCKER*'.
> This won't conflict with built in bash env. var UID and the docker image 
> build should succeed.
>  ---
>  ARG *UID_DUCKER*="1000"
>  RUN useradd -u $*UID_DUCKER* ducker
> // *{color:#57d9a3}No Error{color}* => No conflict with built-in UID
>  ---
> 2.) README needs an update where we must ensure the kafka root dir from where 
> the tests 
>  are launched is writeable to allow the 'ducker' user to create results/logs.
>  # chmod -R a+w kafka
> With this, I was able to get the docker images built and system tests started 
> successfully.
>  =
> Also, I wonder whether or not upstream Dockerfile & System tests are part of 
> CI/CD and get tested for every PR. If so, this issue should have been caught.
>  
> *Question to kafka SME*
>  -
>  Do you believe this is a valid problem with the Dockerfile and the fix is 
> acceptable? 
>  Please let me know and I am happy to submit a PR with this fix.
> Thanks,
>  Abhijit



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12847) Dockerfile needed for kafka system tests needs changes

2021-05-31 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17354293#comment-17354293
 ] 

Chia-Ping Tsai commented on KAFKA-12847:


Could you try using a non-root user to run the Kafka tests? IIRC, not only the 
system tests but also the UT/IT expect to be run as non-root. 

 

>  Also, I wonder whether or not upstream Dockerfile & System tests are part of 
>CI/CD and get tested for every PR. If so, this issue should have been caught.

We don't run system tests for each PR since it is too expensive. Instead, we 
make sure all system tests pass before release.

> Dockerfile needed for kafka system tests needs changes
> --
>
> Key: KAFKA-12847
> URL: https://issues.apache.org/jira/browse/KAFKA-12847
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 2.8.0, 2.7.1
> Environment: Issue tested in environments below but is independent of 
> h/w arch. or Linux flavor: -
> 1.) RHEL-8.3 on x86_64 
> 2.) RHEL-8.3 on IBM Power (ppc64le)
> 3.) apache/kafka branch tested: trunk (master)
>Reporter: Abhijit Mane
>Assignee: Abhijit Mane
>Priority: Major
>  Labels: easyfix
> Attachments: Dockerfile.upstream
>
>
> Hello,
> I tried apache/kafka system tests as per documentation: -
> ([https://github.com/apache/kafka/tree/trunk/tests#readme|https://github.com/apache/kafka/tree/trunk/tests#readme_])
> =
>  PROBLEM
>  ~~
> 1.) As root user, clone kafka github repo and start "kafka system tests"
>  # git clone [https://github.com/apache/kafka.git]
>  # cd kafka
>  # ./gradlew clean systemTestLibs
>  # bash tests/docker/run_tests.sh
> 2.) Dockerfile issue - 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile]
> This file has an *UID* entry as shown below: -
>  ---
>  ARG *UID*="1000"
>  RUN useradd -u $*UID* ducker
> // {color:#de350b}*Error during docker build*{color} => useradd: UID 0 is not 
> unique, root user id is 0
>  ---
>  I ran everything as root which means the built-in bash environment variable 
> 'UID' always
> resolves to 0 and can't be changed. Hence, the docker build fails. The issue 
> should be seen even if run as non-root.
> 3.) Next, as root, as per README, I ran: -
> server:/kafka> *bash tests/docker/run_tests.sh*
> The ducker tool builds the container images & switches to user '*ducker*' 
> inside the container
> & maps kafka root dir ('kafka') from host to '/opt/kafka-dev' in the 
> container.
> Ref: 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak|https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak]
> Ex:  docker run -d *-v "${kafka_dir}:/opt/kafka-dev"* 
> This fails as the 'ducker' user has *no write permissions* to create files 
> under 'kafka' root dir. Hence, it needs to be made writeable.
> // *chmod -R a+w kafka* 
>  – needed as container is run as 'ducker' and needs write access since kafka 
> root volume from host is mapped to container as "/opt/kafka-dev" where the 
> 'ducker' user writes logs
>  =
> =
>  *FIXES needed*
>  ~
>  1.) Dockerfile - 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile]
>  Change 'UID' to '*UID_DUCKER*'.
> This won't conflict with built in bash env. var UID and the docker image 
> build should succeed.
>  ---
>  ARG *UID_DUCKER*="1000"
>  RUN useradd -u $*UID_DUCKER* ducker
> // *{color:#57d9a3}No Error{color}* => No conflict with built-in UID
>  ---
> 2.) README needs an update where we must ensure the kafka root dir from where 
> the tests 
>  are launched is writeable to allow the 'ducker' user to create results/logs.
>  # chmod -R a+w kafka
> With this, I was able to get the docker images built and system tests started 
> successfully.
>  =
> Also, I wonder whether or not upstream Dockerfile & System tests are part of 
> CI/CD and get tested for every PR. If so, this issue should have been caught.
>  
> *Question to kafka SME*
>  -
>  Do you believe this is a valid problem with the Dockerfile and the fix is 
> acceptable? 
>  Please let me know and I am happy to submit a PR with this fix.
> Thanks,
>  Abhijit



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-13030) FindCoordinators batching causes slow poll when requesting older broker

2021-07-02 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-13030:
---
Summary: FindCoordinators batching causes slow poll when requesting older 
broker  (was: FindCoordinators batching make slow poll when requesting older 
broker)

> FindCoordinators batching causes slow poll when requesting older broker
> ---
>
> Key: KAFKA-13030
> URL: https://issues.apache.org/jira/browse/KAFKA-13030
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java#L888
> {quote}
> if (e instanceof NoBatchedFindCoordinatorsException) {
> batchFindCoordinator = false;
> clearFindCoordinatorFuture();
> lookupCoordinator();
> return;
> } 
> {quote}
> The current request future is NOT updated, so it can't be completed until 
> timeout. This causes a slow poll when users first poll data from an older broker.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13030) FindCoordinators batching make slow poll when requesting older broker

2021-07-02 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-13030:
--

 Summary: FindCoordinators batching make slow poll when requesting 
older broker
 Key: KAFKA-13030
 URL: https://issues.apache.org/jira/browse/KAFKA-13030
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java#L888

{quote}
if (e instanceof NoBatchedFindCoordinatorsException) {
batchFindCoordinator = false;
clearFindCoordinatorFuture();
lookupCoordinator();
return;
} 
{quote}


The current request future is NOT updated, so it can't be completed until 
timeout. This causes a slow poll when users first poll data from an older broker.
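
As an illustration of the pitfall, using plain java.util.concurrent types rather 
than Kafka's internal RequestFuture API: the non-batched retry has to be tied back 
to the future the caller is already blocked on, otherwise the caller waits out its 
full timeout even though the retry succeeds. The helper below is hypothetical.

{code:java}
import java.util.concurrent.CompletableFuture;

final class FallbackChaining {
    // Hypothetical helper: propagate the outcome of the retried (non-batched)
    // lookup to the future the original caller is still waiting on.
    static <T> void completeFromRetry(CompletableFuture<T> pending, CompletableFuture<T> retried) {
        retried.whenComplete((value, error) -> {
            if (error != null) {
                pending.completeExceptionally(error);
            } else {
                pending.complete(value);
            }
        });
    }
}
{code}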



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12999) NPE when accessing RecordHeader.key() concurrently

2021-06-26 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17370142#comment-17370142
 ] 

Chia-Ping Tsai commented on KAFKA-12999:


Thanks for this ticket. As juma explained, the class is not designed to be 
thread-safe.

As modern Java optimizes the uncontended, single-threaded case in synchronized 
blocks, adding synchronization to make it thread-safe seems fine to me. Of course, 
it needs a KIP :)
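
For illustration, a minimal self-contained sketch of that idea (an illustrative 
class, not Kafka's RecordHeader itself, and not necessarily the change that would 
land upstream): synchronizing the lazy initialization removes the race between 
reading keyBuffer and nulling it.

{code:java}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative stand-in for the lazily-initialized header key described in the ticket.
final class LazyHeaderKey {
    private ByteBuffer keyBuffer;
    private String key;

    LazyHeaderKey(ByteBuffer keyBuffer) {
        this.keyBuffer = keyBuffer;
    }

    // With the method synchronized, no thread can observe keyBuffer == null
    // while key is still unset.
    synchronized String key() {
        if (key == null) {
            key = StandardCharsets.UTF_8.decode(keyBuffer.duplicate()).toString();
            keyBuffer = null;
        }
        return key;
    }
}
{code}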


> NPE when accessing RecordHeader.key() concurrently
> --
>
> Key: KAFKA-12999
> URL: https://issues.apache.org/jira/browse/KAFKA-12999
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.8.0
>Reporter: Antonio Tomac
>Priority: Minor
>
> h2. Summary
> After upgrading clients to {{2.8.0}}, reading {{ConsumerRecord}}'s header 
> keys started resulting in occasional {{java.lang.NullPointerException}} in 
> case of concurrent access from multiple(2) threads.
> h2. Where
> NPE happens here 
> [RecordHeader.java:45|https://github.com/apache/kafka/blob/2.8.0/clients/src/main/java/org/apache/kafka/common/header/internals/RecordHeader.java#L45]:
> {code:java}
> public String key() {
> if (key == null) {
> key = Utils.utf8(keyBuffer, keyBuffer.remaining()); // NPE here 
> keyBuffer = null;
> }
> return key;
> }
> {code}
> h2. When/why
> Cause of issue is introduced by changes of KAFKA-10438 to avoid unnecessary 
> creation of key's *{color:#0747a6}{{String}}{color}* when it might never be 
> used.
>  It is good optimization but this *lazy* initialization of field 
> {{RecordHeader.key}} creates a problem if being accessed/initialized by 2 
> threads concurrently since it's now no longer read-only operation and there 
> is race between initializing {color:#0747a6}*{{key}}*{color} and nullifying 
> {color:#0747a6}*{{keyBuffer}}*{color}
> h2. Simple workaround
> Upon consuming record(s) and before passing 
> {color:#0747a6}*{{ConsumerRecord}}*{color} to multiple processing threads, 
> eagerly initialize all header keys by iterating through headers and invoking 
> {color:#0747a6}*{{key()}}*{color} or even 
> {color:#0747a6}*{{ConsumerRecord.headers().hashCode()}}*{color} which will 
> initialize all keys (and header values too)
> h2. Consequences
> Current implementation renders RecordHeader not thread-safe for read-only 
> access.
> h2. Reproducibility
> With enough iterations it's always possible to reproduce (at least on my 
> local)
>  Here is minimal snippet to reproduce:
> {code:java}
> @Test
> public void testConcurrentKeyInit() throws ExecutionException, 
> InterruptedException {
> ByteBuffer keyBuffer = 
> ByteBuffer.wrap("key".getBytes(StandardCharsets.UTF_8));
> ByteBuffer valueBuffer = 
> ByteBuffer.wrap("value".getBytes(StandardCharsets.UTF_8));
> ExecutorService executorService = Executors.newSingleThreadExecutor();
> try {
> for (int i = 0; i < 1_000_000; i++) {
> RecordHeader header = new RecordHeader(keyBuffer, valueBuffer);
> Future future = executorService.submit(header::key);
> assertEquals("key", header.key());
> assertEquals("key", future.get());
> }
> } finally {
> executorService.shutdown();
> }
> }
> {code}
> h2. Possible solution #1
> Leave implementation as-is but somehow document this to users.
> h2. Possible solution #2
> Add some concurrency primitives to current implementation
> *   simply adding {color:#0747a6}*{{synchronized}}*{color} on method 
> *{color:#0747a6}{{key()}}{color}* (and on *{color:#0747a6}{{value()}}{color}* 
> too) gives correct behaviour avoiding race-conditions. 
> * JMH benchmark comparing *{color:#0747a6}{{key()}}{color}* with and without 
> {color:#0747a6}*{{synchronized}}*{color} showed no significant performance 
> penalty
> {code}
> Benchmark  Mode  Cnt   Score   Error  Units
> RecordHeaderBenchmark.key  avgt15  31.308 ± 7.862  ns/op
> RecordHeaderBenchmark.synchronizedKey  avgt15  31.853 ± 7.096  ns/op
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-13030) FindCoordinators batching causes slow poll when requesting older broker

2021-07-04 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-13030.

Resolution: Duplicate

close as duplicate 
(https://github.com/apache/kafka/pull/10963#issuecomment-873593708)

> FindCoordinators batching causes slow poll when requesting older broker
> ---
>
> Key: KAFKA-13030
> URL: https://issues.apache.org/jira/browse/KAFKA-13030
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java#L888
> {quote}
> if (e instanceof NoBatchedFindCoordinatorsException) {
> batchFindCoordinator = false;
> clearFindCoordinatorFuture();
> lookupCoordinator();
> return;
> } 
> {quote}
> The current request future is NOT updated, so it can't be completed until 
> timeout. This causes a slow poll when users first poll data from an older broker.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12661) ConfigEntry#equal does not compare other fields when value is NOT null

2021-05-01 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12661.

Fix Version/s: 3.0.0
   Resolution: Fixed

> ConfigEntry#equal does not compare other fields when value is NOT null 
> ---
>
> Key: KAFKA-12661
> URL: https://issues.apache.org/jira/browse/KAFKA-12661
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 3.0.0
>
>
> {code:java}
> return this.name.equals(that.name) &&
> this.value != null ? this.value.equals(that.value) : 
> that.value == null &&
> this.isSensitive == that.isSensitive &&
> this.isReadOnly == that.isReadOnly &&
> this.source == that.source &&
> Objects.equals(this.synonyms, that.synonyms);
> {code}
> the second (else) operand of the ternary operator is "that.value == null &&
> this.isSensitive == that.isSensitive &&
> this.isReadOnly == that.isReadOnly &&
> this.source == that.source &&
> Objects.equals(this.synonyms, that.synonyms);" rather than just 
> "that.value == null". Hence, the other fields are not compared when value is 
> not null.
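
For readability, here is the same fragment with the parentheses the ticket 
implies; it mirrors the snippet quoted above and is only a sketch of the fix, not 
necessarily the patch merged for 3.0.0.

{code:java}
// Parenthesizing the ternary keeps the remaining field comparisons in effect
// whether or not 'value' is null (field names follow the snippet quoted above).
return this.name.equals(that.name) &&
       (this.value != null ? this.value.equals(that.value) : that.value == null) &&
       this.isSensitive == that.isSensitive &&
       this.isReadOnly == that.isReadOnly &&
       this.source == that.source &&
       Objects.equals(this.synonyms, that.synonyms);
{code}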



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12708) Rewrite org.apache.kafka.test.Microbenchmarks by JMH

2021-04-22 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12708:
--

 Summary: Rewrite org.apache.kafka.test.Microbenchmarks by JMH
 Key: KAFKA-12708
 URL: https://issues.apache.org/jira/browse/KAFKA-12708
 Project: Kafka
  Issue Type: Task
Reporter: Chia-Ping Tsai


The benchmark code is a bit obsolete, and it would be better to rewrite it with JMH
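
A rough sketch of what a JMH version could look like (class name and measured 
operation are illustrative, not the existing Microbenchmarks contents); Kafka 
already keeps JMH benchmarks in the jmh-benchmarks module, so a rewrite would 
follow the same shape.

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

// Illustrative JMH skeleton: each @Benchmark method is measured by the harness.
@State(Scope.Benchmark)
public class MicrobenchmarkSketch {
    private final ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();

    @Benchmark
    public Integer concurrentMapPut() {
        int key = ThreadLocalRandom.current().nextInt(1024);
        return map.put(key, key);
    }
}
{code}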



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12371) MirrorMaker 2.0 documentation is incorrect

2021-04-22 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12371.

Resolution: Duplicate

> MirrorMaker 2.0 documentation is incorrect
> --
>
> Key: KAFKA-12371
> URL: https://issues.apache.org/jira/browse/KAFKA-12371
> Project: Kafka
>  Issue Type: Improvement
>  Components: docs, documentation
>Affects Versions: 2.7.0
>Reporter: Scott Kirkpatrick
>Priority: Minor
>
> There are a few places in the official MirrorMaker 2.0 docs that are either 
> confusing or incorrect. Here are a few examples I've found:
> The documentation for the 'sync.group.offsets.enabled' config states that 
> it's enabled by default 
> [here|https://github.com/apache/kafka-site/blob/61f4707381c369a98a7a77e1a7c3a11d5983909c/27/ops.html#L802],
>  but the actual source code indicates that it's disabled by default 
> [here|https://github.com/apache/kafka/blob/f75efb96fae99a22eb54b5d0ef4e23b28fe8cd2d/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorConnectorConfig.java#L185].
>  I'm unsure if the intent is to have it enabled or disabled by default.
> There are also some numerical typos, 
> [here|https://github.com/apache/kafka-site/blob/61f4707381c369a98a7a77e1a7c3a11d5983909c/27/ops.html#L791]
>  and 
> [here|https://github.com/apache/kafka-site/blob/61f4707381c369a98a7a77e1a7c3a11d5983909c/27/ops.html#L793].
>  These lines state that the default is 6000 seconds (and incorrectly that 
> it's equal to 10 minutes), but the actual default is 600 seconds, shown 
> [here|https://github.com/apache/kafka/blob/f75efb96fae99a22eb54b5d0ef4e23b28fe8cd2d/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorConnectorConfig.java#L145]
>  and 
> [here|https://github.com/apache/kafka/blob/f75efb96fae99a22eb54b5d0ef4e23b28fe8cd2d/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorConnectorConfig.java#L152]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12529) kafka-configs.sh does not work while changing the sasl jaas configurations.

2021-04-22 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12529.

Resolution: Duplicate

see KAFKA-12530

> kafka-configs.sh does not work while changing the sasl jaas configurations.
> ---
>
> Key: KAFKA-12529
> URL: https://issues.apache.org/jira/browse/KAFKA-12529
> Project: Kafka
>  Issue Type: Bug
>Reporter: kaushik srinivas
>Priority: Major
>
> We are trying to use kafka-configs script to modify the sasl jaas 
> configurations, but unable to do so.
> Command used:
> ./kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 59 --alter --add-config 'sasl.jaas.config=KafkaServer \{\n 
> org.apache.kafka.common.security.plain.PlainLoginModule required \n 
> username=\"test\" \n password=\"test\"; \n };'
> error:
> requirement failed: Invalid entity config: all configs to be added must be in 
> the format "key=val".
> command 2:
> kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 59 --alter --add-config 
> 'sasl.jaas.config=[username=test,password=test]'
> output:
> command does not return , but kafka broker logs below error:
> DEBUG", "neid":"kafka-cfd5ccf2af7f47868e83471a5b603408", "system":"kafka", 
> "time":"2021-03-23T08:29:00.946", "timezone":"UTC", 
> "log":\{"message":"data-plane-kafka-network-thread-1001-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-2
>  - org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - 
> Set SASL server state to FAILED during authentication"}}
> {"type":"log", "host":"kf-kaudynamic-0", "level":"INFO", 
> "neid":"kafka-cfd5ccf2af7f47868e83471a5b603408", "system":"kafka", 
> "time":"2021-03-23T08:29:00.946", "timezone":"UTC", 
> "log":\{"message":"data-plane-kafka-network-thread-1001-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-2
>  - org.apache.kafka.common.network.Selector - [SocketServer brokerId=1001] 
> Failed authentication with /127.0.0.1 (Unexpected Kafka request of type 
> METADATA during SASL handshake.)"}}
> We have below issues:
> 1. If one installs kafka broker with SASL mechanism and wants to change the 
> SASL jaas config via kafka-configs scripts, how is it supposed to be done ?
>  does kafka-configs needs client credentials to do the same ? 
> 2. Can anyone point us to example commands of kafka-configs to alter the 
> sasl.jaas.config property of kafka broker. We do not see any documentation or 
> examples for the same.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12528) kafka-configs.sh does not work while changing the sasl jaas configurations.

2021-04-22 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12528.

Resolution: Duplicate

see KAFKA-12530

> kafka-configs.sh does not work while changing the sasl jaas configurations.
> ---
>
> Key: KAFKA-12528
> URL: https://issues.apache.org/jira/browse/KAFKA-12528
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, core
>Reporter: kaushik srinivas
>Priority: Major
>
> We are trying to modify the sasl jaas configurations for the kafka broker 
> runtime using the dynamic config update functionality using the 
> kafka-configs.sh script. But we are unable to get it working.
> Below is our command:
> ./kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 59 --alter --add-config 'sasl.jaas.config=KafkaServer \{\n 
> org.apache.kafka.common.security.plain.PlainLoginModule required \n 
> username=\"test\" \n password=\"test\"; \n };'
>  
> command is exiting with error:
> requirement failed: Invalid entity config: all configs to be added must be in 
> the format "key=val".
>  
> we also tried below format as well:
> kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 59 --alter --add-config 
> 'sasl.jaas.config=[username=test,password=test]'
> command does not return but the kafka broker logs prints the below error 
> messages.
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set 
> SASL server state to FAILED during authentication"}}
> {"type":"log", "host":"kf-kaudynamic-0", "level":"INFO", 
> "neid":"kafka-cfd5ccf2af7f47868e83471a5b603408", "system":"kafka", 
> "time":"2021-03-23T08:29:00.946", "timezone":"UTC", 
> "log":\{"message":"data-plane-kafka-network-thread-1001-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-2
>  - org.apache.kafka.common.network.Selector - [SocketServer brokerId=1001] 
> Failed authentication with /127.0.0.1 (Unexpected Kafka request of type 
> METADATA during SASL handshake.)"}}
>  
> 1. If one has SASL enabled and with a single listener, how are we supposed to 
> change the sasl credentials using this command ?
> 2. can anyone point us out to some example commands for modifying the sasl 
> jaas configurations ?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12684) The valid partition list is incorrectly replaced by the successfully elected partition list

2021-04-25 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12684.

Resolution: Fixed

> The valid partition list is incorrectly replaced by the successfully elected 
> partition list
> ---
>
> Key: KAFKA-12684
> URL: https://issues.apache.org/jira/browse/KAFKA-12684
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Wenbing Shen
>Assignee: Wenbing Shen
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: election-preferred-leader.png, non-preferred-leader.png
>
>
> When using the kafka-election-tool for preferred replica election, if there 
> are partitions in the elected list that are in the preferred replica, the 
> list of partitions already in the preferred replica will be replaced by the 
> successfully elected partition list.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12702) Unhandled exception caught in InterBrokerSendThread

2021-04-25 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12702.

Fix Version/s: 2.8.1
   3.0.0
   Resolution: Fixed

> Unhandled exception caught in InterBrokerSendThread
> ---
>
> Key: KAFKA-12702
> URL: https://issues.apache.org/jira/browse/KAFKA-12702
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Wenbing Shen
>Assignee: Wenbing Shen
>Priority: Blocker
> Fix For: 3.0.0, 2.8.1
>
> Attachments: afterFixing.png, beforeFixing.png, 
> image-2021-04-21-17-12-28-471.png
>
>
> In kraft mode, if listeners and advertised.listeners are not configured with 
> host addresses, the host parameter value of Listener in 
> BrokerRegistrationRequestData will be null. When the broker is started, a 
> null pointer exception will be thrown, causing startup failure.
> A feasible solution is to replace the empty host of endPoint in 
> advertisedListeners with InetAddress.getLocalHost.getCanonicalHostName in 
> Broker Server when building networkListeners.
> The following is the debug log:
> before fixing:
> [2021-04-21 14:15:20,032] DEBUG (broker-2-to-controller-send-thread 
> org.apache.kafka.clients.NetworkClient 522) [broker-2-to-controller] Sending 
> BROKER_REGISTRATION request with header RequestHeader(apiKey=BROKER_REGIS
> TRATION, apiVersion=0, clientId=2, correlationId=6) and timeout 3 to node 
> 2: BrokerRegistrationRequestData(brokerId=2, 
> clusterId='nCqve6D1TEef3NpQniA0Mg', incarnationId=X8w4_1DFT2yUjOm6asPjIQ, 
> listeners=[Listener(n
> ame='PLAINTEXT', {color:#FF}host=null,{color} port=9092, 
> securityProtocol=0)], features=[], rack=null)
> [2021-04-21 14:15:20,033] ERROR (broker-2-to-controller-send-thread 
> kafka.server.BrokerToControllerRequestThread 76) 
> [broker-2-to-controller-send-thread]: unhandled exception caught in 
> InterBrokerSendThread
> java.lang.NullPointerException
>  at 
> org.apache.kafka.common.message.BrokerRegistrationRequestData$Listener.addSize(BrokerRegistrationRequestData.java:515)
>  at 
> org.apache.kafka.common.message.BrokerRegistrationRequestData.addSize(BrokerRegistrationRequestData.java:216)
>  at 
> org.apache.kafka.common.protocol.SendBuilder.buildSend(SendBuilder.java:218)
>  at 
> org.apache.kafka.common.protocol.SendBuilder.buildRequestSend(SendBuilder.java:187)
>  at 
> org.apache.kafka.common.requests.AbstractRequest.toSend(AbstractRequest.java:101)
>  at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:525)
>  at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:501)
>  at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:461)
>  at 
> kafka.common.InterBrokerSendThread.$anonfun$sendRequests$1(InterBrokerSendThread.scala:104)
>  at 
> kafka.common.InterBrokerSendThread.$anonfun$sendRequests$1$adapted(InterBrokerSendThread.scala:99)
>  at kafka.common.InterBrokerSendThread$$Lambda$259/910445654.apply(Unknown 
> Source)
>  at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:563)
>  at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:561)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:919)
>  at 
> kafka.common.InterBrokerSendThread.sendRequests(InterBrokerSendThread.scala:99)
>  at 
> kafka.common.InterBrokerSendThread.pollOnce(InterBrokerSendThread.scala:73)
>  at 
> kafka.server.BrokerToControllerRequestThread.doWork(BrokerToControllerChannelManager.scala:368)
>  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
> [2021-04-21 14:15:20,034] INFO (broker-2-to-controller-send-thread 
> kafka.server.BrokerToControllerRequestThread 66) 
> [broker-2-to-controller-send-thread]: Stopped
> after fixing:
> [2021-04-21 15:05:01,095] DEBUG (BrokerToControllerChannelManager broker=2 
> name=heartbeat org.apache.kafka.clients.NetworkClient 512) 
> [BrokerToControllerChannelManager broker=2 name=heartbeat] Sending 
> BROKER_REGISTRATI
> ON request with header RequestHeader(apiKey=BROKER_REGISTRATION, 
> apiVersion=0, clientId=2, correlationId=0) and timeout 3 to node 2: 
> BrokerRegistrationRequestData(brokerId=2, clusterId='nCqve6D1TEef3NpQniA0Mg', 
> inc
> arnationId=xF29h_IRR1KzrERWwssQ2w, listeners=[Listener(name='PLAINTEXT', 
> host='hdpxxx.cn', port=9092, securityProtocol=0)], features=[], rack=null)
>  
>  
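
A small sketch of the fallback the description proposes, written in Java for 
illustration (the affected code path is Scala in BrokerServer, and this class and 
method name are made up):

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;

final class ListenerHostFallback {
    // If the configured listener host is empty or null, fall back to the local
    // canonical host name so BrokerRegistrationRequestData never sees host=null.
    static String hostOrLocalCanonical(String configuredHost) throws UnknownHostException {
        if (configuredHost == null || configuredHost.isEmpty()) {
            return InetAddress.getLocalHost().getCanonicalHostName();
        }
        return configuredHost;
    }
}
{code}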



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12577) Remove deprecated `ConfigEntry` constructor

2021-05-03 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17338270#comment-17338270
 ] 

Chia-Ping Tsai commented on KAFKA-12577:


Looks like the PR ([GitHub Pull Request 
#10436|https://github.com/apache/kafka/pull/10436]) has been merged already. 
Could we resolve this Jira?

> Remove deprecated `ConfigEntry` constructor
> ---
>
> Key: KAFKA-12577
> URL: https://issues.apache.org/jira/browse/KAFKA-12577
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Minor
> Fix For: 3.0.0
>
>
> ConfigEntry's constructor was deprecated in 1.1.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12309) The revocation algorithm produces uneven distributions

2021-02-07 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12309:
--

 Summary: The revocation algorithm produces uneven distributions
 Key: KAFKA-12309
 URL: https://issues.apache.org/jira/browse/KAFKA-12309
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


Assignments:

"W0" -> 8 connectors/tasks
"W1" -> 8 connectors/tasks
(New) "W2" -> 0 connectors/tasks

Revoked (trunk)
"W0" -> 2 connectors/tasks
"W1" -> 2 connectors/tasks

Revoked (expected)
"W0" -> 2 connectors/tasks
"W1" -> 3 connectors/tasks




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12311) The decoder in DumpLogSegments can't access properties

2021-02-08 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12311:
--

 Summary: The decoder in DumpLogSegments can't access properties 
 Key: KAFKA-12311
 URL: https://issues.apache.org/jira/browse/KAFKA-12311
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai


see 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/DumpLogSegments.scala#L420

We always pass empty properties to decoder.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10835) Replace Runnable and Callable overrides with lambdas in Connect

2021-02-08 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-10835.

Fix Version/s: 2.8.0
   Resolution: Fixed

> Replace Runnable and Callable overrides with lambdas in Connect
> ---
>
> Key: KAFKA-10835
> URL: https://issues.apache.org/jira/browse/KAFKA-10835
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Konstantine Karantasis
>Assignee: Lev Zemlyanov
>Priority: Minor
> Fix For: 2.8.0
>
>
> We've been using Java 8 for some time now, of course. Replacing the overrides 
> from the pre-Java 8 era will simplify some parts of the code and reduce 
> verbosity.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10834) Remove redundant type casts in Connect

2021-02-08 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-10834.

Fix Version/s: 2.8.0
   Resolution: Fixed

> Remove redundant type casts in Connect
> --
>
> Key: KAFKA-10834
> URL: https://issues.apache.org/jira/browse/KAFKA-10834
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Konstantine Karantasis
>Assignee: Lev Zemlyanov
>Priority: Minor
> Fix For: 2.8.0
>
>
> Some type casts in the code base are not required any more and can be 
> removed. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-12321) the comparison function for uuid type should be 'equals' rather than '=='

2021-02-10 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-12321:
--

Assignee: Chia-Ping Tsai

> the comparison function for uuid type should be 'equals' rather than '=='
> -
>
> Key: KAFKA-12321
> URL: https://issues.apache.org/jira/browse/KAFKA-12321
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>
> trunk:
> {code:java}
> if (this.taggedUuid != Uuid.fromString("H3KKO4NTRPaCWtEmm3vW7A"))
> {code}
> expected:
> {code:java}
> if (!this.taggedUuid.equals(Uuid.fromString("H3KKO4NTRPaCWtEmm3vW7A")))
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12321) the comparison function for uuid type should be 'equals' rather than '=='

2021-02-10 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12321:
--

 Summary: the comparison function for uuid type should be 'equals' 
rather than '=='
 Key: KAFKA-12321
 URL: https://issues.apache.org/jira/browse/KAFKA-12321
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai


trunk:
{code:java}
if (this.taggedUuid != Uuid.fromString("H3KKO4NTRPaCWtEmm3vW7A"))
{code}

expected:
{code:java}
if (!this.taggedUuid.equals(Uuid.fromString("H3KKO4NTRPaCWtEmm3vW7A")))
{code}
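
A minimal demonstration of why the reference comparison fails (illustrative class 
name): two Uuid instances parsed from the same string are distinct objects, so == is 
false even though equals() is true, which is why the generated code must use equals().

{code:java}
import org.apache.kafka.common.Uuid;

public class UuidComparisonDemo {
    public static void main(String[] args) {
        Uuid a = Uuid.fromString("H3KKO4NTRPaCWtEmm3vW7A");
        Uuid b = Uuid.fromString("H3KKO4NTRPaCWtEmm3vW7A");
        System.out.println(a == b);       // false: different object references
        System.out.println(a.equals(b));  // true: same underlying 128-bit value
    }
}
{code}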



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12206) o.a.k.common.Uuid should implement Comparable

2021-01-26 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12206.

Fix Version/s: 2.8.0
 Assignee: Colin McCabe
   Resolution: Fixed

> o.a.k.common.Uuid should implement Comparable 
> --
>
> Key: KAFKA-12206
> URL: https://issues.apache.org/jira/browse/KAFKA-12206
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 2.8.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12246) Remove redundant suppression in KafkaAdminClient

2021-01-28 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-12246:
---
Summary: Remove redundant suppression in KafkaAdminClient  (was: Remove 
redundant comments in KafkaAdminClient)

> Remove redundant suppression in KafkaAdminClient
> 
>
> Key: KAFKA-12246
> URL: https://issues.apache.org/jira/browse/KAFKA-12246
> Project: Kafka
>  Issue Type: Improvement
>Reporter: YI-CHEN WANG
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12283) Flaky Test RebalanceSourceConnectorsIntegrationTest#testMultipleWorkersRejoining

2021-03-25 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17309147#comment-17309147
 ] 

Chia-Ping Tsai commented on KAFKA-12283:


Sure!

> Flaky Test 
> RebalanceSourceConnectorsIntegrationTest#testMultipleWorkersRejoining
> 
>
> Key: KAFKA-12283
> URL: https://issues.apache.org/jira/browse/KAFKA-12283
> Project: Kafka
>  Issue Type: Test
>  Components: KafkaConnect, unit tests
>Reporter: Matthias J. Sax
>Assignee: Chia-Ping Tsai
>Priority: Critical
>  Labels: flaky-test
>
> https://github.com/apache/kafka/pull/1/checks?check_run_id=1820092809
> {quote} {{java.lang.AssertionError: Tasks are imbalanced: 
> localhost:36037=[seq-source13-0, seq-source13-1, seq-source13-2, 
> seq-source13-3, seq-source12-0, seq-source12-1, seq-source12-2, 
> seq-source12-3]
> localhost:43563=[seq-source11-0, seq-source11-2, seq-source10-0, 
> seq-source10-2]
> localhost:46539=[seq-source11-1, seq-source11-3, seq-source10-1, 
> seq-source10-3]
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest.assertConnectorAndTasksAreUniqueAndBalanced(RebalanceSourceConnectorsIntegrationTest.java:362)
>   at 
> org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:303)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:351)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:319)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:300)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:290)
>   at 
> org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest.testMultipleWorkersRejoining(RebalanceSourceConnectorsIntegrationTest.java:313)}}
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12227) Add method "Producer#send" to return CompletionStage instead of Future

2021-03-31 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-12227:
---
Summary:  Add method "Producer#send" to return CompletionStage instead of 
Future  (was:  Add method "Producer#produce" to return CompletionStage instead 
of Future)

>  Add method "Producer#send" to return CompletionStage instead of Future
> ---
>
> Key: KAFKA-12227
> URL: https://issues.apache.org/jira/browse/KAFKA-12227
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>
> Producer and KafkaProducer return a java.util.concurrent.Future from their 
> send methods. This makes it challenging to write asynchronous non-blocking 
> code given Future's limited interface. Since Kafka now requires Java 8, we 
> now have the option of using CompletionStage and/or CompletableFuture that 
> were introduced to solve this issue. It's worth noting that the Kafka 
> AdminClient solved this issue by using org.apache.kafka.common.KafkaFuture as 
> Java 7 support was still required then.
>  
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-XXX%3A+Return+CompletableFuture+from+KafkaProducer.send
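
A hedged sketch of the kind of adapter callers write today, which the proposed API 
would make unnecessary (illustrative names, not the API proposed in the KIP): wrap 
the existing callback-based send into a CompletableFuture so it can be composed 
asynchronously.

{code:java}
import java.util.concurrent.CompletableFuture;

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public final class SendAdapter {
    // Complete the future from the producer callback instead of blocking on the
    // returned java.util.concurrent.Future.
    public static <K, V> CompletableFuture<RecordMetadata> sendAsync(
            Producer<K, V> producer, ProducerRecord<K, V> record) {
        CompletableFuture<RecordMetadata> result = new CompletableFuture<>();
        producer.send(record, (metadata, exception) -> {
            if (exception != null)
                result.completeExceptionally(exception);
            else
                result.complete(metadata);
        });
        return result;
    }
}
{code}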



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12587) Remove KafkaPrincipal#fromString

2021-03-30 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12587:
--

 Summary: Remove KafkaPrincipal#fromString
 Key: KAFKA-12587
 URL: https://issues.apache.org/jira/browse/KAFKA-12587
 Project: Kafka
  Issue Type: Task
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12587) Remove KafkaPrincipal#fromString for 3.0

2021-03-30 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-12587:
---
Summary: Remove KafkaPrincipal#fromString for 3.0  (was: Remove 
KafkaPrincipal#fromString)

> Remove KafkaPrincipal#fromString for 3.0
> 
>
> Key: KAFKA-12587
> URL: https://issues.apache.org/jira/browse/KAFKA-12587
> Project: Kafka
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12587) Remove KafkaPrincipal#fromString for 3.0

2021-04-01 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12587.

Resolution: Fixed

> Remove KafkaPrincipal#fromString for 3.0
> 
>
> Key: KAFKA-12587
> URL: https://issues.apache.org/jira/browse/KAFKA-12587
> Project: Kafka
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10769) Remove JoinGroupRequest#containsValidPattern as it is duplicate to Topic#containsValidPattern

2021-04-06 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-10769.

Fix Version/s: 3.0.0
   Resolution: Fixed

> Remove JoinGroupRequest#containsValidPattern as it is duplicate to 
> Topic#containsValidPattern
> -
>
> Key: KAFKA-10769
> URL: https://issues.apache.org/jira/browse/KAFKA-10769
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: highluck
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0
>
>
> as title. Remove the redundant code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12384) Flaky Test ListOffsetsRequestTest.testResponseIncludesLeaderEpoch

2021-04-06 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12384.

Fix Version/s: 3.0.0
   Resolution: Fixed

> Flaky Test ListOffsetsRequestTest.testResponseIncludesLeaderEpoch
> -
>
> Key: KAFKA-12384
> URL: https://issues.apache.org/jira/browse/KAFKA-12384
> Project: Kafka
>  Issue Type: Test
>  Components: core, unit tests
>Reporter: Matthias J. Sax
>Assignee: Chia-Ping Tsai
>Priority: Critical
>  Labels: flaky-test
> Fix For: 3.0.0
>
>
> {quote}org.opentest4j.AssertionFailedError: expected: <(0,0)> but was: 
> <(-1,-1)> at 
> org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55) at 
> org.junit.jupiter.api.AssertionUtils.failNotEqual(AssertionUtils.java:62) at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:182) at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:177) at 
> org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:1124) at 
> kafka.server.ListOffsetsRequestTest.testResponseIncludesLeaderEpoch(ListOffsetsRequestTest.scala:172){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12627) Unify MemoryPool and BufferSupplier

2021-04-06 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12627:
--

 Summary: Unify MemoryPool and BufferSupplier
 Key: KAFKA-12627
 URL: https://issues.apache.org/jira/browse/KAFKA-12627
 Project: Kafka
  Issue Type: Task
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


Both the I/O thread and the network thread need memory management, but we 
currently give them different interfaces (MemoryPool vs. BufferSupplier). That is 
inconsistent, so we should consider unifying them to eliminate duplicate code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12454) Add ERROR logging on kafka-log-dirs when given brokerIds do not exist in current kafka cluster

2021-03-18 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12454.

Resolution: Fixed

> Add ERROR logging on kafka-log-dirs when given brokerIds do not  exist in 
> current kafka cluster
> ---
>
> Key: KAFKA-12454
> URL: https://issues.apache.org/jira/browse/KAFKA-12454
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Wenbing Shen
>Assignee: Wenbing Shen
>Priority: Minor
> Fix For: 3.0.0
>
>
> When non-existent brokerId values are given, the kafka-log-dirs tool fails 
> with a timeout error:
> Exception in thread "main" java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node 
> assignment. Call: describeLogDirs
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at kafka.admin.LogDirsCommand$.describe(LogDirsCommand.scala:50)
>  at kafka.admin.LogDirsCommand$.main(LogDirsCommand.scala:36)
>  at kafka.admin.LogDirsCommand.main(LogDirsCommand.scala)
> Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting 
> for a node assignment. Call: describeLogDirs
>  
> When the brokerId entered by the user does not exist, an error message 
> indicating that the node is not present should be printed.
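
A hedged sketch of the validation described above (hypothetical helper, not the 
actual LogDirsCommand patch): check the requested broker ids against the live 
cluster before calling describeLogDirs, so a clear error is printed instead of a 
timeout.

{code:java}
import java.util.Set;
import java.util.stream.Collectors;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.Node;

public class BrokerIdValidationSketch {
    static void reportUnknownBrokers(Admin admin, Set<Integer> requestedBrokerIds) throws Exception {
        Set<Integer> liveBrokerIds = admin.describeCluster().nodes().get().stream()
                .map(Node::id)
                .collect(Collectors.toSet());
        requestedBrokerIds.stream()
                .filter(id -> !liveBrokerIds.contains(id))
                .forEach(id -> System.err.println(
                        "ERROR: broker " + id + " does not exist in the current cluster"));
    }
}
{code}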



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-12239) Unclear warning message in JmxReporter, when getting missing JMX attribute

2021-03-17 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-12239:
--

Assignee: Gérald Quintana

> Unclear warning message in JmxReporter, when getting missing JMX attribute
> --
>
> Key: KAFKA-12239
> URL: https://issues.apache.org/jira/browse/KAFKA-12239
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.4.1
>Reporter: Gérald Quintana
>Assignee: Gérald Quintana
>Priority: Major
>
> When collecting bulk metrics, this warning message in the logs is unhelpful: 
> it is impossible to determine which MBean is missing the attribute and to fix 
> the metric collector:
>  
> {noformat}
> [2021-01-26T15:43:41,078][WARN ][org.apache.kafka.common.metrics.JmxReporter] 
> Error getting JMX attribute 'records-lag-max'
> javax.management.AttributeNotFoundException: Could not find attribute 
> records-lag-max
>  at 
> org.apache.kafka.common.metrics.JmxReporter$KafkaMbean.getAttribute(JmxReporter.java:192)
>  ~[?:?]
>  at 
> org.apache.kafka.common.metrics.JmxReporter$KafkaMbean.getAttributes(JmxReporter.java:200)
>  ~[?:?]
>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttributes(DefaultMBeanServerInterceptor.java:709)
>  ~[?:1.8.0_202]
> {noformat}
> It would be very useful to have the MBean object name in the error message.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10697) Remove ProduceResponse.responses

2021-03-18 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-10697.

Fix Version/s: 3.0.0
   Resolution: Fixed

> Remove ProduceResponse.responses
> 
>
> Key: KAFKA-10697
> URL: https://issues.apache.org/jira/browse/KAFKA-10697
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chun-Hao Tang
>Priority: Minor
> Fix For: 3.0.0
>
>
> This is a follow-up of KAFKA-9628.
> related discussion: 
> https://github.com/apache/kafka/pull/9401#discussion_r518984349



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12385) Remove FetchResponse#responseData

2021-03-18 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17304553#comment-17304553
 ] 

Chia-Ping Tsai commented on KAFKA-12385:


This issue is blocked as we are introducing topic id :(

> Remove FetchResponse#responseData
> -
>
> Key: KAFKA-12385
> URL: https://issues.apache.org/jira/browse/KAFKA-12385
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>
> reference to [https://github.com/apache/kafka/pull/9758#discussion_r584142074]
> We can rewrite related code to avoid using stale data structure.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12442) Upgrade ZSTD JNI from 1.4.8-4 to 1.4.9-1

2021-03-11 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12442.

Fix Version/s: 3.0.0
   Resolution: Fixed

> Upgrade ZSTD JNI from 1.4.8-4 to 1.4.9-1
> 
>
> Key: KAFKA-12442
> URL: https://issues.apache.org/jira/browse/KAFKA-12442
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Major
> Fix For: 3.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-12442) Upgrade ZSTD JNI from 1.4.8-4 to 1.4.9-1

2021-03-11 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-12442:
--

Assignee: Dongjoon Hyun

> Upgrade ZSTD JNI from 1.4.8-4 to 1.4.9-1
> 
>
> Key: KAFKA-12442
> URL: https://issues.apache.org/jira/browse/KAFKA-12442
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12350) document about refresh.topics.interval.seconds default value is not right

2021-02-24 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12350.

Resolution: Fixed

> document  about refresh.topics.interval.seconds default value is not right 
> ---
>
> Key: KAFKA-12350
> URL: https://issues.apache.org/jira/browse/KAFKA-12350
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 2.7.0
>Reporter: superheizai
>Assignee: Luke Chen
>Priority: Minor
> Fix For: 3.0.0
>
>
> The documentation for the config refresh.topics.interval.seconds gives the 
> default value 6000 and describes it as ten minutes. Should it be 600 seconds 
> (ten minutes) or 6000 seconds (100 minutes)?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12410) Remove FetchResponse#of

2021-03-04 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12410:
--

 Summary: Remove FetchResponse#of
 Key: KAFKA-12410
 URL: https://issues.apache.org/jira/browse/KAFKA-12410
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


Those helper methods introduce a couple of collections/groupings, so it would be 
better to replace all usages with FetchResponseData.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12273) InterBrokerSendThread#pollOnce throws FatalExitError even though it is shutdown correctly

2021-02-22 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12273.

Resolution: Fixed

> InterBrokerSendThread#pollOnce throws FatalExitError even though it is 
> shutdown correctly
> -
>
> Key: KAFKA-12273
> URL: https://issues.apache.org/jira/browse/KAFKA-12273
> Project: Kafka
>  Issue Type: Test
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
> Fix For: 3.0.0, 2.8.0
>
>
> Kafka tests sometimes shut down Gradle with a non-zero exit code. One root 
> cause is that InterBrokerSendThread#pollOnce encounters DisconnectException 
> when NetworkClient is closing. DisconnectException should be viewed as an 
> "expected" error since we close the client deliberately. In other words, 
> InterBrokerSendThread#pollOnce should swallow it.
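
A minimal sketch of the swallowing behaviour described above (hypothetical names, 
not the actual InterBrokerSendThread change): treat DisconnectException as expected 
while shutting down, instead of escalating it to FatalExitError.

{code:java}
import org.apache.kafka.common.errors.DisconnectException;

public class PollLoopSketch {
    private volatile boolean shuttingDown;

    void pollOnceSafely(Runnable pollOnce) {
        try {
            pollOnce.run();
        } catch (DisconnectException e) {
            // Expected while the NetworkClient is being closed as part of shutdown;
            // an unexpected disconnect is still surfaced to the caller.
            if (!shuttingDown)
                throw e;
        }
    }
}
{code}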



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12410) KafkaAPis ought to group fetch data before generating fetch response

2021-03-04 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-12410:
---
Summary: KafkaAPis ought to group fetch data before generating fetch 
response  (was: Remove FetchResponse#of)

> KafkaAPis ought to group fetch data before generating fetch response
> 
>
> Key: KAFKA-12410
> URL: https://issues.apache.org/jira/browse/KAFKA-12410
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>
> Those helper methods introduce a couple of collections/groupings, so it would 
> be better to replace all usages with FetchResponseData.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12387) Avoid unnecessary copy of FetchResponse data in handleFetchRequest

2021-02-27 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12387:
--

 Summary: Avoid unnecessary copy of FetchResponse data in 
handleFetchRequest 
 Key: KAFKA-12387
 URL: https://issues.apache.org/jira/browse/KAFKA-12387
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


The inner methods "processResponseCallback" and "maybeConvertFetchedData" create 
many copies of the input data. After [https://github.com/apache/kafka/pull/9758] is 
merged, the data becomes auto-generated data that is mutable. Hence, we can mutate 
the input data to avoid copying.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12385) Remove FetchResponse#responseData

2021-02-27 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12385:
--

 Summary: Remove FetchResponse#responseData
 Key: KAFKA-12385
 URL: https://issues.apache.org/jira/browse/KAFKA-12385
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


reference to [https://github.com/apache/kafka/pull/9758#discussion_r584142074]

We can rewrite related code to avoid using stale data structure.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12626) RaftClusterTest and ClusterTestExtensionTest failures

2021-04-07 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17316375#comment-17316375
 ] 

Chia-Ping Tsai commented on KAFKA-12626:


This should be fixed by https://github.com/apache/kafka/pull/10496

> RaftClusterTest and ClusterTestExtensionTest failures
> -
>
> Key: KAFKA-12626
> URL: https://issues.apache.org/jira/browse/KAFKA-12626
> Project: Kafka
>  Issue Type: Bug
>Reporter: Justine Olshan
>Priority: Major
>
> RaftClusterTest and ClusterTestExtensionsTest.[Quorum 2] 
> Name=cluster-tests-2, security=PLAINTEXT are failing due to
> {noformat}
> java.util.concurrent.ExecutionException: java.lang.ClassNotFoundException: 
> org.apache.kafka.controller.NoOpSnapshotWriterBuilder{noformat}
> I think it is related to the changes from 
> [https://github.com/apache/kafka/commit/7bc84d6ced71056dbb4cecdc9abbdbd7d8a5aa10#diff-77dc2adb187fd078084644613cff2b53021c8a5fbcdcfa116515734609d1332aR210]
>  specifically this part of the code 
> [https://github.com/apache/kafka/blob/33d0445b8408289800352de7822340028782a154/metadata/src/main/java/org/apache/kafka/controller/QuorumController.java#L210]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12611) Fix using random payload in ProducerPerformance incorrectly

2021-04-13 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12611.

Fix Version/s: 3.0.0
   Resolution: Fixed

> Fix using random payload in ProducerPerformance incorrectly
> ---
>
> Key: KAFKA-12611
> URL: https://issues.apache.org/jira/browse/KAFKA-12611
> Project: Kafka
>  Issue Type: Bug
>Reporter: Xie Lei
>Assignee: Xie Lei
>Priority: Major
> Fix For: 3.0.0
>
>
> In ProducerPerformance, the random payload is always the same, which has a 
> great impact when using the compression.type option.
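
A hedged illustration of why this matters (hypothetical helper, not the 
ProducerPerformance code): reusing one random payload makes every record identical 
and therefore unrealistically compressible, while drawing fresh random bytes per 
record gives a realistic compression ratio.

{code:java}
import java.util.Random;

public class PayloadSketch {
    static byte[][] payloads(int count, int size, boolean reuseSingle) {
        Random random = new Random();
        byte[][] out = new byte[count][];
        byte[] shared = new byte[size];
        random.nextBytes(shared);
        for (int i = 0; i < count; i++) {
            if (reuseSingle) {
                out[i] = shared;              // every record identical -> compresses almost completely
            } else {
                out[i] = new byte[size];
                random.nextBytes(out[i]);     // per-record randomness -> little compression, as intended
            }
        }
        return out;
    }
}
{code}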



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12655) CVE-2021-28165 - Upgrade jetty to 9.4.39

2021-04-13 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-12655:
---
Fix Version/s: 2.6.2
   2.7.1
   2.8.0

> CVE-2021-28165 - Upgrade jetty to 9.4.39
> 
>
> Key: KAFKA-12655
> URL: https://issues.apache.org/jira/browse/KAFKA-12655
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.7.0, 2.6.1
>Reporter: Edwin Hobor
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: CVE, security
> Fix For: 3.0.0, 2.8.0, 2.7.1, 2.6.2
>
>
> *CVE-2021-28165* vulnerability affects Jetty versions up to *9.4.38*. For 
> more information see 
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28165] 
> Upgrading to Jetty version *9.4.39* should address this issue 
> ([https://github.com/eclipse/jetty.project/releases/tag/jetty-9.4.39.v20210325)|https://github.com/eclipse/jetty.project/releases/tag/jetty-9.4.39.v20210325].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12655) CVE-2021-28165 - Upgrade jetty to 9.4.39

2021-04-13 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12655.

Fix Version/s: 3.0.0
   Resolution: Fixed

> CVE-2021-28165 - Upgrade jetty to 9.4.39
> 
>
> Key: KAFKA-12655
> URL: https://issues.apache.org/jira/browse/KAFKA-12655
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.7.0, 2.6.1
>Reporter: Edwin Hobor
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: CVE, security
> Fix For: 3.0.0
>
>
> *CVE-2021-28165* vulnerability affects Jetty versions up to *9.4.38*. For 
> more information see 
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28165] 
> Upgrading to Jetty version *9.4.39* should address this issue 
> ([https://github.com/eclipse/jetty.project/releases/tag/jetty-9.4.39.v20210325)|https://github.com/eclipse/jetty.project/releases/tag/jetty-9.4.39.v20210325].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-12611) Fix using random payload in ProducerPerformance incorrectly

2021-04-13 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-12611:
--

Assignee: Xie Lei

> Fix using random payload in ProducerPerformance incorrectly
> ---
>
> Key: KAFKA-12611
> URL: https://issues.apache.org/jira/browse/KAFKA-12611
> Project: Kafka
>  Issue Type: Bug
>Reporter: Xie Lei
>Assignee: Xie Lei
>Priority: Major
>
> In ProducerPerformance, the random payload is always the same, which has a 
> great impact when using the compression.type option.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12662) add unit test for ProducerPerformance

2021-04-13 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12662:
--

 Summary: add unit test for ProducerPerformance
 Key: KAFKA-12662
 URL: https://issues.apache.org/jira/browse/KAFKA-12662
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai


ProducerPerformance is a useful tool that offers an official way to test producer 
performance. Hence, it would be good to add sufficient tests for it. (In fact, it 
currently has no unit tests.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-12702) Unhandled exception caught in InterBrokerSendThread

2021-04-21 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-12702:
--

Assignee: Wenbing Shen

> Unhandled exception caught in InterBrokerSendThread
> ---
>
> Key: KAFKA-12702
> URL: https://issues.apache.org/jira/browse/KAFKA-12702
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Wenbing Shen
>Assignee: Wenbing Shen
>Priority: Blocker
> Attachments: afterFixing.png, beforeFixing.png, 
> image-2021-04-21-17-12-28-471.png
>
>
> In kraft mode, if listeners and advertised.listeners are not configured with 
> host addresses, the host parameter value of Listener in 
> BrokerRegistrationRequestData will be null. When the broker is started, a 
> null pointer exception will be thrown, causing startup failure.
> A feasible solution is to replace the empty host of endPoint in 
> advertisedListeners with InetAddress.getLocalHost.getCanonicalHostName in 
> Broker Server when building networkListeners.
> The following is the debug log:
> before fixing:
> [2021-04-21 14:15:20,032] DEBUG (broker-2-to-controller-send-thread 
> org.apache.kafka.clients.NetworkClient 522) [broker-2-to-controller] Sending 
> BROKER_REGISTRATION request with header RequestHeader(apiKey=BROKER_REGIS
> TRATION, apiVersion=0, clientId=2, correlationId=6) and timeout 3 to node 
> 2: BrokerRegistrationRequestData(brokerId=2, 
> clusterId='nCqve6D1TEef3NpQniA0Mg', incarnationId=X8w4_1DFT2yUjOm6asPjIQ, 
> listeners=[Listener(n
> ame='PLAINTEXT', {color:#FF}host=null,{color} port=9092, 
> securityProtocol=0)], features=[], rack=null)
> [2021-04-21 14:15:20,033] ERROR (broker-2-to-controller-send-thread 
> kafka.server.BrokerToControllerRequestThread 76) 
> [broker-2-to-controller-send-thread]: unhandled exception caught in 
> InterBrokerSendThread
> java.lang.NullPointerException
>  at 
> org.apache.kafka.common.message.BrokerRegistrationRequestData$Listener.addSize(BrokerRegistrationRequestData.java:515)
>  at 
> org.apache.kafka.common.message.BrokerRegistrationRequestData.addSize(BrokerRegistrationRequestData.java:216)
>  at 
> org.apache.kafka.common.protocol.SendBuilder.buildSend(SendBuilder.java:218)
>  at 
> org.apache.kafka.common.protocol.SendBuilder.buildRequestSend(SendBuilder.java:187)
>  at 
> org.apache.kafka.common.requests.AbstractRequest.toSend(AbstractRequest.java:101)
>  at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:525)
>  at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:501)
>  at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:461)
>  at 
> kafka.common.InterBrokerSendThread.$anonfun$sendRequests$1(InterBrokerSendThread.scala:104)
>  at 
> kafka.common.InterBrokerSendThread.$anonfun$sendRequests$1$adapted(InterBrokerSendThread.scala:99)
>  at kafka.common.InterBrokerSendThread$$Lambda$259/910445654.apply(Unknown 
> Source)
>  at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:563)
>  at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:561)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:919)
>  at 
> kafka.common.InterBrokerSendThread.sendRequests(InterBrokerSendThread.scala:99)
>  at 
> kafka.common.InterBrokerSendThread.pollOnce(InterBrokerSendThread.scala:73)
>  at 
> kafka.server.BrokerToControllerRequestThread.doWork(BrokerToControllerChannelManager.scala:368)
>  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
> [2021-04-21 14:15:20,034] INFO (broker-2-to-controller-send-thread 
> kafka.server.BrokerToControllerRequestThread 66) 
> [broker-2-to-controller-send-thread]: Stopped
> after fixing:
> [2021-04-21 15:05:01,095] DEBUG (BrokerToControllerChannelManager broker=2 
> name=heartbeat org.apache.kafka.clients.NetworkClient 512) 
> [BrokerToControllerChannelManager broker=2 name=heartbeat] Sending 
> BROKER_REGISTRATI
> ON request with header RequestHeader(apiKey=BROKER_REGISTRATION, 
> apiVersion=0, clientId=2, correlationId=0) and timeout 3 to node 2: 
> BrokerRegistrationRequestData(brokerId=2, clusterId='nCqve6D1TEef3NpQniA0Mg', 
> inc
> arnationId=xF29h_IRR1KzrERWwssQ2w, listeners=[Listener(name='PLAINTEXT', 
> host='hdpxxx.cn', port=9092, securityProtocol=0)], features=[], rack=null)
>  
>  
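
A minimal sketch of the fallback described above (illustrative; the actual fix 
belongs in the broker's listener construction): if the configured advertised host is 
null or empty, fall back to the canonical local hostname.

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ListenerHostFallback {
    static String effectiveHost(String configuredHost) throws UnknownHostException {
        if (configuredHost == null || configuredHost.isEmpty())
            return InetAddress.getLocalHost().getCanonicalHostName();
        return configuredHost;
    }
}
{code}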



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-12661) ConfigEntry#equal does not compare other fields when value is NOT null

2021-04-12 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-12661:
--

Assignee: Chia-Ping Tsai

> ConfigEntry#equal does not compare other fields when value is NOT null 
> ---
>
> Key: KAFKA-12661
> URL: https://issues.apache.org/jira/browse/KAFKA-12661
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>
> {code:java}
> return this.name.equals(that.name) &&
> this.value != null ? this.value.equals(that.value) : 
> that.value == null &&
> this.isSensitive == that.isSensitive &&
> this.isReadOnly == that.isReadOnly &&
> this.source == that.source &&
> Objects.equals(this.synonyms, that.synonyms);
> {code}
> Due to operator precedence, the else branch of the ternary operator is the 
> whole expression "that.value == null &&
> this.isSensitive == that.isSensitive &&
> this.isReadOnly == that.isReadOnly &&
> this.source == that.source &&
> Objects.equals(this.synonyms, that.synonyms);" rather than just 
> "that.value == null". Hence, the other fields are not compared when value is 
> not null.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12661) ConfigEntry#equal does not compare other fields when value is NOT null

2021-04-12 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12661:
--

 Summary: ConfigEntry#equal does not compare other fields when 
value is NOT null 
 Key: KAFKA-12661
 URL: https://issues.apache.org/jira/browse/KAFKA-12661
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai


{code:java}
return this.name.equals(that.name) &&
    this.value != null ? this.value.equals(that.value) : that.value == null &&
    this.isSensitive == that.isSensitive &&
    this.isReadOnly == that.isReadOnly &&
    this.source == that.source &&
    Objects.equals(this.synonyms, that.synonyms);
{code}

Due to operator precedence, the else branch of the ternary operator is the whole 
expression "that.value == null &&
this.isSensitive == that.isSensitive &&
this.isReadOnly == that.isReadOnly &&
this.source == that.source &&
Objects.equals(this.synonyms, that.synonyms);" rather than just 
"that.value == null". Hence, the other fields are not compared when value is not 
null.
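
A sketch of the intended comparison (illustrative fragment of the equals body, not 
the actual patch): isolating the value comparison with an explicit local so the 
remaining field comparisons always apply.

{code:java}
boolean valueEquals = this.value != null ? this.value.equals(that.value) : that.value == null;
return this.name.equals(that.name)
    && valueEquals
    && this.isSensitive == that.isSensitive
    && this.isReadOnly == that.isReadOnly
    && this.source == that.source
    && Objects.equals(this.synonyms, that.synonyms);
{code}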



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12661) ConfigEntry#equal does not compare other fields when value is NOT null

2021-04-12 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-12661:
---
Priority: Minor  (was: Major)

> ConfigEntry#equal does not compare other fields when value is NOT null 
> ---
>
> Key: KAFKA-12661
> URL: https://issues.apache.org/jira/browse/KAFKA-12661
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> {code:java}
> return this.name.equals(that.name) &&
> this.value != null ? this.value.equals(that.value) : 
> that.value == null &&
> this.isSensitive == that.isSensitive &&
> this.isReadOnly == that.isReadOnly &&
> this.source == that.source &&
> Objects.equals(this.synonyms, that.synonyms);
> {code}
> Due to operator precedence, the else branch of the ternary operator is the 
> whole expression "that.value == null &&
> this.isSensitive == that.isSensitive &&
> this.isReadOnly == that.isReadOnly &&
> this.source == that.source &&
> Objects.equals(this.synonyms, that.synonyms);" rather than just 
> "that.value == null". Hence, the other fields are not compared when value is 
> not null.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12321) the comparison function for uuid type should be 'equals' rather than '=='

2021-02-12 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12321.

Resolution: Fixed

> the comparison function for uuid type should be 'equals' rather than '=='
> -
>
> Key: KAFKA-12321
> URL: https://issues.apache.org/jira/browse/KAFKA-12321
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
> Fix For: 2.8.0, 2.7.1, 2.6.2
>
>
> trunk:
> {code:java}
> if (this.taggedUuid != Uuid.fromString("H3KKO4NTRPaCWtEmm3vW7A"))
> {code}
> expected:
> {code:java}
> if (!this.taggedUuid.equals(Uuid.fromString("H3KKO4NTRPaCWtEmm3vW7A")))
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12321) the comparison function for uuid type should be 'equals' rather than '=='

2021-02-12 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-12321:
---
Fix Version/s: 2.6.2
   2.7.1
   2.8.0

> the comparison function for uuid type should be 'equals' rather than '=='
> -
>
> Key: KAFKA-12321
> URL: https://issues.apache.org/jira/browse/KAFKA-12321
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
> Fix For: 2.8.0, 2.7.1, 2.6.2
>
>
> trunk:
> {code:java}
> if (this.taggedUuid != Uuid.fromString("H3KKO4NTRPaCWtEmm3vW7A"))
> {code}
> expected:
> {code:java}
> if (!this.taggedUuid.equals(Uuid.fromString("H3KKO4NTRPaCWtEmm3vW7A")))
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12273) InterBrokerSendThread#pollOnce throws FatalExitError even though it is shutdown correctly

2021-02-16 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-12273:
---
Fix Version/s: 2.8.0

> InterBrokerSendThread#pollOnce throws FatalExitError even though it is 
> shutdown correctly
> -
>
> Key: KAFKA-12273
> URL: https://issues.apache.org/jira/browse/KAFKA-12273
> Project: Kafka
>  Issue Type: Test
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
> Fix For: 2.8.0
>
>
> Kafka tests sometimes shut down Gradle with a non-zero exit code. One root 
> cause is that InterBrokerSendThread#pollOnce encounters DisconnectException 
> when NetworkClient is closing. DisconnectException should be viewed as an 
> "expected" error since we close the client deliberately. In other words, 
> InterBrokerSendThread#pollOnce should swallow it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12335) Upgrade junit from 5.7.0 to 5.7.1

2021-02-17 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12335:
--

 Summary: Upgrade junit from 5.7.0 to 5.7.1
 Key: KAFKA-12335
 URL: https://issues.apache.org/jira/browse/KAFKA-12335
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


junit 5.7.1 release notes: [https://junit.org/junit5/docs/5.7.1/release-notes/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10885) Refactor MemoryRecordsBuilderTest/MemoryRecordsTest to avoid a lot of (unnecessary) ignored test cases

2021-02-17 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-10885.

Resolution: Fixed

> Refactor MemoryRecordsBuilderTest/MemoryRecordsTest to avoid a lot of 
> (unnecessary) ignored test cases
> --
>
> Key: KAFKA-10885
> URL: https://issues.apache.org/jira/browse/KAFKA-10885
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: GeordieMai
>Priority: Major
>  Labels: newbie
> Fix For: 3.0.0
>
>
> {quote}private void assumeAtLeastV2OrNotZstd(byte magic) \{ 
> assumeTrue(compressionType != CompressionType.ZSTD || magic >= 
> MAGIC_VALUE_V2); }{quote}
> MemoryRecordsBuilderTest/MemoryRecordsTest use the aforementioned method to 
> avoid testing zstd with unsupported magic values. However, it produces some 
> unnecessary ignored test cases. Personally, I think they could be separated 
> into different test classes per magic value, since we assign a specific magic 
> value to each test case.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10885) Refactor MemoryRecordsBuilderTest/MemoryRecordsTest to avoid a lot of (unnecessary) ignored test cases

2021-02-17 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-10885:
---
Fix Version/s: 3.0.0

> Refactor MemoryRecordsBuilderTest/MemoryRecordsTest to avoid a lot of 
> (unnecessary) ignored test cases
> --
>
> Key: KAFKA-10885
> URL: https://issues.apache.org/jira/browse/KAFKA-10885
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: GeordieMai
>Priority: Major
>  Labels: newbie
> Fix For: 3.0.0
>
>
> {quote}private void assumeAtLeastV2OrNotZstd(byte magic) \{ 
> assumeTrue(compressionType != CompressionType.ZSTD || magic >= 
> MAGIC_VALUE_V2); }{quote}
> MemoryRecordsBuilderTest/MemoryRecordsTest use the aforementioned method to 
> avoid testing zstd with unsupported magic values. However, it produces some 
> unnecessary ignored test cases. Personally, I think they could be separated 
> into different test classes per magic value, since we assign a specific magic 
> value to each test case.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-12284) Flaky Test MirrorConnectorsIntegrationSSLTest#testOneWayReplicationWithAutoOffsetSync

2021-02-04 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-12284:
--

Assignee: Chia-Ping Tsai

> Flaky Test 
> MirrorConnectorsIntegrationSSLTest#testOneWayReplicationWithAutoOffsetSync
> -
>
> Key: KAFKA-12284
> URL: https://issues.apache.org/jira/browse/KAFKA-12284
> Project: Kafka
>  Issue Type: Test
>  Components: mirrormaker, unit tests
>Reporter: Matthias J. Sax
>Assignee: Chia-Ping Tsai
>Priority: Critical
>  Labels: flaky-test
>
> [https://github.com/apache/kafka/pull/9997/checks?check_run_id=1820178470]
> {quote} {{java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TopicExistsException: Topic 
> 'primary.test-topic-2' already exists.
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:366)
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:341)
>   at 
> org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.testOneWayReplicationWithAutoOffsetSync(MirrorConnectorsIntegrationBaseTest.java:419)}}
> [...]
>  
> {{Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TopicExistsException: Topic 
> 'primary.test-topic-2' already exists.
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:364)
>   ... 92 more
> Caused by: org.apache.kafka.common.errors.TopicExistsException: Topic 
> 'primary.test-topic-2' already exists.}}
> {quote}
> STDOUT
> {quote} {{2021-02-03 04ː19ː15,975] ERROR [MirrorHeartbeatConnector|task-0] 
> WorkerSourceTask\{id=MirrorHeartbeatConnector-0} failed to send record to 
> heartbeats:  (org.apache.kafka.connect.runtime.WorkerSourceTask:354)
> org.apache.kafka.common.KafkaException: Producer is closed forcefully.
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:750)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:737)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:282)
>   at java.lang.Thread.run(Thread.java:748)}}{quote}
> {quote} {{[2021-02-03 04ː19ː36,767] ERROR Could not check connector state 
> info. 
> (org.apache.kafka.connect.util.clusters.EmbeddedConnectClusterAssertions:420)
> org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: Could not 
> read connector state. Error response: \{"error_code":404,"message":"No status 
> found for connector MirrorSourceConnector"}
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster.connectorStatus(EmbeddedConnectCluster.java:466)
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedConnectClusterAssertions.checkConnectorState(EmbeddedConnectClusterAssertions.java:413)
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedConnectClusterAssertions.lambda$assertConnectorAndAtLeastNumTasksAreRunning$16(EmbeddedConnectClusterAssertions.java:286)
>   at 
> org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:303)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:351)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:319)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:300)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:290)
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedConnectClusterAssertions.assertConnectorAndAtLeastNumTasksAreRunning(EmbeddedConnectClusterAssertions.java:285)
>   at 
> org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.waitUntilMirrorMakerIsRunning(MirrorConnectorsIntegrationBaseTest.java:458)
>   at 
> org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.testReplication(MirrorConnectorsIntegrationBaseTest.java:225)}}
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-12283) Flaky Test RebalanceSourceConnectorsIntegrationTest#testMultipleWorkersRejoining

2021-02-04 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-12283:
--

Assignee: Chia-Ping Tsai

> Flaky Test 
> RebalanceSourceConnectorsIntegrationTest#testMultipleWorkersRejoining
> 
>
> Key: KAFKA-12283
> URL: https://issues.apache.org/jira/browse/KAFKA-12283
> Project: Kafka
>  Issue Type: Test
>  Components: KafkaConnect, unit tests
>Reporter: Matthias J. Sax
>Assignee: Chia-Ping Tsai
>Priority: Critical
>  Labels: flaky-test
>
> https://github.com/apache/kafka/pull/1/checks?check_run_id=1820092809
> {quote} {{java.lang.AssertionError: Tasks are imbalanced: 
> localhost:36037=[seq-source13-0, seq-source13-1, seq-source13-2, 
> seq-source13-3, seq-source12-0, seq-source12-1, seq-source12-2, 
> seq-source12-3]
> localhost:43563=[seq-source11-0, seq-source11-2, seq-source10-0, 
> seq-source10-2]
> localhost:46539=[seq-source11-1, seq-source11-3, seq-source10-1, 
> seq-source10-3]
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest.assertConnectorAndTasksAreUniqueAndBalanced(RebalanceSourceConnectorsIntegrationTest.java:362)
>   at 
> org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:303)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:351)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:319)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:300)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:290)
>   at 
> org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest.testMultipleWorkersRejoining(RebalanceSourceConnectorsIntegrationTest.java:313)}}
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-12284) Flaky Test MirrorConnectorsIntegrationSSLTest#testOneWayReplicationWithAutoOffsetSync

2021-02-04 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-12284:
--

Assignee: (was: Chia-Ping Tsai)

> Flaky Test 
> MirrorConnectorsIntegrationSSLTest#testOneWayReplicationWithAutoOffsetSync
> -
>
> Key: KAFKA-12284
> URL: https://issues.apache.org/jira/browse/KAFKA-12284
> Project: Kafka
>  Issue Type: Test
>  Components: mirrormaker, unit tests
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://github.com/apache/kafka/pull/9997/checks?check_run_id=1820178470]
> {quote} {{java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TopicExistsException: Topic 
> 'primary.test-topic-2' already exists.
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:366)
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:341)
>   at 
> org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.testOneWayReplicationWithAutoOffsetSync(MirrorConnectorsIntegrationBaseTest.java:419)}}
> [...]
>  
> {{Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TopicExistsException: Topic 
> 'primary.test-topic-2' already exists.
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:364)
>   ... 92 more
> Caused by: org.apache.kafka.common.errors.TopicExistsException: Topic 
> 'primary.test-topic-2' already exists.}}
> {quote}
> STDOUT
> {quote} {{2021-02-03 04ː19ː15,975] ERROR [MirrorHeartbeatConnector|task-0] 
> WorkerSourceTask\{id=MirrorHeartbeatConnector-0} failed to send record to 
> heartbeats:  (org.apache.kafka.connect.runtime.WorkerSourceTask:354)
> org.apache.kafka.common.KafkaException: Producer is closed forcefully.
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:750)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:737)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:282)
>   at java.lang.Thread.run(Thread.java:748)}}{quote}
> {quote} {{[2021-02-03 04ː19ː36,767] ERROR Could not check connector state 
> info. 
> (org.apache.kafka.connect.util.clusters.EmbeddedConnectClusterAssertions:420)
> org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: Could not 
> read connector state. Error response: \{"error_code":404,"message":"No status 
> found for connector MirrorSourceConnector"}
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster.connectorStatus(EmbeddedConnectCluster.java:466)
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedConnectClusterAssertions.checkConnectorState(EmbeddedConnectClusterAssertions.java:413)
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedConnectClusterAssertions.lambda$assertConnectorAndAtLeastNumTasksAreRunning$16(EmbeddedConnectClusterAssertions.java:286)
>   at 
> org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:303)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:351)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:319)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:300)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:290)
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedConnectClusterAssertions.assertConnectorAndAtLeastNumTasksAreRunning(EmbeddedConnectClusterAssertions.java:285)
>   at 
> org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.waitUntilMirrorMakerIsRunning(MirrorConnectorsIntegrationBaseTest.java:458)
>   at 
> org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.testReplication(MirrorConnectorsIntegrationBaseTest.java:225)}}
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12335) Upgrade junit from 5.7.0 to 5.7.1

2021-02-19 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12335.

Fix Version/s: 3.0.0
   Resolution: Fixed

> Upgrade junit from 5.7.0 to 5.7.1
> -
>
> Key: KAFKA-12335
> URL: https://issues.apache.org/jira/browse/KAFKA-12335
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
> Fix For: 3.0.0
>
>
> junit 5.7.1 release notes: 
> [https://junit.org/junit5/docs/5.7.1/release-notes/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12339) Add retry to admin client's listOffsets

2021-02-19 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-12339:
---
Summary: Add retry to admin client's listOffsets  (was: Starting new 
connector cluster with new internal topics encounters 
UnknownTopicOrPartitionException)

> Add retry to admin client's listOffsets
> ---
>
> Key: KAFKA-12339
> URL: https://issues.apache.org/jira/browse/KAFKA-12339
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.5.2, 2.8.0, 2.7.1, 2.6.2
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Blocker
>
> After upgrading our connector env to 2.9.0-SNAPSHOT, the connect cluster 
> sometimes encounters the following error.
> {quote}Uncaught exception in herder work thread, exiting:  
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:324)
> org.apache.kafka.connect.errors.ConnectException: Error while getting end 
> offsets for topic 'connect-storage-topic-connect-cluster-1'
> at org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:689)
> at 
> org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:338)
> at org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:195)
> at 
> org.apache.kafka.connect.storage.KafkaStatusBackingStore.start(KafkaStatusBackingStore.java:216)
> at 
> org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:129)
> at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:310)
> at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
> at org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:668)
> ... 10 more
> {quote}
> [https://github.com/apache/kafka/pull/9780] added a shared admin client to get end 
> offsets. KafkaAdmin#listOffsets does not handle topic-level errors, so a topic-level 
> UnknownTopicOrPartitionException can prevent the worker from running when the new 
> internal topic is NOT yet synced to all brokers.
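
A minimal caller-side sketch of the retry behaviour this ticket asks for, built only on the public Admin API; the attempt count and backoff below are illustrative assumptions, not values from the actual fix:

{code:java}
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;

import java.util.Map;
import java.util.concurrent.ExecutionException;

public class ListOffsetsWithRetry {
    // Retry listOffsets while a freshly created internal topic propagates to the brokers,
    // instead of letting the first UnknownTopicOrPartitionException kill the worker.
    static Map<TopicPartition, ListOffsetsResultInfo> endOffsets(
            Admin admin, Map<TopicPartition, OffsetSpec> request) throws Exception {
        int remainingAttempts = 5;                    // illustrative values only
        while (true) {
            try {
                return admin.listOffsets(request).all().get();
            } catch (ExecutionException e) {
                if (e.getCause() instanceof UnknownTopicOrPartitionException
                        && --remainingAttempts > 0) {
                    Thread.sleep(1000);               // topic metadata not yet synced everywhere
                } else {
                    throw e;
                }
            }
        }
    }
}
{code}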



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12356) Drain the existing requests when shutting down InterBrokerSendThread

2021-02-22 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12356:
--

 Summary: Drain the existing requests when shutting down 
InterBrokerSendThread
 Key: KAFKA-12356
 URL: https://issues.apache.org/jira/browse/KAFKA-12356
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai


see https://github.com/apache/kafka/pull/10024#discussion_r571853293
{quote}
We would ideally drain the existing requests (up to a timeout), not accept new 
ones and then close the network client.
{quote}
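
A generic sketch of the shutdown sequence described in the quote, with stand-in types (the Request interface, the queue, and the close callback are all hypothetical; this is not the actual InterBrokerSendThread code):

{code:java}
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.TimeUnit;

public class DrainOnShutdownSketch {
    interface Request { void send(); }                // stand-in for a queued broker request

    private final Queue<Request> queued = new ArrayDeque<>();
    private boolean shuttingDown = false;

    synchronized boolean enqueue(Request r) {
        if (shuttingDown) {
            return false;                             // stop accepting new requests
        }
        return queued.add(r);
    }

    synchronized void shutdown(long timeout, TimeUnit unit, Runnable closeNetworkClient) {
        shuttingDown = true;
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        while (!queued.isEmpty() && System.nanoTime() < deadline) {
            queued.poll().send();                     // drain existing requests up to the timeout
        }
        closeNetworkClient.run();                     // only then close the network client
    }
}
{code}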



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12273) InterBrokerSendThread#pollOnce throws FatalExitError even though it is shutdown correctly

2021-02-18 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-12273:
---
Fix Version/s: 3.0.0

> InterBrokerSendThread#pollOnce throws FatalExitError even though it is 
> shutdown correctly
> -
>
> Key: KAFKA-12273
> URL: https://issues.apache.org/jira/browse/KAFKA-12273
> Project: Kafka
>  Issue Type: Test
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
> Fix For: 3.0.0, 2.8.0
>
>
> Kafka tests sometimes shut down Gradle with a non-zero exit code. One root 
> cause is that InterBrokerSendThread#pollOnce encounters a DisconnectException 
> when the NetworkClient is closing. The DisconnectException should be viewed as an 
> "expected" error because we close the client deliberately. In other words, 
> InterBrokerSendThread#pollOnce should swallow it.
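
A schematic of the proposed handling, with a stand-in Poller interface in place of the real NetworkClient plumbing; only DisconnectException is a real Kafka class here:

{code:java}
import org.apache.kafka.common.errors.DisconnectException;

public class PollOnceSketch {
    interface Poller { void poll(); }                 // stand-in for the wrapped NetworkClient

    private volatile boolean shuttingDown = false;

    void initiateShutdown() {
        shuttingDown = true;
    }

    void pollOnce(Poller client) {
        try {
            client.poll();
        } catch (DisconnectException e) {
            if (!shuttingDown) {
                throw e;                              // unexpected disconnect: keep treating as an error
            }
            // expected while we are closing the client ourselves: swallow it so the
            // thread exits cleanly instead of escalating to a FatalExitError
        }
    }
}
{code}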



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12339) Starting new connector cluster with new internal topics encounters UnknownTopicOrPartitionException

2021-02-18 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-12339:
---
Description: 
After upgrading our connector env to 2.9.0-SNAPSHOT, the connect cluster 
sometimes encounters the following error.
{quote}Uncaught exception in herder work thread, exiting:  
(org.apache.kafka.connect.runtime.distributed.DistributedHerder:324)

org.apache.kafka.connect.errors.ConnectException: Error while getting end 
offsets for topic 'connect-storage-topic-connect-cluster-1'

at org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:689)

at 
org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:338)

at org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:195)

at 
org.apache.kafka.connect.storage.KafkaStatusBackingStore.start(KafkaStatusBackingStore.java:216)

at 
org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:129)

at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:310)

at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)

at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)

at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)

at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)

at java.base/java.lang.Thread.run(Thread.java:834)

Caused by: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.

at 
org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)

at 
org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)

at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)

at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)

at org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:668)

... 10 more
{quote}
[https://github.com/apache/kafka/pull/9780] added a shared admin client to get end 
offsets. KafkaAdmin#listOffsets does not handle topic-level errors, so a topic-level 
UnknownTopicOrPartitionException can prevent the worker from running when the new 
internal topic is NOT yet synced to all brokers.

  was:
After migrating our connector env to 2.9.0-SNAPSHOT, it started to fail to deploy 
the connector cluster. The error message is shown below.

{quote}

Uncaught exception in herder work thread, exiting:  
(org.apache.kafka.connect.runtime.distributed.DistributedHerder:324)

org.apache.kafka.connect.errors.ConnectException: Error while getting end 
offsets for topic 'connect-storage-topic-connect-cluster-1'

at org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:689)

at 
org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:338)

at org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:195)

at 
org.apache.kafka.connect.storage.KafkaStatusBackingStore.start(KafkaStatusBackingStore.java:216)

at 
org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:129)

at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:310)

at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)

at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)

at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)

at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)

at java.base/java.lang.Thread.run(Thread.java:834)

Caused by: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.

at 
org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)

at 
org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)

at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)

at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)

at org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:668)

... 10 more

{quote}

https://github.com/apache/kafka/pull/9780 added a shared admin client to get end 
offsets. KafkaAdmin#listOffsets does not handle topic-level errors, so a topic-level 
UnknownTopicOrPartitionException can prevent the worker from running when the new 
internal topic is NOT yet synced to all brokers.


> Starting new connector cluster with new internal topics encounters 
> UnknownTopicOrPartitionException
> ---
>
> Key: KAFKA-12339
> 

[jira] [Updated] (KAFKA-12339) Starting new connector cluster with new internal topics encounters UnknownTopicOrPartitionException

2021-02-18 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-12339:
---
Summary: Starting new connector cluster with new internal topics encounters 
UnknownTopicOrPartitionException  (was: Starting new connector cluster with new 
internal topics is unstable due to UnknownTopicOrPartitionException)

> Starting new connector cluster with new internal topics encounters 
> UnknownTopicOrPartitionException
> ---
>
> Key: KAFKA-12339
> URL: https://issues.apache.org/jira/browse/KAFKA-12339
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>
> After migrating our connector env to 2.9.0-SNAPSHOT, it started to fail to 
> deploy the connector cluster. The error message is shown below.
> {quote}
> Uncaught exception in herder work thread, exiting:  
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:324)
> org.apache.kafka.connect.errors.ConnectException: Error while getting end 
> offsets for topic 'connect-storage-topic-connect-cluster-1'
> at org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:689)
> at 
> org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:338)
> at org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:195)
> at 
> org.apache.kafka.connect.storage.KafkaStatusBackingStore.start(KafkaStatusBackingStore.java:216)
> at 
> org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:129)
> at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:310)
> at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
> at org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:668)
> ... 10 more
> {quote}
> https://github.com/apache/kafka/pull/9780 added a shared admin client to get end 
> offsets. KafkaAdmin#listOffsets does not handle topic-level errors, so a topic-level 
> UnknownTopicOrPartitionException can prevent the worker from running when the new 
> internal topic is NOT yet synced to all brokers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12339) Starting new connector cluster with new internal topics is unstable due to UnknownTopicOrPartitionException

2021-02-18 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12339:
--

 Summary: Starting new connector cluster with new internal topics 
is unstable due to UnknownTopicOrPartitionException
 Key: KAFKA-12339
 URL: https://issues.apache.org/jira/browse/KAFKA-12339
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


After migrating our connector env to 2.9.0-SNAPSHOT, it started to fail to deploy 
the connector cluster. The error message is shown below.

{quote}

Uncaught exception in herder work thread, exiting:  
(org.apache.kafka.connect.runtime.distributed.DistributedHerder:324)

org.apache.kafka.connect.errors.ConnectException: Error while getting end 
offsets for topic 'connect-storage-topic-connect-cluster-1'

at org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:689)

at 
org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:338)

at org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:195)

at 
org.apache.kafka.connect.storage.KafkaStatusBackingStore.start(KafkaStatusBackingStore.java:216)

at 
org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:129)

at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:310)

at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)

at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)

at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)

at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)

at java.base/java.lang.Thread.run(Thread.java:834)

Caused by: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.

at 
org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)

at 
org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)

at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)

at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)

at org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:668)

... 10 more

{quote}

https://github.com/apache/kafka/pull/9780 added a shared admin client to get end 
offsets. KafkaAdmin#listOffsets does not handle topic-level errors, so a topic-level 
UnknownTopicOrPartitionException can prevent the worker from running when the new 
internal topic is NOT yet synced to all brokers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10859) add @Test annotation to FileStreamSourceTaskTest.testInvalidFile and reduce the loop count to speedup the test

2021-09-04 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409895#comment-17409895
 ] 

Chia-Ping Tsai commented on KAFKA-10859:


{quote}

could you please take a look at my PR?

{quote}

sure!

> add @Test annotation to FileStreamSourceTaskTest.testInvalidFile and reduce 
> the loop count to speedup the test
> --
>
> Key: KAFKA-10859
> URL: https://issues.apache.org/jira/browse/KAFKA-10859
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Tom Bentley
>Priority: Major
>  Labels: newbie
>
> FileStreamSourceTaskTest.testInvalidFile misses a `@Test` annotation. Also, it 
> loops 100 times, which takes about two minutes to complete a single unit test.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-14019) removeMembersFromConsumerGroup can't delete all members when there are no members already

2022-06-24 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-14019:
--

 Summary: removeMembersFromConsumerGroup can't delete all members 
when there are no members already
 Key: KAFKA-14019
 URL: https://issues.apache.org/jira/browse/KAFKA-14019
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai


The root cause is that the method fetches no members from the server, so it fails to 
construct RemoveMembersFromConsumerGroupOptions (which can't accept an empty list).

It seems to me that deleting all members when the list is "empty" is valid; a sketch of 
the affected call pattern follows below.
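
A sketch of the affected call pattern using only the public Admin API; the helper below is hypothetical and only illustrates how building the options from a fetched (possibly empty) member list fails before any request is sent:

{code:java}
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.MemberDescription;
import org.apache.kafka.clients.admin.MemberToRemove;
import org.apache.kafka.clients.admin.RemoveMembersFromConsumerGroupOptions;

import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;

public class RemoveAllMembersSketch {
    static void removeAllMembers(Admin admin, String groupId) throws Exception {
        Collection<MemberDescription> members = admin
            .describeConsumerGroups(Collections.singletonList(groupId))
            .describedGroups().get(groupId).get()
            .members();

        List<MemberToRemove> toRemove = members.stream()
            .filter(m -> m.groupInstanceId().isPresent())        // only static members carry an instance id
            .map(m -> new MemberToRemove(m.groupInstanceId().get()))
            .collect(Collectors.toList());

        // The Collection-based constructor rejects an empty collection, so an already
        // empty group makes this throw on the client side instead of being a no-op.
        RemoveMembersFromConsumerGroupOptions options =
            new RemoveMembersFromConsumerGroupOptions(toRemove);

        admin.removeMembersFromConsumerGroup(groupId, options).all().get();
    }
}
{code}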



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (KAFKA-14544) The "is-future" should be removed from metrics tags after future log becomes current log

2022-12-25 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-14544.

Resolution: Fixed

> The "is-future" should be removed from metrics tags after future log becomes 
> current log
> 
>
> Key: KAFKA-14544
> URL: https://issues.apache.org/jira/browse/KAFKA-14544
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 3.5.0
>
>
> we don't remove the "is-future=true" tag from the future log's metrics after the future 
> log becomes the "current" log. This causes two potential issues:
>  # metrics monitors can't see the Log metrics unless they also track the 
> property "is-future=true".
>  # all Log metrics of the affected partition get removed if the partition is moved 
> to another folder again.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14544) The "is-future" should be removed from metrics tags after future log becomes current log

2022-12-25 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-14544:
---
Fix Version/s: 3.5.0

> The "is-future" should be removed from metrics tags after future log becomes 
> current log
> 
>
> Key: KAFKA-14544
> URL: https://issues.apache.org/jira/browse/KAFKA-14544
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 3.5.0
>
>
> we don't remove the "is-future=true" tag from the future log's metrics after the future 
> log becomes the "current" log. This causes two potential issues:
>  # metrics monitors can't see the Log metrics unless they also track the 
> property "is-future=true".
>  # all Log metrics of the affected partition get removed if the partition is moved 
> to another folder again.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-9087) ReplicaAlterLogDirs stuck and restart fails with java.lang.IllegalStateException: Offset mismatch for the future replica

2023-01-05 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17655155#comment-17655155
 ] 

Chia-Ping Tsai commented on KAFKA-9087:
---

[~junrao] Please take a look at [https://github.com/apache/kafka/pull/13075]

> ReplicaAlterLogDirs stuck and restart fails with 
> java.lang.IllegalStateException: Offset mismatch for the future replica
> 
>
> Key: KAFKA-9087
> URL: https://issues.apache.org/jira/browse/KAFKA-9087
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0
>Reporter: Gregory Koshelev
>Priority: Major
>
> I've started multiple replica movements between log directories and some 
> partitions were stuck. After the restart of the broker I've got exception in 
> server.log:
> {noformat}
> [2019-06-11 17:58:46,304] ERROR [ReplicaAlterLogDirsThread-1]: Error due to 
> (kafka.server.ReplicaAlterLogDirsThread)
>  org.apache.kafka.common.KafkaException: Error processing data for partition 
> metrics_timers-35 offset 4224887
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$7(AbstractFetcherThread.scala:342)
>  at scala.Option.foreach(Option.scala:274)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$6(AbstractFetcherThread.scala:300)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$6$adapted(AbstractFetcherThread.scala:299)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
>  at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$5(AbstractFetcherThread.scala:299)
>  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>  at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
>  at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:299)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3(AbstractFetcherThread.scala:132)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3$adapted(AbstractFetcherThread.scala:131)
>  at scala.Option.foreach(Option.scala:274)
>  at 
> kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:131)
>  at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:113)
>  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
>  Caused by: java.lang.IllegalStateException: Offset mismatch for the future 
> replica metrics_timers-35: fetched offset = 4224887, log end offset = 0.
>  at 
> kafka.server.ReplicaAlterLogDirsThread.processPartitionData(ReplicaAlterLogDirsThread.scala:107)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$7(AbstractFetcherThread.scala:311)
>  ... 16 more
>  [2019-06-11 17:58:46,305] INFO [ReplicaAlterLogDirsThread-1]: Stopped 
> (kafka.server.ReplicaAlterLogDirsThread)
> {noformat}
> Also, ReplicaAlterLogDirsThread has been stopped. Further restarts do not fix 
> the problem. To fix it I've stopped the broker and remove all the stuck 
> future partitions.
> Detailed log below
> {noformat}
> [2019-06-11 12:09:52,833] INFO [Log partition=metrics_timers-35, 
> dir=/storage2/kafka/data] Truncating to 4224887 has no effect as the largest 
> offset in the log is 4224886 (kafka.log.Log)
> [2019-06-11 12:21:34,979] INFO [Log partition=metrics_timers-35, 
> dir=/storage2/kafka/data] Loading producer state till offset 4224887 with 
> message format version 2 (kafka.log.Log)
> [2019-06-11 12:21:34,980] INFO [ProducerStateManager 
> partition=metrics_timers-35] Loading producer state from snapshot file 
> '/storage2/kafka/data/metrics_timers-35/04224887.snapshot' 
> (kafka.log.ProducerStateManager)
> [2019-06-11 12:21:34,980] INFO [Log partition=metrics_timers-35, 
> dir=/storage2/kafka/data] Completed load of log with 1 segments, log start 
> offset 4120720 and log end offset 4224887 in 70 ms (kafka.log.Log)
> [2019-06-11 12:21:45,307] INFO Replica loaded for partition metrics_timers-35 
> with initial high watermark 0 (kafka.cluster.Replica)
> [2019-06-11 12:21:45,307] INFO Replica loaded for partition metrics_timers-35 
> with initial high watermark 0 (kafka.cluster.Replica)
> [2019-06-11 12:21:45,307] INFO Replica loaded for partition metrics_timers-35 
> with initial high watermark 4224887 (kafka.cluster.Replica)
> [2019-06-11 12:21:47,090] INFO [Log partition=metrics_timers-35, 
> dir=/storage2/kafka/data] Truncating to 4224887 has no effect as the largest 
> offset in the log is 4224886 (kafka.log.Log)
> [2019-06-11 12:30:04,757] INFO [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Retrying 

[jira] [Resolved] (KAFKA-9087) ReplicaAlterLogDirs stuck and restart fails with java.lang.IllegalStateException: Offset mismatch for the future replica

2023-01-06 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-9087.
---
Fix Version/s: 3.5.0
 Assignee: Chia-Ping Tsai
   Resolution: Fixed

> ReplicaAlterLogDirs stuck and restart fails with 
> java.lang.IllegalStateException: Offset mismatch for the future replica
> 
>
> Key: KAFKA-9087
> URL: https://issues.apache.org/jira/browse/KAFKA-9087
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0
>Reporter: Gregory Koshelev
>Assignee: Chia-Ping Tsai
>Priority: Major
> Fix For: 3.5.0
>
>
> I've started multiple replica movements between log directories and some 
> partitions were stuck. After the restart of the broker I've got exception in 
> server.log:
> {noformat}
> [2019-06-11 17:58:46,304] ERROR [ReplicaAlterLogDirsThread-1]: Error due to 
> (kafka.server.ReplicaAlterLogDirsThread)
>  org.apache.kafka.common.KafkaException: Error processing data for partition 
> metrics_timers-35 offset 4224887
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$7(AbstractFetcherThread.scala:342)
>  at scala.Option.foreach(Option.scala:274)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$6(AbstractFetcherThread.scala:300)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$6$adapted(AbstractFetcherThread.scala:299)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
>  at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$5(AbstractFetcherThread.scala:299)
>  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>  at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
>  at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:299)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3(AbstractFetcherThread.scala:132)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3$adapted(AbstractFetcherThread.scala:131)
>  at scala.Option.foreach(Option.scala:274)
>  at 
> kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:131)
>  at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:113)
>  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
>  Caused by: java.lang.IllegalStateException: Offset mismatch for the future 
> replica metrics_timers-35: fetched offset = 4224887, log end offset = 0.
>  at 
> kafka.server.ReplicaAlterLogDirsThread.processPartitionData(ReplicaAlterLogDirsThread.scala:107)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$7(AbstractFetcherThread.scala:311)
>  ... 16 more
>  [2019-06-11 17:58:46,305] INFO [ReplicaAlterLogDirsThread-1]: Stopped 
> (kafka.server.ReplicaAlterLogDirsThread)
> {noformat}
> Also, ReplicaAlterLogDirsThread has been stopped. Further restarts do not fix 
> the problem. To fix it I've stopped the broker and remove all the stuck 
> future partitions.
> Detailed log below
> {noformat}
> [2019-06-11 12:09:52,833] INFO [Log partition=metrics_timers-35, 
> dir=/storage2/kafka/data] Truncating to 4224887 has no effect as the largest 
> offset in the log is 4224886 (kafka.log.Log)
> [2019-06-11 12:21:34,979] INFO [Log partition=metrics_timers-35, 
> dir=/storage2/kafka/data] Loading producer state till offset 4224887 with 
> message format version 2 (kafka.log.Log)
> [2019-06-11 12:21:34,980] INFO [ProducerStateManager 
> partition=metrics_timers-35] Loading producer state from snapshot file 
> '/storage2/kafka/data/metrics_timers-35/04224887.snapshot' 
> (kafka.log.ProducerStateManager)
> [2019-06-11 12:21:34,980] INFO [Log partition=metrics_timers-35, 
> dir=/storage2/kafka/data] Completed load of log with 1 segments, log start 
> offset 4120720 and log end offset 4224887 in 70 ms (kafka.log.Log)
> [2019-06-11 12:21:45,307] INFO Replica loaded for partition metrics_timers-35 
> with initial high watermark 0 (kafka.cluster.Replica)
> [2019-06-11 12:21:45,307] INFO Replica loaded for partition metrics_timers-35 
> with initial high watermark 0 (kafka.cluster.Replica)
> [2019-06-11 12:21:45,307] INFO Replica loaded for partition metrics_timers-35 
> with initial high watermark 4224887 (kafka.cluster.Replica)
> [2019-06-11 12:21:47,090] INFO [Log partition=metrics_timers-35, 
> dir=/storage2/kafka/data] Truncating to 4224887 has no effect as the largest 
> offset in the log is 4224886 (kafka.log.Log)
> [2019-06-11 12:30:04,757] INFO [ReplicaFetcher replicaId=1, 

[jira] [Commented] (KAFKA-9087) ReplicaAlterLogDirs stuck and restart fails with java.lang.IllegalStateException: Offset mismatch for the future replica

2023-01-03 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17654133#comment-17654133
 ] 

Chia-Ping Tsai commented on KAFKA-9087:
---

[~junrao] Sorry for the late response.
{quote}So, ReplicaAlterLogDirsThread is supposed to ignore the old fetched data 
and fetch again using the new fetch offset. I am wondering why that didn't 
happen.
{quote}
You are right. The true root cause is as follows.
 # tp-0 is located at broker-0:/tmp/data0
 # tp-0 is moved from /tmp/data0 to /tmp/data1. This creates a new future log 
([https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ReplicaManager.scala#L765])
 and a ReplicaAlterLogDirsThread. The new future log has no leader epoch 
before it syncs data.
 # A partition reassignment is filed, which triggers a LeaderAndIsrRequest. The 
request updates the partition state of the ReplicaAlterLogDirsThread 
([https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ReplicaManager.scala#L1565]),
 and the new offset of the partition state is set to the high watermark of the log.
 # ReplicaAlterLogDirsThread uses the high watermark instead of the 
OffsetsForLeaderEpoch API if there is no epoch cache.
 # The future log is new, so its end offset is 0, and the offset mismatch (0 
vs. the high watermark of the log) causes the error.

In short, the race between processing the LeaderAndIsrRequest and the 
AlterReplicaLogDirsRequest causes this error (on the V2 message format). The 
error can also be reproduced easily on V1 since there is no epoch cache. I'm not 
sure why it used log.highWatermark 
([https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ReplicaManager.scala#L1559]).
 The ReplicaAlterLogDirsThread checks the offset of the "future log" rather than 
the "log". Hence, my two cents: we can replace log.highWatermark with 
futureLog.highWatermark to resolve this issue. I tested it on our cluster and 
it works well (on both V1 and V2).

> ReplicaAlterLogDirs stuck and restart fails with 
> java.lang.IllegalStateException: Offset mismatch for the future replica
> 
>
> Key: KAFKA-9087
> URL: https://issues.apache.org/jira/browse/KAFKA-9087
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0
>Reporter: Gregory Koshelev
>Priority: Major
>
> I've started multiple replica movements between log directories and some 
> partitions were stuck. After the restart of the broker I've got exception in 
> server.log:
> {noformat}
> [2019-06-11 17:58:46,304] ERROR [ReplicaAlterLogDirsThread-1]: Error due to 
> (kafka.server.ReplicaAlterLogDirsThread)
>  org.apache.kafka.common.KafkaException: Error processing data for partition 
> metrics_timers-35 offset 4224887
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$7(AbstractFetcherThread.scala:342)
>  at scala.Option.foreach(Option.scala:274)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$6(AbstractFetcherThread.scala:300)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$6$adapted(AbstractFetcherThread.scala:299)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
>  at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$5(AbstractFetcherThread.scala:299)
>  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>  at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
>  at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:299)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3(AbstractFetcherThread.scala:132)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3$adapted(AbstractFetcherThread.scala:131)
>  at scala.Option.foreach(Option.scala:274)
>  at 
> kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:131)
>  at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:113)
>  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
>  Caused by: java.lang.IllegalStateException: Offset mismatch for the future 
> replica metrics_timers-35: fetched offset = 4224887, log end offset = 0.
>  at 
> kafka.server.ReplicaAlterLogDirsThread.processPartitionData(ReplicaAlterLogDirsThread.scala:107)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$7(AbstractFetcherThread.scala:311)
>  ... 16 more
>  [2019-06-11 17:58:46,305] INFO [ReplicaAlterLogDirsThread-1]: Stopped 
> (kafka.server.ReplicaAlterLogDirsThread)
> {noformat}
> Also, ReplicaAlterLogDirsThread has been stopped. 

[jira] [Created] (KAFKA-14544) The "is-future" should be removed from metrics tags after future log becomes current log

2022-12-21 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-14544:
--

 Summary: The "is-future" should be removed from metrics tags after 
future log becomes current log
 Key: KAFKA-14544
 URL: https://issues.apache.org/jira/browse/KAFKA-14544
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


we don't remove the "is-future=true" tag from the future log's metrics after the future 
log becomes the "current" log. This causes two potential issues (a small sketch of the 
intended re-registration follows below):
 # metrics monitors can't see the Log metrics unless they also track the 
property "is-future=true".
 # all Log metrics of the affected partition get removed if the partition is moved 
to another folder again.
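
A purely illustrative sketch of the intended behaviour; the in-memory map stands in for the broker's metrics registry, and none of these helpers are real Kafka APIs:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class FutureLogMetricsSketch {
    // Keyed by tag set; the value stands in for whatever gauge/meter is registered.
    private final Map<Map<String, String>, Long> registry = new HashMap<>();

    private Map<String, String> tags(String topic, int partition, boolean isFuture) {
        Map<String, String> tags = new HashMap<>();
        tags.put("topic", topic);
        tags.put("partition", String.valueOf(partition));
        if (isFuture) {
            tags.put("is-future", "true");
        }
        return tags;
    }

    void onFutureLogPromoted(String topic, int partition) {
        // Drop the metrics that were registered with is-future=true ...
        Long value = registry.remove(tags(topic, partition, true));
        // ... and re-register them under the normal (current-log) tag set, so monitors
        // that do not filter on is-future keep seeing this partition's metrics.
        registry.put(tags(topic, partition, false), value == null ? 0L : value);
    }
}
{code}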



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-9087) ReplicaAlterLogDirs stuck and restart fails with java.lang.IllegalStateException: Offset mismatch for the future replica

2022-12-21 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17650979#comment-17650979
 ] 

Chia-Ping Tsai commented on KAFKA-9087:
---

We encountered this error as well. The root cause is a race condition:
 # ReplicaAlterLogDirsThread has fetched the data for the topic partition
 # ReplicaManager#alterReplicaLogDirs changes the future log (the start offset 
is reset to 0)
 # ReplicaManager#alterReplicaLogDirs calls 
AbstractFetcherManager#addFetcherForPartitions to add the topic partition (it 
just changes the partition state in the ReplicaAlterLogDirsThread)
 # ReplicaAlterLogDirsThread starts to process the previously fetched data and throws an 
IllegalStateException because the future log has been renewed and its start offset is 
zero.

This bug means the future log of the topic partition can never be synced, because 
the topic partition is marked as failed. It seems to me that we should return 
None instead of throwing IllegalStateException when the start offset of the future log 
is zero (see the sketch below). [~junrao] WDYT? 
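
A schematic of that suggestion with simplified types; Optional.empty() plays the role of Scala's None here, and the real processPartitionData works on much richer structures:

{code:java}
import java.util.Optional;

public class ProcessPartitionDataSketch {
    static Optional<Long> processPartitionData(long fetchedOffset, long futureLogEndOffset) {
        if (futureLogEndOffset == 0 && fetchedOffset != 0) {
            // Stale fetch against a freshly renewed future log: ignore this batch and let
            // the fetcher retry from the new offset instead of marking the partition failed.
            return Optional.empty();
        }
        if (fetchedOffset != futureLogEndOffset) {
            throw new IllegalStateException("Offset mismatch for the future replica: fetched offset = "
                + fetchedOffset + ", log end offset = " + futureLogEndOffset);
        }
        return Optional.of(fetchedOffset);            // offsets line up: append as usual
    }
}
{code}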

> ReplicaAlterLogDirs stuck and restart fails with 
> java.lang.IllegalStateException: Offset mismatch for the future replica
> 
>
> Key: KAFKA-9087
> URL: https://issues.apache.org/jira/browse/KAFKA-9087
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0
>Reporter: Gregory Koshelev
>Priority: Major
>
> I've started multiple replica movements between log directories and some 
> partitions were stuck. After the restart of the broker I've got exception in 
> server.log:
> {noformat}
> [2019-06-11 17:58:46,304] ERROR [ReplicaAlterLogDirsThread-1]: Error due to 
> (kafka.server.ReplicaAlterLogDirsThread)
>  org.apache.kafka.common.KafkaException: Error processing data for partition 
> metrics_timers-35 offset 4224887
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$7(AbstractFetcherThread.scala:342)
>  at scala.Option.foreach(Option.scala:274)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$6(AbstractFetcherThread.scala:300)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$6$adapted(AbstractFetcherThread.scala:299)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
>  at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$5(AbstractFetcherThread.scala:299)
>  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>  at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
>  at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:299)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3(AbstractFetcherThread.scala:132)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3$adapted(AbstractFetcherThread.scala:131)
>  at scala.Option.foreach(Option.scala:274)
>  at 
> kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:131)
>  at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:113)
>  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
>  Caused by: java.lang.IllegalStateException: Offset mismatch for the future 
> replica metrics_timers-35: fetched offset = 4224887, log end offset = 0.
>  at 
> kafka.server.ReplicaAlterLogDirsThread.processPartitionData(ReplicaAlterLogDirsThread.scala:107)
>  at 
> kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$7(AbstractFetcherThread.scala:311)
>  ... 16 more
>  [2019-06-11 17:58:46,305] INFO [ReplicaAlterLogDirsThread-1]: Stopped 
> (kafka.server.ReplicaAlterLogDirsThread)
> {noformat}
> Also, ReplicaAlterLogDirsThread has been stopped. Further restarts do not fix 
> the problem. To fix it I've stopped the broker and remove all the stuck 
> future partitions.
> Detailed log below
> {noformat}
> [2019-06-11 12:09:52,833] INFO [Log partition=metrics_timers-35, 
> dir=/storage2/kafka/data] Truncating to 4224887 has no effect as the largest 
> offset in the log is 4224886 (kafka.log.Log)
> [2019-06-11 12:21:34,979] INFO [Log partition=metrics_timers-35, 
> dir=/storage2/kafka/data] Loading producer state till offset 4224887 with 
> message format version 2 (kafka.log.Log)
> [2019-06-11 12:21:34,980] INFO [ProducerStateManager 
> partition=metrics_timers-35] Loading producer state from snapshot file 
> '/storage2/kafka/data/metrics_timers-35/04224887.snapshot' 
> (kafka.log.ProducerStateManager)
> [2019-06-11 12:21:34,980] INFO [Log partition=metrics_timers-35, 
> dir=/storage2/kafka/data] Completed load of log with 1 segments, log start 
> offset 

[jira] [Created] (KAFKA-14811) The forwarding requests are discarded when network client is changed to/from zk/Kraft

2023-03-15 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-14811:
--

 Summary: The forwarding requests are discarded when network client 
is changed to/from zk/Kraft
 Key: KAFKA-14811
 URL: https://issues.apache.org/jira/browse/KAFKA-14811
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


We don't check the in-flight requests when closing the stale network client. If the 
in-flight requests are related to a metadata request from a client, the client will 
get a timeout exception. If the in-flight requests are related to ISR/leader changes, the 
partition can't be written to because it can't meet min ISR (min.insync.replicas).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14788) Add TopicsToDeleteCount and ReplicasToDeleteCount to QuorumController

2023-03-07 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-14788:
--

 Summary: Add TopicsToDeleteCount and ReplicasToDeleteCount to 
QuorumController
 Key: KAFKA-14788
 URL: https://issues.apache.org/jira/browse/KAFKA-14788
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


TopicsToDeleteCount and ReplicasToDeleteCount are useful for tracking data 
removal when monitoring with Prometheus. As a consequence, we should bring them back 
from the ZK-based controller to KRaft.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-10244) An new java interface to replace 'kafka.common.MessageReader'

2023-03-25 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-10244.

Fix Version/s: 3.5.0
   Resolution: Fixed

> An new java interface to replace 'kafka.common.MessageReader'
> -
>
> Key: KAFKA-10244
> URL: https://issues.apache.org/jira/browse/KAFKA-10244
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>  Labels: need-kip, needs-kip
> Fix For: 3.5.0
>
>
> Inspired by 
> https://github.com/apache/kafka/commit/caa806cd82fb9fa88510c81de53e69ac9846311d.
> kafka.common.MessageReader is a pure Scala trait and we should offer a Java 
> replacement to users.
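
One possible shape for such a Java-friendly replacement, sketched here as an assumption; the interface that actually shipped with the KIP may differ in name and signature:

{code:java}
import java.io.Closeable;
import java.io.InputStream;
import java.util.Iterator;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.Configurable;

// Hypothetical Java counterpart of kafka.common.MessageReader: read records to produce
// from an input stream, with configuration and cleanup hooks.
public interface RecordReaderSketch extends Configurable, Closeable {
    Iterator<ProducerRecord<byte[], byte[]>> readRecords(InputStream input);

    @Override
    default void close() {}
}
{code}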



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-6891) send.buffer.bytes should be allowed to set -1 in KafkaConnect

2023-03-21 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-6891.
---
Fix Version/s: 3.5.0
   Resolution: Fixed

> send.buffer.bytes should be allowed to set -1 in KafkaConnect
> -
>
> Key: KAFKA-6891
> URL: https://issues.apache.org/jira/browse/KAFKA-6891
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.1.0
>Reporter: Oleg Kuznetsov
>Assignee: Zheng-Xian Li
>Priority: Major
> Fix For: 3.5.0
>
>
> *send.buffer.bytes* and *receive.buffer.bytes* are declared with an *atLeast(0)* 
> constraint in *DistributedConfig*, whereas *-1* (use the OS default) should also be 
> allowed
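
A minimal sketch of the direction of the fix using the public ConfigDef API; the defaults and doc strings here are illustrative, not the actual DistributedConfig definitions:

{code:java}
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigDef.Importance;
import org.apache.kafka.common.config.ConfigDef.Range;
import org.apache.kafka.common.config.ConfigDef.Type;

public class BufferConfigSketch {
    // Widening the lower bound from atLeast(0) to atLeast(-1) lets -1 pass validation,
    // where -1 conventionally means "use the OS default buffer size".
    static final ConfigDef CONFIG = new ConfigDef()
        .define("send.buffer.bytes", Type.INT, 128 * 1024, Range.atLeast(-1),
                Importance.MEDIUM, "TCP send buffer size; -1 uses the OS default.")
        .define("receive.buffer.bytes", Type.INT, 32 * 1024, Range.atLeast(-1),
                Importance.MEDIUM, "TCP receive buffer size; -1 uses the OS default.");
}
{code}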



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-14812) ProducerPerformance still counting successful sending in console when sending failed

2023-03-21 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-14812:
--

Assignee: hudeqi

> ProducerPerformance still counting successful sending in console when sending 
> failed
> 
>
> Key: KAFKA-14812
> URL: https://issues.apache.org/jira/browse/KAFKA-14812
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.3.2
>Reporter: hudeqi
>Assignee: hudeqi
>Priority: Major
>  Labels: kafka-producer-perf-test, tools
> Attachments: WechatIMG27.jpeg
>
>
> When using ProducerPerformance, I found that when sending fails, the record is 
> still counted as successfully sent by the stats, and the metrics are printed to the 
> console. For example, when there is no write permission and nothing can actually be 
> written, a sending success rate is still magically displayed.
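
An illustrative sketch of the behaviour the ticket asks for (not the actual ProducerPerformance code): only count a record as sent when the completion callback reports no exception:

{code:java}
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.LongAdder;

public class CountOnlySuccessfulSends {
    static final LongAdder succeeded = new LongAdder();
    static final LongAdder failed = new LongAdder();

    static void send(KafkaProducer<byte[], byte[]> producer, String topic, String payload) {
        ProducerRecord<byte[], byte[]> record =
            new ProducerRecord<>(topic, payload.getBytes(StandardCharsets.UTF_8));
        Callback callback = (metadata, exception) -> {
            if (exception == null) {
                succeeded.increment();   // only acknowledged sends feed the throughput stats
            } else {
                failed.increment();      // e.g. an authorization failure must not look like a success
            }
        };
        producer.send(record, callback);
    }
}
{code}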



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-14295) FetchMessageConversionsPerSec meter not recorded

2023-02-20 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-14295:
--

Assignee: Chia-Ping Tsai

> FetchMessageConversionsPerSec meter not recorded
> 
>
> Key: KAFKA-14295
> URL: https://issues.apache.org/jira/browse/KAFKA-14295
> Project: Kafka
>  Issue Type: Bug
>Reporter: David Mao
>Assignee: Chia-Ping Tsai
>Priority: Major
>
> The broker topic metric FetchMessageConversionsPerSec doesn't get recorded on 
> a fetch message conversion.
> The bug is that we pass in a callback that expects a MultiRecordsSend in 
> KafkaApis:
> {code:java}
> def updateConversionStats(send: Send): Unit = {
>   send match {
> case send: MultiRecordsSend if send.recordConversionStats != null =>
>   send.recordConversionStats.asScala.toMap.foreach {
> case (tp, stats) => updateRecordConversionStats(request, tp, stats)
>   }
> case _ =>
>   }
> } {code}
> But we call this callback with a NetworkSend in the SocketServer:
> {code:java}
> selector.completedSends.forEach { send =>
>   try {
> val response = inflightResponses.remove(send.destinationId).getOrElse {
>   throw new IllegalStateException(s"Send for ${send.destinationId} 
> completed, but not in `inflightResponses`")
> }
> updateRequestMetrics(response)
> // Invoke send completion callback
> response.onComplete.foreach(onComplete => onComplete(send))
> ...{code}
> Note that Selector.completedSends returns a collection of NetworkSend



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14295) FetchMessageConversionsPerSec meter not recorded

2023-02-18 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17690821#comment-17690821
 ] 

Chia-Ping Tsai commented on KAFKA-14295:


I encountered this issue too. [~david.mao] Are you going to file PR?

> FetchMessageConversionsPerSec meter not recorded
> 
>
> Key: KAFKA-14295
> URL: https://issues.apache.org/jira/browse/KAFKA-14295
> Project: Kafka
>  Issue Type: Bug
>Reporter: David Mao
>Priority: Major
>
> The broker topic metric FetchMessageConversionsPerSec doesn't get recorded on 
> a fetch message conversion.
> The bug is that we pass in a callback that expects a MultiRecordsSend in 
> KafkaApis:
> {code:java}
> def updateConversionStats(send: Send): Unit = {
>   send match {
> case send: MultiRecordsSend if send.recordConversionStats != null =>
>   send.recordConversionStats.asScala.toMap.foreach {
> case (tp, stats) => updateRecordConversionStats(request, tp, stats)
>   }
> case _ =>
>   }
> } {code}
> But we call this callback with a NetworkSend in the SocketServer:
> {code:java}
> selector.completedSends.forEach { send =>
>   try {
> val response = inflightResponses.remove(send.destinationId).getOrElse {
>   throw new IllegalStateException(s"Send for ${send.destinationId} 
> completed, but not in `inflightResponses`")
> }
> updateRequestMetrics(response)
> // Invoke send completion callback
> response.onComplete.foreach(onComplete => onComplete(send))
> ...{code}
> Note that Selector.completedSends returns a collection of NetworkSend



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-13874) Avoid synchronization in SocketServer metrics

2023-02-21 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-13874:
--

Assignee: Chia-Ping Tsai

> Avoid synchronization in SocketServer metrics
> -
>
> Key: KAFKA-13874
> URL: https://issues.apache.org/jira/browse/KAFKA-13874
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Assignee: Chia-Ping Tsai
>Priority: Major
>
> For performance reasons, we should avoid synchronization in SocketServer 
> metrics like NetworkProcessorAvgIdlePercent
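
A minimal sketch of the general technique (not the actual SocketServer code): accumulate idle time with a lock-free LongAdder so the hot network threads never take a lock when recording:

{code:java}
import java.util.concurrent.atomic.LongAdder;

public class IdleMeterSketch {
    private final LongAdder idleNanos = new LongAdder();
    private final long startNanos = System.nanoTime();

    void recordIdle(long nanos) {
        idleNanos.add(nanos);                          // called from many processor threads, lock-free
    }

    double avgIdlePercent() {
        long elapsed = Math.max(1, System.nanoTime() - startNanos);
        return (double) idleNanos.sum() / elapsed;     // the (rare) reader pays the aggregation cost
    }
}
{code}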



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14774) the removed listeners should not be reconfigurable

2023-03-02 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-14774:
--

 Summary: the removed listeners should not be reconfigurable
 Key: KAFKA-14774
 URL: https://issues.apache.org/jira/browse/KAFKA-14774
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


Users can alter the broker configuration to remove specific listeners. However, the 
removed listeners are NOT removed from the `reconfigurables` list. This can result in 
idle processors if users subsequently increase the number of network threads.
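
A schematic sketch of the fix's intent; `ListenerBoundReconfigurable` and the plain list are stand-ins for the broker's internal interfaces, not real Kafka APIs:

{code:java}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class ListenerRemovalSketch {
    interface ListenerBoundReconfigurable {
        String listenerName();                         // which listener this component belongs to
    }

    private final List<ListenerBoundReconfigurable> reconfigurables = new CopyOnWriteArrayList<>();

    void onListenerRemoved(String listenerName) {
        // When a listener is removed from the broker config, also drop its components from
        // the reconfigurable registry, so a later increase of num.network.threads cannot
        // spin up processors for a listener that no longer exists.
        reconfigurables.removeIf(r -> r.listenerName().equals(listenerName));
    }
}
{code}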



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14295) FetchMessageConversionsPerSec meter not recorded

2023-02-23 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-14295.

Fix Version/s: 3.5.0
   Resolution: Fixed

> FetchMessageConversionsPerSec meter not recorded
> 
>
> Key: KAFKA-14295
> URL: https://issues.apache.org/jira/browse/KAFKA-14295
> Project: Kafka
>  Issue Type: Bug
>Reporter: David Mao
>Assignee: Chia-Ping Tsai
>Priority: Major
> Fix For: 3.5.0
>
>
> The broker topic metric FetchMessageConversionsPerSec doesn't get recorded on 
> a fetch message conversion.
> The bug is that we pass in a callback that expects a MultiRecordsSend in 
> KafkaApis:
> {code:java}
> def updateConversionStats(send: Send): Unit = {
>   send match {
> case send: MultiRecordsSend if send.recordConversionStats != null =>
>   send.recordConversionStats.asScala.toMap.foreach {
> case (tp, stats) => updateRecordConversionStats(request, tp, stats)
>   }
> case _ =>
>   }
> } {code}
> But we call this callback with a NetworkSend in the SocketServer:
> {code:java}
> selector.completedSends.forEach { send =>
>   try {
> val response = inflightResponses.remove(send.destinationId).getOrElse {
>   throw new IllegalStateException(s"Send for ${send.destinationId} 
> completed, but not in `inflightResponses`")
> }
> updateRequestMetrics(response)
> // Invoke send completion callback
> response.onComplete.foreach(onComplete => onComplete(send))
> ...{code}
> Note that Selector.completedSends returns a collection of NetworkSend



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14853) the serializer/deserialize which extends ClusterResourceListener is not added to Metadata

2023-03-27 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-14853:
---
Summary: the serializer/deserialize which extends ClusterResourceListener 
is not added to Metadata  (was: the serializer/deserialize which extends 
ClusterResourceListener is never called)

> the serializer/deserialize which extends ClusterResourceListener is not added 
> to Metadata
> -
>
> Key: KAFKA-14853
> URL: https://issues.apache.org/jira/browse/KAFKA-14853
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> I noticed this issue when reviewing  KAFKA-14848 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14853) the serializer/deserialize which extends ClusterResourceListener is never called

2023-03-27 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-14853:
--

 Summary: the serializer/deserialize which extends 
ClusterResourceListener is never called
 Key: KAFKA-14853
 URL: https://issues.apache.org/jira/browse/KAFKA-14853
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


I noticed this issue when reviewing  KAFKA-14848 
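
For context, the kind of (de)serializer affected by this report looks like the sketch below: it implements the public ClusterResourceListener interface and expects onUpdate() to be invoked once cluster metadata arrives. This only illustrates the user-side contract, not the fix:

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Map;

import org.apache.kafka.common.ClusterResource;
import org.apache.kafka.common.ClusterResourceListener;
import org.apache.kafka.common.serialization.Serializer;

public class ClusterAwareSerializer implements Serializer<String>, ClusterResourceListener {
    private volatile String clusterId = "unknown";

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {}

    @Override
    public void onUpdate(ClusterResource clusterResource) {
        // Expected to be called by the client once metadata is available; the report is
        // that such serializers were not registered with Metadata, so this never fired.
        clusterId = clusterResource.clusterId();
    }

    @Override
    public byte[] serialize(String topic, String data) {
        return (clusterId + ":" + data).getBytes(StandardCharsets.UTF_8);
    }
}
{code}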



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14774) the removed listeners should not be reconfigurable

2023-03-27 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-14774.

Fix Version/s: 3.5.0
   Resolution: Fixed

> the removed listeners should not be reconfigurable
> --
>
> Key: KAFKA-14774
> URL: https://issues.apache.org/jira/browse/KAFKA-14774
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
> Fix For: 3.5.0
>
>
> Users can alter the broker configuration to remove specific listeners. However, 
> the removed listeners are NOT removed from the `reconfigurables` list. This can 
> result in idle processors if users subsequently increase the number of network 
> threads.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14874) Unable to create > 5000 topics for once when using Kraft

2023-03-31 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-14874:
--

 Summary: Unable to create > 5000 topics for once when using Kraft
 Key: KAFKA-14874
 URL: https://issues.apache.org/jira/browse/KAFKA-14874
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


The error happens due to 
[https://github.com/apache/kafka/blob/trunk/metadata/src/main/java/org/apache/kafka/controller/QuorumController.java#L779]

I encountered this error when creating >5000 topics at once while mirroring a cluster 
from ZK to KRaft. Creating that many topics in one request is allowed by ZK-based 
Kafka (a client-side chunking sketch follows below).

It seems to me there are two possible improvements for this issue:

1) add a more precise error message for this case.

2) make `maxRecordsPerBatch` configurable (there is already a setter: 
[https://github.com/apache/kafka/blob/trunk/metadata/src/main/java/org/apache/kafka/controller/QuorumController.java#L272])
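
A client-side workaround sketch built only on the public Admin API; the chunk size of 1000 is an arbitrary illustrative choice, assumed to stay under the controller's per-batch record limit:

{code:java}
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.ArrayList;
import java.util.List;

public class ChunkedTopicCreation {
    static void createInChunks(Admin admin, List<NewTopic> topics) throws Exception {
        int chunkSize = 1000;                          // illustrative; small enough per batch
        for (int i = 0; i < topics.size(); i += chunkSize) {
            List<NewTopic> chunk = new ArrayList<>(
                topics.subList(i, Math.min(i + chunkSize, topics.size())));
            admin.createTopics(chunk).all().get();     // wait for each chunk before the next
        }
    }
}
{code}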



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14774) the removed listeners should not be reconfigurable

2023-03-29 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-14774:
---
Fix Version/s: 3.4.1

> the removed listeners should not be reconfigurable
> --
>
> Key: KAFKA-14774
> URL: https://issues.apache.org/jira/browse/KAFKA-14774
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
> Fix For: 3.5.0, 3.4.1
>
>
> Users can alter the broker configuration to remove specific listeners. However, 
> the removed listeners are NOT removed from the `reconfigurables` list. This can 
> result in idle processors if users subsequently increase the number of network 
> threads.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14853) the serializer/deserialize which extends ClusterResourceListener is not added to Metadata

2023-03-29 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-14853.

Fix Version/s: 3.5.0
   Resolution: Fixed

> the serializer/deserialize which extends ClusterResourceListener is not added 
> to Metadata
> -
>
> Key: KAFKA-14853
> URL: https://issues.apache.org/jira/browse/KAFKA-14853
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 3.5.0
>
>
> I noticed this issue when reviewing  KAFKA-14848 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

