[jira] [Updated] (KAFKA-17000) Occasional AuthorizerTest thread leak

2024-06-20 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-17000:
--
Description: 
h2. error during AclAuthorizer.configure
{noformat}
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /kafka-acl/DelegationToken
    at app//org.apache.zookeeper.KeeperException.create(KeeperException.java:134)
    at app//org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
    at app//kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:570)
    at app//kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1883)
    at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$2(KafkaZkClient.scala:1294)
    at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$2$adapted(KafkaZkClient.scala:1294)
    at app//scala.collection.immutable.HashSet.foreach(HashSet.scala:958)
    at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$1(KafkaZkClient.scala:1294)
    at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$1$adapted(KafkaZkClient.scala:1292)
    at app//scala.collection.IterableOnceOps.foreach(IterableOnce.scala:576)
    at app//scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:574)
    at app//scala.collection.AbstractIterable.foreach(Iterable.scala:933)
    at app//kafka.zk.KafkaZkClient.createAclPaths(KafkaZkClient.scala:1292)
    at app//kafka.security.authorizer.AclAuthorizer.configure(AclAuthorizer.scala:212)
    at app//kafka.security.authorizer.AuthorizerTest.createAclAuthorizer(AuthorizerTest.scala:1182)
    at app//kafka.security.authorizer.AuthorizerTest.createAuthorizer(AuthorizerTest.scala:1175)
    at app//kafka.security.authorizer.AuthorizerTest.setUp(AuthorizerTest.scala:95)
    at jdk.internal.reflect.GeneratedMethodAccessor138.invoke(Unknown Source)
    at java.base@11.0.15/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base@11.0.15/java.lang.reflect.Method.invoke(Method.java:566)
    at app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
    at app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
    at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
    at app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
    at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:128)
    at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptBeforeEachMethod(TimeoutExtension.java:78)
    at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
    at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
    at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
    at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
    at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
    at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
    at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
    at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
    at app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeMethodInExtensionContext(ClassBasedTestDescriptor.java:521)
    at app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$synthesizeBeforeEachMethodAdapter$23(ClassBasedTestDescriptor.java:506)
    at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeEachMethods$3(TestMethodTestDescriptor.java:175)
    at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:203)
    at app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:203)
    at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeEachMethods(TestMethodTestD
{noformat}
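A plausible reading of the trace above: setUp fails only after AclAuthorizer has already started its internal ZooKeeper client, and since nothing closes the authorizer on that path, the client's threads survive the test, hence the occasional leak. A minimal Scala sketch of the close-on-failure guard, assuming that cause; this is not the actual AuthorizerTest code, and {{authorizerConfigs}} is a hypothetical java.util.Map of authorizer settings:

{code:scala}
import kafka.security.authorizer.AclAuthorizer

// Hedged sketch, not the actual test code: make sure a half-configured
// authorizer is closed so its ZooKeeper client threads do not leak.
val authorizer = new AclAuthorizer()
try {
  authorizer.configure(authorizerConfigs) // threw SessionExpiredException in this report
} catch {
  case e: Throwable =>
    authorizer.close() // stops the ZooKeeper client and its threads
    throw e
}
{code}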

[jira] [Created] (KAFKA-17000) Occasional AuthorizerTest thread leak

2024-06-20 Thread Andras Katona (Jira)
Andras Katona created KAFKA-17000:
-

 Summary: Occasional AuthorizerTest thread leak
 Key: KAFKA-17000
 URL: https://issues.apache.org/jira/browse/KAFKA-17000
 Project: Kafka
  Issue Type: Test
Reporter: Andras Katona


h2. error
{noformat}
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /kafka-acl/DelegationToken
    at app//org.apache.zookeeper.KeeperException.create(KeeperException.java:134)
    at app//org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
    at app//kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:570)
    at app//kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1883)
    at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$2(KafkaZkClient.scala:1294)
    at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$2$adapted(KafkaZkClient.scala:1294)
    at app//scala.collection.immutable.HashSet.foreach(HashSet.scala:958)
    at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$1(KafkaZkClient.scala:1294)
    at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$1$adapted(KafkaZkClient.scala:1292)
    at app//scala.collection.IterableOnceOps.foreach(IterableOnce.scala:576)
    at app//scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:574)
    at app//scala.collection.AbstractIterable.foreach(Iterable.scala:933)
    at app//kafka.zk.KafkaZkClient.createAclPaths(KafkaZkClient.scala:1292)
    at app//kafka.security.authorizer.AclAuthorizer.configure(AclAuthorizer.scala:212)
    at app//kafka.security.authorizer.AuthorizerTest.createAclAuthorizer(AuthorizerTest.scala:1182)
    at app//kafka.security.authorizer.AuthorizerTest.createAuthorizer(AuthorizerTest.scala:1175)
    at app//kafka.security.authorizer.AuthorizerTest.setUp(AuthorizerTest.scala:95)
    at jdk.internal.reflect.GeneratedMethodAccessor138.invoke(Unknown Source)
    at java.base@11.0.15/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base@11.0.15/java.lang.reflect.Method.invoke(Method.java:566)
    at app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
    at app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
    at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
    at app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
    at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:128)
    at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptBeforeEachMethod(TimeoutExtension.java:78)
    at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
    at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
    at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
    at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
    at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
    at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
    at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
    at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
    at app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeMethodInExtensionContext(ClassBasedTestDescriptor.java:521)
    at app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$synthesizeBeforeEachMethodAdapter$23(ClassBasedTestDescriptor.java:506)
    at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeEachMethods$3(TestMethodTestDescriptor.java:175)
    at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:203)
    at app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:203)
    at app//org.junit.jupite

[jira] [Assigned] (KAFKA-17000) Occasional AuthorizerTest thread leak

2024-06-20 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona reassigned KAFKA-17000:
-

Assignee: Andras Katona

> Occasional AuthorizerTest thread leak
> -
>
> Key: KAFKA-17000
> URL: https://issues.apache.org/jira/browse/KAFKA-17000
> Project: Kafka
>  Issue Type: Test
>Reporter: Andras Katona
>Assignee: Andras Katona
>Priority: Major
>
> h2. error
> {noformat}
> org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /kafka-acl/DelegationToken
>   at app//org.apache.zookeeper.KeeperException.create(KeeperException.java:134)
>   at app//org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
>   at app//kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:570)
>   at app//kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1883)
>   at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$2(KafkaZkClient.scala:1294)
>   at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$2$adapted(KafkaZkClient.scala:1294)
>   at app//scala.collection.immutable.HashSet.foreach(HashSet.scala:958)
>   at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$1(KafkaZkClient.scala:1294)
>   at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$1$adapted(KafkaZkClient.scala:1292)
>   at app//scala.collection.IterableOnceOps.foreach(IterableOnce.scala:576)
>   at app//scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:574)
>   at app//scala.collection.AbstractIterable.foreach(Iterable.scala:933)
>   at app//kafka.zk.KafkaZkClient.createAclPaths(KafkaZkClient.scala:1292)
>   at app//kafka.security.authorizer.AclAuthorizer.configure(AclAuthorizer.scala:212)
>   at app//kafka.security.authorizer.AuthorizerTest.createAclAuthorizer(AuthorizerTest.scala:1182)
>   at app//kafka.security.authorizer.AuthorizerTest.createAuthorizer(AuthorizerTest.scala:1175)
>   at app//kafka.security.authorizer.AuthorizerTest.setUp(AuthorizerTest.scala:95)
>   at jdk.internal.reflect.GeneratedMethodAccessor138.invoke(Unknown Source)
>   at java.base@11.0.15/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base@11.0.15/java.lang.reflect.Method.invoke(Method.java:566)
>   at app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
>   at app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
>   at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
>   at app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
>   at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:128)
>   at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptBeforeEachMethod(TimeoutExtension.java:78)
>   at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
>   at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
>   at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
>   at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
>   at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
>   at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
>   at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
>   at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
>   at app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeMethodInExtensionContext(ClassBasedTestDescriptor.java:521)
>   at app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$synthesizeBeforeEachMethodAdapter$23(ClassBasedTestDescriptor.java:506)
>   at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeEachMethods$3(TestMethodTestDescriptor.java:175)
>   at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor

[jira] [Assigned] (KAFKA-15915) Flaky test - testUnrecoverableError - ProducerIdManagerTest

2024-03-27 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona reassigned KAFKA-15915:
-

Assignee: Andras Katona

> Flaky test - testUnrecoverableError - ProducerIdManagerTest
> ---
>
> Key: KAFKA-15915
> URL: https://issues.apache.org/jira/browse/KAFKA-15915
> Project: Kafka
>  Issue Type: Bug
>Reporter: Lucas Brutschy
>Assignee: Andras Katona
>Priority: Major
>  Labels: flaky-test
>
> Test intermittently gives the following result:
> {code}
> java.lang.UnsupportedOperationException: Success.failed
>   at scala.util.Success.failed(Try.scala:277)
>   at kafka.coordinator.transaction.ProducerIdManagerTest.verifyFailure(ProducerIdManagerTest.scala:234)
>   at kafka.coordinator.transaction.ProducerIdManagerTest.testUnrecoverableErrors(ProducerIdManagerTest.scala:199)
> {code}
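For context on the error itself, a minimal Scala sketch (not the test code) of how {{scala.util.Try}} produces exactly this exception. It suggests {{verifyFailure}} inverted a {{Try}} that was actually a {{Success}}, i.e. the operation under test succeeded when the test expected a failure, which is consistent with a race in the flaky run:

{code:scala}
import scala.util.{Success, Try}

// The result under test unexpectedly completed successfully.
val result: Try[Long] = Success(42L)

// Try.failed inverts a Try: a Failure yields Success(exception), while a
// Success yields Failure(UnsupportedOperationException("Success.failed")).
val inverted: Try[Throwable] = result.failed

// Extracting the value then throws the exact error seen in this test:
// java.lang.UnsupportedOperationException: Success.failed
inverted.get
{code}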



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-13922) Unable to generate coverage reports for the whole project

2023-12-20 Thread Andras Katona (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17799084#comment-17799084
 ] 

Andras Katona edited comment on KAFKA-13922 at 12/20/23 4:15 PM:
-

Okay, I was trying to configure jacoco-report-aggregation for the project, because we're removing the rootreport part from build.gradle, which is not working any more. But I failed to get that aggregation plugin included in the timeframe I allocated myself. It's not adding much anyway: Sonar and Jenkins are able to fetch the coverage reports from the build dirs (they would not use the aggregated one anyway), so I convinced myself that it's fine to stop trying :).

I'm doing a last round of tests with my adjustments (see the above mentioned [PR|https://github.com/apache/kafka/pull/11982/files])


was (Author: akatona):
Okay, I was trying to configure the jacoco-report-aggregation for the project, 
because we're removing the rootreport part from the build.gradle, that's not 
working any more. But failed to include that aggregation plugin in the 
timeframe I allocated myself to try. It's not adding much, sonar and jenkins 
are able to fetch the coverage reports from the build dirs (they would not use 
the aggregated one anyway), so I convinced myself that it's fine to stop trying 
:).

I'm doing a last round of tests with my adjustments (see the above mentioned 
[PR}https://github.com/apache/kafka/pull/11982/files])

> Unable to generate coverage reports for the whole project
> -
>
> Key: KAFKA-13922
> URL: https://issues.apache.org/jira/browse/KAFKA-13922
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.5.0
>Reporter: Patrik Nagy
>Assignee: Andras Katona
>Priority: Minor
>  Labels: cloudera
>
> It is documented in the project that if we need code coverage reports for the whole project, we need to run something like this, enabling the test coverage flag and running the reportCoverage task:
> {code:java}
> ./gradlew reportCoverage -PenableTestCoverage=true -Dorg.gradle.parallel=false
> {code}
> If I run it, the build will fail in the end because of jacocoRootReport:
> {code:java}
> 14:34:41 > Task :jacocoRootReport FAILED
> 14:34:41
> 14:34:41 FAILURE: Build failed with an exception.
> 14:34:41
> 14:34:41 * What went wrong:
> 14:34:41 Some problems were found with the configuration of task ':jacocoRootReport' (type 'JacocoReport').
> 14:34:41   - Type 'org.gradle.testing.jacoco.tasks.JacocoReport' property 'jacocoClasspath' doesn't have a configured value.
> 14:34:41
> 14:34:41 Reason: This property isn't marked as optional and no value has been configured.
> 14:34:41
> 14:34:41 Possible solutions:
> 14:34:41   1. Assign a value to 'jacocoClasspath'.
> 14:34:41   2. Mark property 'jacocoClasspath' as optional.
> 14:34:41
> 14:34:41 Please refer to https://docs.gradle.org/7.3.3/userguide/validation_problems.html#value_not_set for more details about this problem.
> 14:34:41   - Type 'org.gradle.testing.jacoco.tasks.JacocoReport' property 'reports.enabledReports.html.outputLocation' doesn't have a configured value.
> 14:34:41
> 14:34:41 Reason: This property isn't marked as optional and no value has been configured.
> 14:34:41
> 14:34:41 Possible solutions:
> 14:34:41   1. Assign a value to 'reports.enabledReports.html.outputLocation'.
> 14:34:41   2. Mark property 'reports.enabledReports.html.outputLocation' as optional.
> 14:34:41
> 14:34:41 Please refer to https://docs.gradle.org/7.3.3/userguide/validation_problems.html#value_not_set for more details about this problem.
> 14:34:41   - Type 'org.gradle.testing.jacoco.tasks.JacocoReport' property 'reports.enabledReports.xml.outputLocation' doesn't have a configured value.
> 14:34:41
> 14:34:41 Reason: This property isn't marked as optional and no value has been configured.
> 14:34:41
> 14:34:41 Possible solutions:
> 14:34:41   1. Assign a value to 'reports.enabledReports.xml.outputLocation'.
> 14:34:41   2. Mark property 'reports.enabledReports.xml.outputLocation' as optional.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-13922) Unable to generate coverage reports for the whole project

2023-12-20 Thread Andras Katona (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17799084#comment-17799084
 ] 

Andras Katona edited comment on KAFKA-13922 at 12/20/23 4:15 PM:
-

Okay, I was trying to configure the jacoco-report-aggregation for the project, 
because we're removing the rootreport part from the build.gradle, that's not 
working any more. But failed to include that aggregation plugin in the 
timeframe I allocated myself to try. It's not adding much, sonar and jenkins 
are able to fetch the coverage reports from the build dirs (they would not use 
the aggregated one anyway), so I convinced myself that it's fine to stop trying 
:).

I'm doing a last round of tests with my adjustments (see the above mentioned 
[PR}https://github.com/apache/kafka/pull/11982/files])


was (Author: akatona):
Okay, I was trying to configure the jacoco-report-aggregation for the project, 
because we're removing the rootreport part from the build.gradle, that's not 
working any more. But failed to include that aggregation plugin in the 
timeframe I allocated myself to try. It's not adding much, sonar and jenkins 
are able to fetch the coverage reports from the build dirs (they would not use 
the aggregated one anyway), so I convinced myself that it's fine to stop trying 
:).


> Unable to generate coverage reports for the whole project
> -
>
> Key: KAFKA-13922
> URL: https://issues.apache.org/jira/browse/KAFKA-13922
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.5.0
>Reporter: Patrik Nagy
>Assignee: Andras Katona
>Priority: Minor
>  Labels: cloudera
>
> It is documented in the project that if we need code coverage reports for the whole project, we need to run something like this, enabling the test coverage flag and running the reportCoverage task:
> {code:java}
> ./gradlew reportCoverage -PenableTestCoverage=true -Dorg.gradle.parallel=false
> {code}
> If I run it, the build will fail in the end because of jacocoRootReport:
> {code:java}
> 14:34:41 > Task :jacocoRootReport FAILED
> 14:34:41
> 14:34:41 FAILURE: Build failed with an exception.
> 14:34:41
> 14:34:41 * What went wrong:
> 14:34:41 Some problems were found with the configuration of task ':jacocoRootReport' (type 'JacocoReport').
> 14:34:41   - Type 'org.gradle.testing.jacoco.tasks.JacocoReport' property 'jacocoClasspath' doesn't have a configured value.
> 14:34:41
> 14:34:41 Reason: This property isn't marked as optional and no value has been configured.
> 14:34:41
> 14:34:41 Possible solutions:
> 14:34:41   1. Assign a value to 'jacocoClasspath'.
> 14:34:41   2. Mark property 'jacocoClasspath' as optional.
> 14:34:41
> 14:34:41 Please refer to https://docs.gradle.org/7.3.3/userguide/validation_problems.html#value_not_set for more details about this problem.
> 14:34:41   - Type 'org.gradle.testing.jacoco.tasks.JacocoReport' property 'reports.enabledReports.html.outputLocation' doesn't have a configured value.
> 14:34:41
> 14:34:41 Reason: This property isn't marked as optional and no value has been configured.
> 14:34:41
> 14:34:41 Possible solutions:
> 14:34:41   1. Assign a value to 'reports.enabledReports.html.outputLocation'.
> 14:34:41   2. Mark property 'reports.enabledReports.html.outputLocation' as optional.
> 14:34:41
> 14:34:41 Please refer to https://docs.gradle.org/7.3.3/userguide/validation_problems.html#value_not_set for more details about this problem.
> 14:34:41   - Type 'org.gradle.testing.jacoco.tasks.JacocoReport' property 'reports.enabledReports.xml.outputLocation' doesn't have a configured value.
> 14:34:41
> 14:34:41 Reason: This property isn't marked as optional and no value has been configured.
> 14:34:41
> 14:34:41 Possible solutions:
> 14:34:41   1. Assign a value to 'reports.enabledReports.xml.outputLocation'.
> 14:34:41   2. Mark property 'reports.enabledReports.xml.outputLocation' as optional.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-13922) Unable to generate coverage reports for the whole project

2023-12-20 Thread Andras Katona (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17799084#comment-17799084
 ] 

Andras Katona commented on KAFKA-13922:
---

Okay, I was trying to configure the jacoco-report-aggregation for the project, 
because we're removing the rootreport part from the build.gradle, that's not 
working any more. But failed to include that aggregation plugin in the 
timeframe I allocated myself to try. It's not adding much, sonar and jenkins 
are able to fetch the coverage reports from the build dirs (they would not use 
the aggregated one anyway), so I convinced myself that it's fine to stop trying 
:).


> Unable to generate coverage reports for the whole project
> -
>
> Key: KAFKA-13922
> URL: https://issues.apache.org/jira/browse/KAFKA-13922
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.5.0
>Reporter: Patrik Nagy
>Assignee: Andras Katona
>Priority: Minor
>  Labels: cloudera
>
> It is documented in the project that if we need code coverage reports for the whole project, we need to run something like this, enabling the test coverage flag and running the reportCoverage task:
> {code:java}
> ./gradlew reportCoverage -PenableTestCoverage=true -Dorg.gradle.parallel=false
> {code}
> If I run it, the build will fail in the end because of jacocoRootReport:
> {code:java}
> 14:34:41 > Task :jacocoRootReport FAILED
> 14:34:41
> 14:34:41 FAILURE: Build failed with an exception.
> 14:34:41
> 14:34:41 * What went wrong:
> 14:34:41 Some problems were found with the configuration of task ':jacocoRootReport' (type 'JacocoReport').
> 14:34:41   - Type 'org.gradle.testing.jacoco.tasks.JacocoReport' property 'jacocoClasspath' doesn't have a configured value.
> 14:34:41
> 14:34:41 Reason: This property isn't marked as optional and no value has been configured.
> 14:34:41
> 14:34:41 Possible solutions:
> 14:34:41   1. Assign a value to 'jacocoClasspath'.
> 14:34:41   2. Mark property 'jacocoClasspath' as optional.
> 14:34:41
> 14:34:41 Please refer to https://docs.gradle.org/7.3.3/userguide/validation_problems.html#value_not_set for more details about this problem.
> 14:34:41   - Type 'org.gradle.testing.jacoco.tasks.JacocoReport' property 'reports.enabledReports.html.outputLocation' doesn't have a configured value.
> 14:34:41
> 14:34:41 Reason: This property isn't marked as optional and no value has been configured.
> 14:34:41
> 14:34:41 Possible solutions:
> 14:34:41   1. Assign a value to 'reports.enabledReports.html.outputLocation'.
> 14:34:41   2. Mark property 'reports.enabledReports.html.outputLocation' as optional.
> 14:34:41
> 14:34:41 Please refer to https://docs.gradle.org/7.3.3/userguide/validation_problems.html#value_not_set for more details about this problem.
> 14:34:41   - Type 'org.gradle.testing.jacoco.tasks.JacocoReport' property 'reports.enabledReports.xml.outputLocation' doesn't have a configured value.
> 14:34:41
> 14:34:41 Reason: This property isn't marked as optional and no value has been configured.
> 14:34:41
> 14:34:41 Possible solutions:
> 14:34:41   1. Assign a value to 'reports.enabledReports.xml.outputLocation'.
> 14:34:41   2. Mark property 'reports.enabledReports.xml.outputLocation' as optional.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-13337) Scanning for Connect plugins can fail with AccessDeniedException

2023-06-02 Thread Andras Katona (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17728616#comment-17728616
 ] 

Andras Katona commented on KAFKA-13337:
---

This MINOR PR fixes the plugin loading error handling:
https://github.com/apache/kafka/pull/13334

> Scanning for Connect plugins can fail with AccessDeniedException
> 
>
> Key: KAFKA-13337
> URL: https://issues.apache.org/jira/browse/KAFKA-13337
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.7.1, 2.6.2, 3.1.0, 2.8.1
>Reporter: Tamás Héri
>Assignee: Andras Katona
>Priority: Minor
> Fix For: 3.6.0
>
>
> During Connect plugin path scan, if an unreadable file/directory is found, 
> Connect will fail with an {{AccessDeniedException}}. As the directories/files 
> can be unreadable, it is best to skip them in this case. See referenced PR.
>  
> {noformat}
> java.nio.file.AccessDeniedException: /tmp/junit8905851398112785578/plugins/.protected
>     at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
>     at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>     at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>     at java.base/sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:432)
>     at java.base/java.nio.file.Files.newDirectoryStream(Files.java:604)
>     at org.apache.kafka.connect.runtime.isolation.PluginUtils.pluginUrls(PluginUtils.java:276)
>     at org.apache.kafka.connect.runtime.isolation.PluginUtilsTest.testPluginUrlsWithProtectedDirectory(PluginUtilsTest.java:481)
> ...
> {noformat}
> The Connect server fails with the following exception (I created an "aaa" directory only readable by root):
> {noformat}
> Could not get listing for plugin path: /var/lib/kafka. Ignoring.
> java.nio.file.AccessDeniedException: /var/lib/kafka/
>     at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
>     at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>     at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>     at sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
>     at java.nio.file.Files.newDirectoryStream(Files.java:589)
>     at org.apache.kafka.connect.runtime.isolation.PluginUtils.pluginUrls(PluginUtils.java:231)
>     at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:241)
>     at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:222)
>     at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:199)
>     at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:60)
>     at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:91)
> {noformat}
> Additional note:
> The Connect server would not stop normally; in my case an extension couldn't be found because of this, which killed Connect at a later point.
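A minimal Scala sketch of the skip-unreadable behavior described above, assuming that silently skipping inaccessible entries is the desired policy; {{readableEntries}} is a hypothetical helper, not the code from the referenced PR:

{code:scala}
import java.nio.file.{AccessDeniedException, Files, Path}
import scala.jdk.CollectionConverters._

// List a directory for plugin scanning, skipping anything we cannot read
// instead of letting AccessDeniedException abort the whole scan.
def readableEntries(pluginPath: Path): Seq[Path] =
  try {
    val stream = Files.newDirectoryStream(pluginPath) // the call that threw above
    try stream.asScala.toVector.filter(Files.isReadable)
    finally stream.close()
  } catch {
    case _: AccessDeniedException => Seq.empty // unreadable: skip, do not fail
  }
{code}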



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-13504) Retry connect internal topics' creation in case of InvalidReplicationFactorException

2023-05-19 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona reassigned KAFKA-13504:
-

Assignee: Andras Katona  (was: Viktor Somogyi-Vass)

> Retry connect internal topics' creation in case of 
> InvalidReplicationFactorException
> 
>
> Key: KAFKA-13504
> URL: https://issues.apache.org/jira/browse/KAFKA-13504
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Andras Katona
>Assignee: Andras Katona
>Priority: Major
>  Labels: cloudera
>
> If the Kafka broker cluster and the Kafka Connect cluster are started together and Connect tries to create its topics, there is a high chance that the creation fails with InvalidReplicationFactorException.
> {noformat}
> ERROR org.apache.kafka.connect.runtime.distributed.DistributedHerder [Worker clientId=connect-1, groupId=connect-cluster] Uncaught exception in herder work thread, exiting: 
> org.apache.kafka.connect.errors.ConnectException: Error while attempting to create/find topic(s) 'connect-offsets'
> ...
> Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 2.
> {noformat}
> Introducing retry logic here would make Connect a bit more robust.
> New configurations:
> * offset.storage.topic.create.retries
> * offset.storage.topic.create.retry.backoff.ms
> * config.storage.topic.create.retries
> * config.storage.topic.create.retry.backoff.ms
> * status.storage.topic.create.retries
> * status.storage.topic.create.retry.backoff.ms
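A minimal Scala sketch of the proposed retry, assuming the semantics suggested by the {{*.create.retries}} and {{*.create.retry.backoff.ms}} names above; {{createTopicWithRetry}} is illustrative, not Connect's actual implementation (where the error typically arrives wrapped in a java.util.concurrent.ExecutionException that would need unwrapping first):

{code:scala}
import org.apache.kafka.common.errors.InvalidReplicationFactorException

// Retry topic creation while the cluster has fewer brokers than the
// requested replication factor, sleeping between attempts.
def createTopicWithRetry(create: () => Unit, retries: Int, backoffMs: Long): Unit = {
  var attempt = 0
  while (true) {
    try {
      create()
      return
    } catch {
      case _: InvalidReplicationFactorException if attempt < retries =>
        attempt += 1
        Thread.sleep(backoffMs) // give more brokers time to register
    }
  }
}
{code}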



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-13922) Unable to generate coverage reports for the whole project

2023-03-24 Thread Andras Katona (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17704565#comment-17704565
 ] 

Andras Katona commented on KAFKA-13922:
---

I have a jacoco-related PR which I want to finish up, so I'm picking this one up too. Checking how it's working on trunk.
(https://github.com/apache/kafka/pull/11982)

> Unable to generate coverage reports for the whole project
> -
>
> Key: KAFKA-13922
> URL: https://issues.apache.org/jira/browse/KAFKA-13922
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.5.0
>Reporter: Patrik Nagy
>Assignee: Andras Katona
>Priority: Minor
>
> It is documented in the project that if we need code coverage reports for the whole project, we need to run something like this, enabling the test coverage flag and running the reportCoverage task:
> {code:java}
> ./gradlew reportCoverage -PenableTestCoverage=true -Dorg.gradle.parallel=false
> {code}
> If I run it, the build will fail in the end because of jacocoRootReport:
> {code:java}
> 14:34:41 > Task :jacocoRootReport FAILED
> 14:34:41
> 14:34:41 FAILURE: Build failed with an exception.
> 14:34:41
> 14:34:41 * What went wrong:
> 14:34:41 Some problems were found with the configuration of task ':jacocoRootReport' (type 'JacocoReport').
> 14:34:41   - Type 'org.gradle.testing.jacoco.tasks.JacocoReport' property 'jacocoClasspath' doesn't have a configured value.
> 14:34:41
> 14:34:41 Reason: This property isn't marked as optional and no value has been configured.
> 14:34:41
> 14:34:41 Possible solutions:
> 14:34:41   1. Assign a value to 'jacocoClasspath'.
> 14:34:41   2. Mark property 'jacocoClasspath' as optional.
> 14:34:41
> 14:34:41 Please refer to https://docs.gradle.org/7.3.3/userguide/validation_problems.html#value_not_set for more details about this problem.
> 14:34:41   - Type 'org.gradle.testing.jacoco.tasks.JacocoReport' property 'reports.enabledReports.html.outputLocation' doesn't have a configured value.
> 14:34:41
> 14:34:41 Reason: This property isn't marked as optional and no value has been configured.
> 14:34:41
> 14:34:41 Possible solutions:
> 14:34:41   1. Assign a value to 'reports.enabledReports.html.outputLocation'.
> 14:34:41   2. Mark property 'reports.enabledReports.html.outputLocation' as optional.
> 14:34:41
> 14:34:41 Please refer to https://docs.gradle.org/7.3.3/userguide/validation_problems.html#value_not_set for more details about this problem.
> 14:34:41   - Type 'org.gradle.testing.jacoco.tasks.JacocoReport' property 'reports.enabledReports.xml.outputLocation' doesn't have a configured value.
> 14:34:41
> 14:34:41 Reason: This property isn't marked as optional and no value has been configured.
> 14:34:41
> 14:34:41 Possible solutions:
> 14:34:41   1. Assign a value to 'reports.enabledReports.xml.outputLocation'.
> 14:34:41   2. Mark property 'reports.enabledReports.xml.outputLocation' as optional.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-13922) Unable to generate coverage reports for the whole project

2023-03-24 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona reassigned KAFKA-13922:
-

Assignee: Andras Katona

> Unable to generate coverage reports for the whole project
> -
>
> Key: KAFKA-13922
> URL: https://issues.apache.org/jira/browse/KAFKA-13922
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.5.0
>Reporter: Patrik Nagy
>Assignee: Andras Katona
>Priority: Minor
>
> It is documented in the project that if we need code coverage reports for the whole project, we need to run something like this, enabling the test coverage flag and running the reportCoverage task:
> {code:java}
> ./gradlew reportCoverage -PenableTestCoverage=true -Dorg.gradle.parallel=false
> {code}
> If I run it, the build will fail in the end because of jacocoRootReport:
> {code:java}
> 14:34:41 > Task :jacocoRootReport FAILED
> 14:34:41
> 14:34:41 FAILURE: Build failed with an exception.
> 14:34:41
> 14:34:41 * What went wrong:
> 14:34:41 Some problems were found with the configuration of task ':jacocoRootReport' (type 'JacocoReport').
> 14:34:41   - Type 'org.gradle.testing.jacoco.tasks.JacocoReport' property 'jacocoClasspath' doesn't have a configured value.
> 14:34:41
> 14:34:41 Reason: This property isn't marked as optional and no value has been configured.
> 14:34:41
> 14:34:41 Possible solutions:
> 14:34:41   1. Assign a value to 'jacocoClasspath'.
> 14:34:41   2. Mark property 'jacocoClasspath' as optional.
> 14:34:41
> 14:34:41 Please refer to https://docs.gradle.org/7.3.3/userguide/validation_problems.html#value_not_set for more details about this problem.
> 14:34:41   - Type 'org.gradle.testing.jacoco.tasks.JacocoReport' property 'reports.enabledReports.html.outputLocation' doesn't have a configured value.
> 14:34:41
> 14:34:41 Reason: This property isn't marked as optional and no value has been configured.
> 14:34:41
> 14:34:41 Possible solutions:
> 14:34:41   1. Assign a value to 'reports.enabledReports.html.outputLocation'.
> 14:34:41   2. Mark property 'reports.enabledReports.html.outputLocation' as optional.
> 14:34:41
> 14:34:41 Please refer to https://docs.gradle.org/7.3.3/userguide/validation_problems.html#value_not_set for more details about this problem.
> 14:34:41   - Type 'org.gradle.testing.jacoco.tasks.JacocoReport' property 'reports.enabledReports.xml.outputLocation' doesn't have a configured value.
> 14:34:41
> 14:34:41 Reason: This property isn't marked as optional and no value has been configured.
> 14:34:41
> 14:34:41 Possible solutions:
> 14:34:41   1. Assign a value to 'reports.enabledReports.xml.outputLocation'.
> 14:34:41   2. Mark property 'reports.enabledReports.xml.outputLocation' as optional.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14387) kafka.common.KafkaException | kafka_2.12-3.3.1.jar

2023-02-27 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona resolved KAFKA-14387.
---
Resolution: Information Provided

> kafka.common.KafkaException  | kafka_2.12-3.3.1.jar
> ---
>
> Key: KAFKA-14387
> URL: https://issues.apache.org/jira/browse/KAFKA-14387
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 3.3.1
>Reporter: masood
>Priority: Major
>
> It appears kafka.common.KafkaException is deprecated in kafka_2.12-3.3.1.jar.
> Please let me know which exception should be used.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14387) kafka.common.KafkaException | kafka_2.12-3.3.1.jar

2023-02-27 Thread Andras Katona (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17694068#comment-17694068
 ] 

Andras Katona commented on KAFKA-14387:
---

It was removed here:
https://github.com/apache/kafka/commit/c9c03dd7ef9ff4edf2596e905cabececc72a9e9d

Its commit message:
{quote}
Use the standard org.apache.kafka.common.KafkaException instead of kafka.common.KafkaException.
{quote}
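For anyone hitting the same compile error, the migration implied by that commit message is a one-line change at the import site (a sketch; the {{fail}} helper is only illustrative):

{code:scala}
// was: import kafka.common.KafkaException (removed)
import org.apache.kafka.common.KafkaException

def fail(message: String): Nothing =
  throw new KafkaException(message)
{code}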

> kafka.common.KafkaException  | kafka_2.12-3.3.1.jar
> ---
>
> Key: KAFKA-14387
> URL: https://issues.apache.org/jira/browse/KAFKA-14387
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 3.3.1
>Reporter: masood
>Priority: Major
>
> It appears kafka.common.KafkaException is deprecated in kafka_2.12-3.3.1.jar.
> Please let me know which exception should be used.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14738) Topic disappears from kafka-topics.sh --list after modifying it with kafka-acls.sh

2023-02-27 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona resolved KAFKA-14738.
---
Resolution: Not A Bug

> Topic disappears from kafka-topics.sh --list after modifying it with kafka-acls.sh
> 
>
> Key: KAFKA-14738
> URL: https://issues.apache.org/jira/browse/KAFKA-14738
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 3.2.3
>Reporter: Gabriel Lukacs
>Priority: Major
>
> Topic is not listed via kafka-topics.sh --list after modifying it with kafka-acls.sh (--add --allow-principal User:CN=test --operation Read):
> $ /opt/kafka/bin/kafka-topics.sh --create --bootstrap-server kafka:9092 --topic test2 --replication-factor 1 --partitions 50
> Created topic test2.
> $ /opt/kafka/bin/kafka-topics.sh --list --bootstrap-server kafka:9092 --topic test2
> test2
> $ /opt/kafka/bin/kafka-acls.sh --bootstrap-server kafka:9092 --topic test2 --add --allow-principal User:CN=test --operation Read
> Adding ACLs for resource `ResourcePattern(resourceType=TOPIC, name=test2, patternType=LITERAL)`:
>         (principal=User:CN=test, host=*, operation=READ, permissionType=ALLOW)
> Current ACLs for resource `ResourcePattern(resourceType=TOPIC, name=test2, patternType=LITERAL)`:
>         (principal=User:CN=test, host=*, operation=READ, permissionType=ALLOW)
> $ /opt/kafka/bin/kafka-topics.sh --list --bootstrap-server kafka:9092 --topic test2
> $ /opt/kafka/bin/kafka-topics.sh --create --bootstrap-server kafka:9092 --topic test2
> Error while executing topic command : Topic 'test2' already exists.
> [2023-02-21 16:37:39,185] ERROR org.apache.kafka.common.errors.TopicExistsException: Topic 'test2' already exists.
>  (kafka.admin.TopicCommand$)
> $ /opt/kafka/bin/kafka-topics.sh --delete --bootstrap-server kafka:9092 --topic test2
> Error while executing topic command : Topic 'test2' does not exist as expected
> [2023-02-21 16:37:49,485] ERROR java.lang.IllegalArgumentException: Topic 'test2' does not exist as expected
>         at kafka.admin.TopicCommand$.kafka$admin$TopicCommand$$ensureTopicExists(TopicCommand.scala:401)
>         at kafka.admin.TopicCommand$TopicService.deleteTopic(TopicCommand.scala:361)
>         at kafka.admin.TopicCommand$.main(TopicCommand.scala:63)
>         at kafka.admin.TopicCommand.main(TopicCommand.scala)
>  (kafka.admin.TopicCommand$)
> $ /opt/kafka/bin/kafka-topics.sh --version
> 3.2.3 (Commit:50029d3ed8ba576f)
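A likely explanation for the Not A Bug resolution: once an ACL exists on a topic, principals that were not granted at least Describe on it are denied, and brokers filter unauthorized topics out of list/describe results instead of returning an error, which is why --list shows nothing while --create reports the topic as existing. A small Scala sketch over the Admin client, assuming this reading and reusing the bootstrap address from the report:

{code:scala}
import java.util.Properties
import org.apache.kafka.clients.admin.{Admin, AdminClientConfig}

val props = new Properties()
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092")
val admin = Admin.create(props)
try {
  // listTopics() silently omits topics this principal cannot Describe,
  // so "test2" disappears for everyone except User:CN=test.
  val visible = admin.listTopics().names().get()
  println(visible.contains("test2"))
} finally admin.close()
{code}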



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14041) Avoid the keyword var for a variable declaration in ConfigTransformer

2023-02-27 Thread Andras Katona (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17694048#comment-17694048
 ] 

Andras Katona commented on KAFKA-14041:
---

The Java compiler doesn't mind: {{var}} is a reserved type name, not a keyword, so it can still be used as a variable name.

> Avoid the keyword var for a variable declaration in ConfigTransformer
> -
>
> Key: KAFKA-14041
> URL: https://issues.apache.org/jira/browse/KAFKA-14041
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: QualiteSys QualiteSys
>Priority: Major
>
> In the file clients\src\main\java\org\apache\kafka\common\config\ConfigTransformer.java a variable named var is declared:
> line 84: for (ConfigVariable var : vars) {
> Since it is a Java keyword, could the variable name be changed?
> Thanks



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-13970) TopicAdmin topic creation should be retried on TimeoutException

2022-07-15 Thread Andras Katona (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17567389#comment-17567389
 ] 

Andras Katona commented on KAFKA-13970:
---

Adding this to the PR of KAFKA-13504.

> TopicAdmin topic creation should be retried on TimeoutException
> ---
>
> Key: KAFKA-13970
> URL: https://issues.apache.org/jira/browse/KAFKA-13970
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Daniel Urban
>Assignee: Sagar Rao
>Priority: Major
>
> org.apache.kafka.connect.util.TopicAdmin#createTopicsWithRetry handles the 
> case when there aren't enough brokers in the cluster to create a topic with 
> the expected replication factor. This logic should also handle the case when 
> there are 0 brokers in the cluster, and should retry in that case.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-13504) Retry connect internal topics' creation in case of InvalidReplicationFactorException

2021-12-03 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-13504:
--
Description: 
If the Kafka broker cluster and the Kafka Connect cluster are started together and Connect tries to create its topics, there is a high chance that the creation fails with InvalidReplicationFactorException.
{noformat}
ERROR org.apache.kafka.connect.runtime.distributed.DistributedHerder [Worker clientId=connect-1, groupId=connect-cluster] Uncaught exception in herder work thread, exiting: 
org.apache.kafka.connect.errors.ConnectException: Error while attempting to create/find topic(s) 'connect-offsets'
...
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 2.
{noformat}
Introducing retry logic here would make Connect a bit more robust.

New configurations:
* offset.storage.topic.create.retries
* offset.storage.topic.create.retry.backoff.ms
* config.storage.topic.create.retries
* config.storage.topic.create.retry.backoff.ms
* status.storage.topic.create.retries
* status.storage.topic.create.retry.backoff.ms


  was:In case the Kafka Broker cluster and the Kafka Connect cluster is started 
together and Connect would want to create its topics, there's a high chance to 
fail the creation with 


> Retry connect internal topics' creation in case of 
> InvalidReplicationFactorException
> 
>
> Key: KAFKA-13504
> URL: https://issues.apache.org/jira/browse/KAFKA-13504
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Andras Katona
>Assignee: Andras Katona
>Priority: Major
>
> If the Kafka broker cluster and the Kafka Connect cluster are started together and Connect tries to create its topics, there is a high chance that the creation fails with InvalidReplicationFactorException.
> {noformat}
> ERROR org.apache.kafka.connect.runtime.distributed.DistributedHerder [Worker clientId=connect-1, groupId=connect-cluster] Uncaught exception in herder work thread, exiting: 
> org.apache.kafka.connect.errors.ConnectException: Error while attempting to create/find topic(s) 'connect-offsets'
> ...
> Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 2.
> {noformat}
> Introducing retry logic here would make Connect a bit more robust.
> New configurations:
> * offset.storage.topic.create.retries
> * offset.storage.topic.create.retry.backoff.ms
> * config.storage.topic.create.retries
> * config.storage.topic.create.retry.backoff.ms
> * status.storage.topic.create.retries
> * status.storage.topic.create.retry.backoff.ms



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (KAFKA-13504) Retry connect internal topics' creation in case of InvalidReplicationFactorException

2021-12-03 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-13504:
--
Description: If the Kafka broker cluster and the Kafka Connect cluster are started together and Connect tries to create its topics, there's a high chance that the creation fails with 

> Retry connect internal topics' creation in case of 
> InvalidReplicationFactorException
> 
>
> Key: KAFKA-13504
> URL: https://issues.apache.org/jira/browse/KAFKA-13504
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Andras Katona
>Assignee: Andras Katona
>Priority: Major
>
> If the Kafka broker cluster and the Kafka Connect cluster are started together and Connect tries to create its topics, there's a high chance that the creation fails with 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (KAFKA-13504) Retry connect internal topics' creation in case of InvalidReplicationFactorException

2021-12-03 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona reassigned KAFKA-13504:
-

Assignee: Andras Katona

> Retry connect internal topics' creation in case of 
> InvalidReplicationFactorException
> 
>
> Key: KAFKA-13504
> URL: https://issues.apache.org/jira/browse/KAFKA-13504
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Andras Katona
>Assignee: Andras Katona
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (KAFKA-13504) Retry connect internal topics' creation in case of InvalidReplicationFactorException

2021-12-03 Thread Andras Katona (Jira)
Andras Katona created KAFKA-13504:
-

 Summary: Retry connect internal topics' creation in case of 
InvalidReplicationFactorException
 Key: KAFKA-13504
 URL: https://issues.apache.org/jira/browse/KAFKA-13504
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Andras Katona






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (KAFKA-13337) Scanning for Connect plugins can fail with AccessDeniedException

2021-10-25 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-13337:
--
Description: 
During Connect plugin path scan, if an unreadable file/directory is found, 
Connect will fail with an {{AccessDeniedException}}. As the directories/files 
can be unreadable, it is best to skip them in this case. See referenced PR.

 
{noformat}
java.nio.file.AccessDeniedException: /tmp/junit8905851398112785578/plugins/.protected
    at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
    at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
    at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
    at java.base/sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:432)
    at java.base/java.nio.file.Files.newDirectoryStream(Files.java:604)
    at org.apache.kafka.connect.runtime.isolation.PluginUtils.pluginUrls(PluginUtils.java:276)
    at org.apache.kafka.connect.runtime.isolation.PluginUtilsTest.testPluginUrlsWithProtectedDirectory(PluginUtilsTest.java:481)
...
{noformat}

The Connect server fails with the following exception (I created an "aaa" directory only readable by root):
{noformat}
Could not get listing for plugin path: /var/lib/kafka. Ignoring.
java.nio.file.AccessDeniedException: /var/lib/kafka/
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
    at sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
    at java.nio.file.Files.newDirectoryStream(Files.java:589)
    at org.apache.kafka.connect.runtime.isolation.PluginUtils.pluginUrls(PluginUtils.java:231)
    at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:241)
    at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:222)
    at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:199)
    at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:60)
    at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:91)
{noformat}

Additional note:
The Connect server would not stop normally; in my case an extension couldn't be found because of this, which killed Connect at a later point.

  was:
During Connect plugin path scan, if an unreadable file/directory is found, 
Connect will fail with an {{AccessDeniedException}}. As the directories/files 
can be unreadable, it is best to skip them in this case. See referenced PR.

 
{noformat}
java.nio.file.AccessDeniedException: 
/tmp/junit8905851398112785578/plugins/.protected
at 
java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
at 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
at 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
at 
java.base/sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:432)
at java.base/java.nio.file.Files.newDirectoryStream(Files.java:604)
at 
org.apache.kafka.connect.runtime.isolation.PluginUtils.pluginUrls(PluginUtils.java:276)
at 
org.apache.kafka.connect.runtime.isolation.PluginUtilsTest.testPluginUrlsWithProtectedDirectory(PluginUtilsTest.java:481)
...
{noformat}

Connect server fails with the following exception, (I created an "aaa" 
directory only readable by root
{noformat}
Could not get listing for plugin path: /var/lib/kafka. Ignoring.
java.nio.file.AccessDeniedException: /var/lib/kafka/
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
at java.nio.file.Files.newDirectoryStream(Files.java:589)
at 
org.apache.kafka.connect.runtime.isolation.PluginUtils.pluginUrls(PluginUtils.java:231)
at 
org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:241)
at 
org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:222)
at 
org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:199)
at 
org.apache.kafka.connect.runtime.isolation.Plugins.(Plugins.java:60)
at 
org.apache

[jira] [Updated] (KAFKA-13337) Scanning for Connect plugins can fail with AccessDeniedException

2021-10-25 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-13337:
--
Fix Version/s: (was: 3.0.1)

> Scanning for Connect plugins can fail with AccessDeniedException
> 
>
> Key: KAFKA-13337
> URL: https://issues.apache.org/jira/browse/KAFKA-13337
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.7.1, 2.6.2, 3.1.0, 2.8.1
>Reporter: Tamás Héri
>Assignee: Tamás Héri
>Priority: Minor
>
> During Connect plugin path scan, if an unreadable file/directory is found, 
> Connect will fail with an {{AccessDeniedException}}. As the directories/files 
> can be unreadable, it is best to skip them in this case. See referenced PR.
>  
> {noformat}
> java.nio.file.AccessDeniedException: 
> /tmp/junit8905851398112785578/plugins/.protected
>   at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>   at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:432)
>   at java.base/java.nio.file.Files.newDirectoryStream(Files.java:604)
>   at 
> org.apache.kafka.connect.runtime.isolation.PluginUtils.pluginUrls(PluginUtils.java:276)
>   at 
> org.apache.kafka.connect.runtime.isolation.PluginUtilsTest.testPluginUrlsWithProtectedDirectory(PluginUtilsTest.java:481)
> ...
> {noformat}
> The Connect server fails with the following exception (I created an "aaa" 
> directory readable only by root):
> {noformat}
> Could not get listing for plugin path: /var/lib/kafka. Ignoring.
> java.nio.file.AccessDeniedException: /var/lib/kafka/
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
>   at java.nio.file.Files.newDirectoryStream(Files.java:589)
>   at 
> org.apache.kafka.connect.runtime.isolation.PluginUtils.pluginUrls(PluginUtils.java:231)
>   at 
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:241)
>   at 
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:222)
>   at 
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:199)
>   at 
> org.apache.kafka.connect.runtime.isolation.Plugins.(Plugins.java:60)
>   at 
> org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:91)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-13337) Scanning for Connect plugins can fail with AccessDeniedException

2021-10-25 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona reassigned KAFKA-13337:
-

Assignee: Tamás Héri  (was: Andras Katona)

> Scanning for Connect plugins can fail with AccessDeniedException
> 
>
> Key: KAFKA-13337
> URL: https://issues.apache.org/jira/browse/KAFKA-13337
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.7.1, 2.6.2, 3.1.0, 2.8.1
>Reporter: Tamás Héri
>Assignee: Tamás Héri
>Priority: Minor
> Fix For: 3.0.1
>
>
> During Connect plugin path scan, if an unreadable file/directory is found, 
> Connect will fail with an {{AccessDeniedException}}. As the directories/files 
> can be unreadable, it is best to skip them in this case. See referenced PR.
>  
> {noformat}
> java.nio.file.AccessDeniedException: 
> /tmp/junit8905851398112785578/plugins/.protected
>   at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>   at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:432)
>   at java.base/java.nio.file.Files.newDirectoryStream(Files.java:604)
>   at 
> org.apache.kafka.connect.runtime.isolation.PluginUtils.pluginUrls(PluginUtils.java:276)
>   at 
> org.apache.kafka.connect.runtime.isolation.PluginUtilsTest.testPluginUrlsWithProtectedDirectory(PluginUtilsTest.java:481)
> ...
> {noformat}
> The Connect server fails with the following exception (I created an "aaa" 
> directory readable only by root):
> {noformat}
> Could not get listing for plugin path: /var/lib/kafka. Ignoring.
> java.nio.file.AccessDeniedException: /var/lib/kafka/
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
>   at java.nio.file.Files.newDirectoryStream(Files.java:589)
>   at 
> org.apache.kafka.connect.runtime.isolation.PluginUtils.pluginUrls(PluginUtils.java:231)
>   at 
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:241)
>   at 
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:222)
>   at 
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:199)
>   at 
> org.apache.kafka.connect.runtime.isolation.Plugins.(Plugins.java:60)
>   at 
> org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:91)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9106) metrics exposed via JMX should be configurable

2021-10-06 Thread Andras Katona (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17424893#comment-17424893
 ] 

Andras Katona commented on KAFKA-9106:
--

Fix Version was set to 2.5.0, but the earliest released Kafka version it made 
it into was 2.6.0.

> metrics exposed via JMX should be configurable
> -
>
> Key: KAFKA-9106
> URL: https://issues.apache.org/jira/browse/KAFKA-9106
> Project: Kafka
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>Priority: Major
> Fix For: 2.6.0
>
>
> Kafka exposes a very large number of metrics, all of which are always visible 
> in JMX by default. On large clusters with many partitions, this may result in 
> tens of thousands of MBeans being registered, which can lead to timeouts with 
> some popular monitoring agents that rely on listing JMX metrics via RMI.
> Making the set of JMX-visible metrics configurable would allow operators to 
> decide on the set of critical metrics to collect and work around limitations 
> of JMX in those cases.
> Corresponding KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-544%3A+Make+metrics+exposed+via+JMX+configurable
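
For illustration: KIP-544 (linked above) added regex-based include/exclude 
filtering to the JMX reporter. Below is a minimal sketch of configuring it 
programmatically, assuming the KIP-544 property names "metrics.jmx.include" and 
"metrics.jmx.exclude"; the regexes themselves are made up:
{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.common.metrics.JmxReporter;

public class JmxFilterSketch {
    public static void main(String[] args) {
        Map<String, Object> config = new HashMap<>();
        // Only metrics whose names match the include pattern (and do not match
        // the exclude pattern) are registered as MBeans by the reporter.
        config.put("metrics.jmx.include", "kafka\\.producer.*"); // made-up pattern
        config.put("metrics.jmx.exclude", ".*records-lag.*");    // made-up pattern

        JmxReporter reporter = new JmxReporter();
        reporter.configure(config); // filtering applies to metrics added afterwards
    }
}
{code}
In a broker or client the same two properties would be set in the normal 
configuration rather than in code; filtered-out metrics are still collected 
internally, they are just not exposed as MBeans.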



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9106) metrics exposed via JMX should be configurable

2021-10-06 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-9106:
-
Fix Version/s: (was: 2.5.0)
   2.6.0

> metrics exposed via JMX should be configurable
> -
>
> Key: KAFKA-9106
> URL: https://issues.apache.org/jira/browse/KAFKA-9106
> Project: Kafka
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>Priority: Major
> Fix For: 2.6.0
>
>
> Kafka exposes a very large number of metrics, all of which are always visible 
> in JMX by default. On large clusters with many partitions, this may result in 
> tens of thousands of MBeans being registered, which can lead to timeouts with 
> some popular monitoring agents that rely on listing JMX metrics via RMI.
> Making the set of JMX-visible metrics configurable would allow operators to 
> decide on the set of critical metrics to collect and work around limitations 
> of JMX in those cases.
> Corresponding KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-544%3A+Make+metrics+exposed+via+JMX+configurable



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-13337) Scanning for Connect plugins can fail with AccessDeniedException

2021-10-01 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-13337:
--
Description: 
During Connect plugin path scan, if an unreadable file/directory is found, 
Connect will fail with an {{AccessDeniedException}}. As the directories/files 
can be unreadable, it is best to skip them in this case. See referenced PR.

 
{noformat}
java.nio.file.AccessDeniedException: 
/tmp/junit8905851398112785578/plugins/.protected
at 
java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
at 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
at 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
at 
java.base/sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:432)
at java.base/java.nio.file.Files.newDirectoryStream(Files.java:604)
at 
org.apache.kafka.connect.runtime.isolation.PluginUtils.pluginUrls(PluginUtils.java:276)
at 
org.apache.kafka.connect.runtime.isolation.PluginUtilsTest.testPluginUrlsWithProtectedDirectory(PluginUtilsTest.java:481)
...
{noformat}

The Connect server fails with the following exception (I created an "aaa" 
directory readable only by root):
{noformat}
Could not get listing for plugin path: /var/lib/kafka. Ignoring.
java.nio.file.AccessDeniedException: /var/lib/kafka/
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
at java.nio.file.Files.newDirectoryStream(Files.java:589)
at 
org.apache.kafka.connect.runtime.isolation.PluginUtils.pluginUrls(PluginUtils.java:231)
at 
org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:241)
at 
org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:222)
at 
org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:199)
at 
org.apache.kafka.connect.runtime.isolation.Plugins.(Plugins.java:60)
at 
org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:91)
{noformat}
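
For illustration only (this is not the code from the referenced PR): a minimal 
sketch of the skip-unreadable behaviour described above, using a hypothetical 
helper name:
{code:java}
import java.io.IOException;
import java.nio.file.AccessDeniedException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class PluginPathScanSketch {
    // Hypothetical helper: list plugin locations under pluginPath, skipping
    // entries that cannot be read instead of letting an AccessDeniedException
    // abort the whole scan.
    static List<Path> readablePluginLocations(Path pluginPath) throws IOException {
        List<Path> locations = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(pluginPath)) {
            for (Path entry : stream) {
                if (Files.isReadable(entry)) {
                    locations.add(entry);
                } else {
                    System.err.println("Skipping unreadable plugin location: " + entry);
                }
            }
        } catch (AccessDeniedException e) {
            // The plugin path itself is unreadable: log and ignore, in the
            // spirit of the "Could not get listing ... Ignoring." message above.
            System.err.println("Could not get listing for plugin path: " + pluginPath + ". Ignoring.");
        }
        return locations;
    }
}
{code}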

  was:
During Connect plugin path scan, if an unreadable file/directory is found, 
Connect will fail with an {{AccessDeniedException}}. As the directories/files 
can be unreadable, it is best to skip them in this case. See referenced PR.

 
{noformat}
java.nio.file.AccessDeniedException: 
/tmp/junit8905851398112785578/plugins/.protected
at 
java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
at 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
at 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
at 
java.base/sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:432)
at java.base/java.nio.file.Files.newDirectoryStream(Files.java:604)
at 
org.apache.kafka.connect.runtime.isolation.PluginUtils.pluginUrls(PluginUtils.java:276)
at 
org.apache.kafka.connect.runtime.isolation.PluginUtilsTest.testPluginUrlsWithProtectedDirectory(PluginUtilsTest.java:481)
...
{noformat}


> Scanning for Connect plugins can fail with AccessDeniedException
> 
>
> Key: KAFKA-13337
> URL: https://issues.apache.org/jira/browse/KAFKA-13337
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.7.1, 2.6.2, 3.1.0, 2.8.1
>Reporter: Tamás Héri
>Assignee: Andras Katona
>Priority: Minor
> Fix For: 3.0.1
>
>
> During Connect plugin path scan, if an unreadable file/directory is found, 
> Connect will fail with an {{AccessDeniedException}}. As the directories/files 
> can be unreadable, it is best to skip them in this case. See referenced PR.
>  
> {noformat}
> java.nio.file.AccessDeniedException: 
> /tmp/junit8905851398112785578/plugins/.protected
>   at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>   at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:432)
>   at java.base/java.nio.file.Files.newDirectoryStream(Files.java:604)
>   at 
> org.apache.kafka.connect.runt

[jira] [Assigned] (KAFKA-13337) Scanning for Connect plugins can fail with AccessDeniedException

2021-10-01 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona reassigned KAFKA-13337:
-

Assignee: Andras Katona

> Scanning for Connect plugins can fail with AccessDeniedException
> 
>
> Key: KAFKA-13337
> URL: https://issues.apache.org/jira/browse/KAFKA-13337
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.7.1, 2.6.2, 3.1.0, 2.8.1
>Reporter: Tamás Héri
>Assignee: Andras Katona
>Priority: Minor
> Fix For: 3.0.1
>
>
> During Connect plugin path scan, if an unreadable file/directory is found, 
> Connect will fail with an {{AccessDeniedException}}. As the directories/files 
> can be unreadable, it is best to skip them in this case. See referenced PR.
>  
> {noformat}
> java.nio.file.AccessDeniedException: 
> /tmp/junit8905851398112785578/plugins/.protected
>   at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>   at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:432)
>   at java.base/java.nio.file.Files.newDirectoryStream(Files.java:604)
>   at 
> org.apache.kafka.connect.runtime.isolation.PluginUtils.pluginUrls(PluginUtils.java:276)
>   at 
> org.apache.kafka.connect.runtime.isolation.PluginUtilsTest.testPluginUrlsWithProtectedDirectory(PluginUtilsTest.java:481)
> ...
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10639) There should be an EnvironmentConfigProvider that will do variable substitution using environment variables.

2021-09-27 Thread Andras Katona (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17420565#comment-17420565
 ] 

Andras Katona commented on KAFKA-10639:
---

A KIP is required.

> There should be an EnvironmentConfigProvider that will do variable 
> substitution using environment variables.
> ---
>
> Key: KAFKA-10639
> URL: https://issues.apache.org/jira/browse/KAFKA-10639
> Project: Kafka
>  Issue Type: Improvement
>  Components: config
>Affects Versions: 2.5.1
>Reporter: Brad Davis
>Assignee: Andras Katona
>Priority: Major
>
> Running Kafka Connect in the same docker container in multiple stages (like 
> dev vs production) means that a file based approach to secret hiding using 
> the file config provider isn't viable.  However, docker container instances 
> can have their environment variables customized on a per-container basis, and 
> our existing tech stack typically exposes per-stage secrets (like the dev DB 
> password vs the prod DB password) through env vars within the containers.
>  
>  
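
For context, a minimal sketch of what such a provider could look like against 
Kafka's public ConfigProvider interface (the class name and behaviour here are 
hypothetical, not a committed implementation):
{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import org.apache.kafka.common.config.ConfigData;
import org.apache.kafka.common.config.provider.ConfigProvider;

// Hypothetical provider that resolves config placeholders from the process
// environment, e.g. "${env::DB_PASSWORD}" once registered under the name "env".
public class EnvironmentConfigProvider implements ConfigProvider {

    @Override
    public void configure(Map<String, ?> configs) {
        // No configuration needed for this sketch.
    }

    @Override
    public ConfigData get(String path) {
        // Env vars have no path hierarchy; return the whole environment.
        return new ConfigData(new HashMap<>(System.getenv()));
    }

    @Override
    public ConfigData get(String path, Set<String> keys) {
        Map<String, String> data = new HashMap<>();
        for (String key : keys) {
            String value = System.getenv(key);
            if (value != null)
                data.put(key, value);
        }
        return new ConfigData(data);
    }

    @Override
    public void close() {
        // Nothing to release.
    }
}
{code}
A worker would register it like any other provider (config.providers=env, 
config.providers.env.class=...), which is exactly the kind of contract a KIP 
would need to pin down.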



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-10639) There should be an EnvironmentConfigProvider that will do variable substitution using environment variables.

2021-09-24 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona reassigned KAFKA-10639:
-

Assignee: Andras Katona

> There should be an EnvironmentConfigProvider that will do variable 
> substitution using environment variables.
> ---
>
> Key: KAFKA-10639
> URL: https://issues.apache.org/jira/browse/KAFKA-10639
> Project: Kafka
>  Issue Type: Improvement
>  Components: config
>Affects Versions: 2.5.1
>Reporter: Brad Davis
>Assignee: Andras Katona
>Priority: Major
>
> Running Kafka Connect in the same docker container in multiple stages (like 
> dev vs production) means that a file based approach to secret hiding using 
> the file config provider isn't viable.  However, docker container instances 
> can have their environment variables customized on a per-container basis, and 
> our existing tech stack typically exposes per-stage secrets (like the dev DB 
> password vs the prod DB password) through env vars within the containers.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-13306) Null connector config value passes validation, but fails creation

2021-09-16 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-13306:
--
Description: 
When validating a connector config containing a property with a null value the 
validation passes, but when creating a connector with the same config the 
worker fails to start the connector because of an invalid config.

Steps to reproduce:
 # Send PUT request to
 \{\{connectRest\}\}/connector-plugins/FileStreamSource/config/validate
 Request body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Response:
{code}
{ "name": "FileStreamSource", "error_count": 0, ... }
{code}

 #  OPTION A:
 Send PUT request to
 \{\{connectRest}}/connectors/file-source/config
 Request body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
OPTION B:
 Send POST request to
 \{\{connectRest}}/connectors/
 Request Body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Result:
The connector is created, but it fails to start with the exception below, which 
indicates an invalid config:
{noformat}
ERROR org.apache.kafka.connect.runtime.WorkerConnector: [file-source|worker] 
WorkerConnector\{id=file-source} Error initializing connector
java.lang.ClassCastException: Non-string value found in original settings for 
key foo: null
at 
org.apache.kafka.common.config.AbstractConfig.originalsStrings(AbstractConfig.java:234)
at 
org.apache.kafka.connect.runtime.WorkerConnector.initialize(WorkerConnector.java:77)
at 
org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:256)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.processConnectorConfigUpdates(DistributedHerder.java:548)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:395)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:289)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{noformat}
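
The ClassCastException comes from AbstractConfig.originalsStrings(), which 
requires every original value to be a String. A minimal sketch of that check, 
modeled on the message in the trace (not copied from the Kafka source):
{code:java}
import java.util.HashMap;
import java.util.Map;

public class OriginalsStringsSketch {
    // Sketch of the conversion that fails: a JSON null in the REST payload
    // arrives as a Java null, which is not a String.
    static Map<String, String> originalsStrings(Map<String, ?> originals) {
        Map<String, String> copy = new HashMap<>();
        for (Map.Entry<String, ?> entry : originals.entrySet()) {
            if (!(entry.getValue() instanceof String))
                throw new ClassCastException("Non-string value found in original settings for key "
                        + entry.getKey() + ": " + entry.getValue());
            copy.put(entry.getKey(), (String) entry.getValue());
        }
        return copy;
    }

    public static void main(String[] args) {
        Map<String, Object> config = new HashMap<>();
        config.put("foo", null);  // the null value from the request body
        originalsStrings(config); // throws, mirroring the trace above
    }
}
{code}
Validation presumably passes because "foo" is not a declared config key, so 
nothing inspects its value until the worker builds the string map.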
 

  was:
When validating a connector config containing a property with a null value the 
validation passes, but when creating a connector with the same config the 
worker fails to start the connector because of an invalid config.

Steps to reproduce:
 # Send PUT request to
 \{\{connectRest\}\}/connector-plugins/FileStreamSource/config/validate
 Request body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Response:
{code}
{ "name": "FileStreamSource", "error_count": 0, ... }
{code}

 #  OPTION A:
 Send PUT request to
 \{\{connectRest}}/connectors/file-source/config
 Request body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
OPTION B:
 Send POST request to
 \{\{connectRest}}/connectors/
 Request Body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Result:
Connector is created but connector fails to start, with below exception that 
indicates an invalid config:
{noformat}
 ERROR org.apache.kafka.connect.runtime.WorkerConnector: [file-source|worker] 
WorkerConnector\{id=file-source} Error initializing connector
 java.lang.ClassCastException: Non-string value found in original settings for 
key foo: null
 at 
org.apache.kafka.common.config.AbstractConfig.originalsStrings(AbstractConfig.java:234)
 at 
org.apache.kafka.connect.runtime.WorkerConnector.initialize(WorkerConnector.java:77)
 at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:256)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.processConnectorConfigUpdates(DistributedHerder.java:548)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:395)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(Distrib

[jira] [Assigned] (KAFKA-13306) Null connector config value passes validation, but fails creation

2021-09-16 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona reassigned KAFKA-13306:
-

Assignee: Andras Katona

> Null connector config value passes validation, but fails creation
> -
>
> Key: KAFKA-13306
> URL: https://issues.apache.org/jira/browse/KAFKA-13306
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Laszlo Istvan Hunyady
>Assignee: Andras Katona
>Priority: Major
>
> When validating a connector config containing a property with a null value 
> the validation passes, but when creating a connector with the same config the 
> worker fails to start the connector because of an invalid config.
> Steps to reproduce:
>  # Send PUT request to
>  \{\{connectRest\}\}/connector-plugins/FileStreamSource/config/validate
>  Request body:
> {code}
> {
>   "connector.class": "FileStreamSource",
>   "name": "file-source",
>   "topic": "target-topic",
>   "file":"/source.txt",
>   "tasks.max": "1",
>   "foo": null
> }
> {code}
> Response:
> {code}
> { "name": "FileStreamSource", "error_count": 0, ... }
> {code}
>  #  OPTION A:
>  Send PUT request to
>  \{\{connectRest}}/connectors/file-source/config
>  Request body:
> {code}
> {
>   "connector.class": "FileStreamSource",
>   "name": "file-source",
>   "topic": "target-topic",
>   "file":"/source.txt",
>   "tasks.max": "1",
>   "foo": null
> }
> {code}
> OPTION B:
>  Send POST request to
>  \{\{connectRest}}/connectors/
>  Request Body:
> {code}
> {
>   "connector.class": "FileStreamSource",
>   "name": "file-source",
>   "topic": "target-topic",
>   "file":"/source.txt",
>   "tasks.max": "1",
>   "foo": null
> }
> {code}
> Result:
> The connector is created, but it fails to start with the exception below, 
> which indicates an invalid config:
> {noformat}
>  ERROR org.apache.kafka.connect.runtime.WorkerConnector: [file-source|worker] 
> WorkerConnector\{id=file-source} Error initializing connector
>  java.lang.ClassCastException: Non-string value found in original settings 
> for key foo: null
>  at 
> org.apache.kafka.common.config.AbstractConfig.originalsStrings(AbstractConfig.java:234)
>  at 
> org.apache.kafka.connect.runtime.WorkerConnector.initialize(WorkerConnector.java:77)
>  at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:256)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.processConnectorConfigUpdates(DistributedHerder.java:548)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:395)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:289)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {noformat}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-13306) Null connector config value passes validation, but fails creation

2021-09-16 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-13306:
--
Description: 
When validating a connector config containing a property with a null value the 
validation passes, but when creating a connector with the same config the 
worker fails to start the connector because of an invalid config.

Steps to reproduce:
 # Send PUT request to
 \{\{connectRest\}\}/connector-plugins/FileStreamSource/config/validate
 Request body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Response:
{code}
{ "name": "FileStreamSource", "error_count": 0, ... }
{code}

 #  OPTION A:
 Send PUT request to
 \{\{connectRest}}/connectors/file-source/config
 Request body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
OPTION B:
 Send POST request to
 \{\{connectRest}}/connectors/
 Request Body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Result:
The connector is created, but it fails to start with the exception below, which 
indicates an invalid config:
{noformat}
 ERROR org.apache.kafka.connect.runtime.WorkerConnector: [file-source|worker] 
WorkerConnector\{id=file-source} Error initializing connector
 java.lang.ClassCastException: Non-string value found in original settings for 
key foo: null
 at 
org.apache.kafka.common.config.AbstractConfig.originalsStrings(AbstractConfig.java:234)
 at 
org.apache.kafka.connect.runtime.WorkerConnector.initialize(WorkerConnector.java:77)
 at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:256)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.processConnectorConfigUpdates(DistributedHerder.java:548)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:395)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:289)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
{noformat}
 

  was:
When validating a connector config containing a property with a null value the 
validation passes, but when creating a connector with the same config the 
worker fails to start the connector because of an invalid config.

Steps to reproduce:
 # Send PUT request to
 \{\{connectRest\}\}/connector-plugins/FileStreamSource/config/validate
 Request body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Response:
{code}
{ "name": "FileStreamSource", "error_count": 0, ... }
{code}

 #  OPTION A:
 Send PUT request to
 \{\{connectRest}}/connectors/file-source/config
 Request body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
OPTION B:
 Send POST request to
 \{\{connectRest}}/connectors/
 Request Body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Result:
{noformat}
 Connector is created but connector fails to start, with below exception that 
indicates an invalid config:
 ERROR org.apache.kafka.connect.runtime.WorkerConnector: [file-source|worker] 
WorkerConnector\{id=file-source} Error initializing connector
 java.lang.ClassCastException: Non-string value found in original settings for 
key foo: null
 at 
org.apache.kafka.common.config.AbstractConfig.originalsStrings(AbstractConfig.java:234)
 at 
org.apache.kafka.connect.runtime.WorkerConnector.initialize(WorkerConnector.java:77)
 at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:256)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.processConnectorConfigUpdates(DistributedHerder.java:548)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:395)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:289)
 at java.util.concurrent.Executors$RunnableAdapter.call(Execut

[jira] [Updated] (KAFKA-13306) Null connector config value passes validation, but fails creation

2021-09-16 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-13306:
--
Summary: Null connector config value passes validation, but fails creation  
(was: Null config value passes validation, but fails creation)

> Null connector config value passes validation, but fails creation
> -
>
> Key: KAFKA-13306
> URL: https://issues.apache.org/jira/browse/KAFKA-13306
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Laszlo Istvan Hunyady
>Priority: Major
>
> When validating a connector config containing a property with a null value 
> the validation passes, but when creating a connector with the same config the 
> worker fails to start the connector because of an invalid config.
> Steps to reproduce:
>  # Send PUT request to
>  \{\{connectRest\}\}/connector-plugins/FileStreamSource/config/validate
>  Request body:
> {code}
> {
>   "connector.class": "FileStreamSource",
>   "name": "file-source",
>   "topic": "target-topic",
>   "file":"/source.txt",
>   "tasks.max": "1",
>   "foo": null
> }
> {code}
> Response:
> {code}
> { "name": "FileStreamSource", "error_count": 0, ... }
> {code}
>  #  OPTION A:
>  Send PUT request to
>  \{\{connectRest}}/connectors/file-source/config
>  Request body:
> {code}
> {
>   "connector.class": "FileStreamSource",
>   "name": "file-source",
>   "topic": "target-topic",
>   "file":"/source.txt",
>   "tasks.max": "1",
>   "foo": null
> }
> {code}
> OPTION B:
>  Send POST request to
>  \{\{connectRest}}/connectors/
>  Request Body:
> {code}
> {
>   "connector.class": "FileStreamSource",
>   "name": "file-source",
>   "topic": "target-topic",
>   "file":"/source.txt",
>   "tasks.max": "1",
>   "foo": null
> }
> {code}
> Result:
> {noformat}
>  The connector is created, but it fails to start with the exception below, 
> which indicates an invalid config:
>  ERROR org.apache.kafka.connect.runtime.WorkerConnector: [file-source|worker] 
> WorkerConnector\{id=file-source} Error initializing connector
>  java.lang.ClassCastException: Non-string value found in original settings 
> for key foo: null
>  at 
> org.apache.kafka.common.config.AbstractConfig.originalsStrings(AbstractConfig.java:234)
>  at 
> org.apache.kafka.connect.runtime.WorkerConnector.initialize(WorkerConnector.java:77)
>  at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:256)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.processConnectorConfigUpdates(DistributedHerder.java:548)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:395)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:289)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {noformat}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-13306) Null config value passes validation, but fails creation

2021-09-16 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-13306:
--
Description: 
When validating a connector config containing a property with a null value the 
validation passes, but when creating a connector with the same config the 
worker fails to start the connector because of an invalid config.

Steps to reproduce:
 # Send PUT request to
 \{\{connectRest\}\}/connector-plugins/FileStreamSource/config/validate
 Request body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Response:
{code}
{ "name": "FileStreamSource", "error_count": 0, ... }
{code}

 #  OPTION A:
 Send PUT request to
 \{\{connectRest}}/connectors/file-source/config
 Request body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
OPTION B:
 Send POST request to
 \{\{connectRest}}/connectors/
 Request Body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Result:
{noformat}
 The connector is created, but it fails to start with the exception below, 
which indicates an invalid config:
 ERROR org.apache.kafka.connect.runtime.WorkerConnector: [file-source|worker] 
WorkerConnector\{id=file-source} Error initializing connector
 java.lang.ClassCastException: Non-string value found in original settings for 
key foo: null
 at 
org.apache.kafka.common.config.AbstractConfig.originalsStrings(AbstractConfig.java:234)
 at 
org.apache.kafka.connect.runtime.WorkerConnector.initialize(WorkerConnector.java:77)
 at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:256)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.processConnectorConfigUpdates(DistributedHerder.java:548)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:395)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:289)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
{noformat}
 

  was:
When validating a connector config containing a property with a null value the 
validation passes, but when creating a connector with the same config the 
worker fails to start the connector because of an invalid config.

Steps to reproduce:
 # Send PUT request to
 \{\{connectRest}}/connector-plugins/FileStreamSource/config/validate
 Request body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Response:
{code}
{ "name": "FileStreamSource", "error_count": 0, ... }
{code}

 #  OPTION A:
 Send PUT request to
 \{\{connectRest}}/connectors/file-source/config
 Request body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
OPTION B:
 Send POST request to
 \{\{connectRest}}/connectors/
 Request Body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Result:
{noformat}
 Connector is created but connector fails to start, with below exception that 
indicates an invalid config:
 ERROR org.apache.kafka.connect.runtime.WorkerConnector: [file-source|worker] 
WorkerConnector\{id=file-source} Error initializing connector
 java.lang.ClassCastException: Non-string value found in original settings for 
key foo: null
 at 
org.apache.kafka.common.config.AbstractConfig.originalsStrings(AbstractConfig.java:234)
 at 
org.apache.kafka.connect.runtime.WorkerConnector.initialize(WorkerConnector.java:77)
 at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:256)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.processConnectorConfigUpdates(DistributedHerder.java:548)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:395)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:289)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executo

[jira] [Updated] (KAFKA-13306) Null config value passes validation, but fails creation

2021-09-16 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-13306:
--
Description: 
When validating a connector config containing a property with a null value the 
validation passes, but when creating a connector with the same config the 
worker fails to start the connector because of an invalid config.

Steps to reproduce:
 # Send PUT request to
 \{\{connectRest}}/connector-plugins/FileStreamSource/config/validate
 Request body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Response:
{code}
{ "name": "FileStreamSource", "error_count": 0, ... }
{code}

 #  OPTION A:
 Send PUT request to
 \{\{connectRest}}/connectors/file-source/config
 Request body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
OPTION B:
 Send POST request to
 \{\{connectRest}}/connectors/
 Request Body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Result:
{noformat}
 The connector is created, but it fails to start with the exception below, 
which indicates an invalid config:
 ERROR org.apache.kafka.connect.runtime.WorkerConnector: [file-source|worker] 
WorkerConnector\{id=file-source} Error initializing connector
 java.lang.ClassCastException: Non-string value found in original settings for 
key foo: null
 at 
org.apache.kafka.common.config.AbstractConfig.originalsStrings(AbstractConfig.java:234)
 at 
org.apache.kafka.connect.runtime.WorkerConnector.initialize(WorkerConnector.java:77)
 at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:256)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.processConnectorConfigUpdates(DistributedHerder.java:548)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:395)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:289)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
{noformat}
 

  was:
When validating a connector config containing a property with a null value the 
validation passes, but when creating a connector with the same config the 
worker fails to start the connector because of an invalid config.

Steps to reproduce:
 # Send PUT request to
 \{{connectRest}}/connector-plugins/FileStreamSource/config/validate
 Request body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Response:
{code}
{ "name": "FileStreamSource", "error_count": 0, ... }
{code}

 #  OPTION A:
 Send PUT request to
 \{{connectRest}}/connectors/file-source/config
 Request body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
OPTION B:
 Send POST request to
 \{{connectRest}}/connectors/
 Request Body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Result:
{noformat}
 Connector is created but connector fails to start, with below exception that 
indicates an invalid config:
 ERROR org.apache.kafka.connect.runtime.WorkerConnector: [file-source|worker] 
WorkerConnector\{id=file-source} Error initializing connector
 java.lang.ClassCastException: Non-string value found in original settings for 
key foo: null
 at 
org.apache.kafka.common.config.AbstractConfig.originalsStrings(AbstractConfig.java:234)
 at 
org.apache.kafka.connect.runtime.WorkerConnector.initialize(WorkerConnector.java:77)
 at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:256)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.processConnectorConfigUpdates(DistributedHerder.java:548)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:395)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:289)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.ja

[jira] [Updated] (KAFKA-13306) Null config value passes validation, but fails creation

2021-09-16 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-13306:
--
Description: 
When validating a connector config containing a property with a null value the 
validation passes, but when creating a connector with the same config the 
worker fails to start the connector because of an invalid config.

Steps to reproduce:
 # Send PUT request to
 \{{connectRest}}/connector-plugins/FileStreamSource/config/validate
 Request body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Response:
{code}
{ "name": "FileStreamSource", "error_count": 0, ... }
{code}

 #  OPTION A:
 Send PUT request to
 \{{connectRest}}/connectors/file-source/config
 Request body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
OPTION B:
 Send POST request to
 \{{connectRest}}/connectors/
 Request Body:
{code}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Result:
{noformat}
 The connector is created, but it fails to start with the exception below, 
which indicates an invalid config:
 ERROR org.apache.kafka.connect.runtime.WorkerConnector: [file-source|worker] 
WorkerConnector\{id=file-source} Error initializing connector
 java.lang.ClassCastException: Non-string value found in original settings for 
key foo: null
 at 
org.apache.kafka.common.config.AbstractConfig.originalsStrings(AbstractConfig.java:234)
 at 
org.apache.kafka.connect.runtime.WorkerConnector.initialize(WorkerConnector.java:77)
 at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:256)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.processConnectorConfigUpdates(DistributedHerder.java:548)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:395)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:289)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
{noformat}
 

  was:
When validating a connector config containing a property with a null value the 
validation passes, but when creating a connector with the same config the 
worker fails to start the connector because of an invalid config.

Steps to reproduce:
 # Send PUT request to
 {{connectRest}}/connector-plugins/FileStreamSource/config/validate
 Request body:
{code:java}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Response:
{code:java}
{ "name": "FileStreamSource", "error_count": 0, ... }
{code}

 #  OPTION A:
 Send PUT request to
 {{connectRest}}/connectors/file-source/config
 Request body:
{code:java}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
OPTION B:
 Send POST request to
 {{connectRest}}/connectors/
 Request Body:
{code:java}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Result:
{noformat}
 Connector is created but connector fails to start, with below exception that 
indicates an invalid config:
 ERROR org.apache.kafka.connect.runtime.WorkerConnector: [file-source|worker] 
WorkerConnector\{id=file-source} Error initializing connector
 java.lang.ClassCastException: Non-string value found in original settings for 
key foo: null
 at 
org.apache.kafka.common.config.AbstractConfig.originalsStrings(AbstractConfig.java:234)
 at 
org.apache.kafka.connect.runtime.WorkerConnector.initialize(WorkerConnector.java:77)
 at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:256)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.processConnectorConfigUpdates(DistributedHerder.java:548)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:395)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:289)
 at java.util.concurrent.Executors$RunnableAdapter.cal

[jira] [Updated] (KAFKA-13306) Null config value passes validation, but fails creation

2021-09-16 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-13306:
--
Component/s: KafkaConnect

> Null config value passes validation, but fails creation
> ---
>
> Key: KAFKA-13306
> URL: https://issues.apache.org/jira/browse/KAFKA-13306
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Laszlo Istvan Hunyady
>Priority: Major
>
> When validating a connector config containing a property with a null value 
> the validation passes, but when creating a connector with the same config the 
> worker fails to start the connector because of an invalid config.
> Steps to reproduce:
>  # Send PUT request to
>  {{connectRest}}/connector-plugins/FileStreamSource/config/validate
>  Request body:
> {code:java}
> {
>   "connector.class": "FileStreamSource",
>   "name": "file-source",
>   "topic": "target-topic",
>   "file":"/source.txt",
>   "tasks.max": "1",
>   "foo": null
> }
> {code}
> Response:
> {code:java}
> { "name": "FileStreamSource", "error_count": 0, ... }
> {code}
>  #  OPTION A:
>  Send PUT request to
>  {{connectRest}}/connectors/file-source/config
>  Request body:
> {code:java}
> {
>   "connector.class": "FileStreamSource",
>   "name": "file-source",
>   "topic": "target-topic",
>   "file":"/source.txt",
>   "tasks.max": "1",
>   "foo": null
> }
> {code}
> OPTION B:
>  Send POST request to
>  {{connectRest}}/connectors/
>  Request Body:
> {code:java}
> {
>   "connector.class": "FileStreamSource",
>   "name": "file-source",
>   "topic": "target-topic",
>   "file":"/source.txt",
>   "tasks.max": "1",
>   "foo": null
> }
> {code}
> Result:
> {noformat}
>  The connector is created, but it fails to start with the exception below, 
> which indicates an invalid config:
>  ERROR org.apache.kafka.connect.runtime.WorkerConnector: [file-source|worker] 
> WorkerConnector\{id=file-source} Error initializing connector
>  java.lang.ClassCastException: Non-string value found in original settings 
> for key foo: null
>  at 
> org.apache.kafka.common.config.AbstractConfig.originalsStrings(AbstractConfig.java:234)
>  at 
> org.apache.kafka.connect.runtime.WorkerConnector.initialize(WorkerConnector.java:77)
>  at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:256)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.processConnectorConfigUpdates(DistributedHerder.java:548)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:395)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:289)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {noformat}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-13306) Null config value passes validation, but fails creation

2021-09-16 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-13306:
--
Description: 
When validating a connector config containing a property with a null value the 
validation passes, but when creating a connector with the same config the 
worker fails to start the connector because of an invalid config.

Steps to reproduce:
 # Send PUT request to
 {{connectRest}}/connector-plugins/FileStreamSource/config/validate
 Request body:
{code:java}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Response:
{code:java}
{ "name": "FileStreamSource", "error_count": 0, ... }
{code}

 #  OPTION A:
 Send PUT request to
 {{connectRest}}/connectors/file-source/config
 Request body:
{code:java}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
OPTION B:
 Send POST request to
 {{connectRest}}/connectors/
 Request Body:
{code:java}
{
  "connector.class": "FileStreamSource",
  "name": "file-source",
  "topic": "target-topic",
  "file":"/source.txt",
  "tasks.max": "1",
  "foo": null
}
{code}
Result:
{noformat}
 The connector is created, but it fails to start with the exception below, 
which indicates an invalid config:
 ERROR org.apache.kafka.connect.runtime.WorkerConnector: [file-source|worker] 
WorkerConnector\{id=file-source} Error initializing connector
 java.lang.ClassCastException: Non-string value found in original settings for 
key foo: null
 at 
org.apache.kafka.common.config.AbstractConfig.originalsStrings(AbstractConfig.java:234)
 at 
org.apache.kafka.connect.runtime.WorkerConnector.initialize(WorkerConnector.java:77)
 at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:256)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.processConnectorConfigUpdates(DistributedHerder.java:548)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:395)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:289)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
{noformat}
 

  was:
When validating a connector config containing a property with a null value the 
validation passes, but when creating a connector with the same config the 
worker fails to start the connector because of an invalid config.

Steps to reproduce:

1., Send PUT request to
 {{connectRest}}/connector-plugins/FileStreamSource/config/validate
 Request body: \{ "connector.class": "FileStreamSource", "name": "file-source", 
"topic": "target-topic", "file":"/source.txt", "tasks.max": "1", "foo": null }
 Response: \{ "name": "FileStreamSource", "error_count": 0, ... } 

2.,

OPTION A:
 Send PUT request to
 {{connectRest}}/connectors/file-source/config
 Request body: \{ "connector.class": "FileStreamSource", "name": "file-source", 
"topic": "target-topic", "file":"/source.txt", "tasks.max": "1", "foo": null }

OPTION B:
 Send POST request to
 {{connectRest}}/connectors/
 Request Body:
 {
 "name": "file-source",
 "config":\{ "connector.class": "FileStreamSource", "name": "file-source", 
"topic": "target-topic", "file": "/source.txt", "tasks.max": "1", "foo": null }
}

Result:
 Connector is created but connector fails to start, with below exception that 
indicates an invalid config:
 ERROR org.apache.kafka.connect.runtime.WorkerConnector: [file-source|worker] 
WorkerConnector\{id=file-source} Error initializing connector
 java.lang.ClassCastException: Non-string value found in original settings for 
key foo: null
 at 
org.apache.kafka.common.config.AbstractConfig.originalsStrings(AbstractConfig.java:234)
 at 
org.apache.kafka.connect.runtime.WorkerConnector.initialize(WorkerConnector.java:77)
 at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:256)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.processConnectorConfigUpdates(DistributedHerder.java:548)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:395)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:289)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.Futur

[jira] [Resolved] (KAFKA-9805) Running MirrorMaker in a Connect cluster,but the task not running

2021-08-04 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona resolved KAFKA-9805.
--
Resolution: Duplicate

> Running MirrorMaker in a Connect cluster,but the task not running
> -
>
> Key: KAFKA-9805
> URL: https://issues.apache.org/jira/browse/KAFKA-9805
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.1
> Environment: linux
>Reporter: ZHAO GH
>Priority: Major
>   Original Estimate: 5h
>  Remaining Estimate: 5h
>
> When I am running MirrorMaker in a Connect cluster, sometimes the tasks are 
> running, but sometimes the tasks cannot be assigned.
> I post the connector config to the Connect cluster; here is my config:
> http://99.12.98.33:8083/connectors
> {code}
> {
>   "name": "kafka->kafka241-3",
>   "config": {
>     "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
>     "topics": "MM2-3",
>     "key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
>     "value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
>     "tasks.max": 8,
>     "sasl.mechanism": "PLAIN",
>     "security.protocol": "SASL_PLAINTEXT",
>     "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin\";",
>     "source.cluster.alias": "kafka",
>     "source.cluster.bootstrap.servers": "55.13.104.70:9092,55.13.104.74:9092,55.13.104.126:9092",
>     "source.admin.sasl.mechanism": "PLAIN",
>     "source.admin.security.protocol": "SASL_PLAINTEXT",
>     "source.admin.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin\";",
>     "target.cluster.alias": "kafka241",
>     "target.cluster.bootstrap.servers": "55.14.111.22:9092,55.14.111.23:9092,55.14.111.25:9092",
>     "target.admin.sasl.mechanism": "PLAIN",
>     "target.admin.security.protocol": "SASL_PLAINTEXT",
>     "target.admin.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin\";",
>     "producer.sasl.mechanism": "PLAIN",
>     "producer.security.protocol": "SASL_PLAINTEXT",
>     "producer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin\";",
>     "consumer.sasl.mechanism": "PLAIN",
>     "consumer.security.protocol": "SASL_PLAINTEXT",
>     "consumer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin\";",
>     "consumer.group.id": "mm2-1"
>   }
> }
> {code}
>  
> But when I get the connector status, I find no tasks are running:
> http://99.12.98.33:8083/connectors/kafka->kafka241-3/status
> {
>  "name": "kafka->kafka241-3",
>  "connector": {
>  "state": "RUNNING",
>  "worker_id": "99.12.98.34:8083"
>  },
>  "tasks": [],
>  "type": "source"
> }
>  
> But sometimes, the tasks run successfully:
> http://99.12.98.33:8083/connectors/kafka->kafka241-1/status
> {
>  "name": "kafka->kafka241-1",
>  "connector": {
>  "state": "RUNNING",
>  "worker_id": "99.12.98.34:8083"
>  },
>  "tasks": [
>  {
>  "id": 0,
>  "state": "RUNNING",
>  "worker_id": "99.12.98.34:8083"
>  },
>  {
>  "id": 1,
>  "state": "RUNNING",
>  "worker_id": "99.12.98.33:8083"
>  },
>  {
>  "id": 2,
>  "state": "RUNNING",
>  "worker_id": "99.12.98.34:8083"
>  }
>  ],
>  "type": "source"
> }
> Has anybody met this problem? How can it be fixed? Is it a bug?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9747) No tasks created for a connector

2021-08-04 Thread Andras Katona (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17393078#comment-17393078
 ] 

Andras Katona commented on KAFKA-9747:
--

The connector name contains a character which is considered an illegal character 
by HttpClient::newRequest:
{noformat}
java.lang.IllegalArgumentException: Illegal character in path at index ..
at java.net.URI.create(URI.java:852)
at org.eclipse.jetty.client.HttpClient.newRequest(HttpClient.java:453)
...
Caused by: java.net.URISyntaxException: Illegal character in path at index 
...
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.checkChars(URI.java:3021)
{noformat}
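A minimal sketch of the failure (host, port and class name are made up; the pipe comes from connector names such as the ones below):
{code:java}
import java.net.URI;

public class IllegalNameRepro {
    public static void main(String[] args) {
        // The connector name ends up in the REST path; '|' is not a legal
        // URI path character, so URI.create throws IllegalArgumentException.
        URI.create("http://localhost:8083/connectors/qa-s3-sink-task|1/tasks");
    }
}
{code}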

> No tasks created for a connector
> 
>
> Key: KAFKA-9747
> URL: https://issues.apache.org/jira/browse/KAFKA-9747
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.0
> Environment: OS: Ubuntu 18.04 LTS
> Platform: Confluent Platform 5.4
> HW: The same behaviour on various AWS instances - from t3.small to c5.xlarge
>Reporter: Vit Koma
>Assignee: Andras Katona
>Priority: Major
> Attachments: connect-distributed.properties, connect.log
>
>
> We are running Kafka Connect in a distributed mode on 3 nodes using Debezium 
> (MongoDB) and Confluent S3 connectors. When adding a new connector via the 
> REST API the connector is created in RUNNING state, but no tasks are created 
> for the connector.
> Pausing and resuming the connector does not help. When we stop all workers 
> and then start them again, the tasks are created and everything runs as it 
> should.
> The issue does not show up if we run only a single node.
> The issue is not caused by the connector plugins, because we see the same 
> behaviour for both Debezium and S3 connectors. Also in debug logs I can see 
> that Debezium is correctly returning a task configuration from the 
> Connector.taskConfigs() method.
> Connector configuration examples
> Debezium:
> {code}
> {
>   "name": "qa-mongodb-comp-converter-task|1",
>   "config": {
> "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
> "mongodb.hosts": 
> "mongodb-qa-001:27017,mongodb-qa-002:27017,mongodb-qa-003:27017",
> "mongodb.name": "qa-debezium-comp",
> "mongodb.ssl.enabled": true,
> "collection.whitelist": "converter[.]task",
> "tombstones.on.delete": true
>   }
> }
> {code}
> S3 Connector:
> {code}
> {
>   "name": "qa-s3-sink-task|1",
>   "config": {
> "connector.class": "io.confluent.connect.s3.S3SinkConnector",
> "topics": "qa-debezium-comp.converter.task",
> "topics.dir": "data/env/qa",
> "s3.region": "eu-west-1",
> "s3.bucket.name": "",
> "flush.size": "15000",
> "rotate.interval.ms": "360",
> "storage.class": "io.confluent.connect.s3.storage.S3Storage",
> "format.class": 
> "custom.kafka.connect.s3.format.plaintext.PlaintextFormat",
> "schema.generator.class": 
> "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
> "partitioner.class": 
> "io.confluent.connect.storage.partitioner.DefaultPartitioner",
> "schema.compatibility": "NONE",
> "key.converter": "org.apache.kafka.connect.json.JsonConverter",
> "value.converter": "org.apache.kafka.connect.json.JsonConverter",
> "key.converter.schemas.enable": false,
> "value.converter.schemas.enable": false,
> "transforms": "ExtractDocument",
> 
> "transforms.ExtractDocument.type":"custom.kafka.connect.transforms.ExtractDocument$Value"
>   }
> }
> {code}
> The connectors are created using curl: {{curl -X POST -H "Content-Type: 
> application/json" --data @ http:/:10083/connectors}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9747) No tasks created for a connector

2021-08-04 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona reassigned KAFKA-9747:


Assignee: Andras Katona

> No tasks created for a connector
> 
>
> Key: KAFKA-9747
> URL: https://issues.apache.org/jira/browse/KAFKA-9747
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.0
> Environment: OS: Ubuntu 18.04 LTS
> Platform: Confluent Platform 5.4
> HW: The same behaviour on various AWS instances - from t3.small to c5.xlarge
>Reporter: Vit Koma
>Assignee: Andras Katona
>Priority: Major
> Attachments: connect-distributed.properties, connect.log
>
>
> We are running Kafka Connect in a distributed mode on 3 nodes using Debezium 
> (MongoDB) and Confluent S3 connectors. When adding a new connector via the 
> REST API the connector is created in RUNNING state, but no tasks are created 
> for the connector.
> Pausing and resuming the connector does not help. When we stop all workers 
> and then start them again, the tasks are created and everything runs as it 
> should.
> The issue does not show up if we run only a single node.
> The issue is not caused by the connector plugins, because we see the same 
> behaviour for both Debezium and S3 connectors. Also in debug logs I can see 
> that Debezium is correctly returning a task configuration from the 
> Connector.taskConfigs() method.
> Connector configuration examples
> Debezium:
> {code}
> {
>   "name": "qa-mongodb-comp-converter-task|1",
>   "config": {
> "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
> "mongodb.hosts": 
> "mongodb-qa-001:27017,mongodb-qa-002:27017,mongodb-qa-003:27017",
> "mongodb.name": "qa-debezium-comp",
> "mongodb.ssl.enabled": true,
> "collection.whitelist": "converter[.]task",
> "tombstones.on.delete": true
>   }
> }
> {code}
> S3 Connector:
> {code}
> {
>   "name": "qa-s3-sink-task|1",
>   "config": {
> "connector.class": "io.confluent.connect.s3.S3SinkConnector",
> "topics": "qa-debezium-comp.converter.task",
> "topics.dir": "data/env/qa",
> "s3.region": "eu-west-1",
> "s3.bucket.name": "",
> "flush.size": "15000",
> "rotate.interval.ms": "360",
> "storage.class": "io.confluent.connect.s3.storage.S3Storage",
> "format.class": 
> "custom.kafka.connect.s3.format.plaintext.PlaintextFormat",
> "schema.generator.class": 
> "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
> "partitioner.class": 
> "io.confluent.connect.storage.partitioner.DefaultPartitioner",
> "schema.compatibility": "NONE",
> "key.converter": "org.apache.kafka.connect.json.JsonConverter",
> "value.converter": "org.apache.kafka.connect.json.JsonConverter",
> "key.converter.schemas.enable": false,
> "value.converter.schemas.enable": false,
> "transforms": "ExtractDocument",
> 
> "transforms.ExtractDocument.type":"custom.kafka.connect.transforms.ExtractDocument$Value"
>   }
> }
> {code}
> The connectors are created using curl: {{curl -X POST -H "Content-Type: 
> application/json" --data @ http:/:10083/connectors}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9747) No tasks created for a connector

2021-08-04 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-9747:
-
Description: 
We are running Kafka Connect in a distributed mode on 3 nodes using Debezium 
(MongoDB) and Confluent S3 connectors. When adding a new connector via the REST 
API the connector is created in RUNNING state, but no tasks are created for the 
connector.

Pausing and resuming the connector does not help. When we stop all workers and 
then start them again, the tasks are created and everything runs as it should.

The issue does not show up if we run only a single node.

The issue is not caused by the connector plugins, because we see the same 
behaviour for both Debezium and S3 connectors. Also in debug logs I can see 
that Debezium is correctly returning a task configuration from the 
Connector.taskConfigs() method.

Connector configuration examples

Debezium:
{code}
{
  "name": "qa-mongodb-comp-converter-task|1",
  "config": {
"connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
"mongodb.hosts": 
"mongodb-qa-001:27017,mongodb-qa-002:27017,mongodb-qa-003:27017",
"mongodb.name": "qa-debezium-comp",
"mongodb.ssl.enabled": true,
"collection.whitelist": "converter[.]task",
"tombstones.on.delete": true
  }
}
{code}
S3 Connector:
{code}
{
  "name": "qa-s3-sink-task|1",
  "config": {
"connector.class": "io.confluent.connect.s3.S3SinkConnector",
"topics": "qa-debezium-comp.converter.task",
"topics.dir": "data/env/qa",
"s3.region": "eu-west-1",
"s3.bucket.name": "",
"flush.size": "15000",
"rotate.interval.ms": "360",
"storage.class": "io.confluent.connect.s3.storage.S3Storage",
"format.class": "custom.kafka.connect.s3.format.plaintext.PlaintextFormat",
"schema.generator.class": 
"io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
"partitioner.class": 
"io.confluent.connect.storage.partitioner.DefaultPartitioner",
"schema.compatibility": "NONE",
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"key.converter.schemas.enable": false,
"value.converter.schemas.enable": false,
"transforms": "ExtractDocument",

"transforms.ExtractDocument.type":"custom.kafka.connect.transforms.ExtractDocument$Value"
  }
}
{code}
The connectors are created using curl: {{curl -X POST -H "Content-Type: 
application/json" --data @ http:/:10083/connectors}}



  was:
We are running Kafka Connect in a distributed mode on 3 nodes using Debezium 
(MongoDB) and Confluent S3 connectors. When adding a new connector via the REST 
API the connector is created in RUNNING state, but no tasks are created for the 
connector.

Pausing and resuming the connector does not help. When we stop all workers and 
then start them again, the tasks are created and everything runs as it should.

The issue does not show up if we run only a single node.

The issue is not caused by the connector plugins, because we see the same 
behaviour for both Debezium and S3 connectors. Also in debug logs I can see 
that Debezium is correctly returning a task configuration from the 
Connector.taskConfigs() method.

Connector configuration examples

Debezium:

{
  "name": "qa-mongodb-comp-converter-task|1",
  "config": {
"connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
"mongodb.hosts": 
"mongodb-qa-001:27017,mongodb-qa-002:27017,mongodb-qa-003:27017",
"mongodb.name": "qa-debezium-comp",
"mongodb.ssl.enabled": true,
"collection.whitelist": "converter[.]task",
"tombstones.on.delete": true
  }
}

S3 Connector:

{
  "name": "qa-s3-sink-task|1",
  "config": {
"connector.class": "io.confluent.connect.s3.S3SinkConnector",
"topics": "qa-debezium-comp.converter.task",
"topics.dir": "data/env/qa",
"s3.region": "eu-west-1",
"s3.bucket.name": "",
"flush.size": "15000",
"rotate.interval.ms": "360",
"storage.class": "io.confluent.connect.s3.storage.S3Storage",
"format.class": "custom.kafka.connect.s3.format.plaintext.PlaintextFormat",
"schema.generator.class": 
"io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
"partitioner.class": 
"io.confluent.connect.storage.partitioner.DefaultPartitioner",
"schema.compatibility": "NONE",
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"key.converter.schemas.enable": false,
"value.converter.schemas.enable": false,
"transforms": "ExtractDocument",

"transforms.ExtractDocument.type":"custom.kafka.connect.transforms.ExtractDocument$Value"
  }
}

The connectors are created using curl: {{curl -X POST -H "Content-Type: 
application/json" --data @ http:/:10083/connectors}}




> No tasks created for a connector
> -

[jira] [Commented] (KAFKA-6266) Kafka 1.0.0 : Repeated occurrence of WARN Resetting first dirty offset of __consumer_offsets-xx to log start offset 203569 since the checkpointed offset 120955 is inval

2021-01-12 Thread Andras Katona (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17263146#comment-17263146
 ] 

Andras Katona commented on KAFKA-6266:
--

I had to modify the fix version: the commit that landed on trunk is not included 
in 2.5.0 or 2.5.1, but it is in 2.6.0.

> Kafka 1.0.0 : Repeated occurrence of WARN Resetting first dirty offset of 
> __consumer_offsets-xx to log start offset 203569 since the checkpointed 
> offset 120955 is invalid. (kafka.log.LogCleanerManager$)
> --
>
> Key: KAFKA-6266
> URL: https://issues.apache.org/jira/browse/KAFKA-6266
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 1.0.0, 1.0.1
> Environment: CentOS 7, Apache kafka_2.12-1.0.0
>Reporter: VinayKumar
>Assignee: David Mao
>Priority: Major
> Fix For: 2.4.1, 2.6.0
>
>
> I upgraded Kafka from version 0.10.2.1 to 1.0.0. Since then, I see the below 
> warnings in the log.
>  I'm seeing these continuously in the log, and want them fixed so that they 
> won't repeat. Can someone please help me fix the below warnings.
> {code}
> WARN Resetting first dirty offset of __consumer_offsets-17 to log start 
> offset 3346 since the checkpointed offset 3332 is invalid. 
> (kafka.log.LogCleanerManager$)
>  WARN Resetting first dirty offset of __consumer_offsets-23 to log start 
> offset 4 since the checkpointed offset 1 is invalid. 
> (kafka.log.LogCleanerManager$)
>  WARN Resetting first dirty offset of __consumer_offsets-19 to log start 
> offset 203569 since the checkpointed offset 120955 is invalid. 
> (kafka.log.LogCleanerManager$)
>  WARN Resetting first dirty offset of __consumer_offsets-35 to log start 
> offset 16957 since the checkpointed offset 7 is invalid. 
> (kafka.log.LogCleanerManager$)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-6266) Kafka 1.0.0 : Repeated occurrence of WARN Resetting first dirty offset of __consumer_offsets-xx to log start offset 203569 since the checkpointed offset 120955 is invalid

2021-01-12 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-6266:
-
Fix Version/s: (was: 2.5.0)
   2.6.0

> Kafka 1.0.0 : Repeated occurrence of WARN Resetting first dirty offset of 
> __consumer_offsets-xx to log start offset 203569 since the checkpointed 
> offset 120955 is invalid. (kafka.log.LogCleanerManager$)
> --
>
> Key: KAFKA-6266
> URL: https://issues.apache.org/jira/browse/KAFKA-6266
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 1.0.0, 1.0.1
> Environment: CentOS 7, Apache kafka_2.12-1.0.0
>Reporter: VinayKumar
>Assignee: David Mao
>Priority: Major
> Fix For: 2.4.1, 2.6.0
>
>
> I upgraded Kafka from version 0.10.2.1 to 1.0.0. Since then, I see the below 
> warnings in the log.
>  I'm seeing these continuously in the log, and want them fixed so that they 
> won't repeat. Can someone please help me fix the below warnings.
> {code}
> WARN Resetting first dirty offset of __consumer_offsets-17 to log start 
> offset 3346 since the checkpointed offset 3332 is invalid. 
> (kafka.log.LogCleanerManager$)
>  WARN Resetting first dirty offset of __consumer_offsets-23 to log start 
> offset 4 since the checkpointed offset 1 is invalid. 
> (kafka.log.LogCleanerManager$)
>  WARN Resetting first dirty offset of __consumer_offsets-19 to log start 
> offset 203569 since the checkpointed offset 120955 is invalid. 
> (kafka.log.LogCleanerManager$)
>  WARN Resetting first dirty offset of __consumer_offsets-35 to log start 
> offset 16957 since the checkpointed offset 7 is invalid. 
> (kafka.log.LogCleanerManager$)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-9839) IllegalStateException on metadata update when broker learns about its new epoch after the controller

2020-08-13 Thread Andras Katona (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177295#comment-17177295
 ] 

Andras Katona edited comment on KAFKA-9839 at 8/13/20, 7:28 PM:


This is in 2.6.0 too, but it's not in [release notes of 
2.6.0|https://dist.apache.org/repos/dist/release/kafka/2.6.0/RELEASE_NOTES.html].
 Commit:
https://github.com/apache/kafka/commit/bd17085ec10c767bc82e6b19a3016cf5d50dad92 
Shouldn't it be there? It's in the 2.5.1 release notes, but that release came 
out later, so it confused my colleague.


was (Author: akatona):
This is in 2.6.0 too, but it's not in [release notes of 
2.6.0|https://dist.apache.org/repos/dist/release/kafka/2.6.0/RELEASE_NOTES.html].
 Commit:
https://github.com/apache/kafka/commit/bd17085ec10c767bc82e6b19a3016cf5d50dad92 
Shouldn't it be there?

> IllegalStateException on metadata update when broker learns about its new 
> epoch after the controller
> 
>
> Key: KAFKA-9839
> URL: https://issues.apache.org/jira/browse/KAFKA-9839
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, core
>Affects Versions: 2.2.1, 2.3.1, 2.5.0, 2.4.1
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>Priority: Critical
> Fix For: 2.5.1
>
>
> Broker throws "java.lang.IllegalStateException: Epoch XXX larger than current 
> broker epoch YYY"  on UPDATE_METADATA when the controller learns about the 
> broker epoch and sends UPDATE_METADATA before KafkaZkClient.registerBroker 
> completes (the broker learns about its new epoch).
> Here is the scenario we observed in more detail:
> 1. ZK session expires on broker 1
> 2. Broker 1 establishes new session to ZK and creates znode
> 3. Controller learns about broker 1 and assigns epoch
> 4. Broker 1 receives UPDATE_METADATA from controller, but it does not know 
> about its new epoch yet, so we get an exception:
> ERROR [KafkaApi-3] Error when handling request: clientId=1, correlationId=0, 
> api=UPDATE_METADATA, body={
> .
> java.lang.IllegalStateException: Epoch XXX larger than current broker epoch 
> YYY at kafka.server.KafkaApis.isBrokerEpochStale(KafkaApis.scala:2725) at 
> kafka.server.KafkaApis.handleUpdateMetadataRequest(KafkaApis.scala:320) at 
> kafka.server.KafkaApis.handle(KafkaApis.scala:139) at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69) at 
> java.lang.Thread.run(Thread.java:748)
> 5. KafkaZkClient.registerBroker completes on broker 1: "INFO Stat of the 
> created znode at /brokers/ids/1"
> The result is that the broker has stale metadata for some time.
> Possible solutions:
> 1. Broker returns a more specific error and the controller retries UPDATE_METADATA
> 2. Broker accepts UPDATE_METADATA with larger broker epoch.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9839) IllegalStateException on metadata update when broker learns about its new epoch after the controller

2020-08-13 Thread Andras Katona (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177295#comment-17177295
 ] 

Andras Katona commented on KAFKA-9839:
--

This is in 2.6.0 too, but it's not in [release notes of 
2.6.0|https://dist.apache.org/repos/dist/release/kafka/2.6.0/RELEASE_NOTES.html].
 Commit:
https://github.com/apache/kafka/commit/bd17085ec10c767bc82e6b19a3016cf5d50dad92 
Shouldn't it be there?

> IllegalStateException on metadata update when broker learns about its new 
> epoch after the controller
> 
>
> Key: KAFKA-9839
> URL: https://issues.apache.org/jira/browse/KAFKA-9839
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, core
>Affects Versions: 2.2.1, 2.3.1, 2.5.0, 2.4.1
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>Priority: Critical
> Fix For: 2.5.1
>
>
> Broker throws "java.lang.IllegalStateException: Epoch XXX larger than current 
> broker epoch YYY"  on UPDATE_METADATA when the controller learns about the 
> broker epoch and sends UPDATE_METADATA before KafkaZkClient.registerBroker 
> completes (the broker learns about its new epoch).
> Here is the scenario we observed in more detail:
> 1. ZK session expires on broker 1
> 2. Broker 1 establishes new session to ZK and creates znode
> 3. Controller learns about broker 1 and assigns epoch
> 4. Broker 1 receives UPDATE_METADATA from controller, but it does not know 
> about its new epoch yet, so we get an exception:
> ERROR [KafkaApi-3] Error when handling request: clientId=1, correlationId=0, 
> api=UPDATE_METADATA, body={
> .
> java.lang.IllegalStateException: Epoch XXX larger than current broker epoch 
> YYY at kafka.server.KafkaApis.isBrokerEpochStale(KafkaApis.scala:2725) at 
> kafka.server.KafkaApis.handleUpdateMetadataRequest(KafkaApis.scala:320) at 
> kafka.server.KafkaApis.handle(KafkaApis.scala:139) at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69) at 
> java.lang.Thread.run(Thread.java:748)
> 5. KafkaZkClient.registerBroker completes on broker 1: "INFO Stat of the 
> created znode at /brokers/ids/1"
> The result is that the broker has stale metadata for some time.
> Possible solutions:
> 1. Broker returns a more specific error and the controller retries UPDATE_METADATA
> 2. Broker accepts UPDATE_METADATA with larger broker epoch.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9992) EmbeddedKafkaCluster.deleteTopicAndWait not working with kafka_2.13

2020-05-19 Thread Andras Katona (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17111436#comment-17111436
 ] 

Andras Katona commented on KAFKA-9992:
--

Okay :) cool!

> EmbeddedKafkaCluster.deleteTopicAndWait not working with kafka_2.13
> ---
>
> Key: KAFKA-9992
> URL: https://issues.apache.org/jira/browse/KAFKA-9992
> Project: Kafka
>  Issue Type: Bug
>  Components: packaging, streams
>Affects Versions: 2.4.1
>Reporter: Andras Katona
>Assignee: Andras Katona
>Priority: Major
> Fix For: 2.6.0, 2.5.1
>
>
> The Kafka Streams artifact depends on kafka_2.12 as of now; it is in the 
> [kafka-streams-2.4.1.pom|https://repo1.maven.org/maven2/org/apache/kafka/kafka-streams/2.4.1/kafka-streams-2.4.1.pom]:
> {code}
> <dependency>
>   <groupId>org.apache.kafka</groupId>
>   <artifactId>kafka_2.12</artifactId>
>   <version>2.4.1</version>
>   <scope>test</scope>
> </dependency>
> {code}
> But it is not hardcoded: whatever Scala version was used to compile this 
> component before uploading will be present in the pom.
> When I'm using these deps:
> {code}
> <dependency>
>   <groupId>org.apache.kafka</groupId>
>   <artifactId>kafka-streams</artifactId>
>   <version>2.4.1</version>
>   <classifier>test</classifier>
>   <scope>test</scope>
> </dependency>
> <dependency>
>   <groupId>org.apache.kafka</groupId>
>   <artifactId>kafka_2.13</artifactId>
>   <version>2.4.1</version>
>   <classifier>test</classifier>
>   <scope>test</scope>
> </dependency>
> {code}
> My test fails with the following exception (deleteTopicAndWait is called in 
> my @After method):
> {noformat}
> java.lang.NoSuchMethodError: 
> scala.collection.JavaConverters.setAsJavaSetConverter(Lscala/collection/Set;)Lscala/collection/convert/Decorators$AsJava;
> at 
> org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster$TopicsDeletedCondition.conditionMet(EmbeddedKafkaCluster.java:316)
> at 
> org.apache.kafka.test.TestUtils.lambda$waitForCondition$4(TestUtils.java:370)
> at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:417)
> at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:385)
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:368)
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:356)
> at 
> org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.deleteTopicsAndWait(EmbeddedKafkaCluster.java:266)
> at 
> org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.deleteTopicAndWait(EmbeddedKafkaCluster.java:221)
> {noformat}
> I modified kafka build locally to separate artifacts based on scala version 
> just like it is done with kafka core, and I pulled in kafka-streams_2.13 from 
> my local mvn repo and test was working again.
> I was only trying with 2.4.1, but I'm assuming other versions are also 
> affected; please add the proper versions and proper components too (in case 
> it's not packaging).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9992) EmbeddedKafkaCluster.deleteTopicAndWait not working with kafka_2.13

2020-05-15 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-9992:
-
Summary: EmbeddedKafkaCluster.deleteTopicAndWait not working with 
kafka_2.13  (was: EmbeddedKafkaCluster not working with kafka_2.13)

> EmbeddedKafkaCluster.deleteTopicAndWait not working with kafka_2.13
> ---
>
> Key: KAFKA-9992
> URL: https://issues.apache.org/jira/browse/KAFKA-9992
> Project: Kafka
>  Issue Type: Bug
>  Components: packaging, streams
>Affects Versions: 2.4.1
>Reporter: Andras Katona
>Assignee: Andras Katona
>Priority: Major
>
> The Kafka Streams artifact depends on kafka_2.12 as of now; it is in the 
> [kafka-streams-2.4.1.pom|https://repo1.maven.org/maven2/org/apache/kafka/kafka-streams/2.4.1/kafka-streams-2.4.1.pom]:
> {code}
> <dependency>
>   <groupId>org.apache.kafka</groupId>
>   <artifactId>kafka_2.12</artifactId>
>   <version>2.4.1</version>
>   <scope>test</scope>
> </dependency>
> {code}
> But it is not hardcoded: whatever Scala version was used to compile this 
> component before uploading will be present in the pom.
> When I'm using these deps:
> {code}
> <dependency>
>   <groupId>org.apache.kafka</groupId>
>   <artifactId>kafka-streams</artifactId>
>   <version>2.4.1</version>
>   <classifier>test</classifier>
>   <scope>test</scope>
> </dependency>
> <dependency>
>   <groupId>org.apache.kafka</groupId>
>   <artifactId>kafka_2.13</artifactId>
>   <version>2.4.1</version>
>   <classifier>test</classifier>
>   <scope>test</scope>
> </dependency>
> {code}
> My test fails with the following exception (deleteTopicAndWait is called in 
> my @After method):
> {noformat}
> java.lang.NoSuchMethodError: 
> scala.collection.JavaConverters.setAsJavaSetConverter(Lscala/collection/Set;)Lscala/collection/convert/Decorators$AsJava;
> at 
> org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster$TopicsDeletedCondition.conditionMet(EmbeddedKafkaCluster.java:316)
> at 
> org.apache.kafka.test.TestUtils.lambda$waitForCondition$4(TestUtils.java:370)
> at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:417)
> at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:385)
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:368)
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:356)
> at 
> org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.deleteTopicsAndWait(EmbeddedKafkaCluster.java:266)
> at 
> org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.deleteTopicAndWait(EmbeddedKafkaCluster.java:221)
> {noformat}
> I modified kafka build locally to separate artifacts based on scala version 
> just like it is done with kafka core, and I pulled in kafka-streams_2.13 from 
> my local mvn repo and test was working again.
> I was only trying with 2.4.1, but I'm assuming other versions are also 
> affected; please add the proper versions and proper components too (in case 
> it's not packaging).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9992) EmbeddedKafkaCluster not working with kafka_2.13

2020-05-14 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-9992:
-
Summary: EmbeddedKafkaCluster not working with kafka_2.13  (was: 
EmbeddedKafka not working with kafka_2.13)

> EmbeddedKafkaCluster not working with kafka_2.13
> 
>
> Key: KAFKA-9992
> URL: https://issues.apache.org/jira/browse/KAFKA-9992
> Project: Kafka
>  Issue Type: Bug
>  Components: packaging, streams
>Affects Versions: 2.4.1
>Reporter: Andras Katona
>Priority: Major
>
> The Kafka Streams artifact depends on kafka_2.12 as of now; it is in the 
> [kafka-streams-2.4.1.pom|https://repo1.maven.org/maven2/org/apache/kafka/kafka-streams/2.4.1/kafka-streams-2.4.1.pom]:
> {code}
> <dependency>
>   <groupId>org.apache.kafka</groupId>
>   <artifactId>kafka_2.12</artifactId>
>   <version>2.4.1</version>
>   <scope>test</scope>
> </dependency>
> {code}
> But it is not hardcoded: whatever Scala version was used to compile this 
> component before uploading will be present in the pom.
> When I'm using these deps:
> {code}
> <dependency>
>   <groupId>org.apache.kafka</groupId>
>   <artifactId>kafka-streams</artifactId>
>   <version>2.4.1</version>
>   <classifier>test</classifier>
>   <scope>test</scope>
> </dependency>
> <dependency>
>   <groupId>org.apache.kafka</groupId>
>   <artifactId>kafka_2.13</artifactId>
>   <version>2.4.1</version>
>   <classifier>test</classifier>
>   <scope>test</scope>
> </dependency>
> {code}
> My test fails with the following exception (deleteTopicAndWait is called in 
> my @After method):
> {noformat}
> java.lang.NoSuchMethodError: 
> scala.collection.JavaConverters.setAsJavaSetConverter(Lscala/collection/Set;)Lscala/collection/convert/Decorators$AsJava;
> at 
> org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster$TopicsDeletedCondition.conditionMet(EmbeddedKafkaCluster.java:316)
> at 
> org.apache.kafka.test.TestUtils.lambda$waitForCondition$4(TestUtils.java:370)
> at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:417)
> at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:385)
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:368)
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:356)
> at 
> org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.deleteTopicsAndWait(EmbeddedKafkaCluster.java:266)
> at 
> org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.deleteTopicAndWait(EmbeddedKafkaCluster.java:221)
> {noformat}
> I modified kafka build locally to separate artifacts based on scala version 
> just like it is done with kafka core, and I pulled in kafka-streams_2.13 from 
> my local mvn repo and test was working again.
> I was only trying with 2.4.1, but I'm assuming other versions are also 
> affected; please add the proper versions and proper components too (in case 
> it's not packaging).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9992) EmbeddedKafka not working with kafka_2.13

2020-05-14 Thread Andras Katona (Jira)
Andras Katona created KAFKA-9992:


 Summary: EmbeddedKafka not working with kafka_2.13
 Key: KAFKA-9992
 URL: https://issues.apache.org/jira/browse/KAFKA-9992
 Project: Kafka
  Issue Type: Bug
  Components: packaging, streams
Affects Versions: 2.4.1
Reporter: Andras Katona


The Kafka Streams artifact depends on kafka_2.12 as of now; it is in the 
[kafka-streams-2.4.1.pom|https://repo1.maven.org/maven2/org/apache/kafka/kafka-streams/2.4.1/kafka-streams-2.4.1.pom]:
{code}
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.12</artifactId>
  <version>2.4.1</version>
  <scope>test</scope>
</dependency>
{code}
But it is not hardcoded: whatever Scala version was used to compile this 
component before uploading will be present in the pom.

When I'm using these deps:
{code}
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-streams</artifactId>
  <version>2.4.1</version>
  <classifier>test</classifier>
  <scope>test</scope>
</dependency>

<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.13</artifactId>
  <version>2.4.1</version>
  <classifier>test</classifier>
  <scope>test</scope>
</dependency>
{code}

My test fails with the following exception (deleteTopicAndWait is called in my 
@After method):
{noformat}
java.lang.NoSuchMethodError: 
scala.collection.JavaConverters.setAsJavaSetConverter(Lscala/collection/Set;)Lscala/collection/convert/Decorators$AsJava;
at 
org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster$TopicsDeletedCondition.conditionMet(EmbeddedKafkaCluster.java:316)
at 
org.apache.kafka.test.TestUtils.lambda$waitForCondition$4(TestUtils.java:370)
at 
org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:417)
at 
org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:385)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:368)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:356)
at 
org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.deleteTopicsAndWait(EmbeddedKafkaCluster.java:266)
at 
org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.deleteTopicAndWait(EmbeddedKafkaCluster.java:221)
{noformat}

I modified kafka build locally to separate artifacts based on scala version 
just like it is done with kafka core, and I pulled in kafka-streams_2.13 from 
my local mvn repo and test was working again.
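Until the artifacts are split by Scala version, one workaround consistent with the above (a sketch, not a tested fix) is to keep the core test dependency on the Scala version the published kafka-streams pom was compiled against:
{code}
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-streams</artifactId>
  <version>2.4.1</version>
  <classifier>test</classifier>
  <scope>test</scope>
</dependency>
<dependency>
  <!-- kafka_2.12 matches what kafka-streams-2.4.1.pom declares -->
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.12</artifactId>
  <version>2.4.1</version>
  <classifier>test</classifier>
  <scope>test</scope>
</dependency>
{code}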

I was only trying with 2.4.1, but I'm assuming other versions are also 
affected; please add the proper versions and proper components too (in case 
it's not packaging).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7908) retention.ms and message.timestamp.difference.max.ms are tied

2020-03-13 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona resolved KAFKA-7908.
--
Resolution: Fixed

> retention.ms and message.timestamp.difference.max.ms are tied
> -
>
> Key: KAFKA-7908
> URL: https://issues.apache.org/jira/browse/KAFKA-7908
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.1.0
>Reporter: Ciprian Pascu
>Priority: Minor
> Fix For: 2.4.0, 2.3.0
>
>
> When configuring retention.ms for a topic, the following warning will be printed:
> _retention.ms for topic X is set to 180. It is smaller than 
> message.timestamp.difference.max.ms's value 9223372036854775807. This may 
> result in frequent log rolling. (kafka.log.Log)_
>  
> message.timestamp.difference.max.ms has not been configured explicitly, so it 
> has the default value of 9223372036854775807. I haven't seen it mentioned 
> anywhere that this parameter also needs to be configured if retention.ms is 
> configured; moreover, with the default values of these parameters, 
> retention.ms < message.timestamp.difference.max.ms holds anyway. So what is 
> the purpose of this warning in this case?
> The warning is generated from this code 
> (core/src/main/scala/kafka/log/Log.scala):
> {code}
> def updateConfig(updatedKeys: Set[String], newConfig: LogConfig): Unit = {
>   if ((updatedKeys.contains(LogConfig.RetentionMsProp)
>       || updatedKeys.contains(LogConfig.MessageTimestampDifferenceMaxMsProp))
>       && topicPartition.partition == 0  // generate warnings only for one partition of each topic
>       && newConfig.retentionMs < newConfig.messageTimestampDifferenceMaxMs)
>     warn(s"${LogConfig.RetentionMsProp} for topic ${topicPartition.topic} is set to ${newConfig.retentionMs}. " +
>       s"It is smaller than ${LogConfig.MessageTimestampDifferenceMaxMsProp}'s value " +
>       s"${newConfig.messageTimestampDifferenceMaxMs}. This may result in frequent log rolling.")
>   this.config = newConfig
> }
> {code}
>  
> Shouldn't the || operand between the two contains checks be replaced with &&?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-7908) retention.ms and message.timestamp.difference.max.ms are tied

2020-03-13 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-7908:
-
Fix Version/s: 2.4.0

> retention.ms and message.timestamp.difference.max.ms are tied
> -
>
> Key: KAFKA-7908
> URL: https://issues.apache.org/jira/browse/KAFKA-7908
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.1.0
>Reporter: Ciprian Pascu
>Priority: Minor
> Fix For: 2.3.0, 2.4.0
>
>
> When configuring retention.ms for a topic, the following warning will be printed:
> _retention.ms for topic X is set to 180. It is smaller than 
> message.timestamp.difference.max.ms's value 9223372036854775807. This may 
> result in frequent log rolling. (kafka.log.Log)_
>  
> message.timestamp.difference.max.ms has not been configured explicitly, so it 
> has the default value of 9223372036854775807. I haven't seen it mentioned 
> anywhere that this parameter also needs to be configured if retention.ms is 
> configured; moreover, with the default values of these parameters, 
> retention.ms < message.timestamp.difference.max.ms holds anyway. So what is 
> the purpose of this warning in this case?
> The warning is generated from this code 
> (core/src/main/scala/kafka/log/Log.scala):
> {code}
> def updateConfig(updatedKeys: Set[String], newConfig: LogConfig): Unit = {
>   if ((updatedKeys.contains(LogConfig.RetentionMsProp)
>       || updatedKeys.contains(LogConfig.MessageTimestampDifferenceMaxMsProp))
>       && topicPartition.partition == 0  // generate warnings only for one partition of each topic
>       && newConfig.retentionMs < newConfig.messageTimestampDifferenceMaxMs)
>     warn(s"${LogConfig.RetentionMsProp} for topic ${topicPartition.topic} is set to ${newConfig.retentionMs}. " +
>       s"It is smaller than ${LogConfig.MessageTimestampDifferenceMaxMsProp}'s value " +
>       s"${newConfig.messageTimestampDifferenceMaxMs}. This may result in frequent log rolling.")
>   this.config = newConfig
> }
> {code}
>  
> Shouldn't the || operand between the two contains checks be replaced with &&?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-7908) retention.ms and message.timestamp.difference.max.ms are tied

2020-03-13 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-7908:
-
Fix Version/s: 2.3.0

> retention.ms and message.timestamp.difference.max.ms are tied
> -
>
> Key: KAFKA-7908
> URL: https://issues.apache.org/jira/browse/KAFKA-7908
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.1.0
>Reporter: Ciprian Pascu
>Priority: Minor
> Fix For: 2.3.0
>
>
> When configuring retention.ms for a topic, the following warning will be printed:
> _retention.ms for topic X is set to 180. It is smaller than 
> message.timestamp.difference.max.ms's value 9223372036854775807. This may 
> result in frequent log rolling. (kafka.log.Log)_
>  
> message.timestamp.difference.max.ms has not been configured explicitly, so it 
> has the default value of 9223372036854775807. I haven't seen it mentioned 
> anywhere that this parameter also needs to be configured if retention.ms is 
> configured; moreover, with the default values of these parameters, 
> retention.ms < message.timestamp.difference.max.ms holds anyway. So what is 
> the purpose of this warning in this case?
> The warning is generated from this code 
> (core/src/main/scala/kafka/log/Log.scala):
> {code}
> def updateConfig(updatedKeys: Set[String], newConfig: LogConfig): Unit = {
>   if ((updatedKeys.contains(LogConfig.RetentionMsProp)
>       || updatedKeys.contains(LogConfig.MessageTimestampDifferenceMaxMsProp))
>       && topicPartition.partition == 0  // generate warnings only for one partition of each topic
>       && newConfig.retentionMs < newConfig.messageTimestampDifferenceMaxMs)
>     warn(s"${LogConfig.RetentionMsProp} for topic ${topicPartition.topic} is set to ${newConfig.retentionMs}. " +
>       s"It is smaller than ${LogConfig.MessageTimestampDifferenceMaxMsProp}'s value " +
>       s"${newConfig.messageTimestampDifferenceMaxMs}. This may result in frequent log rolling.")
>   this.config = newConfig
> }
> {code}
>  
> Shouldn't the || operand between the two contains checks be replaced with &&?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7520) Adding possibility to configure versions in Mirror Maker ducktape test

2019-01-17 Thread Andras Katona (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona resolved KAFKA-7520.
--
Resolution: Won't Fix

> Adding possibility to configure versions in Mirror Maker ducktape test
> --
>
> Key: KAFKA-7520
> URL: https://issues.apache.org/jira/browse/KAFKA-7520
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: Andras Katona
>Assignee: Andras Katona
>Priority: Minor
>
> Currently it is testing the current (dev) version only. It would be nice to 
> test mirror maker between different types of brokers, for example.
> Test: {{tests/kafkatest/tests/core/mirror_maker_test.py}}
> Test service: {{tests/kafkatest/services/mirror_maker.py}}
> This ticket is for extending the MM test service and modifying the test itself 
> so that it can be configured with versions other than DEV, without changing 
> the test's behaviour.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7520) Adding possibility to configure versions in Mirror Maker ducktape test

2018-10-19 Thread Andras Katona (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-7520:
-
Priority: Minor  (was: Major)

> Adding possibility to configure versions in Mirror Maker ducktape test
> --
>
> Key: KAFKA-7520
> URL: https://issues.apache.org/jira/browse/KAFKA-7520
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: Andras Katona
>Assignee: Andras Katona
>Priority: Minor
>
> Currently it is testing the current (dev) version only. It would be nice to 
> test mirror maker between different types of brokers, for example.
> Test: {{tests/kafkatest/tests/core/mirror_maker_test.py}}
> Test service: {{tests/kafkatest/services/mirror_maker.py}}
> This ticket is for extending the MM test service and modifying the test itself 
> so that it can be configured with versions other than DEV, without changing 
> the test's behaviour.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7520) Adding possibility to configure versions in Mirror Maker ducktape test

2018-10-19 Thread Andras Katona (JIRA)
Andras Katona created KAFKA-7520:


 Summary: Adding possibility to configure versions in Mirror Maker 
ducktape test
 Key: KAFKA-7520
 URL: https://issues.apache.org/jira/browse/KAFKA-7520
 Project: Kafka
  Issue Type: Improvement
  Components: system tests
Reporter: Andras Katona
Assignee: Andras Katona


Currently it is testing the current (dev) version only. It would be nice to 
test mirror maker between different types of brokers, for example.

Test: {{tests/kafkatest/tests/core/mirror_maker_test.py}}
Test service: {{tests/kafkatest/services/mirror_maker.py}}

This ticket is for extending the MM test service and modifying the test itself 
so that it can be configured with versions other than DEV, without changing the 
test's behaviour.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7518) FutureRecordMetadata.get deadline calculation from timeout is not using timeunit

2018-10-18 Thread Andras Katona (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-7518:
-
Description: 
The code below assumes that the timeout is in milliseconds when calculating the deadline.

{code}
@Override
public RecordMetadata get(long timeout, TimeUnit unit) throws 
InterruptedException, ExecutionException, TimeoutException {
// Handle overflow.
long now = System.currentTimeMillis();
long deadline = Long.MAX_VALUE - timeout < now ? Long.MAX_VALUE : now + 
timeout;
{code}

{{kafka.server.DynamicBrokerReconfigurationTest#testAdvertisedListenerUpdate}} 
failed sometimes for me and it took me to this code segment.
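For illustration, a sketch of the corrected calculation (my assumption, not necessarily the actual patch): convert the timeout with the supplied TimeUnit before the overflow check.
{code:java}
import java.util.concurrent.TimeUnit;

public class DeadlineSketch {
    static long deadline(long timeout, TimeUnit unit) {
        long timeoutMs = unit.toMillis(timeout); // the buggy code used timeout as-is
        // Handle overflow, as in the original code.
        long now = System.currentTimeMillis();
        return Long.MAX_VALUE - timeoutMs < now ? Long.MAX_VALUE : now + timeoutMs;
    }

    public static void main(String[] args) {
        // With the bug, get(5, TimeUnit.SECONDS) would effectively wait only 5 ms.
        System.out.println(deadline(5, TimeUnit.SECONDS) - System.currentTimeMillis()); // ~5000
    }
}
{code}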

  was:
The code below assumes that the timeout is in milliseconds when calculating the deadline.

{code}
@Override
public RecordMetadata get(long timeout, TimeUnit unit) throws 
InterruptedException, ExecutionException, TimeoutException {
// Handle overflow.
long now = System.currentTimeMillis();
long deadline = Long.MAX_VALUE - timeout < now ? Long.MAX_VALUE : now + 
timeout;
{code}

It was causing 
{{kafka.server.DynamicBrokerReconfigurationTest#testAdvertisedListenerUpdate}} 
to fail sometimes and it took me to this code segment.


> FutureRecordMetadata.get deadline calculation from timeout is not using 
> timeunit
> 
>
> Key: KAFKA-7518
> URL: https://issues.apache.org/jira/browse/KAFKA-7518
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Andras Katona
>Assignee: Andras Katona
>Priority: Major
>
> The code below assumes that the timeout is in milliseconds when calculating the deadline.
> {code}
> @Override
> public RecordMetadata get(long timeout, TimeUnit unit) throws 
> InterruptedException, ExecutionException, TimeoutException {
> // Handle overflow.
> long now = System.currentTimeMillis();
> long deadline = Long.MAX_VALUE - timeout < now ? Long.MAX_VALUE : now 
> + timeout;
> {code}
> {{kafka.server.DynamicBrokerReconfigurationTest#testAdvertisedListenerUpdate}}
>  failed sometimes for me and it took me to this code segment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7518) FutureRecordMetadata.get deadline calculation from timeout is not using timeunit

2018-10-18 Thread Andras Katona (JIRA)
Andras Katona created KAFKA-7518:


 Summary: FutureRecordMetadata.get deadline calculation from 
timeout is not using timeunit
 Key: KAFKA-7518
 URL: https://issues.apache.org/jira/browse/KAFKA-7518
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Andras Katona
Assignee: Andras Katona


The code below assumes that the timeout is in milliseconds when calculating the deadline.

{code}
@Override
public RecordMetadata get(long timeout, TimeUnit unit) throws 
InterruptedException, ExecutionException, TimeoutException {
// Handle overflow.
long now = System.currentTimeMillis();
long deadline = Long.MAX_VALUE - timeout < now ? Long.MAX_VALUE : now + 
timeout;
{code}

It was causing 
{{kafka.server.DynamicBrokerReconfigurationTest#testAdvertisedListenerUpdate}} 
to fail sometimes and it took me to this code segment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7489) ConnectDistributedTest is always running broker with dev version

2018-10-08 Thread Andras Katona (JIRA)
Andras Katona created KAFKA-7489:


 Summary: ConnectDistributedTest is always running broker with dev 
version
 Key: KAFKA-7489
 URL: https://issues.apache.org/jira/browse/KAFKA-7489
 Project: Kafka
  Issue Type: Test
  Components: KafkaConnect, system tests
Reporter: Andras Katona


h2. Test class
kafkatest.tests.connect.connect_distributed_test.ConnectDistributedTest

h2. Details
_test_broker_compatibility_ is +parametrized+ with different types of brokers, 
yet the version is passed as a string to _setup_services_, so KafkaService ends 
up initialised with the DEV version.

This is easy to fix: just wrap _broker_version_ with KafkaVersion
{panel}
self.setup_services(broker_version={color:#FF0000}KafkaVersion{color}(broker_version),
 auto_create_topics=auto_create_topics, security_protocol=security_protocol)
{panel}

But the test fails with the parameter LATEST_0_9, with the following error 
message:
{noformat}
Kafka Connect failed to start on node: ducker@ducker02 in condition mode: LISTEN
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7134) KafkaLog4jAppender - Appender exceptions are propagated to caller

2018-08-30 Thread Andras Katona (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16597229#comment-16597229
 ] 

Andras Katona commented on KAFKA-7134:
--

The pull request is merged to trunk, yet this ticket was not closed 
automatically. I'm not sure what to do in this case.

> KafkaLog4jAppender - Appender exceptions are propagated to caller
> -
>
> Key: KAFKA-7134
> URL: https://issues.apache.org/jira/browse/KAFKA-7134
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: venkata praveen
>Assignee: Andras Katona
>Priority: Major
>
> KafkaLog4jAppender exceptions are propagated to the caller when Kafka is 
> down/slow/other, which may cause the application to crash. Ideally the 
> appender should print and ignore the exception,
>  or should provide an option to ignore/throw the exceptions like the 
> 'ignoreExceptions' property of 
> https://logging.apache.org/log4j/2.x/manual/appenders.html#KafkaAppender



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7134) KafkaLog4jAppender - Appender exceptions are propagated to caller

2018-08-09 Thread Andras Katona (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574622#comment-16574622
 ] 

Andras Katona commented on KAFKA-7134:
--

My pull request's review/acceptance is a bit stuck, let me summarise it here as 
well.

I've introduced two new config parameters to the log appender:
 * *ignoreExceptions* - by default exceptions thrown by the producer are not 
ignored; it has to be set to true in order to ignore them
 * *maxBlockMs* - introduced basically just to make the ignoreExceptions 
parameter testable more efficiently with a real log appender, without a 
producer behind it. By default it is 60 seconds (which would be too long to 
wait in tests)

Regarding reviews: after a few iterations the change is fine, although there 
is a question in this conversation which has not been answered yet.

[https://github.com/apache/kafka/pull/5415#discussion_r207178978]

Asked by [~rsivaram] from [~ijuma]
{quote}[@ijuma|https://github.com/ijuma] Do we use KIPs for adding configs to 
{{KafkaLog4jAppender}} ({{ignoreExceptions}} and {{timeout}} here)?
{quote}
*Note:* {{timeout}} is renamed to {{maxBlockMs}}

[~ijuma], could you check whether we need a KIP for these changes? I hope not, 
since this is a minor enhancement, but I'm looking forward to your answer. 
Thanks!
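For illustration, a hypothetical log4j.properties snippet wiring in the two new parameters (broker list, topic and the values are made up):
{noformat}
log4j.appender.kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.kafka.brokerList=localhost:9092
log4j.appender.kafka.topic=app-logs
log4j.appender.kafka.ignoreExceptions=true
log4j.appender.kafka.maxBlockMs=1000
{noformat}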

> KafkaLog4jAppender - Appender exceptions are propagated to caller
> -
>
> Key: KAFKA-7134
> URL: https://issues.apache.org/jira/browse/KAFKA-7134
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: venkata praveen
>Assignee: Andras Katona
>Priority: Major
>
> KafkaLog4jAppender exceptions are propagated to the caller when Kafka is 
> down/slow/other, which may cause the application to crash. Ideally the 
> appender should print and ignore the exception,
>  or should provide an option to ignore/throw the exceptions like the 
> 'ignoreExceptions' property of 
> https://logging.apache.org/log4j/2.x/manual/appenders.html#KafkaAppender



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7134) KafkaLog4jAppender - Appender exceptions are propagated to caller

2018-07-27 Thread Andras Katona (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559353#comment-16559353
 ] 

Andras Katona commented on KAFKA-7134:
--

When using the Kafka appender, logging from the org.apache.kafka.* packages 
should be disabled.

> KafkaLog4jAppender - Appender exceptions are propagated to caller
> -
>
> Key: KAFKA-7134
> URL: https://issues.apache.org/jira/browse/KAFKA-7134
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: venkata praveen
>Assignee: Andras Katona
>Priority: Major
>
> KafkaLog4jAppender exceptions are propagated to the caller when Kafka is 
> down/slow/other, which may cause the application to crash. Ideally the 
> appender should print and ignore the exception,
>  or should provide an option to ignore/throw the exceptions like the 
> 'ignoreExceptions' property of 
> https://logging.apache.org/log4j/2.x/manual/appenders.html#KafkaAppender



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7134) KafkaLog4jAppender - Appender exceptions are propagated to caller

2018-07-26 Thread Andras Katona (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558028#comment-16558028
 ] 

Andras Katona commented on KAFKA-7134:
--

I just realized, there are two KafkaLog4jAppender classes:
 # org.apache.kafka.log4jappender.KafkaLog4jAppender - this is in kafka 
repository
 # org.apache.logging.log4j.core.appender.mom.kafka.KafkaAppender - in log4j 
repository

The documentation is about #2, and in my opinion that one should be used 
instead of #1; the appender in the log4j repository is far more sophisticated.
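For reference, #2 is configured via the log4j2 configuration file, along these lines (adapted from the linked manual; name, topic and servers are placeholders):
{code:xml}
<Kafka name="Kafka" topic="log-test">
  <PatternLayout pattern="%date %message"/>
  <Property name="bootstrap.servers">localhost:9092</Property>
</Kafka>
{code}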

[~venkatpotru], which appender is this issue about?

> KafkaLog4jAppender - Appender exceptions are propagated to caller
> -
>
> Key: KAFKA-7134
> URL: https://issues.apache.org/jira/browse/KAFKA-7134
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: venkata praveen
>Assignee: Andras Katona
>Priority: Major
>
> KafkaLog4jAppender exceptions are propagated to the caller when Kafka is 
> down/slow/other, which may cause the application to crash. Ideally the 
> appender should print and ignore the exception,
>  or should provide an option to ignore/throw the exceptions like the 
> 'ignoreExceptions' property of 
> https://logging.apache.org/log4j/2.x/manual/appenders.html#KafkaAppender



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-7187) offsetsForTimes in MockConsumer implementation

2018-07-20 Thread Andras Katona (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona reassigned KAFKA-7187:


Assignee: (was: Andras Katona)

> offsetsForTimes in MockConsumer implementation
> --
>
> Key: KAFKA-7187
> URL: https://issues.apache.org/jira/browse/KAFKA-7187
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Jing Chen
>Priority: Minor
>
> The implementation of offsetsForTimes in MockConsumer is missing; it simply 
> throws UnsupportedOperationException. Can anyone help to provide the 
> implementation of the method?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-7134) KafkaLog4jAppender - Appender exceptions are propagated to caller

2018-07-20 Thread Andras Katona (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona reassigned KAFKA-7134:


Assignee: Andras Katona

> KafkaLog4jAppender - Appender exceptions are propagated to caller
> -
>
> Key: KAFKA-7134
> URL: https://issues.apache.org/jira/browse/KAFKA-7134
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: venkata praveen
>Assignee: Andras Katona
>Priority: Major
>
> KafkaLog4jAppender exceptions are propagated to the caller when Kafka is 
> down/slow/other, which may cause the application to crash. Ideally the 
> appender should print and ignore the exception,
>  or should provide an option to ignore/throw the exceptions like the 
> 'ignoreExceptions' property of 
> https://logging.apache.org/log4j/2.x/manual/appenders.html#KafkaAppender



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-7187) offsetsForTimes in MockConsumer implementation

2018-07-20 Thread Andras Katona (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona reassigned KAFKA-7187:


Assignee: Andras Katona

> offsetsForTimes in MockConsumer implementation
> --
>
> Key: KAFKA-7187
> URL: https://issues.apache.org/jira/browse/KAFKA-7187
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Jing Chen
>Assignee: Andras Katona
>Priority: Minor
>
> The implementation of offsetsForTimes in MockConsumer is missing; it simply 
> throws UnsupportedOperationException. Can anyone help to provide the 
> implementation of the method?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)