[jira] [Created] (KAFKA-7067) ConnectRestApiTest fails assertion

2018-06-16 Thread Magesh kumar Nandakumar (JIRA)
Magesh kumar Nandakumar created KAFKA-7067:
--

 Summary: ConnectRestApiTest fails assertion
 Key: KAFKA-7067
 URL: https://issues.apache.org/jira/browse/KAFKA-7067
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.0.0
Reporter: Magesh kumar Nandakumar
Assignee: Magesh kumar Nandakumar
 Fix For: 2.0.0


ConnectRestApiTest fails its assertion in test_rest_api. The test needs to be 
updated to include the new configs added in 2.0 in the expected result.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-7058) ConnectSchema#equals() broken for array-typed default values

2018-06-16 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava reassigned KAFKA-7058:


   Resolution: Fixed
 Assignee: Ewen Cheslack-Postava
 Reviewer: Ewen Cheslack-Postava
Fix Version/s: 2.1.0
   1.1.1
   1.0.2
   0.11.0.3
   2.0.0
   0.10.2.2
   0.10.1.2
   0.10.0.2
   0.9.0.2

> ConnectSchema#equals() broken for array-typed default values
> 
>
> Key: KAFKA-7058
> URL: https://issues.apache.org/jira/browse/KAFKA-7058
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Gunnar Morling
>Assignee: Ewen Cheslack-Postava
>Priority: Major
> Fix For: 0.9.0.2, 0.10.0.2, 0.10.1.2, 0.10.2.2, 2.0.0, 0.11.0.3, 
> 1.0.2, 1.1.1, 2.1.0
>
>
> {{ConnectSchema#equals()}} calls {{Objects#equals()}} for the schemas' 
> default values, but this doesn't work correctly if the default values in fact 
> are arrays. In that case {{false}} will always be returned, even if the 
> default value arrays actually are equal.
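For illustration, a minimal standalone snippet (not from the Kafka code base) showing the array-equality behavior described above:

{code:java}
import java.util.Objects;

public class DeepEqualsDemo {
    public static void main(String[] args) {
        String[] a = {"a", "b"};
        String[] b = {"a", "b"};
        // Arrays inherit identity-based equals(), so Objects.equals compares references:
        System.out.println(Objects.equals(a, b));     // false
        // Objects.deepEquals delegates to Arrays.deepEquals for arrays:
        System.out.println(Objects.deepEquals(a, b)); // true
    }
}
{code}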



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7039) DelegatingClassLoader creates plugin instance even if it's not Versioned

2018-06-16 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-7039:
---
Affects Version/s: (was: 2.0.0)

> DelegatingClassLoader creates plugin instance even if it's not Versioned
> ---
>
> Key: KAFKA-7039
> URL: https://issues.apache.org/jira/browse/KAFKA-7039
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Magesh kumar Nandakumar
>Assignee: Magesh kumar Nandakumar
>Priority: Blocker
> Fix For: 2.0.0
>
>
> The versioned interface was introduced as part of 
> [KIP-285|https://cwiki.apache.org/confluence/display/KAFKA/KIP-285%3A+Connect+Rest+Extension+Plugin].
>  DelegatingClassLoader is now attempting to create an instance of all the 
> plugins, even if it's not required.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7039) DelegatingClassLoader creates plugin instance even if it's not Versioned

2018-06-16 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-7039:
---
Fix Version/s: (was: 2.1.0)

> DelegatingClassLoader creates plugin instance even if it's not Versioned
> ---
>
> Key: KAFKA-7039
> URL: https://issues.apache.org/jira/browse/KAFKA-7039
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Magesh kumar Nandakumar
>Assignee: Magesh kumar Nandakumar
>Priority: Blocker
> Fix For: 2.0.0
>
>
> The versioned interface was introduced as part of 
> [KIP-285|https://cwiki.apache.org/confluence/display/KAFKA/KIP-285%3A+Connect+Rest+Extension+Plugin].
>  DelegatingClassLoader is now attempting to create an instance of all the 
> plugins, even if it's not required.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7058) ConnectSchema#equals() broken for array-typed default values

2018-06-16 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16515005#comment-16515005
 ] 

ASF GitHub Bot commented on KAFKA-7058:
---

ewencp closed pull request #5225: KAFKA-7058 [Connect] Comparing schema default 
values using Objects#deepEquals()
URL: https://github.com/apache/kafka/pull/5225
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java 
b/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
index ff8271635f3..f1a05bb19a6 100644
--- a/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
+++ b/connect/api/src/main/java/org/apache/kafka/connect/data/ConnectSchema.java
@@ -291,7 +291,7 @@ public boolean equals(Object o) {
                Objects.equals(name, schema.name) &&
                Objects.equals(doc, schema.doc) &&
                Objects.equals(type, schema.type) &&
-               Objects.equals(defaultValue, schema.defaultValue) &&
+               Objects.deepEquals(defaultValue, schema.defaultValue) &&
                Objects.equals(fields, schema.fields) &&
                Objects.equals(keySchema, schema.keySchema) &&
                Objects.equals(valueSchema, schema.valueSchema) &&
diff --git 
a/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
 
b/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
index 339ef23ca54..048784e3335 100644
--- 
a/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
+++ 
b/connect/api/src/test/java/org/apache/kafka/connect/data/ConnectSchemaTest.java
@@ -268,6 +268,16 @@ public void testArrayEquality() {
         assertNotEquals(s1, differentValueSchema);
     }
 
+    @Test
+    public void testArrayDefaultValueEquality() {
+        ConnectSchema s1 = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"a", "b"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+        ConnectSchema s2 = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"a", "b"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+        ConnectSchema differentValueSchema = new ConnectSchema(Schema.Type.ARRAY, false, new String[] {"b", "c"}, null, null, null, null, null, null, SchemaBuilder.int8().build());
+
+        assertEquals(s1, s2);
+        assertNotEquals(s1, differentValueSchema);
+    }
+
     @Test
     public void testMapEquality() {
         // Same as testArrayEquality, but for both key and value schemas


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> ConnectSchema#equals() broken for array-typed default values
> 
>
> Key: KAFKA-7058
> URL: https://issues.apache.org/jira/browse/KAFKA-7058
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Gunnar Morling
>Priority: Major
>
> {{ConnectSchema#equals()}} calls {{Objects#equals()}} for the schemas' 
> default values, but this doesn't work correctly if the default values in fact 
> are arrays. In that case {{false}} will always be returned, even if the 
> default value arrays actually are equal.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7047) Connect isolation whitelist does not include SimpleHeaderConverter

2018-06-16 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7047.
--
   Resolution: Fixed
 Reviewer: Ewen Cheslack-Postava
Fix Version/s: 2.1.0
   1.1.1
   2.0.0

> Connect isolation whitelist does not include SimpleHeaderConverter
> --
>
> Key: KAFKA-7047
> URL: https://issues.apache.org/jira/browse/KAFKA-7047
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.1.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Critical
> Fix For: 2.0.0, 1.1.1, 2.1.0
>
>
> The SimpleHeaderConverter added in 1.1.0 was never added to the PluginUtils 
> whitelist so that this header converter is loaded in isolation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7047) Connect isolation whitelist does not include SimpleHeaderConverter

2018-06-16 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514996#comment-16514996
 ] 

ASF GitHub Bot commented on KAFKA-7047:
---

ewencp closed pull request #5204: KAFKA-7047: Added SimpleHeaderConverter to 
plugin isolation whitelist
URL: https://github.com/apache/kafka/pull/5204
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/PluginUtils.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/PluginUtils.java
index b4aee4741c3..0bbca81ec69 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/PluginUtils.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/PluginUtils.java
@@ -128,6 +128,7 @@
                     + "|file\\..*"
                     + "|converters\\..*"
                     + "|storage\\.StringConverter"
+                    + "|storage\\.SimpleHeaderConverter"
                     + "|rest\\.basic\\.auth\\.extension\\.BasicAuthSecurityRestExtension"
                     + "))$";
 
diff --git 
a/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/isolation/PluginUtilsTest.java
 
b/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/isolation/PluginUtilsTest.java
index 9698153f986..0882c305135 100644
--- 
a/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/isolation/PluginUtilsTest.java
+++ 
b/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/isolation/PluginUtilsTest.java
@@ -146,6 +146,9 @@ public void testAllowedConnectFrameworkClasses() throws Exception {
         assertTrue(PluginUtils.shouldLoadInIsolation(
                 "org.apache.kafka.connect.storage.StringConverter")
         );
+        assertTrue(PluginUtils.shouldLoadInIsolation(
+                "org.apache.kafka.connect.storage.SimpleHeaderConverter")
+        );
         assertTrue(PluginUtils.shouldLoadInIsolation(
                 "org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension"
         ));
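A minimal sketch of the whitelist idea the diff touches; the class name and the abbreviated regex below are illustrative (the real PluginUtils regex has many more alternatives and an inverted blacklist/whitelist structure):

{code:java}
import java.util.regex.Pattern;

public class WhitelistSketch {
    // Abbreviated alternation: only classes matching one of these load in isolation.
    private static final Pattern WHITELIST = Pattern.compile(
            "^org\\.apache\\.kafka\\.connect\\.(?:"
                    + "converters\\..*"
                    + "|storage\\.StringConverter"
                    + "|storage\\.SimpleHeaderConverter" // the newly whitelisted class
                    + ")$");

    static boolean shouldLoadInIsolation(String className) {
        return WHITELIST.matcher(className).matches();
    }

    public static void main(String[] args) {
        System.out.println(shouldLoadInIsolation(
                "org.apache.kafka.connect.storage.SimpleHeaderConverter")); // true
        System.out.println(shouldLoadInIsolation(
                "org.apache.kafka.connect.storage.StringConverter"));      // true
    }
}
{code}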


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Connect isolation whitelist does not include SimpleHeaderConverter
> --
>
> Key: KAFKA-7047
> URL: https://issues.apache.org/jira/browse/KAFKA-7047
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.1.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Critical
>
> The SimpleHeaderConverter added in 1.1.0 was never added to the PluginUtils 
> whitelist so that this header converter is loaded in isolation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7039) DelegatingClassLoader creates plugin instance even if it's not Versioned

2018-06-16 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514995#comment-16514995
 ] 

ASF GitHub Bot commented on KAFKA-7039:
---

ewencp closed pull request #5191: KAFKA-7039: Create an instance of the plugin 
only if it's a Versioned Plugin
URL: https://github.com/apache/kafka/pull/5191
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/DelegatingClassLoader.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/DelegatingClassLoader.java
index dd387c45f52..144dbd87f55 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/DelegatingClassLoader.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/DelegatingClassLoader.java
@@ -62,6 +62,7 @@
 public class DelegatingClassLoader extends URLClassLoader {
     private static final Logger log = LoggerFactory.getLogger(DelegatingClassLoader.class);
     private static final String CLASSPATH_NAME = "classpath";
+    private static final String UNDEFINED_VERSION = "undefined";
 
     private final Map<String, SortedMap<PluginDesc<?>, ClassLoader>> pluginLoaders;
     private final Map<String, String> aliases;
@@ -318,7 +319,7 @@ private PluginScanResult scanPluginPath(
         Collection<PluginDesc<T>> result = new ArrayList<>();
         for (Class<? extends T> plugin : plugins) {
             if (PluginUtils.isConcrete(plugin)) {
-                result.add(new PluginDesc<>(plugin, versionFor(plugin.newInstance()), loader));
+                result.add(new PluginDesc<>(plugin, versionFor(plugin), loader));
             } else {
                 log.debug("Skipping {} as it is not concrete implementation", plugin);
             }
@@ -336,7 +337,12 @@ private PluginScanResult scanPluginPath(
     }
 
     private static <T> String versionFor(T pluginImpl) {
-        return pluginImpl instanceof Versioned ? ((Versioned) pluginImpl).version() : "undefined";
+        return pluginImpl instanceof Versioned ? ((Versioned) pluginImpl).version() : UNDEFINED_VERSION;
+    }
+
+    private static <T> String versionFor(Class<? extends T> pluginKlass) throws IllegalAccessException, InstantiationException {
+        // Temporary workaround until all the plugins are versioned.
+        return Connector.class.isAssignableFrom(pluginKlass) ? versionFor(pluginKlass.newInstance()) : UNDEFINED_VERSION;
+    }
 
     @Override


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> DelegatingClassLoader creates plugin instance even if it's not Versioned
> ---
>
> Key: KAFKA-7039
> URL: https://issues.apache.org/jira/browse/KAFKA-7039
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0
>Reporter: Magesh kumar Nandakumar
>Assignee: Magesh kumar Nandakumar
>Priority: Blocker
> Fix For: 2.0.0, 2.1.0
>
>
> The versioned interface was introduced as part of 
> [KIP-285|https://cwiki.apache.org/confluence/display/KAFKA/KIP-285%3A+Connect+Rest+Extension+Plugin].
>  DelegatingClassLoader is now attempting to create an instance of all the 
> plugins, even if it's not required.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7039) DelegatingClassLoader creates plugin instance even if it's not Versioned

2018-06-16 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7039.
--
   Resolution: Fixed
Fix Version/s: 2.1.0

Issue resolved by pull request 5191
[https://github.com/apache/kafka/pull/5191]

> DelegatingClassLoader creates plugin instance even if it's not Versioned
> ---
>
> Key: KAFKA-7039
> URL: https://issues.apache.org/jira/browse/KAFKA-7039
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0
>Reporter: Magesh kumar Nandakumar
>Assignee: Magesh kumar Nandakumar
>Priority: Blocker
> Fix For: 2.0.0, 2.1.0
>
>
> The versioned interface was introduced as part of 
> [KIP-285|https://cwiki.apache.org/confluence/display/KAFKA/KIP-285%3A+Connect+Rest+Extension+Plugin].
>  DelegatingClassLoader is now attempting to create an instance of all the 
> plugins, even if it's not required.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7066) Make Streams Runtime Error User Friendly in Case of Serialisation exception

2018-06-16 Thread Stephane Maarek (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514953#comment-16514953
 ] 

Stephane Maarek commented on KAFKA-7066:


Thanks [~mjsax]. I think that helps, but my PR goes at the most common (lowest) 
level for all these issues, which addresses all kinds of stores.
With logging, though, I'd rather have too much than too little, so I don't 
think any of these issues supersedes the others.

> Make Streams Runtime Error User Friendly in Case of Serialisation exception
> ---
>
> Key: KAFKA-7066
> URL: https://issues.apache.org/jira/browse/KAFKA-7066
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 1.1.0
>Reporter: Stephane Maarek
>Assignee: Stephane Maarek
>Priority: Major
> Fix For: 2.0.0
>
>
> This kind of exception can be cryptic for the beginner:
> {code:java}
> ERROR stream-thread 
> [favourite-colors-application-a336770d-6ba6-4bbb-8681-3c8ea91bd12e-StreamThread-1]
>  Failed to process stream task 2_0 due to the following error: 
> (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks:105)
> java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.String
> at 
> org.apache.kafka.common.serialization.StringSerializer.serialize(StringSerializer.java:28)
> at org.apache.kafka.streams.state.StateSerdes.rawValue(StateSerdes.java:178)
> at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore$1.innerValue(MeteredKeyValueBytesStore.java:66)
> at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore$1.innerValue(MeteredKeyValueBytesStore.java:57)
> at 
> org.apache.kafka.streams.state.internals.InnerMeteredKeyValueStore.put(InnerMeteredKeyValueStore.java:198)
> at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore.put(MeteredKeyValueBytesStore.java:117)
> at 
> org.apache.kafka.streams.kstream.internals.KTableAggregate$KTableAggregateProcessor.process(KTableAggregate.java:95)
> at 
> org.apache.kafka.streams.kstream.internals.KTableAggregate$KTableAggregateProcessor.process(KTableAggregate.java:56)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
> at 
> org.apache.kafka.streams.processor.internals.AbstractProcessorContext.forward(AbstractProcessorContext.java:174)
> at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:80)
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:224)
> at 
> org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:94)
> at 
> org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:411)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:918)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:798)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:750)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:720){code}
> We should add the more detailed logging already present in SinkNode to assist 
> the user in solving this issue.
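For reference, a minimal standalone reproduction (hypothetical, not taken from the report) of the cast at the top of that stack trace: a StringSerializer handed a Long because the store's value Serde was left at the String default.

{code:java}
import org.apache.kafka.common.serialization.StringSerializer;

public class SerdeMismatchDemo {
    public static void main(String[] args) {
        StringSerializer serializer = new StringSerializer();
        // The aggregation actually produced a Long, but the store's value Serde
        // defaulted to String; the unchecked cast fails only at runtime:
        Object value = 42L;
        serializer.serialize("some-topic", (String) value); // ClassCastException
    }
}
{code}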



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7066) Make Streams Runtime Error User Friendly in Case of Serialisation exception

2018-06-16 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514951#comment-16514951
 ] 

Matthias J. Sax commented on KAFKA-7066:


[~stephane.maa...@gmail.com] These two Jiras seem to be related: KAFKA-6538 and 
KAFKA-7015

Just FYI. Hope it helps. If not, just ignore this comment.

> Make Streams Runtime Error User Friendly in Case of Serialisation exception
> ---
>
> Key: KAFKA-7066
> URL: https://issues.apache.org/jira/browse/KAFKA-7066
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 1.1.0
>Reporter: Stephane Maarek
>Assignee: Stephane Maarek
>Priority: Major
> Fix For: 2.0.0
>
>
> This kind of exception can be cryptic for the beginner:
> {code:java}
> ERROR stream-thread 
> [favourite-colors-application-a336770d-6ba6-4bbb-8681-3c8ea91bd12e-StreamThread-1]
>  Failed to process stream task 2_0 due to the following error: 
> (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks:105)
> java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.String
> at 
> org.apache.kafka.common.serialization.StringSerializer.serialize(StringSerializer.java:28)
> at org.apache.kafka.streams.state.StateSerdes.rawValue(StateSerdes.java:178)
> at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore$1.innerValue(MeteredKeyValueBytesStore.java:66)
> at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore$1.innerValue(MeteredKeyValueBytesStore.java:57)
> at 
> org.apache.kafka.streams.state.internals.InnerMeteredKeyValueStore.put(InnerMeteredKeyValueStore.java:198)
> at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore.put(MeteredKeyValueBytesStore.java:117)
> at 
> org.apache.kafka.streams.kstream.internals.KTableAggregate$KTableAggregateProcessor.process(KTableAggregate.java:95)
> at 
> org.apache.kafka.streams.kstream.internals.KTableAggregate$KTableAggregateProcessor.process(KTableAggregate.java:56)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
> at 
> org.apache.kafka.streams.processor.internals.AbstractProcessorContext.forward(AbstractProcessorContext.java:174)
> at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:80)
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:224)
> at 
> org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:94)
> at 
> org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:411)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:918)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:798)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:750)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:720){code}
> We should add the more detailed logging already present in SinkNode to assist 
> the user in solving this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6975) AdminClient.deleteRecords() may cause replicas unable to fetch from beginning

2018-06-16 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6975:
---
Fix Version/s: 1.0.2

> AdminClient.deleteRecords() may cause replicas unable to fetch from beginning
> -
>
> Key: KAFKA-6975
> URL: https://issues.apache.org/jira/browse/KAFKA-6975
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.1.0, 1.0.1
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>Priority: Blocker
> Fix For: 2.0.0, 1.0.2, 1.1.1
>
>
> AdminClient.deleteRecords(beforeOffset(offset)) will set log start offset to 
> the requested offset. If the requested offset is in the middle of the batch, 
> the replica will not be able to fetch from that offset (because it is in the 
> middle of the batch). 
> One use-case where this could cause problems is replica re-assignment. 
> Suppose we have a topic partition with 3 initial replicas, and at some point 
> the user issues  AdminClient.deleteRecords() for the offset that falls in the 
> middle of the batch. It now becomes log start offset for this topic 
> partition. Suppose at some later time, the user starts partition 
> re-assignment to 3 new replicas. The new replicas (followers) will start with 
> HW = 0, will try to fetch from 0, then get "out of order offset" because 0 < 
> log start offset (LSO); the follower will be able to reset offset to LSO of 
> the leader and fetch LSO; the leader will send a batch in response with base 
> offset < LSO, which will stop the fetcher thread. The end result is that the 
> new replicas will not be able to start fetching unless LSO moves to an offset 
> that is not in the middle of the batch, and the re-assignment will be stuck 
> for possibly a very long time. 
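For reference, a minimal usage sketch of the API in question (the topic name and offset below are illustrative): if the offset falls in the middle of a batch, it still becomes the new log start offset, which is what triggers the problem described above.

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DeleteRecordsResult;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class DeleteRecordsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            // Records before offset 42 become eligible for deletion and the log
            // start offset moves to 42, even if 42 is in the middle of a batch.
            DeleteRecordsResult result = admin.deleteRecords(
                    Collections.singletonMap(tp, RecordsToDelete.beforeOffset(42L)));
            result.all().get();
        }
    }
}
{code}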



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-7018) persist memberId for consumer restart

2018-06-16 Thread Boyang Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514939#comment-16514939
 ] 

Boyang Chen edited comment on KAFKA-7018 at 6/17/18 12:06 AM:
--

A summary of the idea:

When the leader sends a join request, it will always trigger another rebalance 
because there could be a metadata change on the topic. So we need to make sure 
other members are aware of that.

Let's imagine a condition where every member joins with previous-generation 
info. Then:

If a follower joins after the leader, the stage starts from prepareRebalance; 
all join group requests from followers will be retained and move the member 
state to awaitJoinCallBack.

If followers join before the leader does, they will send another sync group 
request.
 * If the leader changes the group state to prepareRebalance, we refuse the sync 
group request and they will rejoin.
 * If the leader hasn't changed the group state to prepareRebalance yet, the 
sync group request will succeed and the follower starts sending heartbeats. In 
the handleHeartbeat() function, the leader will eventually move the state 
towards prepareRebalance, so the rebalance-in-progress error will be triggered.

Code logic here:

{code:java}
} else if (error == Errors.REBALANCE_IN_PROGRESS) {
    log.debug("Attempt to heartbeat failed since group is rebalancing");
    requestRejoin();
    future.raise(Errors.REBALANCE_IN_PROGRESS);
}
{code}

So now the only thing we need to do, on the client side, is make sure the 
member id stays the same through a restart; we don't need to worry about the 
join sequence of follower/leader.

 


was (Author: bchen225242):
A summary of the idea:

When the leader sends a join request, it will always trigger another rebalance 
because there could be a metadata change on the topic. So we need to make sure 
other members are aware of that.

Let's imagine a condition where every member joins with previous-generation 
info. Then:

If a follower joins after the leader, the stage starts from prepareRebalance; 
all join group requests from followers will be retained and move the member 
state to awaitJoinCallBack.

If followers join before the leader does, they will send another sync group 
request.
 * If the leader changes the group state to prepareRebalance, we refuse the sync 
group request and they will rejoin.
 * If the leader hasn't changed the group state to prepareRebalance yet, the 
sync group request will succeed and the follower starts sending heartbeats. In 
the handleHeartbeat() function, the leader will eventually move the state 
towards prepareRebalance, so the rebalance-in-progress error will be triggered.

Code logic here:

{code:java}
} else if (error == Errors.REBALANCE_IN_PROGRESS) {
    log.debug("Attempt to heartbeat failed since group is rebalancing");
    requestRejoin();
    future.raise(Errors.REBALANCE_IN_PROGRESS);
}
{code}

So now the only thing we need to do, on the client side, is make sure the 
member id stays the same through a restart.

 

> persist memberId for consumer restart
> -
>
> Key: KAFKA-7018
> URL: https://issues.apache.org/jira/browse/KAFKA-7018
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer, streams
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> In group coordinator, there is a logic to neglect join group request from 
> existing follower consumers:
> {code:java}
> case Empty | Stable =>
>   if (memberId == JoinGroupRequest.UNKNOWN_MEMBER_ID) {
> // if the member id is unknown, register the member to the group
> addMemberAndRebalance(rebalanceTimeoutMs, sessionTimeoutMs, clientId, 
> clientHost, protocolType, protocols, group, responseCallback)
>   } else {
> val member = group.get(memberId)
> if (group.isLeader(memberId) || !member.matches(protocols)) {
>   // force a rebalance if a member has changed metadata or if the leader 
> sends JoinGroup.
>   // The latter allows the leader to trigger rebalances for changes 
> affecting assignment
>   // which do not affect the member metadata (such as topic metadata 
> changes for the consumer)
>   updateMemberAndRebalance(group, member, protocols, responseCallback)
> } else {
>   // for followers with no actual change to their metadata, just return 
> group information
>   // for the current generation which will allow them to issue SyncGroup
>   responseCallback(JoinGroupResult(
> members = Map.empty,
> memberId = memberId,
> generationId = group.generationId,
> subProtocol = group.protocolOrNull,
> leaderId = group.leaderOrNull,
> error = Errors.NONE))
> }
> {code}
> While looking at the AbstractCoordinator, I found that the generation was 
> hard-coded as 
> NO_GENERATION on restart, which means we will send UNKNOWN_MEMBER_ID in the 
> first join group request. This means we will treat the restarted consumer as 
> a new member, so the rebalance will be triggered until session timeout.
> I'm trying to clarify the following things before we extend the discussion:
>  # Whether my understanding of the above logic is right (Hope [~mjsax] could 
> help me double check)
>  # Whether it makes sense to persist last round of memberId for consumers? We 
> currently only need this feature in stream application, but will do no harm 
> if we also use it for consumer in general. This would be a nice-to-have 
> feature on consumer restart when we configured the loading-previous-memberId 
> to true. If we failed, simply use the UNKNOWN_MEMBER_ID
>  # The behavior could also be changed on the broker side, but I suspect it is 
> very risky. So far client side change should be the least effort. The end 
> goal is to avoid excessive rebalance from the same consumer restart, so if 
> you feel server side change could also help, we could further discuss.
> Thank you for helping out! [~mjsax] [~guozhang]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Commented] (KAFKA-7018) persist memberId for consumer restart

2018-06-16 Thread Boyang Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514939#comment-16514939
 ] 

Boyang Chen commented on KAFKA-7018:


A summary of the idea:

When the leader sends a join request, it will always trigger another rebalance 
because there could be a metadata change on the topic. So we need to make sure 
other members are aware of that.

Let's imagine a condition where every member joins with previous-generation 
info. Then:

If a follower joins after the leader, the stage starts from prepareRebalance; 
all join group requests from followers will be retained and move the member 
state to awaitJoinCallBack.

If followers join before the leader does, they will send another sync group 
request.
 * If the leader changes the group state to prepareRebalance, we refuse the sync 
group request and they will rejoin.
 * If the leader hasn't changed the group state to prepareRebalance yet, the 
sync group request will succeed and the follower starts sending heartbeats. In 
the handleHeartbeat() function, the leader will eventually move the state 
towards prepareRebalance, so the rebalance-in-progress error will be triggered.

Code logic here:

{code:java}
} else if (error == Errors.REBALANCE_IN_PROGRESS) {
    log.debug("Attempt to heartbeat failed since group is rebalancing");
    requestRejoin();
    future.raise(Errors.REBALANCE_IN_PROGRESS);
}
{code}

So now the only thing we need to do, on the client side, is make sure the 
member id stays the same through a restart.

 

> persist memberId for consumer restart
> -
>
> Key: KAFKA-7018
> URL: https://issues.apache.org/jira/browse/KAFKA-7018
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer, streams
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> In group coordinator, there is a logic to neglect join group request from 
> existing follower consumers:
> {code:java}
> case Empty | Stable =>
>   if (memberId == JoinGroupRequest.UNKNOWN_MEMBER_ID) {
> // if the member id is unknown, register the member to the group
> addMemberAndRebalance(rebalanceTimeoutMs, sessionTimeoutMs, clientId, 
> clientHost, protocolType, protocols, group, responseCallback)
>   } else {
> val member = group.get(memberId)
> if (group.isLeader(memberId) || !member.matches(protocols)) {
>   // force a rebalance if a member has changed metadata or if the leader 
> sends JoinGroup.
>   // The latter allows the leader to trigger rebalances for changes 
> affecting assignment
>   // which do not affect the member metadata (such as topic metadata 
> changes for the consumer)
>   updateMemberAndRebalance(group, member, protocols, responseCallback)
> } else {
>   // for followers with no actual change to their metadata, just return 
> group information
>   // for the current generation which will allow them to issue SyncGroup
>   responseCallback(JoinGroupResult(
> members = Map.empty,
> memberId = memberId,
> generationId = group.generationId,
> subProtocol = group.protocolOrNull,
> leaderId = group.leaderOrNull,
> error = Errors.NONE))
> }
> {code}
> While looking at the AbstractCoordinator, I found that the generation was 
> hard-coded as 
> NO_GENERATION on restart, which means we will send UNKNOWN_MEMBER_ID in the 
> first join group request. This means we will treat the restarted consumer as 
> a new member, so the rebalance will be triggered until session timeout.
> I'm trying to clarify the following things before we extend the discussion:
>  # Whether my understanding of the above logic is right (Hope [~mjsax] could 
> help me double check)
>  # Whether it makes sense to persist last round of memberId for consumers? We 
> currently only need this feature in stream application, but will do no harm 
> if we also use it for consumer in general. This would be a nice-to-have 
> feature on consumer restart when we configured the loading-previous-memberId 
> to true. If we failed, simply use the UNKNOWN_MEMBER_ID
>  # The behavior could also be changed on the broker side, but I suspect it is 
> very risky. So far client side change should be the least effort. The end 
> goal is to avoid excessive rebalance from the same consumer restart, so if 
> you feel server side change could also help, we could further discuss.
> Thank you for helping out! [~mjsax] [~guozhang]
>  
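A hypothetical client-side sketch of point 2 above (none of these names exist in Kafka; loading-previous-memberId is only the proposed config): persist the last memberId locally and reload it on restart, falling back to UNKNOWN_MEMBER_ID when nothing was saved.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical helper sketching the proposal; not part of Kafka's API.
public class MemberIdStore {
    private static final String UNKNOWN_MEMBER_ID = ""; // matches JoinGroupRequest.UNKNOWN_MEMBER_ID
    private final Path file;

    public MemberIdStore(Path file) {
        this.file = file;
    }

    // Reload the previously persisted memberId, or fall back to UNKNOWN_MEMBER_ID.
    public String load() {
        try {
            return Files.exists(file)
                    ? new String(Files.readAllBytes(file)).trim()
                    : UNKNOWN_MEMBER_ID;
        } catch (IOException e) {
            return UNKNOWN_MEMBER_ID; // if loading fails, simply rejoin as a new member
        }
    }

    // Persist the memberId assigned by the coordinator after a successful join.
    public void save(String memberId) throws IOException {
        Files.write(file, memberId.getBytes());
    }

    public static void main(String[] args) throws IOException {
        MemberIdStore store = new MemberIdStore(Paths.get("/tmp/consumer.memberId"));
        System.out.println(store.load()); // "" on first start
        store.save("consumer-1-a1b2c3");
        System.out.println(store.load()); // same memberId after a restart
    }
}
{code}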



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7063) Update documentation to remove references to old producers and consumers

2018-06-16 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514873#comment-16514873
 ] 

ASF GitHub Bot commented on KAFKA-7063:
---

omkreddy opened a new pull request #5240: KAFKA-7063: Update documentation to 
remove references to old producers and consumers
URL: https://github.com/apache/kafka/pull/5240
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Update documentation to remove references to old producers and consumers
> 
>
> Key: KAFKA-7063
> URL: https://issues.apache.org/jira/browse/KAFKA-7063
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Manikumar
>Priority: Major
>  Labels: newbie
> Fix For: 2.0.0
>
>
> We should also remove any mention of "new consumer" or "new producer". They 
> should just be "producer" and "consumer".
> Finally, any mention of "Scala producer/consumer/client" should also be 
> removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7063) Update documentation to remove references to old producers and consumers

2018-06-16 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-7063:
-
Fix Version/s: 2.0.0

> Update documentation to remove references to old producers and consumers
> 
>
> Key: KAFKA-7063
> URL: https://issues.apache.org/jira/browse/KAFKA-7063
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Manikumar
>Priority: Major
>  Labels: newbie
> Fix For: 2.0.0
>
>
> We should also remove any mention of "new consumer" or "new producer". They 
> should just be "producer" and "consumer".
> Finally, any mention of "Scala producer/consumer/client" should also be 
> removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6933) Broker reports Corrupted index warnings apparently infinitely

2018-06-16 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6933.

Resolution: Not A Bug

> Broker reports Corrupted index warnings apparently infinitely
> -
>
> Key: KAFKA-6933
> URL: https://issues.apache.org/jira/browse/KAFKA-6933
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Franco Bonazza
>Priority: Major
>
> I'm running into a situation where the server logs continuously show the 
> following snippet:
> {noformat}
> [2018-05-23 10:58:56,590] INFO Loading producer state from offset 20601420 
> for partition transaction_r10_updates-6 with message format version 2 
> (kafka.log.Log)
> [2018-05-23 10:58:56,592] INFO Loading producer state from snapshot file 
> '/data/0/kafka-logs/transaction_r10_updates-6/20601420.snapshot' 
> for partition transaction_r10_u
> pdates-6 (kafka.log.ProducerStateManager)
> [2018-05-23 10:58:56,593] INFO Completed load of log 
> transaction_r10_updates-6 with 74 log segments, log start offset 0 and log 
> end offset 20601420 in 5823 ms (kafka.log.Log)
> [2018-05-23 10:58:58,761] WARN Found a corrupted index file due to 
> requirement failed: Corrupt index found, index file 
> (/data/0/kafka-logs/transaction_r10_updates-15/20544956.index) 
> has non-zero size but the last offset is 20544956 which is no larger than the 
> base offset 20544956.}. deleting 
> /data/0/kafka-logs/transaction_r10_updates-15/20544956.timeindex, 
> /data/0/kafka-logs/transaction_r10_updates-15/20544956.index, and 
> /data/0/kafka-logs/transaction_r10_updates-15/20544956.txnindex 
> and rebuilding index... (kafka.log.Log)
> [2018-05-23 10:58:58,763] INFO Loading producer state from snapshot file 
> '/data/0/kafka-logs/transaction_r10_updates-15/20544956.snapshot' 
> for partition transaction_r10_updates-15 (kafka.log.ProducerStateManager)
> [2018-05-23 10:59:02,202] INFO Recovering unflushed segment 20544956 in log 
> transaction_r10_updates-15. (kafka.log.Log){noformat}
> The setup is the following:
> Broker is 1.0.1.
> There are mirrors from another cluster using client 0.10.2.1.
> There are Kafka Streams and other custom consumers/producers using the 1.0.0 
> client.
>  
> While it is doing this, the JVM of the broker is up but it doesn't respond, so 
> it's impossible to produce, consume or run any commands.
> If I delete all the index files the WARN turns into an ERROR, which takes a 
> long time (1 day last time I tried) but eventually it goes into a healthy 
> state, then I start the producers and things are still healthy, but when I 
> start the consumers it quickly goes into the original WARN loop, which seems 
> infinite.
>  
> I couldn't find any references to the problem. It seems to be at least 
> misreporting the issue, and perhaps it's not infinite? I let it loop over 
> the WARN for over a day and it never moved past that, and if there was 
> something really wrong with the state maybe it should be reported.
> The log cleaner log showed a few "too many files open" when it originally 
> happened but ulimit has always been set to unlimited so I'm not sure what 
> that error means.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6933) Broker reports Corrupted index warnings apparently infinitely

2018-06-16 Thread Ismael Juma (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514858#comment-16514858
 ] 

Ismael Juma commented on KAFKA-6933:


When a broker is shutdown cleanly it does what we call a "controlled shutdown", 
it tells the controller that it's going away and waits until leadership is 
moved to other brokers. This can take some time in some cases and if you have a 
script that kills the broker after a short period of time waiting, that could 
be the reason. 1.1.x does controlled shutdowns much faster too.

> Broker reports Corrupted index warnings apparently infinitely
> -
>
> Key: KAFKA-6933
> URL: https://issues.apache.org/jira/browse/KAFKA-6933
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Franco Bonazza
>Priority: Major
>
> I'm running into a situation where the server logs continuously show the 
> following snippet:
> {noformat}
> [2018-05-23 10:58:56,590] INFO Loading producer state from offset 20601420 
> for partition transaction_r10_updates-6 with message format version 2 
> (kafka.log.Log)
> [2018-05-23 10:58:56,592] INFO Loading producer state from snapshot file 
> '/data/0/kafka-logs/transaction_r10_updates-6/20601420.snapshot' 
> for partition transaction_r10_u
> pdates-6 (kafka.log.ProducerStateManager)
> [2018-05-23 10:58:56,593] INFO Completed load of log 
> transaction_r10_updates-6 with 74 log segments, log start offset 0 and log 
> end offset 20601420 in 5823 ms (kafka.log.Log)
> [2018-05-23 10:58:58,761] WARN Found a corrupted index file due to 
> requirement failed: Corrupt index found, index file 
> (/data/0/kafka-logs/transaction_r10_updates-15/20544956.index) 
> has non-zero size but the last offset is 20544956 which is no larger than the 
> base offset 20544956.}. deleting 
> /data/0/kafka-logs/transaction_r10_updates-15/20544956.timeindex, 
> /data/0/kafka-logs/transaction_r10_updates-15/20544956.index, and 
> /data/0/kafka-logs/transaction_r10_updates-15/20544956.txnindex 
> and rebuilding index... (kafka.log.Log)
> [2018-05-23 10:58:58,763] INFO Loading producer state from snapshot file 
> '/data/0/kafka-logs/transaction_r10_updates-15/20544956.snapshot' 
> for partition transaction_r10_updates-15 (kafka.log.ProducerStateManager)
> [2018-05-23 10:59:02,202] INFO Recovering unflushed segment 20544956 in log 
> transaction_r10_updates-15. (kafka.log.Log){noformat}
> The setup is the following:
> Broker is 1.0.1.
> There are mirrors from another cluster using client 0.10.2.1.
> There are Kafka Streams and other custom consumers/producers using the 1.0.0 
> client.
>  
> While it is doing this, the JVM of the broker is up but it doesn't respond, so 
> it's impossible to produce, consume or run any commands.
> If I delete all the index files the WARN turns into an ERROR, which takes a 
> long time (1 day last time I tried) but eventually it goes into a healthy 
> state, then I start the producers and things are still healthy, but when I 
> start the consumers it quickly goes into the original WARN loop, which seems 
> infinite.
>  
> I couldn't find any references to the problem. It seems to be at least 
> misreporting the issue, and perhaps it's not infinite? I let it loop over 
> the WARN for over a day and it never moved past that, and if there was 
> something really wrong with the state maybe it should be reported.
> The log cleaner log showed a few "too many files open" when it originally 
> happened but ulimit has always been set to unlimited so I'm not sure what 
> that error means.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6933) Broker reports Corrupted index warnings apparently infinitely

2018-06-16 Thread Franco Bonazza (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514848#comment-16514848
 ] 

Franco Bonazza commented on KAFKA-6933:
---

I haven't seen this issue again. I'm pretty sure you are right and what 
happened was that, for a yet-to-be-determined reason, clean shutdowns were not 
happening for a while. I was surprised at how long it was taking to recreate 
the indices in 1.0.1, so I thought it was infinite. I think both issues are 
tied to long retention, but given the speed at which it recovers in 1.1.0, I 
think it's just fine. I don't think there's a problem here.

> Broker reports Corrupted index warnings apparently infinitely
> -
>
> Key: KAFKA-6933
> URL: https://issues.apache.org/jira/browse/KAFKA-6933
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Franco Bonazza
>Priority: Major
>
> I'm running into a situation where the server logs continuously show the 
> following snippet:
> {noformat}
> [2018-05-23 10:58:56,590] INFO Loading producer state from offset 20601420 
> for partition transaction_r10_updates-6 with message format version 2 
> (kafka.log.Log)
> [2018-05-23 10:58:56,592] INFO Loading producer state from snapshot file 
> '/data/0/kafka-logs/transaction_r10_updates-6/20601420.snapshot' 
> for partition transaction_r10_u
> pdates-6 (kafka.log.ProducerStateManager)
> [2018-05-23 10:58:56,593] INFO Completed load of log 
> transaction_r10_updates-6 with 74 log segments, log start offset 0 and log 
> end offset 20601420 in 5823 ms (kafka.log.Log)
> [2018-05-23 10:58:58,761] WARN Found a corrupted index file due to 
> requirement failed: Corrupt index found, index file 
> (/data/0/kafka-logs/transaction_r10_updates-15/20544956.index) 
> has non-zero size but the last offset is 20544956 which is no larger than the 
> base offset 20544956.}. deleting 
> /data/0/kafka-logs/transaction_r10_updates-15/20544956.timeindex, 
> /data/0/kafka-logs/transaction_r10_updates-15/20544956.index, and 
> /data/0/kafka-logs/transaction_r10_updates-15/20544956.txnindex 
> and rebuilding index... (kafka.log.Log)
> [2018-05-23 10:58:58,763] INFO Loading producer state from snapshot file 
> '/data/0/kafka-logs/transaction_r10_updates-15/20544956.snapshot' 
> for partition transaction_r10_updates-15 (kafka.log.ProducerStateManager)
> [2018-05-23 10:59:02,202] INFO Recovering unflushed segment 20544956 in log 
> transaction_r10_updates-15. (kafka.log.Log){noformat}
> The setup is the following:
> Broker is 1.0.1.
> There are mirrors from another cluster using client 0.10.2.1.
> There are Kafka Streams and other custom consumers/producers using the 1.0.0 
> client.
>  
> While it is doing this, the JVM of the broker is up but it doesn't respond, so 
> it's impossible to produce, consume or run any commands.
> If I delete all the index files the WARN turns into an ERROR, which takes a 
> long time (1 day last time I tried) but eventually it goes into a healthy 
> state, then I start the producers and things are still healthy, but when I 
> start the consumers it quickly goes into the original WARN loop, which seems 
> infinite.
>  
> I couldn't find any references to the problem. It seems to be at least 
> misreporting the issue, and perhaps it's not infinite? I let it loop over 
> the WARN for over a day and it never moved past that, and if there was 
> something really wrong with the state maybe it should be reported.
> The log cleaner log showed a few "too many files open" when it originally 
> happened but ulimit has always been set to unlimited so I'm not sure what 
> that error means.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7064) "Unexpected resource type GROUP" when describing broker configs using latest admin client

2018-06-16 Thread Ismael Juma (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514829#comment-16514829
 ] 

Ismael Juma commented on KAFKA-7064:


Looks like there is an issue with the link above, so here's another one from a 
manually triggered run:

http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2018-06-15--001.1529092440--apache--trunk--dad19ac/ClientCompatibilityFeaturesTest/run_compatibility_test/broker_version%3D0.11.0.2/0/client_compatibility_test_output.txt

> "Unexpected resource type GROUP" when describing broker configs using latest 
> admin client
> -
>
> Key: KAFKA-7064
> URL: https://issues.apache.org/jira/browse/KAFKA-7064
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rohan Desai
>Assignee: Andy Coates
>Priority: Blocker
> Fix For: 2.0.0
>
>
> I'm getting the following error when I try to describe broker configs using 
> the admin client:
> {code:java}
> org.apache.kafka.common.errors.InvalidRequestException: Unexpected resource 
> type GROUP for resource 0{code}
> I think it's due to this commit: 
> [https://github.com/apache/kafka/commit/49db5a63c043b50c10c2dfd0648f8d74ee917b6a]
>  
> My guess at what's going on is that now that the client is using 
> ConfigResource instead of Resource it's sending a describe request for 
> resource type BROKER w/ id 3, while the broker associates id 3 w/ GROUP
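A minimal sketch of that guess (the enum contents and wire id below are purely illustrative, not Kafka's actual values): two sides that map the same numeric id to different resource types will disagree about what a request refers to.

{code:java}
// Illustrative only: this enum does not reflect Kafka's real id assignments.
enum BrokerSideResourceType { UNKNOWN, ANY, TOPIC, GROUP, CLUSTER } // GROUP at ordinal 3

public class ResourceIdMismatchDemo {
    public static void main(String[] args) {
        int wireId = 3; // the client meant BROKER under its own numbering scheme
        // The broker decodes the same id under a different scheme:
        System.out.println(BrokerSideResourceType.values()[wireId]); // GROUP
    }
}
{code}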



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-5054) ChangeLoggingKeyValueByteStore delete and putIfAbsent should be synchronized

2018-06-16 Thread Guozhang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514828#comment-16514828
 ] 

Guozhang Wang commented on KAFKA-5054:
--

Double-checked the code, but delete and putIfAbsent are still not synchronized?

> ChangeLoggingKeyValueByteStore delete and putIfAbsent should be synchronized
> 
>
> Key: KAFKA-5054
> URL: https://issues.apache.org/jira/browse/KAFKA-5054
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Critical
> Fix For: 2.1.0
>
>
> {{putIfAbsent}} and {{delete}} should be synchronized as they involve at 
> least 2 operations on the underlying store and may result in inconsistent 
> results if someone were to query via IQ
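A minimal sketch (not the actual ChangeLoggingKeyValueByteStore; the class and field names are illustrative) of why these methods need synchronization: each is two operations on the underlying store, so an interleaved reader can otherwise observe an intermediate state.

{code:java}
import java.util.HashMap;
import java.util.Map;

public class SketchStore {
    private final Map<String, byte[]> inner = new HashMap<>();

    // Read-then-write: without 'synchronized', two threads can both observe
    // null and both write, or an IQ-style reader can see a half-applied update.
    public synchronized byte[] putIfAbsent(String key, byte[] value) {
        byte[] existing = inner.get(key); // operation 1: read
        if (existing == null) {
            inner.put(key, value);        // operation 2: write (plus changelog append in the real store)
        }
        return existing;
    }

    // delete() likewise reads the old value and then writes a tombstone.
    public synchronized byte[] delete(String key) {
        byte[] existing = inner.get(key); // operation 1: read
        inner.put(key, null);             // operation 2: tombstone write
        return existing;
    }
}
{code}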



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7064) "Unexpected resource type GROUP" when describing broker configs using latest admin client

2018-06-16 Thread Ismael Juma (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514826#comment-16514826
 ] 

Ismael Juma commented on KAFKA-7064:


We have a system test failure that may be related:

http://confluent-systest.s3-website-us-west-2.amazonaws.com/confluent-kafka-2-0-system-test-results/?prefix=2018-06-15--001.1529136488--apache--2.0--988ad7e

{code}
FAILED: Caught exception Expected describeAclsSupported to be supported, but it 
wasn't.

java.lang.RuntimeException: Expected describeAclsSupported to be supported, but 
it wasn't.
at 
org.apache.kafka.tools.ClientCompatibilityTest.tryFeature(ClientCompatibilityTest.java:503)
at 
org.apache.kafka.tools.ClientCompatibilityTest.tryFeature(ClientCompatibilityTest.java:488)
at 
org.apache.kafka.tools.ClientCompatibilityTest.testAdminClient(ClientCompatibilityTest.java:306)
at 
org.apache.kafka.tools.ClientCompatibilityTest.run(ClientCompatibilityTest.java:226)
at 
org.apache.kafka.tools.ClientCompatibilityTest.main(ClientCompatibilityTest.java:179)
Caused by: org.apache.kafka.common.errors.UnsupportedVersionException: Version 
0 only supports literal resource pattern types
** Command failed!
**
{code}

> "Unexpected resource type GROUP" when describing broker configs using latest 
> admin client
> -
>
> Key: KAFKA-7064
> URL: https://issues.apache.org/jira/browse/KAFKA-7064
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rohan Desai
>Assignee: Andy Coates
>Priority: Blocker
> Fix For: 2.0.0
>
>
> I'm getting the following error when I try to describe broker configs using 
> the admin client:
> {code:java}
> org.apache.kafka.common.errors.InvalidRequestException: Unexpected resource 
> type GROUP for resource 0{code}
> I think it's due to this commit: 
> [https://github.com/apache/kafka/commit/49db5a63c043b50c10c2dfd0648f8d74ee917b6a]
>  
> My guess at what's going on is that now that the client is using 
> ConfigResource instead of Resource it's sending a describe request for 
> resource type BROKER w/ id 3, while the broker associates id 3 w/ GROUP



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6708) Review Exception messages with regard to Serde Usage

2018-06-16 Thread Guozhang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514825#comment-16514825
 ] 

Guozhang Wang commented on KAFKA-6708:
--

The serdes only happen in the following cases:

1. when sending to an external topic or a repartition topic; this is covered in 
SinkNode.
2. when reading from an external topic; we cover deserialization errors via the 
DeserializationExceptionHandler interface, customizable in config.
3. when writing into the store, which accepts only serialized bytes (note this 
includes sending to the changelog topic as well, if the store has logging 
enabled).


So as of now only case 3) is not captured, and those serde calls happen in the 
MeteredXXStores, i.e. they are not centralized in one class. We can add logic 
similar to SinkNode's there to capture ClassCastException in the serde calls.

Let's pick up case 3) in 
https://issues.apache.org/jira/projects/KAFKA/issues/KAFKA-7066 and 
https://github.com/apache/kafka/pull/5239 
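A sketch of what "logic similar to SinkNode's" could look like around the store-facing serde calls; the wrapper class and error message below are illustrative, not the actual patch.

{code:java}
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.errors.StreamsException;

// Illustrative wrapper, not Kafka code: turns the bare ClassCastException into
// an error message that names the serializer and the store.
public class FriendlySerializer<T> {
    private final Serializer<T> inner;
    private final String storeName;

    public FriendlySerializer(Serializer<T> inner, String storeName) {
        this.inner = inner;
        this.storeName = storeName;
    }

    public byte[] serialize(String topic, T value) {
        try {
            return inner.serialize(topic, value);
        } catch (ClassCastException e) {
            throw new StreamsException(
                    "Serializer " + inner.getClass().getName() + " is not compatible with the "
                            + "actual value type written to store " + storeName + ". Change the "
                            + "default Serdes in StreamsConfig or provide correct Serdes via "
                            + "method parameters.", e);
        }
    }
}
{code}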

> Review Exception messages with regard to Serde Usage
> --
>
> Key: KAFKA-6708
> URL: https://issues.apache.org/jira/browse/KAFKA-6708
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bill Bejeck
>Priority: Major
>  Labels: newbie
>
> When required Serdes other than the provided defaults are not included, the 
> error messages should be more specific about what needs to be done and the 
> possible causes than just a {{ClassCastException}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6708) Review Exception messages with regard to Serde Usage

2018-06-16 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-6708:
-
Labels: newbie  (was: newbie,)

> Review Exception messages with regard to Serde Usage
> --
>
> Key: KAFKA-6708
> URL: https://issues.apache.org/jira/browse/KAFKA-6708
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bill Bejeck
>Priority: Major
>  Labels: newbie
>
> When required Serdes other than the provided defaults are not included, the 
> error messages should be more specific about what needs to be done and the 
> possible causes than just a {{ClassCastException}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7065) Quickstart tutorial fails because of missing brokers

2018-06-16 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514729#comment-16514729
 ] 

Manikumar commented on KAFKA-7065:
--

Looks like you have not started the Kafka server. After ZooKeeper is up, the 
quickstart's next step is {{bin/kafka-server-start.sh config/server.properties}}.

> Quickstart tutorial fails because of missing brokers
> 
>
> Key: KAFKA-7065
> URL: https://issues.apache.org/jira/browse/KAFKA-7065
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 1.1.0
> Environment: Java 1.8.0_162
> MacOS 10.13.4
>Reporter: Holger Brandl
>Priority: Major
>
> Following the tutorial on [https://kafka.apache.org/quickstart] I've tried to 
> set up a Kafka instance with
> {{wget --no-check-certificate 
> http://apache.lauf-forum.at/kafka/1.1.0/kafka_2.12-1.1.0.tgz}}
> {{tar xvf kafka_2.12-1.1.0.tgz}}
> {{## start the server}}
> {{cd kafka_2.12-1.1.0}}
> {{bin/zookeeper-server-start.sh config/zookeeper.properties}}
>  
> Up to this point everything is fine, and it is reporting:
>  
> {code}
> [kafka_2.12-1.1.0]$ bin/zookeeper-server-start.sh config/zookeeper.properties
> [2018-06-16 10:38:41,238] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
> [2018-06-16 10:38:41,240] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
> [2018-06-16 10:38:41,241] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
> [2018-06-16 10:38:41,241] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
> [2018-06-16 10:38:41,241] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
> [2018-06-16 10:38:41,272] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
> [2018-06-16 10:38:41,273] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
> [2018-06-16 10:38:41,299] INFO Server environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT (org.apache.zookeeper.server.ZooKeeperServer)
> [2018-06-16 10:38:41,300] INFO Server environment:host.name=192.168.0.8 (org.apache.zookeeper.server.ZooKeeperServer)
> [2018-06-16 10:38:41,300] INFO Server environment:java.version=1.8.0_162 (org.apache.zookeeper.server.ZooKeeperServer)
> [2018-06-16 10:38:41,300] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer)
> [2018-06-16 10:38:41,300] INFO Server environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home/jre (org.apache.zookeeper.server.ZooKeeperServer)
> [2018-06-16 10:38:41,300] INFO Server environment:java.class.path=/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/aopalliance-repackaged-2.5.0-b32.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/argparse4j-0.7.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/commons-lang3-3.5.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/connect-api-1.1.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/connect-file-1.1.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/connect-json-1.1.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/connect-runtime-1.1.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/connect-transforms-1.1.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/guava-20.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/hk2-api-2.5.0-b32.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/hk2-locator-2.5.0-b32.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/hk2-utils-2.5.0-b32.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/jackson-annotations-2.9.4.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/jackson-core-2.9.4.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/jackson-databind-2.9.4.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/jackson-jaxrs-base-2.9.4.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/jackson-jaxrs-json-provider-2.9.4.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/jackson-module-jaxb-annotations-2.9.4.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/javassist-3.20.0-GA.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/javassist-3.21.0-GA.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/javax.annotation-api-1.2.jar:/Users/bran
> {code}

[jira] [Commented] (KAFKA-7065) Quickstart tutorial fails because of missing brokers

2018-06-16 Thread Stanislav Kozlovski (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514726#comment-16514726
 ] 

Stanislav Kozlovski commented on KAFKA-7065:


Strange; the quickstart works for me on Ubuntu 16.04 LTS with Java 1.8.0_171. 

The only odd warning I got is from ZooKeeper:
{code}
[2018-06-16 12:08:07,970] INFO Got user-level KeeperException when processing 
sessionid:0x16407d90eec0001 type:setData cxid:0x4 zxid:0x1f txntype:-1 
reqpath:n/a Error Path:/config/topics/test Error:KeeperErrorCode = NoNode for 
/config/topics/test (org.apache.zookeeper.server.PrepRequestProcessor)
{code}

> Quickstart tutorial fails because of missing brokers
> 
>
> Key: KAFKA-7065
> URL: https://issues.apache.org/jira/browse/KAFKA-7065
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 1.1.0
> Environment: Java 1.8.0_162
> MacOS 10.13.4
>Reporter: Holger Brandl
>Priority: Major
>
> Following the tutorial on [https://kafka.apache.org/quickstart], I've tried to 
> set up a Kafka instance with:
> {{wget --no-check-certificate http://apache.lauf-forum.at/kafka/1.1.0/kafka_2.12-1.1.0.tgz}}
> {{tar xvf kafka_2.12-1.1.0.tgz}}
> {{## start the server}}
> {{cd kafka_2.12-1.1.0}}
> {{bin/zookeeper-server-start.sh config/zookeeper.properties}}
>  
> Up to this point everything is fine, and it reports:
>  
> {code}
> [kafka_2.12-1.1.0]$ bin/zookeeper-server-start.sh config/zookeeper.properties
> [2018-06-16 10:38:41,238] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
> [2018-06-16 10:38:41,240] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
> [2018-06-16 10:38:41,241] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
> [2018-06-16 10:38:41,241] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
> [2018-06-16 10:38:41,241] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
> [2018-06-16 10:38:41,272] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
> [2018-06-16 10:38:41,273] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
> [2018-06-16 10:38:41,299] INFO Server environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT (org.apache.zookeeper.server.ZooKeeperServer)
> [2018-06-16 10:38:41,300] INFO Server environment:host.name=192.168.0.8 (org.apache.zookeeper.server.ZooKeeperServer)
> [2018-06-16 10:38:41,300] INFO Server environment:java.version=1.8.0_162 (org.apache.zookeeper.server.ZooKeeperServer)
> [2018-06-16 10:38:41,300] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer)
> [2018-06-16 10:38:41,300] INFO Server environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home/jre (org.apache.zookeeper.server.ZooKeeperServer)
> [2018-06-16 10:38:41,300] INFO Server environment:java.class.path=/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/aopalliance-repackaged-2.5.0-b32.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/argparse4j-0.7.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/commons-lang3-3.5.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/connect-api-1.1.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/connect-file-1.1.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/connect-json-1.1.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/connect-runtime-1.1.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/connect-transforms-1.1.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/guava-20.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/hk2-api-2.5.0-b32.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/hk2-locator-2.5.0-b32.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/hk2-utils-2.5.0-b32.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/jackson-annotations-2.9.4.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/jackson-core-2.9.4.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/jackson-databind-2.9.4.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/jackson-jaxrs-base-2.9.4.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/jackso
> {code}

[jira] [Commented] (KAFKA-7066) Make Streams Runtime Error User Friendly in Case of Serialisation exception

2018-06-16 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514724#comment-16514724
 ] 

ASF GitHub Bot commented on KAFKA-7066:
---

simplesteph opened a new pull request #5239: KAFKA-7066 added better logging in 
case of Serialisation issue
URL: https://github.com/apache/kafka/pull/5239
 
 
   Following the style of the error message at: 
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/SinkNode.java#L93


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Make Streams Runtime Error User Friendly in Case of Serialisation exception
> ---
>
> Key: KAFKA-7066
> URL: https://issues.apache.org/jira/browse/KAFKA-7066
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 1.1.0
>Reporter: Stephane Maarek
>Assignee: Stephane Maarek
>Priority: Major
> Fix For: 2.0.0
>
>
> This kind of exception can be cryptic for the beginner:
> {code:java}
> ERROR stream-thread [favourite-colors-application-a336770d-6ba6-4bbb-8681-3c8ea91bd12e-StreamThread-1] Failed to process stream task 2_0 due to the following error: (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks:105)
> java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.String
> at org.apache.kafka.common.serialization.StringSerializer.serialize(StringSerializer.java:28)
> at org.apache.kafka.streams.state.StateSerdes.rawValue(StateSerdes.java:178)
> at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore$1.innerValue(MeteredKeyValueBytesStore.java:66)
> at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore$1.innerValue(MeteredKeyValueBytesStore.java:57)
> at org.apache.kafka.streams.state.internals.InnerMeteredKeyValueStore.put(InnerMeteredKeyValueStore.java:198)
> at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore.put(MeteredKeyValueBytesStore.java:117)
> at org.apache.kafka.streams.kstream.internals.KTableAggregate$KTableAggregateProcessor.process(KTableAggregate.java:95)
> at org.apache.kafka.streams.kstream.internals.KTableAggregate$KTableAggregateProcessor.process(KTableAggregate.java:56)
> at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
> at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
> at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
> at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.forward(AbstractProcessorContext.java:174)
> at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:80)
> at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:224)
> at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:94)
> at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:411)
> at org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:918)
> at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:798)
> at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:750)
> at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:720)
> {code}
> We should add the more detailed logging already present in SinkNode to assist 
> the user in solving this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-7066) Make Streams Runtime Error User Friendly in Case of Serialisation exception

2018-06-16 Thread Stephane Maarek (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephane Maarek reassigned KAFKA-7066:
--

Assignee: Stephane Maarek

> Make Streams Runtime Error User Friendly in Case of Serialisation exception
> ---
>
> Key: KAFKA-7066
> URL: https://issues.apache.org/jira/browse/KAFKA-7066
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 1.1.0
>Reporter: Stephane Maarek
>Assignee: Stephane Maarek
>Priority: Major
> Fix For: 2.0.0
>
>
> This kind of exception can be cryptic for the beginner:
> {code:java}
> ERROR stream-thread [favourite-colors-application-a336770d-6ba6-4bbb-8681-3c8ea91bd12e-StreamThread-1] Failed to process stream task 2_0 due to the following error: (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks:105)
> java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.String
> at org.apache.kafka.common.serialization.StringSerializer.serialize(StringSerializer.java:28)
> at org.apache.kafka.streams.state.StateSerdes.rawValue(StateSerdes.java:178)
> at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore$1.innerValue(MeteredKeyValueBytesStore.java:66)
> at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore$1.innerValue(MeteredKeyValueBytesStore.java:57)
> at org.apache.kafka.streams.state.internals.InnerMeteredKeyValueStore.put(InnerMeteredKeyValueStore.java:198)
> at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore.put(MeteredKeyValueBytesStore.java:117)
> at org.apache.kafka.streams.kstream.internals.KTableAggregate$KTableAggregateProcessor.process(KTableAggregate.java:95)
> at org.apache.kafka.streams.kstream.internals.KTableAggregate$KTableAggregateProcessor.process(KTableAggregate.java:56)
> at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
> at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
> at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
> at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.forward(AbstractProcessorContext.java:174)
> at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:80)
> at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:224)
> at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:94)
> at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:411)
> at org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:918)
> at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:798)
> at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:750)
> at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:720)
> {code}
> We should add the more detailed logging already present in SinkNode to assist 
> the user in solving this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7066) Make Streams Runtime Error User Friendly in Case of Serialisation exception

2018-06-16 Thread Stephane Maarek (JIRA)
Stephane Maarek created KAFKA-7066:
--

 Summary: Make Streams Runtime Error User Friendly in Case of 
Serialisation exception
 Key: KAFKA-7066
 URL: https://issues.apache.org/jira/browse/KAFKA-7066
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 1.1.0
Reporter: Stephane Maarek
 Fix For: 2.0.0


This kind of exception can be cryptic for the beginner:
{code:java}
ERROR stream-thread [favourite-colors-application-a336770d-6ba6-4bbb-8681-3c8ea91bd12e-StreamThread-1] Failed to process stream task 2_0 due to the following error: (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks:105)
java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.String
at org.apache.kafka.common.serialization.StringSerializer.serialize(StringSerializer.java:28)
at org.apache.kafka.streams.state.StateSerdes.rawValue(StateSerdes.java:178)
at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore$1.innerValue(MeteredKeyValueBytesStore.java:66)
at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore$1.innerValue(MeteredKeyValueBytesStore.java:57)
at org.apache.kafka.streams.state.internals.InnerMeteredKeyValueStore.put(InnerMeteredKeyValueStore.java:198)
at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore.put(MeteredKeyValueBytesStore.java:117)
at org.apache.kafka.streams.kstream.internals.KTableAggregate$KTableAggregateProcessor.process(KTableAggregate.java:95)
at org.apache.kafka.streams.kstream.internals.KTableAggregate$KTableAggregateProcessor.process(KTableAggregate.java:56)
at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.forward(AbstractProcessorContext.java:174)
at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:80)
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:224)
at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:94)
at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:411)
at org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:918)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:798)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:750)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:720)
{code}

We should add the more detailed logging already present in SinkNode to assist 
the user in solving this issue.
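
For context, a minimal sketch of how such a mismatch typically arises and how 
it can be fixed (a hypothetical topology in the spirit of the application name 
above, not taken from the report):

{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.Consumed;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KGroupedTable;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

public class FavouriteColoursSketch {

    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();

        final KGroupedTable<String, String> byColour = builder
            .table("favourite-colours", Consumed.with(Serdes.String(), Serdes.String()))
            .groupBy((user, colour) -> KeyValue.pair(colour, colour));

        // count() stores Long values. If default.value.serde is the String serde,
        // the store's serializer receives a Long and throws the
        // ClassCastException shown in the stack trace above.
        final KTable<String, Long> broken = byColour.count();

        // Fix: declare the store's serdes explicitly instead of relying on defaults.
        final KTable<String, Long> fixed = byColour.count(
            Materialized.with(Serdes.String(), Serdes.Long()));
    }
}
{code}

The broken variant relies on the configured default value serde, while the 
fixed variant pins the store's value serde to Serdes.Long().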



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7065) Quickstart tutorial fails because of missing brokers

2018-06-16 Thread Holger Brandl (JIRA)
Holger Brandl created KAFKA-7065:


 Summary: Quickstart tutorial fails because of missing brokers
 Key: KAFKA-7065
 URL: https://issues.apache.org/jira/browse/KAFKA-7065
 Project: Kafka
  Issue Type: Bug
  Components: website
Affects Versions: 1.1.0
 Environment: Java 1.8.0_162
MacOS 10.13.4
Reporter: Holger Brandl


Following the tutorial on [https://kafka.apache.org/quickstart], I've tried to 
set up a Kafka instance with:

{{wget --no-check-certificate http://apache.lauf-forum.at/kafka/1.1.0/kafka_2.12-1.1.0.tgz}}

{{tar xvf kafka_2.12-1.1.0.tgz}}

{{## start the server}}
{{cd kafka_2.12-1.1.0}}
{{bin/zookeeper-server-start.sh config/zookeeper.properties}}

 

Up to this point everything is fine, and it reports:

 

{code}
[kafka_2.12-1.1.0]$ bin/zookeeper-server-start.sh config/zookeeper.properties
[2018-06-16 10:38:41,238] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2018-06-16 10:38:41,240] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2018-06-16 10:38:41,241] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2018-06-16 10:38:41,241] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2018-06-16 10:38:41,241] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2018-06-16 10:38:41,272] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2018-06-16 10:38:41,273] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2018-06-16 10:38:41,299] INFO Server environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT (org.apache.zookeeper.server.ZooKeeperServer)
[2018-06-16 10:38:41,300] INFO Server environment:host.name=192.168.0.8 (org.apache.zookeeper.server.ZooKeeperServer)
[2018-06-16 10:38:41,300] INFO Server environment:java.version=1.8.0_162 (org.apache.zookeeper.server.ZooKeeperServer)
[2018-06-16 10:38:41,300] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer)
[2018-06-16 10:38:41,300] INFO Server environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home/jre (org.apache.zookeeper.server.ZooKeeperServer)
[2018-06-16 10:38:41,300] INFO Server environment:java.class.path=/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/aopalliance-repackaged-2.5.0-b32.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/argparse4j-0.7.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/commons-lang3-3.5.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/connect-api-1.1.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/connect-file-1.1.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/connect-json-1.1.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/connect-runtime-1.1.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/connect-transforms-1.1.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/guava-20.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/hk2-api-2.5.0-b32.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/hk2-locator-2.5.0-b32.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/hk2-utils-2.5.0-b32.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/jackson-annotations-2.9.4.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/jackson-core-2.9.4.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/jackson-databind-2.9.4.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/jackson-jaxrs-base-2.9.4.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/jackson-jaxrs-json-provider-2.9.4.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/jackson-module-jaxb-annotations-2.9.4.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/javassist-3.20.0-GA.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/javassist-3.21.0-GA.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/javax.annotation-api-1.2.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/javax.inject-1.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/javax.inject-2.5.0-b32.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/javax.servlet-api-3.1.0.jar:/Users/brandl/projects/kotlin/kafka/kafka_2.12-1.1.0/bin/../libs/javax.ws.rs-api-2.0.1.jar:/Users/brandl/projects/kotlin
{code}