[jira] [Commented] (KAFKA-5783) Implement KafkaPrincipalBuilder interface with support for SASL (KIP-189)

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165986#comment-16165986
 ] 

ASF GitHub Bot commented on KAFKA-5783:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3795


> Implement KafkaPrincipalBuilder interface with support for SASL (KIP-189)
> -
>
> Key: KAFKA-5783
> URL: https://issues.apache.org/jira/browse/KAFKA-5783
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 1.0.0
>
>
> This issue covers the implementation of 
> [KIP-189|https://cwiki.apache.org/confluence/display/KAFKA/KIP-189%3A+Improve+principal+builder+interface+and+add+support+for+SASL].
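For context, a minimal sketch of what a custom builder under the new interface might look like (the class name and principal mapping here are illustrative, not taken from the PR above):

{code}
import org.apache.kafka.common.security.auth.AuthenticationContext;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.security.auth.KafkaPrincipalBuilder;
import org.apache.kafka.common.security.auth.SaslAuthenticationContext;

// Illustrative only: derive the principal from the SASL authorization id.
public class SaslAwarePrincipalBuilder implements KafkaPrincipalBuilder {
    @Override
    public KafkaPrincipal build(AuthenticationContext context) {
        if (context instanceof SaslAuthenticationContext) {
            SaslAuthenticationContext sasl = (SaslAuthenticationContext) context;
            return new KafkaPrincipal(KafkaPrincipal.USER_TYPE,
                                      sasl.server().getAuthorizationID());
        }
        return KafkaPrincipal.ANONYMOUS;
    }
}
{code}

Such a class would, if I read the KIP correctly, be plugged in via the broker's {{principal.builder.class}} setting.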



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4764) Improve diagnostics for SASL authentication failures

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165998#comment-16165998
 ] 

ASF GitHub Bot commented on KAFKA-4764:
---

Github user rajinisivaram closed the pull request at:

https://github.com/apache/kafka/pull/2546


> Improve diagnostics for SASL authentication failures
> 
>
> Key: KAFKA-4764
> URL: https://issues.apache.org/jira/browse/KAFKA-4764
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.2.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 1.0.0
>
>
> At the moment, the broker closes the client connection if SASL authentication 
> fails. Clients see this as a connection failure and get no feedback about why 
> the connection was closed. Producers and consumers retry, attempting to create 
> successful connections, treating authentication failures as transient. There 
> are no client-side log entries indicating that any of these connection 
> failures were due to an authentication failure.
> This JIRA will aim to improve diagnosis of authentication failures with the 
> changes described in 
> [KIP-152|https://cwiki.apache.org/confluence/display/KAFKA/KIP-152+-+Improve+diagnostics+for+SASL+authentication+failures].
> This JIRA does not change the handling of SSL authentication failures, since 
> javax.net.debug provides sufficient diagnostics for that case and SSL changes 
> are harder to make while preserving backward compatibility.
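As a hedged illustration of the intended client-facing effect (based on the KIP-152 write-up, not on the closed PR above): once authentication failures are propagated, client code can distinguish them from transient connection errors. The broker address, group, and topic below are placeholders.

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.AuthenticationException;

public class AuthFailureExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9093");   // placeholder address
        props.put("group.id", "example");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("topic"));
            consumer.poll(100);
        } catch (AuthenticationException e) {
            // Non-retriable: fail fast instead of treating it as transient.
            System.err.println("SASL authentication failed: " + e);
        }
    }
}
{code}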



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5890) records.lag should use tags for topic and partition rather than using metric name.

2017-09-14 Thread Charly Molter (JIRA)
Charly Molter created KAFKA-5890:


 Summary: records.lag should use tags for topic and partition 
rather than using metric name.
 Key: KAFKA-5890
 URL: https://issues.apache.org/jira/browse/KAFKA-5890
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.10.2.0
Reporter: Charly Molter


As part of KIP-92 [1], a per-partition lag metric was added.

These metrics are really useful; however, the topic and partition were 
implemented as a prefix to the metric name: 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java#L1321-L1344

Usually these kinds of metrics use tags, so the name stays constant across all 
topics and partitions.

We have a custom reporter which aggregates topics/partitions together to avoid 
an explosion in the number of KPIs; this metric cannot be aggregated that way 
because it has no tags, only a composite name.

[1] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-92+-+Add+per+partition+lag+metrics+to+KafkaConsumer
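To make the distinction concrete, here is a sketch against the client metrics API (metric group and sensor names are hypothetical) contrasting a name-encoded metric with a tag-based one:

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Value;

public class LagMetricSketch {
    public static void main(String[] args) {
        Metrics metrics = new Metrics();
        Sensor sensor = metrics.sensor("records-lag-sketch");

        // Current style (simplified): topic/partition baked into the name, so
        // a reporter must parse the name to aggregate.
        MetricName byName = metrics.metricName(
                "mytopic-0.records-lag", "consumer-fetch-manager-metrics",
                "per-partition lag");
        sensor.add(byName, new Value());

        // Tag-based style: constant name, topic/partition carried as tags,
        // which generic reporters can aggregate over.
        Map<String, String> tags = new HashMap<>();
        tags.put("topic", "mytopic");
        tags.put("partition", "0");
        MetricName byTags = metrics.metricName(
                "records-lag", "consumer-fetch-manager-metrics",
                "per-partition lag", tags);
        sensor.add(byTags, new Value());

        metrics.close();
    }
}
{code}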



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5750) Elevate log messages for denials to INFO in SimpleAclAuthorizer class

2017-09-14 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-5750:
-
Summary: Elevate log messages for denials to INFO in SimpleAclAuthorizer 
class  (was: Elevate log messages for denials to WARN in SimpleAclAuthorizer 
class)

> Elevate log messages for denials to INFO in SimpleAclAuthorizer class
> -
>
> Key: KAFKA-5750
> URL: https://issues.apache.org/jira/browse/KAFKA-5750
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Phillip Walker
>Assignee: Manikumar
> Fix For: 1.0.0
>
>
> Currently, the authorizer logs all messages at DEBUG level and logs every 
> single authorization attempt, which can greatly decrease cluster performance, 
> especially when MirrorMaker also produces to that cluster. Many InfoSec 
> requirements, though, require that authorization denials be logged. The 
> proposed solution is to elevate any denial in SimpleAclAuthorizer and any 
> other relevant class to WARN while leaving approvals at their current 
> logging levels.
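For reference, denials are emitted through the dedicated {{kafka.authorizer.logger}} logger, so today their visibility is controlled in the broker's log4j.properties; an illustrative excerpt (appender details follow the stock config, levels are examples):

{code}
# config/log4j.properties (illustrative excerpt)
log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# Raising this level hides the per-request DEBUG noise; logging denials at a
# higher level, as proposed above, keeps them visible under such a setting.
log4j.logger.kafka.authorizer.logger=INFO, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false
{code}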



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5881) Consuming from added partitions without restarting the consumer

2017-09-14 Thread Viliam Durina (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16166350#comment-16166350
 ] 

Viliam Durina commented on KAFKA-5881:
--

Thanks for the reply, I'll give it a test on Monday. If it works, I'll close 
the bug.

> Consuming from added partitions without restarting the consumer
> ---
>
> Key: KAFKA-5881
> URL: https://issues.apache.org/jira/browse/KAFKA-5881
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.11.0.0
>Reporter: Viliam Durina
>
> Currently the {{KafkaConsumer}} is not able to return events from newly added 
> partitions, with either automatic or manual assignment; I have to create a 
> new consumer. This was a surprise to me and [other 
> users|https://stackoverflow.com/q/46175275/952135].
> With manual assignment, the {{consumer.partitionsFor("topic")}} should 
> eventually return new partitions.
> With automatic assignment, one of the consumers should start consuming from 
> new partitions.
> If this is technically not possible, it should at least be documented.
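A hedged sketch of the manual-assignment workaround suggested above (topic name, addresses, and poll interval are placeholders); note that {{metadata.max.age.ms}} bounds how quickly {{partitionsFor()}} can observe new partitions:

{code}
import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

public class AssignNewPartitions {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            int assigned = 0;
            while (true) {
                // Re-check the partition count on every iteration; metadata is
                // refreshed at most every metadata.max.age.ms.
                List<PartitionInfo> infos = consumer.partitionsFor("topic");
                if (infos.size() != assigned) {
                    consumer.assign(infos.stream()
                            .map(i -> new TopicPartition(i.topic(), i.partition()))
                            .collect(Collectors.toList()));
                    assigned = infos.size();
                }
                consumer.poll(1000);   // process the returned records here
            }
        }
    }
}
{code}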



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5891) Cast transformation fails if record schema contains timestamp field

2017-09-14 Thread Artem Plotnikov (JIRA)
Artem Plotnikov created KAFKA-5891:
--

 Summary: Cast transformation fails if record schema contains 
timestamp field
 Key: KAFKA-5891
 URL: https://issues.apache.org/jira/browse/KAFKA-5891
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.11.0.0
Reporter: Artem Plotnikov


I have the following simple type cast transformation:
```
name=postgresql-source-simple
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1

connection.url=jdbc:postgresql://localhost:5432/testdb?user=postgres&password=mysecretpassword
query=SELECT 1::INT as a, '2017-09-14 10:23:54'::TIMESTAMP as b

transforms=Cast
transforms.Cast.type=org.apache.kafka.connect.transforms.Cast$Value
transforms.Cast.spec=a:boolean

mode=bulk
topic.prefix=clients
```
Which fails with the following exception in runtime:
```
[2017-09-14 16:51:01,885] ERROR Task postgresql-source-simple-0 threw an 
uncaught and unrecoverable exception 
(org.apache.kafka.connect.runtime.WorkerTask:148)
org.apache.kafka.connect.errors.DataException: Invalid Java object for schema 
type INT64: class java.sql.Timestamp for field: "null"
at 
org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:239)
at 
org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:209)
at org.apache.kafka.connect.data.Struct.put(Struct.java:214)
at 
org.apache.kafka.connect.transforms.Cast.applyWithSchema(Cast.java:152)
at org.apache.kafka.connect.transforms.Cast.apply(Cast.java:108)
at 
org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:38)
at 
org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:190)
at 
org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:168)
at 
org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
```
If I remove the transforms.* part of the connector configuration, it works 
correctly. Actually, it doesn't really matter which type I cast field 'a' to; 
the mere presence of a timestamp field triggers the exception.
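For what it's worth, a small sketch of the apparent mechanism (my reading, not a confirmed diagnosis): the JDBC source models the TIMESTAMP column with Connect's {{Timestamp}} logical type, and if the transformation rebuilds the schema without that logical-type name, the {{java.sql.Timestamp}} value no longer validates against plain INT64:

{code}
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.data.Timestamp;

public class TimestampValidationSketch {
    public static void main(String[] args) {
        java.sql.Timestamp value = new java.sql.Timestamp(0L);

        // With the logical type preserved, a java.sql.Timestamp (a subclass of
        // java.util.Date) passes validation.
        Schema logical = SchemaBuilder.struct().field("b", Timestamp.SCHEMA).build();
        new Struct(logical).put("b", value);   // ok

        // With the logical-type name dropped, the same value fails with
        // "Invalid Java object for schema type INT64", as in the log above.
        Schema plain = SchemaBuilder.struct().field("b", Schema.INT64_SCHEMA).build();
        new Struct(plain).put("b", value);     // throws DataException
    }
}
{code}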



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5891) Cast transformation fails if record schema contains timestamp field

2017-09-14 Thread Artem Plotnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Plotnikov updated KAFKA-5891:
---
Description: 
I have the following simple type cast transformation:
{code}
name=postgresql-source-simple
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1

connection.url=jdbc:postgresql://localhost:5432/testdb?user=postgres&password=mysecretpassword
query=SELECT 1::INT as a, '2017-09-14 10:23:54'::TIMESTAMP as b

transforms=Cast
transforms.Cast.type=org.apache.kafka.connect.transforms.Cast$Value
transforms.Cast.spec=a:boolean

mode=bulk
topic.prefix=clients
{code}
Which fails with the following exception in runtime:
{code}
[2017-09-14 16:51:01,885] ERROR Task postgresql-source-simple-0 threw an 
uncaught and unrecoverable exception 
(org.apache.kafka.connect.runtime.WorkerTask:148)
org.apache.kafka.connect.errors.DataException: Invalid Java object for schema 
type INT64: class java.sql.Timestamp for field: "null"
at 
org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:239)
at 
org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:209)
at org.apache.kafka.connect.data.Struct.put(Struct.java:214)
at 
org.apache.kafka.connect.transforms.Cast.applyWithSchema(Cast.java:152)
at org.apache.kafka.connect.transforms.Cast.apply(Cast.java:108)
at 
org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:38)
at 
org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:190)
at 
org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:168)
at 
org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
If I remove the transforms.* part of the connector configuration, it works 
correctly. Actually, it doesn't really matter which type I cast field 'a' to; 
the mere presence of a timestamp field triggers the exception.

  was:
I have the following simple type cast transformation:
```
name=postgresql-source-simple
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1

connection.url=jdbc:postgresql://localhost:5432/testdb?user=postgres&password=mysecretpassword
query=SELECT 1::INT as a, '2017-09-14 10:23:54'::TIMESTAMP as b

transforms=Cast
transforms.Cast.type=org.apache.kafka.connect.transforms.Cast$Value
transforms.Cast.spec=a:boolean

mode=bulk
topic.prefix=clients
```
Which fails with the following exception in runtime:
```
[2017-09-14 16:51:01,885] ERROR Task postgresql-source-simple-0 threw an 
uncaught and unrecoverable exception 
(org.apache.kafka.connect.runtime.WorkerTask:148)
org.apache.kafka.connect.errors.DataException: Invalid Java object for schema 
type INT64: class java.sql.Timestamp for field: "null"
at 
org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:239)
at 
org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:209)
at org.apache.kafka.connect.data.Struct.put(Struct.java:214)
at 
org.apache.kafka.connect.transforms.Cast.applyWithSchema(Cast.java:152)
at org.apache.kafka.connect.transforms.Cast.apply(Cast.java:108)
at 
org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:38)
at 
org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:190)
at 
org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:168)
at 
org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
```
If I remove the transforms.* part of the connector configuration, it works 
correctly. Actually, it doesn't really matter which type I cast field 'a' to; 
the mere presence of a timestamp field triggers the exception.


> Cast transformation fails if record schema contains timestamp field
> -

[jira] [Commented] (KAFKA-5882) NullPointerException in StreamTask

2017-09-14 Thread Seweryn Habdank-Wojewodzki (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16166376#comment-16166376
 ] 

Seweryn Habdank-Wojewodzki commented on KAFKA-5882:
---

I do not have any more logs beyond what I reported. 
The problem appeared during data processing. 
We performed a Kafka rolling update, node by node, waiting for each node to 
finish updating, and then this and similar issues appeared.

So we saw that KStream-based apps cannot handle a rolling update at all.

Old-fashioned consumer/producer apps work brilliantly and have no problem 
surviving a rolling update, but the KStream-based ones go nuts :-).

> NullPointerException in StreamTask
> --
>
> Key: KAFKA-5882
> URL: https://issues.apache.org/jira/browse/KAFKA-5882
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Seweryn Habdank-Wojewodzki
>
> It seems the bugfix [KAFKA-5073|https://issues.apache.org/jira/browse/KAFKA-5073] 
> was made, but it introduced some other issue.
> In some cases (I am not sure which ones) I get an NPE (below).
> I would expect that even in case of a FATAL error, anything except an NPE is thrown.
> {code}
> 2017-09-12 23:34:54 ERROR ConsumerCoordinator:269 - User provided listener 
> org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener 
> for group streamer failed on partition assignment
> java.lang.NullPointerException: null
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.(StreamTask.java:123)
>  ~[myapp-streamer.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:1234)
>  ~[myapp-streamer.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:294)
>  ~[myapp-streamer.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:254)
>  ~[myapp-streamer.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:1313)
>  ~[myapp-streamer.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$1100(StreamThread.java:73)
>  ~[myapp-streamer.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned(StreamThread.java:183)
>  ~[myapp-streamer.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:265)
>  [myapp-streamer.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:363)
>  [myapp-streamer.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:310)
>  [myapp-streamer.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:297)
>  [myapp-streamer.jar:?]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078)
>  [myapp-streamer.jar:?]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043) 
> [myapp-streamer.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:582)
>  [myapp-streamer.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:553)
>  [myapp-streamer.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:527)
>  [myapp-streamer.jar:?]
> 2017-09-12 23:34:54 INFO  StreamThread:1040 - stream-thread 
> [streamer-3a44578b-faa8-4b5b-bbeb-7a7f04639563-StreamThread-1] Shutting down
> 2017-09-12 23:34:54 INFO  KafkaProducer:972 - Closing the Kafka producer with 
> timeoutMillis = 9223372036854775807 ms.
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4392) Failed to lock the state directory due to an unexpected exception

2017-09-14 Thread Seweryn Habdank-Wojewodzki (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16166378#comment-16166378
 ] 

Seweryn Habdank-Wojewodzki commented on KAFKA-4392:
---

Sure, but next week :-).

> Failed to lock the state directory due to an unexpected exception
> -
>
> Key: KAFKA-4392
> URL: https://issues.apache.org/jira/browse/KAFKA-4392
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Ara Ebrahimi
>Assignee: Guozhang Wang
> Fix For: 0.10.2.0
>
>
> This happened on streaming startup, on a clean installation, no existing 
> folder. Here I was starting 4 instances of our streaming app on 4 machines 
> and one threw this exception. Seems to me there’s a race condition somewhere 
> when instances discover others, or something like that.
> 2016-11-02 15:43:47 INFO  StreamRunner:59 - Started http server successfully.
> 2016-11-02 15:44:50 ERROR StateDirectory:147 - Failed to lock the state 
> directory due to an unexpected exception
> java.nio.file.NoSuchFileException: 
> /data/1/kafka-streams/myapp-streams/7_21/.lock
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
>   at java.nio.channels.FileChannel.open(FileChannel.java:287)
>   at java.nio.channels.FileChannel.open(FileChannel.java:335)
>   at 
> org.apache.kafka.streams.processor.internals.StateDirectory.getOrCreateFileChannel(StateDirectory.java:176)
>   at 
> org.apache.kafka.streams.processor.internals.StateDirectory.lock(StateDirectory.java:90)
>   at 
> org.apache.kafka.streams.processor.internals.StateDirectory.cleanRemovedTasks(StateDirectory.java:140)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeClean(StreamThread.java:552)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:459)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
> ^C
> [arae@a4 ~]$ ls -al /data/1/kafka-streams/myapp-streams/7_21/
> ls: cannot access /data/1/kafka-streams/myapp-streams/7_21/: No such file or 
> directory
> [arae@a4 ~]$ ls -al /data/1/kafka-streams/myapp-streams/
> total 4
> drwxr-xr-x 74 root root 4096 Nov  2 15:44 .
> drwxr-xr-x  3 root root   27 Nov  2 15:43 ..
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_1
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_13
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_14
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_16
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_2
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_22
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_28
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_3
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_31
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_5
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_7
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_8
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_9
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_1
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_10
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_14
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_15
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_16
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_17
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_18
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_3
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_5
> drwxr-xr-x  3 root root   60 Nov  2 15:43 2_1
> drwxr-xr-x  3 root root   60 Nov  2 15:43 2_10
> drwxr-xr-x  3 root root   60 Nov  2 15:43 2_12
> drwxr-xr-x  3 root root   60 Nov  2 15:43 2_20
> drwxr-xr-x  3 root root   60 Nov  2 15:43 2_24
> drwxr-xr-x  3 root root   61 Nov  2 15:43 3_10
> drwxr-xr-x  3 root root   61 Nov  2 15:43 3_11
> drwxr-xr-x  3 root root   61 Nov  2 15:43 3_19
> drwxr-xr-x  3 root root   61 Nov  2 15:43 3_20
> drwxr-xr-x  3 root root   61 Nov  2 15:43 3_25
> drwxr-xr-x  3 root root   61 Nov  2 15:43 3_26
> drwxr-xr-x  3 root root   61 Nov  2 15:43 3_3
> drwxr-xr-x  3 root root   64 Nov  2 15:43 4_11
> drwxr-xr-x  3 root root   64 Nov  2 15:43 4_12
> drwxr-xr-x  3 root root   64 Nov  2 15:43 4_18
> drwxr-xr-x  3 root root   64 Nov  2 15:43 4_19
> drwxr-xr-x  3 root root   64 Nov  2 15:43 4_24
> drwxr-xr-x  3 root root   64 Nov  2 15:43 4_25
> drwxr-xr-x  3 root root   64 Nov  2 15:43 4_26
> drwxr-xr-x  3 root root   64 Nov  2 15:43 4_4
> drwxr-xr-x  3 root root   64 Nov  2 15:43 4_9
> drwxr-xr-x  3 root root   58 Nov  2 15:43 5_1
> drwxr-xr-x  3 root root   58 Nov  2 15:43 5_10
> 

[jira] [Created] (KAFKA-5892) Connector property override does not work unless setting ALL converter properties

2017-09-14 Thread Yeva Byzek (JIRA)
Yeva Byzek created KAFKA-5892:
-

 Summary: Connector property override does not work unless setting 
ALL converter properties
 Key: KAFKA-5892
 URL: https://issues.apache.org/jira/browse/KAFKA-5892
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Yeva Byzek
Priority: Minor


A single connector setting override {{"value.converter.schemas.enable": false}} 
won't take effect unless ALL the converter properties are overridden in the 
connector.

At minimum, we should give the user a warning or error that it will be ignored.

We should also consider changing the behavior to allow the single property 
override even if not all the converter properties are specified, but this 
requires discussion to evaluate the impact of the change.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5892) Connector property override does not work unless setting ALL converter properties

2017-09-14 Thread Randall Hauch (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch updated KAFKA-5892:
-
Description: 
A single connector setting override {{value.converter.schemas.enable=false}} 
only takes effect if ALL of the converter properties are overridden in the 
connector.

At minimum, we should give the user a warning or error that it will be ignored.

We should also consider changing the behavior to allow the single property 
override even if not all the converter properties are specified, but this 
requires discussion to evaluate the impact of the change.

  was:
A single connector setting override {{"value.converter.schemas.enable": false}} 
won't take effect unless ALL the converter properties are overridden in the 
connector.

At minimum, we should give the user a warning or error that it will be ignored.

We should also consider changing the behavior to allow the single property 
override even if not all the converter properties are specified, but this 
requires discussion to evaluate the impact of the change.


> Connector property override does not work unless setting ALL converter 
> properties
> -
>
> Key: KAFKA-5892
> URL: https://issues.apache.org/jira/browse/KAFKA-5892
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Yeva Byzek
>Priority: Minor
>
> A single connector setting override {{value.converter.schemas.enable=false}} 
> only takes effect if ALL of the converter properties are overridden in the 
> connector.
> At minimum, we should give the user a warning or error that it will be ignored.
> We should also consider changing the behavior to allow the single property 
> override even if not all the converter properties are specified, but this 
> requires discussion to evaluate the impact of the change.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5892) Connector property override does not work unless setting ALL converter properties

2017-09-14 Thread Randall Hauch (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16166456#comment-16166456
 ] 

Randall Hauch commented on KAFKA-5892:
--

At first blush, it seems like Connect should be able to do something smarter, 
like allow the user to override just some of the properties. Technically this 
is possible, but only if the connector reuses the same converter defined by the 
worker. What about when the worker is using the Avro converter, and a connector 
sets just one of the JSON converter's properties? Even if they start out the 
same and a connector with an override is deployed, changing the worker 
configuration to use a different converter will then result in the connector 
failing upon restart.

I do agree that Connect should do a better job of warning when a connector 
defines only partial converter properties. For example, if one or more 
{{[key|value].converter.*}} properties are specified without the corresponding 
{{[key|value].converter}}, we could log a warning or consider this a 
configuration error. 

The more I think about it, the more I think this is a configuration error. 
Thoughts?
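For concreteness, a hypothetical connector snippet (converter classes and URL are examples, not from this ticket) showing the kind of complete override the current behavior effectively requires, i.e. the converter class together with every property it needs:

{code}
# Worker defaults (worker.properties), for example:
#   value.converter=io.confluent.connect.avro.AvroConverter
#   value.converter.schema.registry.url=http://localhost:8081

# Connector config: overriding only a nested property is silently ignored today.
# A full override names the converter class plus all of its properties:
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
{code}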

> Connector property override does not work unless setting ALL converter 
> properties
> -
>
> Key: KAFKA-5892
> URL: https://issues.apache.org/jira/browse/KAFKA-5892
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Yeva Byzek
>Priority: Minor
>
> A single connector setting override {{value.converter.schemas.enable=false}} 
> only takes effect if ALL of the converter properties are overridden in the 
> connector.
> At minimum, we should give the user a warning or error that it will be ignored.
> We should also consider changing the behavior to allow the single property 
> override even if not all the converter properties are specified, but this 
> requires discussion to evaluate the impact of the change.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5892) Connector property override does not work unless setting ALL converter properties

2017-09-14 Thread Randall Hauch (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch updated KAFKA-5892:
-
Labels: newbie  (was: )

> Connector property override does not work unless setting ALL converter 
> properties
> -
>
> Key: KAFKA-5892
> URL: https://issues.apache.org/jira/browse/KAFKA-5892
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Yeva Byzek
>Priority: Minor
>  Labels: newbie
>
> A single connector setting override {{value.converter.schemas.enable=false}} 
> only takes effect if ALL of the converter properties are overridden in the 
> connector.
> At minimum, we should give the user a warning or error that it will be ignored.
> We should also consider changing the behavior to allow the single property 
> override even if not all the converter properties are specified, but this 
> requires discussion to evaluate the impact of the change.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5882) NullPointerException in StreamTask

2017-09-14 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16166549#comment-16166549
 ] 

Matthias J. Sax commented on KAFKA-5882:


We actually do have system tests for broker bounces, so in general this should 
work. But of course, there can always be a bug. Without more information, it 
will be hard to figure out...

> NullPointerException in StreamTask
> --
>
> Key: KAFKA-5882
> URL: https://issues.apache.org/jira/browse/KAFKA-5882
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Seweryn Habdank-Wojewodzki
>
> It seems the bugfix [KAFKA-5073|https://issues.apache.org/jira/browse/KAFKA-5073] 
> was made, but it introduced some other issue.
> In some cases (I am not sure which ones) I get an NPE (below).
> I would expect that even in case of a FATAL error, anything except an NPE is thrown.
> {code}
> 2017-09-12 23:34:54 ERROR ConsumerCoordinator:269 - User provided listener 
> org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener 
> for group streamer failed on partition assignment
> java.lang.NullPointerException: null
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.(StreamTask.java:123)
>  ~[myapp-streamer.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:1234)
>  ~[myapp-streamer.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:294)
>  ~[myapp-streamer.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:254)
>  ~[myapp-streamer.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:1313)
>  ~[myapp-streamer.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$1100(StreamThread.java:73)
>  ~[myapp-streamer.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned(StreamThread.java:183)
>  ~[myapp-streamer.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:265)
>  [myapp-streamer.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:363)
>  [myapp-streamer.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:310)
>  [myapp-streamer.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:297)
>  [myapp-streamer.jar:?]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078)
>  [myapp-streamer.jar:?]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043) 
> [myapp-streamer.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:582)
>  [myapp-streamer.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:553)
>  [myapp-streamer.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:527)
>  [myapp-streamer.jar:?]
> 2017-09-12 23:34:54 INFO  StreamThread:1040 - stream-thread 
> [streamer-3a44578b-faa8-4b5b-bbeb-7a7f04639563-StreamThread-1] Shutting down
> 2017-09-12 23:34:54 INFO  KafkaProducer:972 - Closing the Kafka producer with 
> timeoutMillis = 9223372036854775807 ms.
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5893) ResetIntegrationTest fails

2017-09-14 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-5893:
--

 Summary: ResetIntegrationTest fails
 Key: KAFKA-5893
 URL: https://issues.apache.org/jira/browse/KAFKA-5893
 Project: Kafka
  Issue Type: Bug
  Components: streams, unit tests
Affects Versions: 0.11.0.0, 0.11.0.1
Reporter: Matthias J. Sax
Assignee: Matthias J. Sax


{noformat}
java.lang.AssertionError: expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.kafka.streams.integration.ResetIntegrationTest.cleanGlobal(ResetIntegrationTest.java:363)
at 
org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterResetWithIntermediateUserTopic(ResetIntegrationTest.java:172)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5893) ResetIntegrationTest fails

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16166623#comment-16166623
 ] 

ASF GitHub Bot commented on KAFKA-5893:
---

GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/3859

KAFKA-5893: ResetIntegrationTest fails

 - improve stderr output for better debugging

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka kafka-5893-reset-integration-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3859.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3859


commit fa51ab01cf0227172fe9cef5c1b247fff303569f
Author: Matthias J. Sax 
Date:   2017-09-14T17:11:23Z

KAFKA-5893: ResetIntegrationTest fails
 - improve stderr output for better debugging




> ResetIntegrationTest fails
> --
>
> Key: KAFKA-5893
> URL: https://issues.apache.org/jira/browse/KAFKA-5893
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 0.11.0.0, 0.11.0.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>
> {noformat}
> java.lang.AssertionError: expected:<0> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.cleanGlobal(ResetIntegrationTest.java:363)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterResetWithIntermediateUserTopic(ResetIntegrationTest.java:172)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5893) ResetIntegrationTest fails

2017-09-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5893:
---
Description: 
{noformat}
java.lang.AssertionError: expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.kafka.streams.integration.ResetIntegrationTest.cleanGlobal(ResetIntegrationTest.java:363)
at 
org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterResetWithIntermediateUserTopic(ResetIntegrationTest.java:172)
{noformat}

One issue with debugging is that we catch exceptions and print the exception 
message, which is {{null}}:
{noformat}
Standard Error
ERROR: null
ERROR: null
{noformat}
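A minimal illustration of that pitfall (generic Java, not the test code itself): an exception with no message attached prints literally as {{null}}:

{code}
public class NullMessageExample {
    public static void main(String[] args) {
        try {
            throw new NullPointerException();               // no message attached
        } catch (Exception e) {
            System.err.println("ERROR: " + e.getMessage()); // prints "ERROR: null"
            System.err.println("ERROR: " + e);              // at least the class name
            e.printStackTrace();                            // full context for debugging
        }
    }
}
{code}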

  was:
{noformat}
java.lang.AssertionError: expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.kafka.streams.integration.ResetIntegrationTest.cleanGlobal(ResetIntegrationTest.java:363)
at 
org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterResetWithIntermediateUserTopic(ResetIntegrationTest.java:172)
{noformat}


> ResetIntegrationTest fails
> --
>
> Key: KAFKA-5893
> URL: https://issues.apache.org/jira/browse/KAFKA-5893
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 0.11.0.0, 0.11.0.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>
> {noformat}
> java.lang.AssertionError: expected:<0> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.cleanGlobal(ResetIntegrationTest.java:363)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterResetWithIntermediateUserTopic(ResetIntegrationTest.java:172)
> {noformat}
> One issue with debugging is that we catch exceptions and print the exception 
> message, which is {{null}}:
> {noformat}
> Standard Error
> ERROR: null
> ERROR: null
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5894) add the notion of max inflight requests to async ZookeeperClient

2017-09-14 Thread Onur Karaman (JIRA)
Onur Karaman created KAFKA-5894:
---

 Summary: add the notion of max inflight requests to async 
ZookeeperClient
 Key: KAFKA-5894
 URL: https://issues.apache.org/jira/browse/KAFKA-5894
 Project: Kafka
  Issue Type: Sub-task
Reporter: Onur Karaman
Assignee: Onur Karaman


ZookeeperClient is a zookeeper client that encourages pipelined requests to 
zookeeper. We want to add the notion of max inflight requests to the client for 
several reasons:
# to bound memory overhead associated with async requests on the client.
# to not overwhelm the zookeeper ensemble with a burst of requests.
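One plausible way to implement such a bound (a sketch under my own assumptions, not necessarily what the patch does) is a counting semaphore acquired before each async send and released from the response callback:

{code}
import java.util.concurrent.Semaphore;

// Illustrative sketch: bound the number of outstanding async ZooKeeper requests.
public class InflightBound {
    private final Semaphore inflight;

    public InflightBound(int maxInFlightRequests) {
        this.inflight = new Semaphore(maxInFlightRequests);
    }

    // Blocks once maxInFlightRequests sends are outstanding, which bounds client
    // memory and smooths bursts toward the ensemble.
    public void beforeSend() throws InterruptedException {
        inflight.acquire();
    }

    // Invoked from the async response callback.
    public void afterResponse() {
        inflight.release();
    }
}
{code}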



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5894) add the notion of max inflight requests to async ZookeeperClient

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16166694#comment-16166694
 ] 

ASF GitHub Bot commented on KAFKA-5894:
---

GitHub user onurkaraman opened a pull request:

https://github.com/apache/kafka/pull/3860

KAFKA-5894: add the notion of max inflight requests to async 
ZookeeperClient

ZookeeperClient is a zookeeper client that encourages pipelined requests to 
zookeeper. We want to add the notion of max inflight requests to the client for 
several reasons:
1. to bound memory overhead associated with async requests on the client.
2. to not overwhelm the zookeeper ensemble with a burst of requests.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/onurkaraman/kafka KAFKA-5894

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3860.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3860


commit a767c27436d4dc4da452edcee8bff54edd41dabd
Author: Onur Karaman 
Date:   2017-09-14T17:46:22Z

KAFKA-5894: add the notion of max inflight requests to async ZookeeperClient

ZookeeperClient is a zookeeper client that encourages pipelined requests to 
zookeeper. We want to add the notion of max inflight requests to the client for 
several reasons:
1. to bound memory overhead associated with async requests on the client.
2. to not overwhelm the zookeeper ensemble with a burst of requests.




> add the notion of max inflight requests to async ZookeeperClient
> 
>
> Key: KAFKA-5894
> URL: https://issues.apache.org/jira/browse/KAFKA-5894
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>
> ZookeeperClient is a zookeeper client that encourages pipelined requests to 
> zookeeper. We want to add the notion of max inflight requests to the client 
> for several reasons:
> # to bound memory overhead associated with async requests on the client.
> # to not overwhelm the zookeeper ensemble with a burst of requests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5301) Improve exception handling on consumer path

2017-09-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-5301:
--

Assignee: Matthias J. Sax

> Improve exception handling on consumer path
> ---
>
> Key: KAFKA-5301
> URL: https://issues.apache.org/jira/browse/KAFKA-5301
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 1.0.0
>
> Attachments: 5301.v1.patch
>
>
> Used in StreamsThread.java, mostly to .poll() but also to restore data.
> Used in StreamsTask.java, mostly to .pause(), .resume()
> All exceptions here are currently caught all the way up to the main running 
> loop in a broad catch(Exception e) statement in StreamThread.run().
> One main concern on the consumer path is handling deserialization errors that 
> happen before streams has even had a chance to look at the data: 
> https://issues.apache.org/jira/browse/KAFKA-5157  
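For orientation, a schematic of the broad catch described above (shape only; simplified from StreamThread, with method bodies stubbed):

{code}
// Schematic of the current catch-all, not verbatim source.
public class StreamThreadSketch {
    public void run() {
        try {
            runLoop();   // poll(), restore, pause(), resume() all surface exceptions here
        } catch (Exception e) {
            // Every consumer-path failure currently funnels into this one handler,
            // which is what the sub-tasks aim to differentiate.
            System.err.println("Uncaught exception in stream thread: " + e);
        } finally {
            shutdown();
        }
    }

    private void runLoop() { /* poll and process until shutdown is requested */ }

    private void shutdown() { /* release tasks and close clients */ }
}
{code}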



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5302) Improve exception handling on streams client (communication with brokers)

2017-09-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-5302:
--

Assignee: Matthias J. Sax

> Improve exception handling on streams client (communication with brokers)
> -
>
> Key: KAFKA-5302
> URL: https://issues.apache.org/jira/browse/KAFKA-5302
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 1.0.0
>
>
> These are exceptions in StreamsKafkaClient.java.
> Currently throws either StreamsException or BrokerNotFoundException.
> Used by InternalTopicManager to create topics and get their metadata.
> Used by StreamPartitionAssignor. 
> Currently InternalTopicManager retries a few times after catching an 
> exception. 
> A failure here is sent all the way up to the stream thread and will stop the 
> streams pipeline. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5313) Improve exception handling on coordinator interactions

2017-09-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-5313:
--

Assignee: Matthias J. Sax

> Improve exception handling on coordinator interactions
> --
>
> Key: KAFKA-5313
> URL: https://issues.apache.org/jira/browse/KAFKA-5313
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 1.0.0
>
>
> Exceptions during assignment of tasks are caught in ConsumerCoordinator.java 
> and streams becomes aware of them during the 
> StreamThread.onPartitionsAssigned() and StreamThread.onPartitionsRevoked() 
> methods. Eventually these exceptions go through StreamThread.pollRequests() 
> all the way up to StreamThread.runLoop() and will halt the stream thread that 
> is processing these exceptions. Other stream threads may continue processing; 
> however, it is likely they will experience problems too, soon after.
> Exceptions here include LockExceptions that are thrown if tasks cannot use a 
> particular directory due to previous tasks not releasing locks on them during 
> reassignment. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5569) Document any changes from this task

2017-09-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-5569:
--

Assignee: Matthias J. Sax

> Document any changes from this task
> ---
>
> Key: KAFKA-5569
> URL: https://issues.apache.org/jira/browse/KAFKA-5569
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 1.0.0
>
>
> After fixing the exceptions, document what was done, e.g., KIP-161 at a 
> minimum.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5225) StreamsResetter doesn't allow custom Consumer properties

2017-09-14 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16166731#comment-16166731
 ] 

Matthias J. Sax commented on KAFKA-5225:


[~bharatviswa] Are you still working on this?

> StreamsResetter doesn't allow custom Consumer properties
> 
>
> Key: KAFKA-5225
> URL: https://issues.apache.org/jira/browse/KAFKA-5225
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, tools
>Affects Versions: 0.10.2.1
>Reporter: Dustin Cote
>Assignee: Bharat Viswanadham
>  Labels: needs-kip
>
> The StreamsResetter doesn't let the user pass in any configurations to the 
> embedded consumer. This is a problem in secured environments because you 
> can't configure the embedded consumer to talk to the cluster. The tool should 
> take an approach similar to `kafka.admin.ConsumerGroupCommand` which allows a 
> config file to be passed in the command line for such operations.
> cc [~mjsax]
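For reference, the {{kafka.admin.ConsumerGroupCommand}} pattern mentioned above passes client configs through a file; a hedged example (file name and settings are placeholders):

{code}
# client.properties (placeholder contents for a secured cluster)
#   security.protocol=SASL_SSL
#   sasl.mechanism=PLAIN

$ bin/kafka-consumer-groups.sh --bootstrap-server broker:9093 \
    --describe --group my-group \
    --command-config client.properties
{code}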



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5895) Update readme to reflect that Gradle 2 is no longer good enough

2017-09-14 Thread JIRA
Matthias Weßendorf created KAFKA-5895:
-

 Summary: Update readme to reflect that Gradle 2 is no longer good 
enough
 Key: KAFKA-5895
 URL: https://issues.apache.org/jira/browse/KAFKA-5895
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.11.0.2
Reporter: Matthias Weßendorf
Priority: Trivial


The README says:

Kafka requires Gradle 2.0 or higher.

but running with "2.13", I am getting an ERROR message, saying that 3.0+ is 
needed:

{code}
> Failed to apply plugin [class 
> 'com.github.jengelman.gradle.plugins.shadow.ShadowBasePlugin']
   > This version of Shadow supports Gradle 3.0+ only. Please upgrade.

{code}

Full log here:

{code}
➜  kafka git:(utils_improvment) ✗ gradle 
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Download https://jcenter.bintray.com/org/ajoberstar/grgit/1.9.3/grgit-1.9.3.pom
Download 
https://jcenter.bintray.com/com/github/ben-manes/gradle-versions-plugin/0.15.0/gradle-versions-plugin-0.15.0.pom
Download 
https://repo1.maven.org/maven2/org/scoverage/gradle-scoverage/2.1.0/gradle-scoverage-2.1.0.pom
Download 
https://jcenter.bintray.com/com/github/jengelman/gradle/plugins/shadow/2.0.1/shadow-2.0.1.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/4.5.2.201704071617-r/org.eclipse.jgit-4.5.2.201704071617-r.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit-parent/4.5.2.201704071617-r/org.eclipse.jgit-parent-4.5.2.201704071617-r.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/4.5.2.201704071617-r/org.eclipse.jgit.ui-4.5.2.201704071617-r.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.9/jsch.agentproxy.jsch-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy/0.0.9/jsch.agentproxy-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.9/jsch.agentproxy.pageant-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.9/jsch.agentproxy.sshagent-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.9/jsch.agentproxy.usocket-jna-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.9/jsch.agentproxy.usocket-nc-0.0.9.pom
Download https://repo1.maven.org/maven2/com/jcraft/jsch/0.1.54/jsch-0.1.54.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.core/0.0.9/jsch.agentproxy.core-0.0.9.pom
Download https://jcenter.bintray.com/org/ajoberstar/grgit/1.9.3/grgit-1.9.3.jar
Download 
https://jcenter.bintray.com/com/github/ben-manes/gradle-versions-plugin/0.15.0/gradle-versions-plugin-0.15.0.jar
Download 
https://repo1.maven.org/maven2/org/scoverage/gradle-scoverage/2.1.0/gradle-scoverage-2.1.0.jar
Download 
https://jcenter.bintray.com/com/github/jengelman/gradle/plugins/shadow/2.0.1/shadow-2.0.1.jar
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/4.5.2.201704071617-r/org.eclipse.jgit-4.5.2.201704071617-r.jar
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/4.5.2.201704071617-r/org.eclipse.jgit.ui-4.5.2.201704071617-r.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.9/jsch.agentproxy.jsch-0.0.9.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.9/jsch.agentproxy.pageant-0.0.9.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.9/jsch.agentproxy.sshagent-0.0.9.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.9/jsch.agentproxy.usocket-jna-0.0.9.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.9/jsch.agentproxy.usocket-nc-0.0.9.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.core/0.0.9/jsch.agentproxy.core-0.0.9.jar
Building project 'core' with Scala version 2.11.11

FAILURE: Build failed with an exception.

* Where:
Build file '/home/Matthias/Work/Apache/kafka/build.gradle' line: 978

* What went wrong:
A problem occurred evaluating root project 'kafka'.
> Failed to apply plugin [class 
> 'com.github.jengelman.gradle.plugins.shadow.ShadowBasePlugin']
   > This version of Shadow supports Gradle 3.0+ only. Please upgrade.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 14.637 secs
➜  kafka git:(utils_improvment) ✗ gradle --version


Gradle 2.13
{code} 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5301) Improve exception handling on consumer path

2017-09-14 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16166793#comment-16166793
 ] 

Matthias J. Sax commented on KAFKA-5301:


I assigned this JIRA to myself, as we need to tackle all the exception-handling 
sub-tasks from a global point of view to improve the current state, which is a 
little patchy; it does not make sense to work on the individual JIRAs in 
isolation. As the open PR did not tackle the right issue anyway and the ticket 
was unassigned, I was hoping this is OK. Please let me know if you disagree; I 
don't want to "take it away" from anybody. Thx.

> Improve exception handling on consumer path
> ---
>
> Key: KAFKA-5301
> URL: https://issues.apache.org/jira/browse/KAFKA-5301
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 1.0.0
>
> Attachments: 5301.v1.patch
>
>
> Used in StreamsThread.java, mostly to .poll() but also to restore data.
> Used in StreamsTask.java, mostly to .pause(), .resume()
> All exceptions here are currently caught all the way up to the main running 
> loop in a broad catch(Exception e) statement in StreamThread.run().
> One main concern on the consumer path is handling deserialization errors that 
> happen before streams has even had a chance to look at the data: 
> https://issues.apache.org/jira/browse/KAFKA-5157  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5895) Update readme to reflect that Gradle 2 is no longer good enough

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16166795#comment-16166795
 ] 

ASF GitHub Bot commented on KAFKA-5895:
---

GitHub user matzew opened a pull request:

https://github.com/apache/kafka/pull/3861

KAFKA-5895 

As discussed in 
[KAFKA-5895](https://issues.apache.org/jira/browse/KAFKA-5895), this trivial PR 
is simply adding a hint that Gradle 3.0+ is now needed

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/matzew/kafka Correct_Gradle_Info

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3861.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3861


commit 17ab1c6f0bdf883ef85efb5dba9bfa469f855688
Author: Matthias Wessendorf 
Date:   2017-09-14T18:38:33Z

Adding hint that Gradle 3.0+ is now needed




> Update readme to reflect that Gradle 2 is no longer good enough
> ---
>
> Key: KAFKA-5895
> URL: https://issues.apache.org/jira/browse/KAFKA-5895
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.11.0.2
>Reporter: Matthias Weßendorf
>Priority: Trivial
>
> The README says:
> Kafka requires Gradle 2.0 or higher.
> but running with "2.13", I am getting an ERROR message, saying that 3.0+ is 
> needed:
> {code}
> > Failed to apply plugin [class 
> > 'com.github.jengelman.gradle.plugins.shadow.ShadowBasePlugin']
>> This version of Shadow supports Gradle 3.0+ only. Please upgrade.
> {code}
> Full log here:
> {code}
> ➜  kafka git:(utils_improvment) ✗ gradle 
> To honour the JVM settings for this build a new JVM will be forked. Please 
> consider using the daemon: 
> https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
> Download 
> https://jcenter.bintray.com/org/ajoberstar/grgit/1.9.3/grgit-1.9.3.pom
> Download 
> https://jcenter.bintray.com/com/github/ben-manes/gradle-versions-plugin/0.15.0/gradle-versions-plugin-0.15.0.pom
> Download 
> https://repo1.maven.org/maven2/org/scoverage/gradle-scoverage/2.1.0/gradle-scoverage-2.1.0.pom
> Download 
> https://jcenter.bintray.com/com/github/jengelman/gradle/plugins/shadow/2.0.1/shadow-2.0.1.pom
> Download 
> https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/4.5.2.201704071617-r/org.eclipse.jgit-4.5.2.201704071617-r.pom
> Download 
> https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit-parent/4.5.2.201704071617-r/org.eclipse.jgit-parent-4.5.2.201704071617-r.pom
> Download 
> https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/4.5.2.201704071617-r/org.eclipse.jgit.ui-4.5.2.201704071617-r.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.9/jsch.agentproxy.jsch-0.0.9.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy/0.0.9/jsch.agentproxy-0.0.9.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.9/jsch.agentproxy.pageant-0.0.9.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.9/jsch.agentproxy.sshagent-0.0.9.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.9/jsch.agentproxy.usocket-jna-0.0.9.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.9/jsch.agentproxy.usocket-nc-0.0.9.pom
> Download https://repo1.maven.org/maven2/com/jcraft/jsch/0.1.54/jsch-0.1.54.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.core/0.0.9/jsch.agentproxy.core-0.0.9.pom
> Download 
> https://jcenter.bintray.com/org/ajoberstar/grgit/1.9.3/grgit-1.9.3.jar
> Download 
> https://jcenter.bintray.com/com/github/ben-manes/gradle-versions-plugin/0.15.0/gradle-versions-plugin-0.15.0.jar
> Download 
> https://repo1.maven.org/maven2/org/scoverage/gradle-scoverage/2.1.0/gradle-scoverage-2.1.0.jar
> Download 
> https://jcenter.bintray.com/com/github/jengelman/gradle/plugins/shadow/2.0.1/shadow-2.0.1.jar
> Download 
> https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/4.5.2.201704071617-r/org.eclipse.jgit-4.5.2.201704071617-r.jar
> Download 
> https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/4.5.2.201704071617-r/org.eclipse.jgit.ui-4.5.2.201704071617-r.jar
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.9/jsch.agentproxy.jsch-0.0.9.jar
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.9/jsch.agentproxy.pageant-0.0.9.jar
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.9/jsch.agentproxy.sshagent-0.0.9.jar
> Download 
> h

[jira] [Updated] (KAFKA-5895) Gradle 3.0+ is needed on the build

2017-09-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-5895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Weßendorf updated KAFKA-5895:
--
Summary: Gradle 3.0+ is needed on the build  (was: Update readme to reflect 
that Gradle 2 is no longer good enough)

> Gradle 3.0+ is needed on the build
> --
>
> Key: KAFKA-5895
> URL: https://issues.apache.org/jira/browse/KAFKA-5895
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.11.0.2
>Reporter: Matthias Weßendorf
>Priority: Trivial
>
> The README says:
> Kafka requires Gradle 2.0 or higher.
> but running with "2.13", I get an ERROR message saying that 3.0+ is 
> needed:
> {code}
> > Failed to apply plugin [class 
> > 'com.github.jengelman.gradle.plugins.shadow.ShadowBasePlugin']
>> This version of Shadow supports Gradle 3.0+ only. Please upgrade.
> {code}
> Full log here:
> {code}
> ➜  kafka git:(utils_improvment) ✗ gradle 
> To honour the JVM settings for this build a new JVM will be forked. Please 
> consider using the daemon: 
> https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
> Download 
> https://jcenter.bintray.com/org/ajoberstar/grgit/1.9.3/grgit-1.9.3.pom
> Download 
> https://jcenter.bintray.com/com/github/ben-manes/gradle-versions-plugin/0.15.0/gradle-versions-plugin-0.15.0.pom
> Download 
> https://repo1.maven.org/maven2/org/scoverage/gradle-scoverage/2.1.0/gradle-scoverage-2.1.0.pom
> Download 
> https://jcenter.bintray.com/com/github/jengelman/gradle/plugins/shadow/2.0.1/shadow-2.0.1.pom
> Download 
> https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/4.5.2.201704071617-r/org.eclipse.jgit-4.5.2.201704071617-r.pom
> Download 
> https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit-parent/4.5.2.201704071617-r/org.eclipse.jgit-parent-4.5.2.201704071617-r.pom
> Download 
> https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/4.5.2.201704071617-r/org.eclipse.jgit.ui-4.5.2.201704071617-r.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.9/jsch.agentproxy.jsch-0.0.9.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy/0.0.9/jsch.agentproxy-0.0.9.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.9/jsch.agentproxy.pageant-0.0.9.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.9/jsch.agentproxy.sshagent-0.0.9.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.9/jsch.agentproxy.usocket-jna-0.0.9.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.9/jsch.agentproxy.usocket-nc-0.0.9.pom
> Download https://repo1.maven.org/maven2/com/jcraft/jsch/0.1.54/jsch-0.1.54.pom
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.core/0.0.9/jsch.agentproxy.core-0.0.9.pom
> Download 
> https://jcenter.bintray.com/org/ajoberstar/grgit/1.9.3/grgit-1.9.3.jar
> Download 
> https://jcenter.bintray.com/com/github/ben-manes/gradle-versions-plugin/0.15.0/gradle-versions-plugin-0.15.0.jar
> Download 
> https://repo1.maven.org/maven2/org/scoverage/gradle-scoverage/2.1.0/gradle-scoverage-2.1.0.jar
> Download 
> https://jcenter.bintray.com/com/github/jengelman/gradle/plugins/shadow/2.0.1/shadow-2.0.1.jar
> Download 
> https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/4.5.2.201704071617-r/org.eclipse.jgit-4.5.2.201704071617-r.jar
> Download 
> https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/4.5.2.201704071617-r/org.eclipse.jgit.ui-4.5.2.201704071617-r.jar
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.9/jsch.agentproxy.jsch-0.0.9.jar
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.9/jsch.agentproxy.pageant-0.0.9.jar
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.9/jsch.agentproxy.sshagent-0.0.9.jar
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.9/jsch.agentproxy.usocket-jna-0.0.9.jar
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.9/jsch.agentproxy.usocket-nc-0.0.9.jar
> Download 
> https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.core/0.0.9/jsch.agentproxy.core-0.0.9.jar
> Building project 'core' with Scala version 2.11.11
> FAILURE: Build failed with an exception.
> * Where:
> Build file '/home/Matthias/Work/Apache/kafka/build.gradle' line: 978
> * What went wrong:
> A problem occurred evaluating root project 'kafka'.
> > Failed to apply plugin [class 
> > 'com.github.jengelman.gradle.plugins.shadow.ShadowBasePlugin']
>> This version of Shadow supports Gradle 3.0+ only. Please upgrade.
> * Try:
> Run with --stacktrace option to get 

[jira] [Created] (KAFKA-5896) Kafka Connect task threads never interrupted

2017-09-14 Thread Nick Pillitteri (JIRA)
Nick Pillitteri created KAFKA-5896:
--

 Summary: Kafka Connect task threads never interrupted
 Key: KAFKA-5896
 URL: https://issues.apache.org/jira/browse/KAFKA-5896
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Nick Pillitteri
Priority: Minor


h2. Problem

Kafka Connect tasks associated with connectors are run in their own threads. 
When tasks are stopped or restarted, a flag is set - {{stopping}} - to indicate 
the task should stop processing records. However, if the thread the task is 
running in is blocked (waiting for a lock or performing I/O) it's possible the 
task will never stop.

I've created a connector specifically to demonstrate this issue (along with 
some more detailed instructions for reproducing the issue): 
https://github.com/smarter-travel-media/hang-connector

I believe this is an issue because it means that a single badly behaved 
connector (any connector that does I/O without timeouts) can cause the Kafka 
Connect worker to get into a state where the only solution is to restart the 
JVM.

I think, but couldn't reproduce, that this is the cause of this problem on 
Stack Overflow: 
https://stackoverflow.com/questions/43802156/inconsistent-connector-state-connectexception-task-already-exists-in-this-work

h2. Expected Result

I would expect the Worker to eventually interrupt the thread that the task is 
running in. In the past across various other libraries, this is what I've seen 
done when a thread needs to be forcibly stopped.

h2. Actual Result

In actuality, the Worker sets a {{stopping}} flag and lets the thread run 
indefinitely. It uses a timeout while waiting for the task to stop but after 
this timeout has expired it simply sets a {{cancelled}} flag. This means that 
every time a task is restarted, a new thread running the task will be created. 
Thus a task may end up with multiple instances all running in their own threads 
when there's only supposed to be a single thread.

h2. Steps to Reproduce

The problem can be replicated by using the connector available here: 
https://github.com/smarter-travel-media/hang-connector

Apologies for how involved the steps are.

I've created a patch that forcibly interrupts threads after they fail to 
shut down gracefully here: 
https://github.com/smarter-travel-media/kafka/commit/295c747a9fd82ee8b30556c89c31e0bfcce5a2c5

I've confirmed that this fixes the issue. I can add some unit tests and submit 
a PR if people agree that this is a bug and interrupting threads is the right 
fix.

Thanks!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5897) The producerId can be reset unnecessarily

2017-09-14 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-5897:
---

 Summary: The producerId can be reset unnecessarily
 Key: KAFKA-5897
 URL: https://issues.apache.org/jira/browse/KAFKA-5897
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.11.0.0, 1.0.0
Reporter: Apurva Mehta


Currently, we expire batches and reset the producer id in cases where we don't 
need to.

For instance, if a batch which has been previously sent is expired in the 
accumulator, the producerId is reset (or the transaction aborted) 
unconditionally. However, if the batch failed with certain error codes like 
{{NOT_LEADER_FOR_PARTITION}}, etc., which definitively indicate that the write 
never succeeded, we don't need to reset the producer state since the status of 
the batch is not in doubt. 

Essentially, we would like a 'reset based on failure mode' logic that would 
be a bit smarter.
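
A rough sketch of that logic; {{failBatch()}} and {{resetProducerId()}} are hypothetical helpers, only the constants come from the real {{org.apache.kafka.common.protocol.Errors}}:

{code}
void handleExpiredBatch(Errors lastError) {
    if (lastError == Errors.NOT_LEADER_FOR_PARTITION
            || lastError == Errors.NOT_ENOUGH_REPLICAS) {
        failBatch(lastError);   // the write definitively failed: status not in doubt
    } else {
        resetProducerId();      // fate of the batch unknown: reset to be safe
    }
}
{code}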



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5876) IQ should throw different exceptions for different errors

2017-09-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5876:
---
Labels: needs-kip newbie++  (was: needs-kip)

> IQ should throw different exceptions for different errors
> -
>
> Key: KAFKA-5876
> URL: https://issues.apache.org/jira/browse/KAFKA-5876
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>  Labels: needs-kip, newbie++
> Fix For: 1.0.0
>
>
> Currently, IQ only throws {{InvalidStateStoreException}} for all errors that 
> occur. However, we have different types of errors and should throw different 
> exceptions for those types.
> For example, if a store was migrated it must be rediscovered, while if a store 
> cannot be queried yet because it is still being re-created after a rebalance, 
> the user just needs to wait until store recreation is finished.
> There might be other examples, too.
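
For illustration, a minimal sketch of what callers are forced to do today: because every failure surfaces as the same {{InvalidStateStoreException}}, "store migrated" and "store still restoring" can only be handled by blind retry.

{code}
// Minimal sketch (classes from org.apache.kafka.streams and
// org.apache.kafka.streams.errors); the store name "counts" is illustrative.
static ReadOnlyKeyValueStore<String, Long> waitForStore(KafkaStreams streams)
        throws InterruptedException {
    while (true) {
        try {
            return streams.store("counts", QueryableStoreTypes.<String, Long>keyValueStore());
        } catch (InvalidStateStoreException e) {
            Thread.sleep(100);  // migrated? still restoring? the type cannot tell us
        }
    }
}
{code}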



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5784) Add a sensor for dropped records in window stores

2017-09-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5784:
---
Labels: newbie  (was: )

> Add a sensor for dropped records in window stores
> -
>
> Key: KAFKA-5784
> URL: https://issues.apache.org/jira/browse/KAFKA-5784
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: newbie
>
> Today, when a {{put(record)}} call on a windowed store does not find the 
> corresponding segment, i.e. its window has already expired, we simply 
> return {{null}} and hence silently drop the record.
> We should consider 1) adding log4j entries when it happens, and 2) adding 
> metrics (we can discuss whether it should be a processor-node-level or 
> store-level sensor) for such cases.
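
A hedged sketch of both options at the store level, using the plain {{org.apache.kafka.common.metrics}} API; the metric names and the {{segment == null}} guard are illustrative, not the actual store internals.

{code}
// Hedged sketch; imports from org.apache.kafka.common.metrics and
// org.apache.kafka.common.metrics.stats.
Metrics metrics = new Metrics();
Sensor expiredDrops = metrics.sensor("expired-window-record-drop");
expiredDrops.add(metrics.metricName("expired-window-record-drop-rate",
                                    "stream-window-store-metrics"),
                 new Rate());

// inside put(record), where today we silently return null:
if (segment == null) {                                // window already expired
    log.debug("Dropping record for expired window");  // option 1): log4j entry
    expiredDrops.record();                            // option 2): metric
    return null;
}
{code}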



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5660) Don't throw TopologyBuilderException during runtime

2017-09-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-5660:
--

Assignee: Matthias J. Sax

> Don't throw TopologyBuilderException during runtime
> ---
>
> Key: KAFKA-5660
> URL: https://issues.apache.org/jira/browse/KAFKA-5660
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>
> {{TopologyBuilderException}} is a pre-runtime exception that should only be 
> thrown before {{KafkaStreams#start()}} is called.
> However, we do throw {{TopologyBuilderException}} within
> - `SourceNodeFactory#getTopics`
> - `ProcessorContextImpl#getStateStore`
> (and maybe somewhere else: we should double check if there are other places 
> in the code like those).
> We should replace those exceptions with either {{StreamsException}} or a 
> new exception type.
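
A paraphrased sketch of the kind of replacement meant here; the lookup code is illustrative, only the exception classes (from {{org.apache.kafka.streams.errors}}) are real.

{code}
// Paraphrased sketch of ProcessorContextImpl#getStateStore, not the actual code.
StateStore store = stores.get(name);
if (store == null) {
    // before: throw new TopologyBuilderException(...)  -- a pre-runtime exception
    throw new StreamsException("Store " + name + " is not accessible from this processor");
}
{code}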



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5896) Kafka Connect task threads never interrupted

2017-09-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16166907#comment-16166907
 ] 

Ted Yu commented on KAFKA-5896:
---

Good find.

Mind sending a PR (hopefully with test showing the effectiveness).

> Kafka Connect task threads never interrupted
> 
>
> Key: KAFKA-5896
> URL: https://issues.apache.org/jira/browse/KAFKA-5896
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Nick Pillitteri
>Priority: Minor
>
> h2. Problem
> Kafka Connect tasks associated with connectors are run in their own threads. 
> When tasks are stopped or restarted, a flag is set - {{stopping}} - to 
> indicate the task should stop processing records. However, if the thread the 
> task is running in is blocked (waiting for a lock or performing I/O) it's 
> possible the task will never stop.
> I've created a connector specifically to demonstrate this issue (along with 
> some more detailed instructions for reproducing the issue): 
> https://github.com/smarter-travel-media/hang-connector
> I believe this is an issue because it means that a single badly behaved 
> connector (any connector that does I/O without timeouts) can cause the Kafka 
> Connect worker to get into a state where the only solution is to restart the 
> JVM.
> I think, but couldn't reproduce, that this is the cause of this problem on 
> Stack Overflow: 
> https://stackoverflow.com/questions/43802156/inconsistent-connector-state-connectexception-task-already-exists-in-this-work
> h2. Expected Result
> I would expect the Worker to eventually interrupt the thread that the task is 
> running in. In the past across various other libraries, this is what I've 
> seen done when a thread needs to be forcibly stopped.
> h2. Actual Result
> In actuality, the Worker sets a {{stopping}} flag and lets the thread run 
> indefinitely. It uses a timeout while waiting for the task to stop but after 
> this timeout has expired it simply sets a {{cancelled}} flag. This means that 
> every time a task is restarted, a new thread running the task will be 
> created. Thus a task may end up with multiple instances all running in their 
> own threads when there's only supposed to be a single thread.
> h2. Steps to Reproduce
> The problem can be replicated by using the connector available here: 
> https://github.com/smarter-travel-media/hang-connector
> Apologies for how involved the steps are.
> I've created a patch that forcibly interrupts threads after they fail to 
> shut down gracefully here: 
> https://github.com/smarter-travel-media/kafka/commit/295c747a9fd82ee8b30556c89c31e0bfcce5a2c5
> I've confirmed that this fixes the issue. I can add some unit tests and 
> submit a PR if people agree that this is a bug and interrupting threads is 
> the right fix.
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5660) Don't throw TopologyBuilderException during runtime

2017-09-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5660:
---
Issue Type: Sub-task  (was: Bug)
Parent: KAFKA-5156

> Don't throw TopologyBuilderException during runtime
> ---
>
> Key: KAFKA-5660
> URL: https://issues.apache.org/jira/browse/KAFKA-5660
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>
> {{TopologyBuilderException}} is a pre-runtime exception that should only be 
> thrown before {{KafkaStreams#start()}} is called.
> However, we do throw {{TopologyBuilderException}} within
> - `SourceNodeFactory#getTopics`
> - `ProcessorContextImpl#getStateStore`
> (and maybe somewhere else: we should double check if there are other places 
> in the code like those).
> We should replace those exceptions with either {{StreamsException}} or a 
> new exception type.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5765) Move merge() from StreamsBuilder to KStream

2017-09-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5765:
---
Labels: needs-kip newbie  (was: needs-kip)

> Move merge() from StreamsBuilder to KStream
> ---
>
> Key: KAFKA-5765
> URL: https://issues.apache.org/jira/browse/KAFKA-5765
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>  Labels: needs-kip, newbie
> Fix For: 1.0.0
>
>
> Merging multiple {{KStream}} is done via {{StreamsBuilder#merge()}} (formerly 
> {{KStreamBuilder#merge()}}). This is quite unnatural and should be done via 
> {{KStream#merge()}}.
> As {{StreamsBuilder}} is not released yet, this is not a backward 
> incompatible change (and KStreamBuilder is already deprecated). We still need 
> a KIP as we add a new method to a public {{KStreams}} API.
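
A short before/after sketch of the proposed API change; {{KStream#merge()}} is the proposal, not a released method.

{code}
// Today: the builder merges the streams, which reads unnaturally.
KStream<String, String> mergedToday = builder.merge(streamA, streamB);

// Proposed (subject to the KIP): merging becomes an operation on the stream itself.
KStream<String, String> mergedProposed = streamA.merge(streamB);
{code}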



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5158) Options for handling exceptions during processing

2017-09-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5158:
---
Issue Type: Sub-task  (was: New Feature)
Parent: KAFKA-5156

> Options for handling exceptions during processing
> -
>
> Key: KAFKA-5158
> URL: https://issues.apache.org/jira/browse/KAFKA-5158
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 1.0.0
>
>
> Imagine the app-level processing of a (non-corrupted) record fails (e.g. the 
> user attempted to do an RPC to an external system, and this call failed). How 
> can you process such failed records in a scalable way? For example, imagine 
> you need to implement a retry policy such as "retry with exponential 
> backoff". Here, you have the problem that 1. you can't really pause 
> processing a single record because this will pause the processing of the full 
> stream (bottleneck!) and 2. there is no straightforward way to "sort" failed 
> records based on their "next retry time".
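
One commonly discussed pattern, sketched here purely for illustration (nothing below is a built-in Streams API), is to divert failed records to a retry topic together with their next retry time, so the main stream never pauses:

{code}
// Sketch of a retry-topic pattern; callExternalRpc(), attemptOf() and
// withRetryMetadata() are hypothetical helpers.
try {
    callExternalRpc(record);
} catch (Exception e) {
    int attempt = attemptOf(record) + 1;
    long nextRetryAt = System.currentTimeMillis() + (1000L << attempt);  // exponential backoff
    // A separate consumer re-injects records once their nextRetryAt has passed.
    retryProducer.send(new ProducerRecord<>("retry-topic", record.key(),
            withRetryMetadata(record.value(), attempt, nextRetryAt)));
}
{code}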



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5158) Options for handling exceptions during processing

2017-09-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-5158:
--

Assignee: Matthias J. Sax

> Options for handling exceptions during processing
> -
>
> Key: KAFKA-5158
> URL: https://issues.apache.org/jira/browse/KAFKA-5158
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 1.0.0
>
>
> Imagine the app-level processing of a (non-corrupted) record fails (e.g. the 
> user attempted to do an RPC to an external system, and this call failed). How 
> can you process such failed records in a scalable way? For example, imagine 
> you need to implement a retry policy such as "retry with exponential 
> backoff". Here, you have the problem that 1. you can't really pause 
> processing a single record because this will pause the processing of the full 
> stream (bottleneck!) and 2. there is no straightforward way to "sort" failed 
> records based on their "next retry time".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-4932) Add UUID Serde

2017-09-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4932:
---
Labels: newbie  (was: )

> Add UUID Serde
> --
>
> Key: KAFKA-4932
> URL: https://issues.apache.org/jira/browse/KAFKA-4932
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, streams
>Reporter: Jeff Klukas
>Priority: Minor
>  Labels: needs-kip, newbie
>
> I propose adding serializers and deserializers for the java.util.UUID class.
> I have many use cases where I want to set the key of a Kafka message to be a 
> UUID. Currently, I need to turn UUIDs into strings or byte arrays and use 
> their associated Serdes, but it would be more convenient to serialize and 
> deserialize UUIDs directly.
> I'd propose that the serializer and deserializer use the 36-byte string 
> representation, calling UUID.toString and UUID.fromString. We would also wrap 
> these in a Serde and modify the streams Serdes class to include this in the 
> list of supported types.
> Optionally, we could have the deserializer support a 16-byte representation 
> and it would check the size of the input byte array to determine whether it's 
> a binary or string representation of the UUID. It's not well defined whether 
> the most significant bits or least significant go first, so this deserializer 
> would have to support only one or the other.
> Similarly, if the deserializer supported a 16-byte representation, there could 
> be two variants of the serializer, a UUIDStringSerializer and a 
> UUIDBytesSerializer.
> I would be willing to write this PR, but am looking for feedback about 
> whether there are significant concerns here around ambiguity of what the byte 
> representation of a UUID should be, or if there's desire to keep the list of 
> built-in Serdes minimal such that a PR would be unlikely to be accepted.
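
A minimal sketch of the 36-character string variant, implementing the plain {{Serializer}}/{{Deserializer}} interfaces from {{org.apache.kafka.common.serialization}}; the class names follow the ones suggested above, but nothing here is the eventual API.

{code}
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.UUID;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serializer;

public class UUIDStringSerializer implements Serializer<UUID> {
    @Override public void configure(Map<String, ?> configs, boolean isKey) { }
    @Override public byte[] serialize(String topic, UUID data) {
        // 36-character canonical form, e.g. "123e4567-e89b-12d3-a456-426655440000"
        return data == null ? null : data.toString().getBytes(StandardCharsets.UTF_8);
    }
    @Override public void close() { }
}

class UUIDStringDeserializer implements Deserializer<UUID> {
    @Override public void configure(Map<String, ?> configs, boolean isKey) { }
    @Override public UUID deserialize(String topic, byte[] data) {
        return data == null ? null : UUID.fromString(new String(data, StandardCharsets.UTF_8));
    }
    @Override public void close() { }
}
{code}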



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-4932) Add UUID Serde

2017-09-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4932:
---
Labels: needs-kip newbie  (was: newbie)

> Add UUID Serde
> --
>
> Key: KAFKA-4932
> URL: https://issues.apache.org/jira/browse/KAFKA-4932
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, streams
>Reporter: Jeff Klukas
>Priority: Minor
>  Labels: needs-kip, newbie
>
> I propose adding serializers and deserializers for the java.util.UUID class.
> I have many use cases where I want to set the key of a Kafka message to be a 
> UUID. Currently, I need to turn UUIDs into strings or byte arrays and use 
> their associated Serdes, but it would be more convenient to serialize and 
> deserialize UUIDs directly.
> I'd propose that the serializer and deserializer use the 36-byte string 
> representation, calling UUID.toString and UUID.fromString. We would also wrap 
> these in a Serde and modify the streams Serdes class to include this in the 
> list of supported types.
> Optionally, we could have the deserializer support a 16-byte representation 
> and it would check the size of the input byte array to determine whether it's 
> a binary or string representation of the UUID. It's not well defined whether 
> the most significant bits or least significant go first, so this deserializer 
> would have to support only one or the other.
> Similarly, if the deserializer supported a 16-byte representation, there could 
> be two variants of the serializer, a UUIDStringSerializer and a 
> UUIDBytesSerializer.
> I would be willing to write this PR, but am looking for feedback about 
> whether there are significant concerns here around ambiguity of what the byte 
> representation of a UUID should be, or if there's desire to keep the list of 
> built-in Serdes minimal such that a PR would be unlikely to be accepted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5407) Mirrormaker dont start after upgrade

2017-09-14 Thread Fernando Vega (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16166938#comment-16166938
 ] 

Fernando Vega commented on KAFKA-5407:
--

So I reproduced this test, and still nothing. I now have a log file with the 
debug flag enabled that I can share.
Please let me know what you guys think. I still cannot make it work.

Attached is the file

> Mirrormaker dont start after upgrade
> 
>
> Key: KAFKA-5407
> URL: https://issues.apache.org/jira/browse/KAFKA-5407
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.2.1
> Environment: Operating system
> CentOS 6.8
> HW
> Board Mfg : HP
>  Board Product : ProLiant DL380p Gen8
> CPU's x2
> Product Manufacturer  : Intel
>  Product Name  :  Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
>  Memory Type   : DDR3 SDRAM
>  SDRAM Capacity: 2048 MB
>  Total Memory: : 64GB
> Hard drive sizes and layout:
> 9 drives using jbod
> drive size 3.6TB each
>Reporter: Fernando Vega
>Priority: Critical
>
> Currently I'm upgrading the cluster from 0.8.2-beta to 0.10.2.1.
> So I followed the rolling procedure.
> Here are the config files:
> Consumer
> {noformat}
> #
> # Cluster: repl
> # Topic list(goes into command line): 
> REPL-ams1-global,REPL-atl1-global,REPL-sjc2-global,REPL-ams1-global-PN_HXIDMAP_.*,REPL-atl1-global-PN_HXIDMAP_.*,REPL-sjc2-global-PN_HXIDMAP_.*,REPL-ams1-global-PN_HXCONTEXTUALV2_.*,REPL-atl1-global-PN_HXCONTEXTUALV2_.*,REPL-sjc2-global-PN_HXCONTEXTUALV2_.*
> bootstrap.servers=app001:9092,app002:9092,app003:9092,app004:9092
> group.id=hkg1_cluster
> auto.commit.interval.ms=6
> partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor
> {noformat}
> Producer
> {noformat}
>  hkg1
> # # Producer
> # # hkg1
> bootstrap.servers=app001:9092,app002:9092,app003:9092,app004:9092
> compression.type=gzip
> acks=0
> {noformat}
> Broker
> {noformat}
> auto.leader.rebalance.enable=true
> delete.topic.enable=true
> socket.receive.buffer.bytes=1048576
> socket.send.buffer.bytes=1048576
> default.replication.factor=2
> auto.create.topics.enable=true
> num.partitions=1
> num.network.threads=8
> num.io.threads=40
> log.retention.hours=1
> log.roll.hours=1
> num.replica.fetchers=8
> zookeeper.connection.timeout.ms=3
> zookeeper.session.timeout.ms=3
> inter.broker.protocol.version=0.10.2
> log.message.format.version=0.8.2
> {noformat}
> I also tried using the stock configuration, with no luck.
> The error that I get is this:
> {noformat}
> [2017-06-07 12:24:45,476] INFO ConsumerConfig values:
>   auto.commit.interval.ms = 6
>   auto.offset.reset = latest
>   bootstrap.servers = [app454.sjc2.mytest.com:9092, 
> app455.sjc2.mytest.com:9092, app456.sjc2.mytest.com:9092, 
> app457.sjc2.mytest.com:9092, app458.sjc2.mytest.com:9092, 
> app459.sjc2.mytest.com:9092]
>   check.crcs = true
>   client.id = MirrorMaker_hkg1-1
>   connections.max.idle.ms = 54
>   enable.auto.commit = false
>   exclude.internal.topics = true
>   fetch.max.bytes = 52428800
>   fetch.max.wait.ms = 500
>   fetch.min.bytes = 1
>   group.id = MirrorMaker_hkg1
>   heartbeat.interval.ms = 3000
>   interceptor.classes = null
>   key.deserializer = class 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
>   max.partition.fetch.bytes = 1048576
>   max.poll.interval.ms = 30
>   max.poll.records = 500
>   metadata.max.age.ms = 30
>   metric.reporters = []
>   metrics.num.samples = 2
>   metrics.recording.level = INFO
>   metrics.sample.window.ms = 3
>   partition.assignment.strategy = 
> [org.apache.kafka.clients.consumer.RoundRobinAssignor]
>   receive.buffer.bytes = 65536
>   reconnect.backoff.ms = 50
>   request.timeout.ms = 305000
>   retry.backoff.ms = 100
>   sasl.jaas.config = null
>   sasl.kerberos.kinit.cmd = /usr/bin/kinit
>   sasl.kerberos.min.time.before.relogin = 6
>   sasl.kerberos.service.name = null
>   sasl.kerberos.ticket.renew.jitter = 0.05
>   sasl.kerberos.ticket.renew.window.factor = 0.8
>   sasl.mechanism = GSSAPI
>   security.protocol = PLAINTEXT
>   send.buffer.bytes = 131072
>   session.timeout.ms = 1
>   ssl.cipher.suites = null
>   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>   ssl.endpoint.identification.algorithm = null
>   ssl.key.password = null
>   ssl.keymanager.algorithm = SunX509
>   ssl.keystore.location = null
>   ssl.keystore.password = null
>   ssl.keystore.type = JKS
>   ssl.protocol = TLS
>   ssl.provider = null
>   ssl.secure.random.implementation = null
>   ssl.trustmanager.algorithm = PKIX
>   s

[jira] [Updated] (KAFKA-5407) Mirrormaker dont start after upgrade

2017-09-14 Thread Fernando Vega (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fernando Vega updated KAFKA-5407:
-
Attachment: mirrormaker-repl-sjc2-to-hkg1.log.8

> Mirrormaker dont start after upgrade
> 
>
> Key: KAFKA-5407
> URL: https://issues.apache.org/jira/browse/KAFKA-5407
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.2.1
> Environment: Operating system
> CentOS 6.8
> HW
> Board Mfg : HP
>  Board Product : ProLiant DL380p Gen8
> CPU's x2
> Product Manufacturer  : Intel
>  Product Name  :  Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
>  Memory Type   : DDR3 SDRAM
>  SDRAM Capacity: 2048 MB
>  Total Memory: : 64GB
> Hard drive sizes and layout:
> 9 drives using jbod
> drive size 3.6TB each
>Reporter: Fernando Vega
>Priority: Critical
> Attachments: mirrormaker-repl-sjc2-to-hkg1.log.8
>
>
> Currently I'm upgrading the cluster from 0.8.2-beta to 0.10.2.1.
> So I followed the rolling procedure.
> Here are the config files:
> Consumer
> {noformat}
> #
> # Cluster: repl
> # Topic list(goes into command line): 
> REPL-ams1-global,REPL-atl1-global,REPL-sjc2-global,REPL-ams1-global-PN_HXIDMAP_.*,REPL-atl1-global-PN_HXIDMAP_.*,REPL-sjc2-global-PN_HXIDMAP_.*,REPL-ams1-global-PN_HXCONTEXTUALV2_.*,REPL-atl1-global-PN_HXCONTEXTUALV2_.*,REPL-sjc2-global-PN_HXCONTEXTUALV2_.*
> bootstrap.servers=app001:9092,app002:9092,app003:9092,app004:9092
> group.id=hkg1_cluster
> auto.commit.interval.ms=6
> partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor
> {noformat}
> Producer
> {noformat}
>  hkg1
> # # Producer
> # # hkg1
> bootstrap.servers=app001:9092,app002:9092,app003:9092,app004:9092
> compression.type=gzip
> acks=0
> {noformat}
> Broker
> {noformat}
> auto.leader.rebalance.enable=true
> delete.topic.enable=true
> socket.receive.buffer.bytes=1048576
> socket.send.buffer.bytes=1048576
> default.replication.factor=2
> auto.create.topics.enable=true
> num.partitions=1
> num.network.threads=8
> num.io.threads=40
> log.retention.hours=1
> log.roll.hours=1
> num.replica.fetchers=8
> zookeeper.connection.timeout.ms=3
> zookeeper.session.timeout.ms=3
> inter.broker.protocol.version=0.10.2
> log.message.format.version=0.8.2
> {noformat}
> I also tried using the stock configuration, with no luck.
> The error that I get is this:
> {noformat}
> [2017-06-07 12:24:45,476] INFO ConsumerConfig values:
>   auto.commit.interval.ms = 6
>   auto.offset.reset = latest
>   bootstrap.servers = [app454.sjc2.mytest.com:9092, 
> app455.sjc2.mytest.com:9092, app456.sjc2.mytest.com:9092, 
> app457.sjc2.mytest.com:9092, app458.sjc2.mytest.com:9092, 
> app459.sjc2.mytest.com:9092]
>   check.crcs = true
>   client.id = MirrorMaker_hkg1-1
>   connections.max.idle.ms = 54
>   enable.auto.commit = false
>   exclude.internal.topics = true
>   fetch.max.bytes = 52428800
>   fetch.max.wait.ms = 500
>   fetch.min.bytes = 1
>   group.id = MirrorMaker_hkg1
>   heartbeat.interval.ms = 3000
>   interceptor.classes = null
>   key.deserializer = class 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
>   max.partition.fetch.bytes = 1048576
>   max.poll.interval.ms = 30
>   max.poll.records = 500
>   metadata.max.age.ms = 30
>   metric.reporters = []
>   metrics.num.samples = 2
>   metrics.recording.level = INFO
>   metrics.sample.window.ms = 3
>   partition.assignment.strategy = 
> [org.apache.kafka.clients.consumer.RoundRobinAssignor]
>   receive.buffer.bytes = 65536
>   reconnect.backoff.ms = 50
>   request.timeout.ms = 305000
>   retry.backoff.ms = 100
>   sasl.jaas.config = null
>   sasl.kerberos.kinit.cmd = /usr/bin/kinit
>   sasl.kerberos.min.time.before.relogin = 6
>   sasl.kerberos.service.name = null
>   sasl.kerberos.ticket.renew.jitter = 0.05
>   sasl.kerberos.ticket.renew.window.factor = 0.8
>   sasl.mechanism = GSSAPI
>   security.protocol = PLAINTEXT
>   send.buffer.bytes = 131072
>   session.timeout.ms = 1
>   ssl.cipher.suites = null
>   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>   ssl.endpoint.identification.algorithm = null
>   ssl.key.password = null
>   ssl.keymanager.algorithm = SunX509
>   ssl.keystore.location = null
>   ssl.keystore.password = null
>   ssl.keystore.type = JKS
>   ssl.protocol = TLS
>   ssl.provider = null
>   ssl.secure.random.implementation = null
>   ssl.trustmanager.algorithm = PKIX
>   ssl.truststore.location = null
>   ssl.truststore.password = null
>   ssl.truststore.type = JKS
>   value.deserializer = class 
> org

[jira] [Comment Edited] (KAFKA-5407) Mirrormaker dont start after upgrade

2017-09-14 Thread Fernando Vega (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16166938#comment-16166938
 ] 

Fernando Vega edited comment on KAFKA-5407 at 9/14/17 8:42 PM:
---

[~huxi_2b] [~huxili]
So I reproduced this test, and still nothing. I now have a log file with the 
debug flag enabled that I can share.
Please let me know what you guys think. I still cannot make it work.

Attached is the file


was (Author: fvegaucr):
So I reproduced this test, and still nothing. I now have a log file with the 
debug flag enabled that I can share.
Please let me know what you guys think. I still cannot make it work.

Attached is the file

> Mirrormaker dont start after upgrade
> 
>
> Key: KAFKA-5407
> URL: https://issues.apache.org/jira/browse/KAFKA-5407
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.2.1
> Environment: Operating system
> CentOS 6.8
> HW
> Board Mfg : HP
>  Board Product : ProLiant DL380p Gen8
> CPU's x2
> Product Manufacturer  : Intel
>  Product Name  :  Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
>  Memory Type   : DDR3 SDRAM
>  SDRAM Capacity: 2048 MB
>  Total Memory: : 64GB
> Hard drive sizes and layout:
> 9 drives using jbod
> drive size 3.6TB each
>Reporter: Fernando Vega
>Priority: Critical
> Attachments: mirrormaker-repl-sjc2-to-hkg1.log.8
>
>
> Currently I'm upgrading the cluster from 0.8.2-beta to 0.10.2.1.
> So I followed the rolling procedure.
> Here are the config files:
> Consumer
> {noformat}
> #
> # Cluster: repl
> # Topic list(goes into command line): 
> REPL-ams1-global,REPL-atl1-global,REPL-sjc2-global,REPL-ams1-global-PN_HXIDMAP_.*,REPL-atl1-global-PN_HXIDMAP_.*,REPL-sjc2-global-PN_HXIDMAP_.*,REPL-ams1-global-PN_HXCONTEXTUALV2_.*,REPL-atl1-global-PN_HXCONTEXTUALV2_.*,REPL-sjc2-global-PN_HXCONTEXTUALV2_.*
> bootstrap.servers=app001:9092,app002:9092,app003:9092,app004:9092
> group.id=hkg1_cluster
> auto.commit.interval.ms=6
> partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor
> {noformat}
> Producer
> {noformat}
>  hkg1
> # # Producer
> # # hkg1
> bootstrap.servers=app001:9092,app002:9092,app003:9092,app004:9092
> compression.type=gzip
> acks=0
> {noformat}
> Broker
> {noformat}
> auto.leader.rebalance.enable=true
> delete.topic.enable=true
> socket.receive.buffer.bytes=1048576
> socket.send.buffer.bytes=1048576
> default.replication.factor=2
> auto.create.topics.enable=true
> num.partitions=1
> num.network.threads=8
> num.io.threads=40
> log.retention.hours=1
> log.roll.hours=1
> num.replica.fetchers=8
> zookeeper.connection.timeout.ms=3
> zookeeper.session.timeout.ms=3
> inter.broker.protocol.version=0.10.2
> log.message.format.version=0.8.2
> {noformat}
> I also tried using the stock configuration, with no luck.
> The error that I get is this:
> {noformat}
> [2017-06-07 12:24:45,476] INFO ConsumerConfig values:
>   auto.commit.interval.ms = 6
>   auto.offset.reset = latest
>   bootstrap.servers = [app454.sjc2.mytest.com:9092, 
> app455.sjc2.mytest.com:9092, app456.sjc2.mytest.com:9092, 
> app457.sjc2.mytest.com:9092, app458.sjc2.mytest.com:9092, 
> app459.sjc2.mytest.com:9092]
>   check.crcs = true
>   client.id = MirrorMaker_hkg1-1
>   connections.max.idle.ms = 54
>   enable.auto.commit = false
>   exclude.internal.topics = true
>   fetch.max.bytes = 52428800
>   fetch.max.wait.ms = 500
>   fetch.min.bytes = 1
>   group.id = MirrorMaker_hkg1
>   heartbeat.interval.ms = 3000
>   interceptor.classes = null
>   key.deserializer = class 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
>   max.partition.fetch.bytes = 1048576
>   max.poll.interval.ms = 30
>   max.poll.records = 500
>   metadata.max.age.ms = 30
>   metric.reporters = []
>   metrics.num.samples = 2
>   metrics.recording.level = INFO
>   metrics.sample.window.ms = 3
>   partition.assignment.strategy = 
> [org.apache.kafka.clients.consumer.RoundRobinAssignor]
>   receive.buffer.bytes = 65536
>   reconnect.backoff.ms = 50
>   request.timeout.ms = 305000
>   retry.backoff.ms = 100
>   sasl.jaas.config = null
>   sasl.kerberos.kinit.cmd = /usr/bin/kinit
>   sasl.kerberos.min.time.before.relogin = 6
>   sasl.kerberos.service.name = null
>   sasl.kerberos.ticket.renew.jitter = 0.05
>   sasl.kerberos.ticket.renew.window.factor = 0.8
>   sasl.mechanism = GSSAPI
>   security.protocol = PLAINTEXT
>   send.buffer.bytes = 131072
>   session.timeout.ms = 1
>   ssl.cipher.suites = null
>   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>   ssl.endpoint.identificat

[jira] [Updated] (KAFKA-5765) Move merge() from StreamsBuilder to KStream

2017-09-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5765:
---
Affects Version/s: 1.0.0

> Move merge() from StreamsBuilder to KStream
> ---
>
> Key: KAFKA-5765
> URL: https://issues.apache.org/jira/browse/KAFKA-5765
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 1.0.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>  Labels: needs-kip, newbie
> Fix For: 1.1.0
>
>
> Merging multiple {{KStream}} is done via {{StreamsBuilder#merge()}} (formerly 
> {{KStreamBuilder#merge()}}). This is quite unnatural and should be done via 
> {{KStream#merge()}}.
> We need a KIP as we add a new method to a public {{KStreams}} API and 
> deprecate the old {{merge()}} method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5765) Move merge() from StreamsBuilder to KStream

2017-09-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5765:
---
Description: 
Merging multiple {{KStream}} is done via {{StreamsBuilder#merge()}} (formerly 
{{KStreamBuilder#merge()}}). This is quite unnatural and should be done via 
{{KStream#merge()}}.

We need a KIP as we add a new method to a public {{KStreams}} API and deprecate 
the old {{merge()}} method.

  was:
Merging multiple {{KStream}} is done via {{StreamsBuilder#merge()}} (formerly 
{{KStreamBuilder#merge()}}). This is quite unnatural and should be done via 
{{KStream#merge()}}.

As {{StreamsBuilder}} is not released yet, this is not a backward incompatible 
change (and KStreamBuilder is already deprecated). We still need a KIP as we 
add a new method to a public {{KStreams}} API.


> Move merge() from StreamsBuilder to KStream
> ---
>
> Key: KAFKA-5765
> URL: https://issues.apache.org/jira/browse/KAFKA-5765
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 1.0.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>  Labels: needs-kip, newbie
> Fix For: 1.1.0
>
>
> Merging multiple {{KStream}} is done via {{StreamsBuilder#merge()}} (formerly 
> {{KStreamBuilder#merge()}}). This is quite unnatural and should be done via 
> {{KStream#merge()}}.
> We need a KIP as we add a new method to a public {{KStreams}} API and 
> deprecate the old {{merge()}} method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5765) Move merge() from StreamsBuilder to KStream

2017-09-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5765:
---
Fix Version/s: (was: 1.0.0)
   1.1.0

> Move merge() from StreamsBuilder to KStream
> ---
>
> Key: KAFKA-5765
> URL: https://issues.apache.org/jira/browse/KAFKA-5765
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 1.0.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>  Labels: needs-kip, newbie
> Fix For: 1.1.0
>
>
> Merging multiple {{KStream}} is done via {{StreamsBuilder#merge()}} (formerly 
> {{KStreamBuilder#merge()}}). This is quite unnatural and should be done via 
> {{KStream#merge()}}.
> We need a KIP as we add a new method to a public {{KStreams}} API and 
> deprecate the old {{merge()}} method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5765) Move merge() from StreamsBuilder to KStream

2017-09-14 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-5765:
--

Assignee: (was: Matthias J. Sax)

> Move merge() from StreamsBuilder to KStream
> ---
>
> Key: KAFKA-5765
> URL: https://issues.apache.org/jira/browse/KAFKA-5765
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 1.0.0
>Reporter: Matthias J. Sax
>  Labels: needs-kip, newbie
> Fix For: 1.1.0
>
>
> Merging multiple {{KStream}} is done via {{StreamsBuilder#merge()}} (formerly 
> {{KStreamBuilder#merge()}}). This is quite unnatural and should be done via 
> {{KStream#merge()}}.
> We need a KIP as we add a new method to a public {{KStreams}} API and 
> deprecate the old {{merge()}} method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5896) Kafka Connect task threads never interrupted

2017-09-14 Thread Nick Pillitteri (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16166977#comment-16166977
 ] 

Nick Pillitteri commented on KAFKA-5896:


Sure, will do!

> Kafka Connect task threads never interrupted
> 
>
> Key: KAFKA-5896
> URL: https://issues.apache.org/jira/browse/KAFKA-5896
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Nick Pillitteri
>Priority: Minor
>
> h2. Problem
> Kafka Connect tasks associated with connectors are run in their own threads. 
> When tasks are stopped or restarted, a flag is set - {{stopping}} - to 
> indicate the task should stop processing records. However, if the thread the 
> task is running in is blocked (waiting for a lock or performing I/O) it's 
> possible the task will never stop.
> I've created a connector specifically to demonstrate this issue (along with 
> some more detailed instructions for reproducing the issue): 
> https://github.com/smarter-travel-media/hang-connector
> I believe this is an issue because it means that a single badly behaved 
> connector (any connector that does I/O without timeouts) can cause the Kafka 
> Connect worker to get into a state where the only solution is to restart the 
> JVM.
> I think, but couldn't reproduce, that this is the cause of this problem on 
> Stack Overflow: 
> https://stackoverflow.com/questions/43802156/inconsistent-connector-state-connectexception-task-already-exists-in-this-work
> h2. Expected Result
> I would expect the Worker to eventually interrupt the thread that the task is 
> running in. In the past across various other libraries, this is what I've 
> seen done when a thread needs to be forcibly stopped.
> h2. Actual Result
> In actuality, the Worker sets a {{stopping}} flag and lets the thread run 
> indefinitely. It uses a timeout while waiting for the task to stop but after 
> this timeout has expired it simply sets a {{cancelled}} flag. This means that 
> every time a task is restarted, a new thread running the task will be 
> created. Thus a task may end up with multiple instances all running in their 
> own threads when there's only supposed to be a single thread.
> h2. Steps to Reproduce
> The problem can be replicated by using the connector available here: 
> https://github.com/smarter-travel-media/hang-connector
> Apologies for how involved the steps are.
> I've created a patch that forcibly interrupts threads after they fail to 
> shut down gracefully here: 
> https://github.com/smarter-travel-media/kafka/commit/295c747a9fd82ee8b30556c89c31e0bfcce5a2c5
> I've confirmed that this fixes the issue. I can add some unit tests and 
> submit a PR if people agree that this is a bug and interrupting threads is 
> the right fix.
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5301) Improve exception handling on consumer path

2017-09-14 Thread Richard Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16166989#comment-16166989
 ] 

Richard Yu commented on KAFKA-5301:
---

[~mjsax] You could take this issue. Due to the slight ambiguity surrounding 
which classes needed to be changed, I misunderstood the issue.
It makes me think that I am unsuited for this type of problem.



> Improve exception handling on consumer path
> ---
>
> Key: KAFKA-5301
> URL: https://issues.apache.org/jira/browse/KAFKA-5301
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 1.0.0
>
> Attachments: 5301.v1.patch
>
>
> Used in StreamThread.java, mostly for .poll() but also to restore data.
> Used in StreamTask.java, mostly for .pause() and .resume()
> All exceptions here are currently caught all the way up to the main running 
> loop in a broad catch(Exception e) statement in StreamThread.run().
> One main concern on the consumer path is handling deserialization errors that 
> happen before streams has even had a chance to look at the data: 
> https://issues.apache.org/jira/browse/KAFKA-5157  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5896) Kafka Connect task threads never interrupted

2017-09-14 Thread Randall Hauch (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16167008#comment-16167008
 ] 

Randall Hauch commented on KAFKA-5896:
--

[~56quarters], can you please tag @rhauch on the PR? Thanks!

> Kafka Connect task threads never interrupted
> 
>
> Key: KAFKA-5896
> URL: https://issues.apache.org/jira/browse/KAFKA-5896
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Nick Pillitteri
>Priority: Minor
>
> h2. Problem
> Kafka Connect tasks associated with connectors are run in their own threads. 
> When tasks are stopped or restarted, a flag is set - {{stopping}} - to 
> indicate the task should stop processing records. However, if the thread the 
> task is running in is blocked (waiting for a lock or performing I/O) it's 
> possible the task will never stop.
> I've created a connector specifically to demonstrate this issue (along with 
> some more detailed instructions for reproducing the issue): 
> https://github.com/smarter-travel-media/hang-connector
> I believe this is an issue because it means that a single badly behaved 
> connector (any connector that does I/O without timeouts) can cause the Kafka 
> Connect worker to get into a state where the only solution is to restart the 
> JVM.
> I think, but couldn't reproduce, that this is the cause of this problem on 
> Stack Overflow: 
> https://stackoverflow.com/questions/43802156/inconsistent-connector-state-connectexception-task-already-exists-in-this-work
> h2. Expected Result
> I would expect the Worker to eventually interrupt the thread that the task is 
> running in. In the past across various other libraries, this is what I've 
> seen done when a thread needs to be forcibly stopped.
> h2. Actual Result
> In actuality, the Worker sets a {{stopping}} flag and lets the thread run 
> indefinitely. It uses a timeout while waiting for the task to stop but after 
> this timeout has expired it simply sets a {{cancelled}} flag. This means that 
> every time a task is restarted, a new thread running the task will be 
> created. Thus a task may end up with multiple instances all running in their 
> own threads when there's only supposed to be a single thread.
> h2. Steps to Reproduce
> The problem can be replicated by using the connector available here: 
> https://github.com/smarter-travel-media/hang-connector
> Apologies for how involved the steps are.
> I've created a patch that forcibly interrupts threads after they fail to 
> shut down gracefully here: 
> https://github.com/smarter-travel-media/kafka/commit/295c747a9fd82ee8b30556c89c31e0bfcce5a2c5
> I've confirmed that this fixes the issue. I can add some unit tests and 
> submit a PR if people agree that this is a bug and interrupting threads is 
> the right fix.
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5898) Client consume more than one time a message

2017-09-14 Thread Cyril WIRTH (JIRA)
Cyril WIRTH created KAFKA-5898:
--

 Summary: Client consume more than one time a message
 Key: KAFKA-5898
 URL: https://issues.apache.org/jira/browse/KAFKA-5898
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.11.0.0
Reporter: Cyril WIRTH
Priority: Critical
 Attachments: AbstractKafkaConfig.java, consommation.log, 
MyKafkaClient.java, MyProducer.java, production.log

Hello,
I'm using an autocommit consumer. I get some messages, and my 
SESSION_TIMEOUT_MS_CONFIG is smaller than the total message processing time. In 
that case, the consumer consumes a message more than once (twice in the log 
file). How is that possible? As you can see in the log file, the "msg1" message, 
which is in partition 0, is consumed at the same time in threads KAFKA-0 and 
KAFKA-1; however, KAFKA-1 is assigned to partition 1, not 0.

KAFKA-0,assigned : [vertx_logger-0]
KAFKA-2,assigned : [vertx_logger-2]
KAFKA-1,assigned : [vertx_logger-1]

Can you help me?
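
For what it's worth, the described shape matches a consumer whose per-batch processing time exceeds the consumer's progress timeout, which makes the group coordinator rebalance and re-deliver the uncommitted records to another thread; on 0.10.1+ clients that bound is {{max.poll.interval.ms}} rather than the session timeout. A sketch of the failing shape ({{process()}} is a placeholder):

{code}
// Sketch of the failing shape under autocommit.
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        // If handling the whole batch takes longer than max.poll.interval.ms
        // (session.timeout.ms on older clients), the group rebalances and the
        // uncommitted records are re-delivered to another member -> duplicates.
        process(record);
    }
}
{code}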



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-2376) Add Kafka Connect metrics

2017-09-14 Thread Randall Hauch (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch reassigned KAFKA-2376:


Assignee: Randall Hauch

> Add Kafka Connect metrics
> -
>
> Key: KAFKA-2376
> URL: https://issues.apache.org/jira/browse/KAFKA-2376
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Randall Hauch
>  Labels: needs-kip
>
> Kafka Connect needs good metrics for monitoring since that will be the 
> primary insight into the health of connectors as they copy data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5900) Create Connect metrics common to source and sink tasks

2017-09-14 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-5900:


 Summary: Create Connect metrics common to source and sink tasks
 Key: KAFKA-5900
 URL: https://issues.apache.org/jira/browse/KAFKA-5900
 Project: Kafka
  Issue Type: Sub-task
  Components: KafkaConnect
Affects Versions: 0.11.0.0
Reporter: Randall Hauch
Assignee: Randall Hauch
Priority: Critical
 Fix For: 1.0.0


See KAFKA-2376 for parent task and 
[KIP-196|https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework]
 for the details on the metrics. This subtask is to create the "Common Task 
Metrics".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5903) Create Connect metrics for workers

2017-09-14 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-5903:


 Summary: Create Connect metrics for workers
 Key: KAFKA-5903
 URL: https://issues.apache.org/jira/browse/KAFKA-5903
 Project: Kafka
  Issue Type: Sub-task
  Components: KafkaConnect
Affects Versions: 0.11.0.0
Reporter: Randall Hauch
Assignee: Randall Hauch
Priority: Critical
 Fix For: 1.0.0


See KAFKA-2376 for parent task and 
[KIP-196|https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework]
 for the details on the metrics. This subtask is to create the "Worker Metrics".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5901) Create Connect metrics for source tasks

2017-09-14 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-5901:


 Summary: Create Connect metrics for source tasks
 Key: KAFKA-5901
 URL: https://issues.apache.org/jira/browse/KAFKA-5901
 Project: Kafka
  Issue Type: Sub-task
  Components: KafkaConnect
Affects Versions: 0.11.0.0
Reporter: Randall Hauch
Assignee: Randall Hauch
Priority: Critical
 Fix For: 1.0.0


See KAFKA-2376 for parent task and 
[KIP-196|https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework]
 for the details on the metrics. This subtask is to create the "Source Task 
Metrics".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5902) Create Connect metrics for sink tasks

2017-09-14 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-5902:


 Summary: Create Connect metrics for sink tasks
 Key: KAFKA-5902
 URL: https://issues.apache.org/jira/browse/KAFKA-5902
 Project: Kafka
  Issue Type: Sub-task
  Components: KafkaConnect
Affects Versions: 0.11.0.0
Reporter: Randall Hauch
Assignee: Randall Hauch
Priority: Critical
 Fix For: 1.0.0


See KAFKA-2376 for parent task and 
[KIP-196|https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework]
 for the details on the metrics. This subtask is to create the "Sink Task 
Metrics".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5904) Create Connect metrics for worker rebalances

2017-09-14 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-5904:


 Summary: Create Connect metrics for worker rebalances
 Key: KAFKA-5904
 URL: https://issues.apache.org/jira/browse/KAFKA-5904
 Project: Kafka
  Issue Type: Sub-task
  Components: KafkaConnect
Affects Versions: 0.11.0.0
Reporter: Randall Hauch
Assignee: Randall Hauch
Priority: Critical
 Fix For: 1.0.0


See KAFKA-2376 for parent task and 
[KIP-196|https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework]
 for the details on the metrics. This subtask is to create the "Worker 
Rebalance Metrics".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5899) Create Connect metrics for connectors

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16167077#comment-16167077
 ] 

ASF GitHub Bot commented on KAFKA-5899:
---

GitHub user rhauch opened a pull request:

https://github.com/apache/kafka/pull/3864

KAFKA-5899 Added Connect metrics for connectors

Added metrics for each connector using Kafka’s existing `Metrics` 
framework. Since this is the first of several changes to add several groups of 
metrics, this change starts by adding a very simple `ConnectMetrics` object 
that is owned by each worker and that makes it easy to define multiple groups 
of metrics, called `ConnectMetricGroup` objects. Each metric group maps to a 
JMX MBean, and each metric within the group maps to an MBean attribute.

Future PRs will build upon this simple pattern to add metrics for source 
and sink tasks, workers, and worker rebalances.
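
A minimal sketch (ours, not code from the PR) of how a metric group built on 
Kafka's existing `Metrics` framework maps stats to MBean attributes; the group, 
sensor, and metric names below are illustrative, only the `Metrics` API itself 
is real:

{code:java}
import java.util.Collections;
import java.util.Map;

import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Rate;
import org.apache.kafka.common.metrics.stats.Total;

public class ConnectorMetricGroupSketch {
    public static void main(String[] args) {
        Metrics metrics = new Metrics();
        // Tags identify the connector; one (group, tags) pair maps to one JMX MBean.
        Map<String, String> tags = Collections.singletonMap("connector", "my-connector");
        Sensor sensor = metrics.sensor("connector-records");
        // Each stat registered on the sensor becomes an attribute of that MBean.
        sensor.add(metrics.metricName("record-rate", "connector-metrics",
                "Average records processed per second", tags), new Rate());
        sensor.add(metrics.metricName("record-total", "connector-metrics",
                "Total records processed", tags), new Total());
        sensor.record(42.0); // record a batch of 42 records
        metrics.close();
    }
}
{code}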

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rhauch/kafka kafka-5899

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3864.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3864


commit 02da9b0ebaa9df70c0dfb34799d3a08b051b9570
Author: Randall Hauch 
Date:   2017-09-13T14:48:14Z

KAFKA-5899 Added Connect metrics for connectors

Added metrics for each connector using Kafka’s existing `Metrics` 
framework. Since this is the first of several changes to add several groups of 
metrics, this change starts by adding a very simple `ConnectMetrics` object 
that is owned by each worker and that makes it easy to define multiple groups 
of metrics, called `ConnectMetricGroup` objects. Each metric group maps to a 
JMX MBean, and each metric within the group maps to an MBean attribute.

Future PRs will build upon this simple pattern to add metrics for source 
and sink tasks, workers, and worker rebalances.




> Create Connect metrics for connectors
> -
>
> Key: KAFKA-5899
> URL: https://issues.apache.org/jira/browse/KAFKA-5899
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Critical
> Fix For: 1.0.0
>
>
> See KAFKA-2376 for parent task and 
> [KIP-196|https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework]
>  for the details on the metrics. This subtask is to create the "Connect 
> Metrics", and a basic framework for easily adding other Connect metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5899) Create Connect metrics for connectors

2017-09-14 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-5899:


 Summary: Create Connect metrics for connectors
 Key: KAFKA-5899
 URL: https://issues.apache.org/jira/browse/KAFKA-5899
 Project: Kafka
  Issue Type: Sub-task
  Components: KafkaConnect
Affects Versions: 0.11.0.0
Reporter: Randall Hauch
Assignee: Randall Hauch
Priority: Critical
 Fix For: 1.0.0


See KAFKA-2376 for parent task and 
[KIP-196|https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework]
 for the details on the metrics. This subtask is to create the "Connect 
Metrics", and a basic framework for easily adding other Connect metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-5301) Improve exception handling on consumer path

2017-09-14 Thread Richard Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16166989#comment-16166989
 ] 

Richard Yu edited comment on KAFKA-5301 at 9/14/17 10:48 PM:
-

[~mjsax] You could take this issue. 
I think I am unsuited for this type of problem.




was (Author: yohan123):
[~mjsax] You could take this issue. Due to the slight ambiguity surrounding 
which classes needed to be changed, I misunderstood the issue.
It makes me think that I am unsuited for this type of problem.



> Improve exception handling on consumer path
> ---
>
> Key: KAFKA-5301
> URL: https://issues.apache.org/jira/browse/KAFKA-5301
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 1.0.0
>
> Attachments: 5301.v1.patch
>
>
> Used in StreamsThread.java, mostly to .poll() but also to restore data.
> Used in StreamsTask.java, mostly to .pause(), .resume()
> All exceptions here are currently caught all the way up to the main running 
> loop in a broad catch(Exception e) statement in StreamThread.run().
> One main concern on the consumer path is handling deserialization errors that 
> happen before streams has even had a chance to look at the data: 
> https://issues.apache.org/jira/browse/KAFKA-5157  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-5301) Improve exception handling on consumer path

2017-09-14 Thread Richard Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16166989#comment-16166989
 ] 

Richard Yu edited comment on KAFKA-5301 at 9/14/17 10:54 PM:
-

[~mjsax] You could take this issue. 




was (Author: yohan123):
[~mjsax] You could take this issue. 
I think I am unsuited for this type of problem.



> Improve exception handling on consumer path
> ---
>
> Key: KAFKA-5301
> URL: https://issues.apache.org/jira/browse/KAFKA-5301
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 1.0.0
>
> Attachments: 5301.v1.patch
>
>
> Used in StreamsThread.java, mostly to .poll() but also to restore data.
> Used in StreamsTask.java, mostly to .pause(), .resume()
> All exceptions here are currently caught all the way up to the main running 
> loop in a broad catch(Exception e) statement in StreamThread.run().
> One main concern on the consumer path is handling deserialization errors that 
> happen before streams has even had a chance to look at the data: 
> https://issues.apache.org/jira/browse/KAFKA-5157  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5301) Improve exception handling on consumer path

2017-09-14 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16167091#comment-16167091
 ] 

Matthias J. Sax commented on KAFKA-5301:


Thanks for your understanding, [~Yohan123]!

> Improve exception handling on consumer path
> ---
>
> Key: KAFKA-5301
> URL: https://issues.apache.org/jira/browse/KAFKA-5301
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 1.0.0
>
> Attachments: 5301.v1.patch
>
>
> Used in StreamsThread.java, mostly to .poll() but also to restore data.
> Used in StreamsTask.java, mostly to .pause(), .resume()
> All exceptions here are currently caught all the way up to the main running 
> loop in a broad catch(Exception e) statement in StreamThread.run().
> One main concern on the consumer path is handling deserialization errors that 
> happen before streams has even had a chance to look at the data: 
> https://issues.apache.org/jira/browse/KAFKA-5157  
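
As a sketch of what "handling deserialization errors before streams has even 
had a chance to look at the data" ends up looking like for applications 
(assuming the exception-handler config being introduced for 1.0.0 via 
KAFKA-5157 / KIP-161; the application id and bootstrap servers are placeholders):

{code:java}
import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;

public class DeserializationHandlingSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Log and skip corrupt records instead of letting the exception bubble
        // up to the broad catch(Exception e) in StreamThread.run():
        props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
                LogAndContinueExceptionHandler.class);
        // ... build the topology and start KafkaStreams with these props ...
    }
}
{code}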



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5738) Add cumulative count attribute for all Kafka rate metrics

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16167096#comment-16167096
 ] 

ASF GitHub Bot commented on KAFKA-5738:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3686


> Add cumulative count attribute for all Kafka rate metrics
> -
>
> Key: KAFKA-5738
> URL: https://issues.apache.org/jira/browse/KAFKA-5738
> Project: Kafka
>  Issue Type: New Feature
>  Components: metrics
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 1.0.0
>
>
> Add cumulative count attribute to all Kafka rate metrics to make downstream 
> processing simpler, more accurate, and more flexible.
>  
> See 
> [KIP-187|https://cwiki.apache.org/confluence/display/KAFKA/KIP-187+-+Add+cumulative+count+metric+for+all+Kafka+rate+metrics]
>  for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5494) Idempotent producer should not require max.in.flight.requests.per.connection=1

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16167101#comment-16167101
 ] 

ASF GitHub Bot commented on KAFKA-5494:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3743


> Idempotent producer should not require max.in.flight.requests.per.connection=1
> --
>
> Key: KAFKA-5494
> URL: https://issues.apache.org/jira/browse/KAFKA-5494
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.11.0.0
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
>  Labels: exactly-once
> Fix For: 1.0.0
>
>
> Currently, the idempotent producer (and hence transactional producer) 
> requires max.in.flight.requests.per.connection=1.
> This was due to simplifying the implementation on the client and server. With 
> some additional work, we can satisfy the idempotent guarantees even with any 
> number of in-flight requests. The changes on the client can be summarized as 
> follows:
>  
> # We increment sequence numbers when batches are drained.
> # If for some reason a batch fails with a retriable error, we know that all 
> future batches would fail with an out-of-order sequence exception. 
> # As such, the client should treat some OutOfOrderSequence errors as 
> retriable. In particular, we should maintain the 'last acked sequence'. If 
> the batch succeeding the last acked sequence gets an OutOfOrderSequence, that 
> is a fatal error. If a future batch fails with OutOfOrderSequence, it should 
> be re-enqueued.
> # With the changes above, the producer queues should become priority 
> queues ordered by the sequence numbers. 
> # The partition is not ready unless the front of the queue has the next 
> expected sequence.
> With the changes above, we would get the benefits of multiple inflights in 
> normal cases. When there are failures, we automatically constrain to a single 
> inflight until we get back in sequence. 
> With multiple inflights, we now have the possibility of getting duplicates 
> for batches other than the last appended batch. In order to return the record 
> metadata (including offset) of the duplicates inside the log, we would 
> require a log scan at the tail to get the metadata at the tail. This can be 
> optimized by caching the metadata for the last 'n' batches. For instance, if 
> the default max.inflight is 5, we could cache the record metadata of the last 
> 5 batches, and fall back to a scan if the duplicate is not within those 5. 
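
A minimal sketch of the client-side configuration this change is meant to 
enable (the broker address and topic are placeholders; today this combination 
is rejected, since idempotence still forces a single in-flight request):

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentPipeliningSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        // With this JIRA, idempotence would no longer require a single in-flight request:
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}
{code}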



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5905) Remove PrincipalBuilder and DefaultPrincipalBuilder

2017-09-14 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5905:
--

 Summary: Remove PrincipalBuilder and DefaultPrincipalBuilder
 Key: KAFKA-5905
 URL: https://issues.apache.org/jira/browse/KAFKA-5905
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
 Fix For: 2.0.0


These classes were deprecated after KIP-189: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-189%3A+Improve+principal+builder+interface+and+add+support+for+SASL,
 which is part of 1.0.0.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5898) Client consumes a message more than once

2017-09-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16167127#comment-16167127
 ] 

Ted Yu commented on KAFKA-5898:
---

Just curious: 0.11.0.1 has been released.

Is this reproducible with 0.11.0.1?

> Client consumes a message more than once
> ---
>
> Key: KAFKA-5898
> URL: https://issues.apache.org/jira/browse/KAFKA-5898
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
>Reporter: Cyril WIRTH
>Priority: Critical
> Attachments: AbstractKafkaConfig.java, consommation.log, 
> MyKafkaClient.java, MyProducer.java, production.log
>
>
> Hello,
> I'm using an auto-commit consumer. I receive some messages, and my 
> SESSION_TIMEOUT_MS_CONFIG is smaller than the total message processing time. 
> In that case, the consumer consumes a message more than once (twice in the 
> log file). How is that possible? As you can see in the log file, "msg1", 
> which is in partition 0, is consumed at the same time in threads KAFKA-0 and 
> KAFKA-1; however, KAFKA-1 is assigned to partition 1, not 0.
> KAFKA-0,assigned : [vertx_logger-0]
> KAFKA-2,assigned : [vertx_logger-2]
> KAFKA-1,assigned : [vertx_logger-1]
> Can you help me?
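
For reference, a sketch of the consumer settings at play here (values are 
placeholders): when processing a poll's worth of records takes longer than the 
consumer's poll interval, the broker considers the consumer dead, reassigns its 
partitions, and the uncommitted records are redelivered to another consumer, 
which is consistent with the duplicate "msg1" above.

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class SlowProcessingConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 10_000);
        // Raise the poll interval to cover the total processing time...
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300_000);
        // ...or fetch fewer records per poll so each batch finishes in time.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);
    }
}
{code}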



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (KAFKA-5792) Transient failure in KafkaAdminClientTest.testHandleTimeout

2017-09-14 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reopened KAFKA-5792:


Still seeing this: 
https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/2021/tests.

> Transient failure in KafkaAdminClientTest.testHandleTimeout
> ---
>
> Key: KAFKA-5792
> URL: https://issues.apache.org/jira/browse/KAFKA-5792
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Colin P. McCabe
>  Labels: transient-unit-test-failure
> Fix For: 1.0.0
>
>
> The {{KafkaAdminClientTest.testHandleTimeout}} test occasionally fails with 
> the following:
> {noformat}
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node 
> assignment.
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:213)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClientTest.testHandleTimeout(KafkaAdminClientTest.java:356)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting 
> for a node assignment.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5896) Kafka Connect task threads never interrupted

2017-09-14 Thread Nick Pillitteri (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16167174#comment-16167174
 ] 

Nick Pillitteri commented on KAFKA-5896:


[~rhauch] sure!

> Kafka Connect task threads never interrupted
> 
>
> Key: KAFKA-5896
> URL: https://issues.apache.org/jira/browse/KAFKA-5896
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Nick Pillitteri
>Priority: Minor
>
> h2. Problem
> Kafka Connect tasks associated with connectors are run in their own threads. 
> When tasks are stopped or restarted, a flag is set - {{stopping}} - to 
> indicate the task should stop processing records. However, if the thread the 
> task is running in is blocked (waiting for a lock or performing I/O) it's 
> possible the task will never stop.
> I've created a connector specifically to demonstrate this issue (along with 
> some more detailed instructions for reproducing the issue): 
> https://github.com/smarter-travel-media/hang-connector
> I believe this is an issue because it means that a single badly behaved 
> connector (any connector that does I/O without timeouts) can cause the Kafka 
> Connect worker to get into a state where the only solution is to restart the 
> JVM.
> I think, but couldn't reproduce, that this is the cause of this problem on 
> Stack Overflow: 
> https://stackoverflow.com/questions/43802156/inconsistent-connector-state-connectexception-task-already-exists-in-this-work
> h2. Expected Result
> I would expect the Worker to eventually interrupt the thread that the task is 
> running in. In the past across various other libraries, this is what I've 
> seen done when a thread needs to be forcibly stopped.
> h2. Actual Result
> In actuality, the Worker sets a {{stopping}} flag and lets the thread run 
> indefinitely. It uses a timeout while waiting for the task to stop but after 
> this timeout has expired it simply sets a {{cancelled}} flag. This means that 
> every time a task is restarted, a new thread running the task will be 
> created. Thus a task may end up with multiple instances all running in their 
> own threads when there's only supposed to be a single thread.
> h2. Steps to Reproduce
> The problem can be replicated by using the connector available here: 
> https://github.com/smarter-travel-media/hang-connector
> Apologies for how involved the steps are.
> I've created a patch that forcibly interrupts threads after they fail to 
> gracefully shutdown here: 
> https://github.com/smarter-travel-media/kafka/commit/295c747a9fd82ee8b30556c89c31e0bfcce5a2c5
> I've confirmed that this fixes the issue. I can add some unit tests and 
> submit a PR if people agree that this is a bug and interrupting threads is 
> the right fix.
> Thanks!
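
A minimal sketch of the "interrupt after the graceful-shutdown timeout" idea 
from the linked patch; the class and method names here are illustrative, not 
the actual Worker/task API:

{code:java}
public class TaskRunnerSketch {
    private final Thread taskThread;
    private volatile boolean stopping; // the cooperative flag the task loop polls

    public TaskRunnerSketch(Runnable task) {
        this.taskThread = new Thread(task, "connector-task");
    }

    public boolean isStopping() {
        return stopping;
    }

    public void start() {
        taskThread.start();
    }

    public void stop(long timeoutMs) throws InterruptedException {
        stopping = true;                 // what Connect does today
        taskThread.join(timeoutMs);      // wait for a graceful shutdown
        if (taskThread.isAlive()) {
            taskThread.interrupt();      // the proposed fix: unblock stuck I/O or lock waits
        }
    }
}
{code}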



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5822) Consistent logging of topic partitions

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16167175#comment-16167175
 ] 

ASF GitHub Bot commented on KAFKA-5822:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3778


> Consistent logging of topic partitions
> --
>
> Key: KAFKA-5822
> URL: https://issues.apache.org/jira/browse/KAFKA-5822
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>  Labels: newbie
>
> In some cases partitions are logged as "[topic,partition]" and in others as 
> "topic-partition." It would be nice to standardize to make searching easier.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5822) Consistent logging of topic partitions

2017-09-14 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5822.

   Resolution: Fixed
Fix Version/s: 1.0.0

> Consistent logging of topic partitions
> --
>
> Key: KAFKA-5822
> URL: https://issues.apache.org/jira/browse/KAFKA-5822
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>  Labels: newbie
> Fix For: 1.0.0
>
>
> In some cases partitions are logged as "[topic,partition]" and in others as 
> "topic-partition." It would be nice to standardize to make searching easier.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5793) Tighten up situations where OutOfOrderSequence may be returned

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16167186#comment-16167186
 ] 

ASF GitHub Bot commented on KAFKA-5793:
---

GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/3865

KAFKA-5793: Tighten up the semantics of the OutOfOrderSequenceException

*WIP : Don't review yet, still to add tests*

Description of the solution can be found here: 
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Exactly+Once+-+Solving+the+problem+of+spurious+OutOfOrderSequence+errors



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka 
KAFKA-5793-tighten-up-out-of-order-sequence-v2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3865.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3865


commit fb44876987bd2f75c900b187fdc755da3f85114f
Author: Apurva Mehta 
Date:   2017-09-15T01:01:58Z

Initial commit of the client and server code, with minimal tests




> Tighten up situations where OutOfOrderSequence may be returned
> --
>
> Key: KAFKA-5793
> URL: https://issues.apache.org/jira/browse/KAFKA-5793
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.11.0.0
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
> Fix For: 1.0.0
>
>
> Details of the problem are provided here: 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Exactly+Once+-+Solving+the+problem+of+spurious+OutOfOrderSequence+errors
> A quick summary follows:
> In the discussion of KIP-185: Make exactly once in order delivery per 
> partition the default producer setting, the following point regarding the 
> OutOfOrderSequenceException was raised:
> 1. The OutOfOrderSequenceException indicates that there has been data loss on 
> the broker, i.e. a previously acknowledged message no longer exists. For the 
> most part, this should only occur in rare situations (simultaneous power 
> outages, multiple disk losses, software bugs resulting in data corruption, etc.).
> 2. However, there is another perfectly normal scenario where data is removed: 
> in particular, data could be deleted because it is old and crosses the 
> retention threshold.
> Hence, if a producer remains inactive for longer than a topic's retention 
> period, we could get an OutOfOrderSequence which is a false positive: the 
> data is removed through valid processes, and this isn't an error.
> 3. We would like to eliminate the possibility of getting spurious 
> OutOfOrderSequenceExceptions – when you get it, it should always mean data 
> loss and should be taken very seriously. 
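
A sketch of what this means for application code (illustrative only): once the 
spurious cases are eliminated, a send callback can safely treat this exception 
as fatal data loss.

{code:java}
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.common.errors.OutOfOrderSequenceException;

public class FatalSequenceErrorSketch {
    // Callback is a functional interface, so a lambda works here.
    static final Callback CALLBACK = (metadata, exception) -> {
        if (exception instanceof OutOfOrderSequenceException) {
            // After this change, this should always mean acknowledged data was
            // lost: close the producer and alert, rather than retrying.
            System.err.println("Acknowledged data lost: " + exception);
        }
    };
}
{code}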



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5874) Incorrect command line handling

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16167241#comment-16167241
 ] 

ASF GitHub Bot commented on KAFKA-5874:
---

GitHub user huxihx opened a pull request:

https://github.com/apache/kafka/pull/3866

KAFKA-5874: TopicCommand should check at least one parameter is given...

When altering topics, TopicCommand should ensure that at least one of the 
`partitions`, `config`, or `delete-config` parameters is specified.
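
A sketch (in Java, though TopicCommand itself is Scala) of the rule the PR 
adds; the method name is hypothetical:

{code:java}
public class AlterTopicValidationSketch {
    static void ensureAlterHasWork(boolean hasPartitions, boolean hasConfig,
                                   boolean hasDeleteConfig) {
        if (!hasPartitions && !hasConfig && !hasDeleteConfig) {
            throw new IllegalArgumentException(
                    "--alter requires at least one of --partitions, --config or --delete-config");
        }
    }

    public static void main(String[] args) {
        ensureAlterHasWork(true, false, false);  // fine: --partitions given
        ensureAlterHasWork(false, false, false); // throws: nothing to alter
    }
}
{code}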

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/huxihx/kafka KAFKA-5874

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3866.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3866


commit 33bfbc741aeff38bfc5956df5a8cf8c9bcf51038
Author: huxihx 
Date:   2017-09-15T02:13:33Z

KAFKA-5874: TopicCommand should check at least one parameter is specified 
when altering topics




> Incorrect command line handling
> ---
>
> Key: KAFKA-5874
> URL: https://issues.apache.org/jira/browse/KAFKA-5874
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.10.2.0, 0.11.0.0
> Environment: Probably unrelated, but Windows 10 in this case
>Reporter: Viliam Durina
>Priority: Minor
>
> Extra parameters on the command line are silently ignored. This leads to 
> confusion. For example, I ran this command:
> {{kafka-topics.bat --alter --topic my_topic partitions 3 --zookeeper 
> localhost}}
> The seemingly innocuous command took about 3 seconds and output nothing. It 
> took me a while to realize I had missed "--" before "partitions", after which 
> I got a confirmation on output.
> Erroneous command lines should produce error output.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5874) Incorrect command line handling

2017-09-14 Thread huxihx (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-5874:
-

Assignee: huxihx

> Incorrect command line handling
> ---
>
> Key: KAFKA-5874
> URL: https://issues.apache.org/jira/browse/KAFKA-5874
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.10.2.0, 0.11.0.0
> Environment: Probably unrelated, but Windows 10 in this case
>Reporter: Viliam Durina
>Assignee: huxihx
>Priority: Minor
>
> Extra parameters on the command line are silently ignored. This leads to 
> confusion. For example, I ran this command:
> {{kafka-topics.bat --alter --topic my_topic partitions 3 --zookeeper 
> localhost}}
> The seemingly innocuous command took about 3 seconds and output nothing. It 
> took me a while to realize I had missed "--" before "partitions", after which 
> I got a confirmation on output.
> Erroneous command lines should produce error output.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5805) The console consumer crashes the broker on Windows

2017-09-14 Thread huxihx (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16167247#comment-16167247
 ] 

huxihx commented on KAFKA-5805:
---

Are you running the Kafka broker on a 32-bit Windows platform?

> The console consumer crashes the broker on Windows
> --
>
> Key: KAFKA-5805
> URL: https://issues.apache.org/jira/browse/KAFKA-5805
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
> Environment: Windows 7 x86 || Windows 10 x64
> jre1.8.0_144
> jdk1.8.0_144
>Reporter: Aleksandar
>Priority: Critical
>
> I was just following the Quick Start guide at 
> http://kafka.apache.org/documentation.html#quickstart
> Started ZooKeeper server
> Started Kafka server
> Created test topic
> Published a message under the test topic
> In another terminal window I ran kafka-console-consumer.bat, and that crashed 
> the broker with the following exception:
> java.io.IOException: Map failed
>   at sun.nio.ch.FileChannelImpl.map(Unknown Source)
>   at kafka.log.AbstractIndex.(AbstractIndex.scala:63)
>   at kafka.log.OffsetIndex.(OffsetIndex.scala:52)
>   at kafka.log.LogSegment.(LogSegment.scala:77)
>   at kafka.log.Log.loadSegments(Log.scala:385)
>   at kafka.log.Log.(Log.scala:179)
>   at kafka.log.Log$.apply(Log.scala:1580)
>   at kafka.log.LogManager$$anonfun$createLog$1.apply(LogManager.scala:417)
>   at kafka.log.LogManager$$anonfun$createLog$1.apply(LogManager.scala:412)
>   at scala.Option.getOrElse(Option.scala:121)
>   at kafka.log.LogManager.createLog(LogManager.scala:412)
>   at 
> kafka.cluster.Partition$$anonfun$getOrCreateReplica$1.apply(Partition.scala:122)
>   at 
> kafka.cluster.Partition$$anonfun$getOrCreateReplica$1.apply(Partition.scala:119)
>   at kafka.utils.Pool.getAndMaybePut(Pool.scala:70)
>   at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:118)
>   at 
> kafka.cluster.Partition$$anonfun$3$$anonfun$5.apply(Partition.scala:179)
>   at 
> kafka.cluster.Partition$$anonfun$3$$anonfun$5.apply(Partition.scala:179)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at kafka.cluster.Partition$$anonfun$3.apply(Partition.scala:179)
>   at kafka.cluster.Partition$$anonfun$3.apply(Partition.scala:173)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
>   at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:221)
>   at kafka.cluster.Partition.makeLeader(Partition.scala:173)
>   at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:929)
>   at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:928)
>   at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:130)
>   at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:130)
>   at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
>   at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>   at scala.collection.mutable.HashMap.foreach(HashMap.scala:130)
>   at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:928)
>   at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:873)
>   at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:168)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:101)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:66)
>   at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.OutOfMemoryError: Map failed
>   at sun.nio.ch.FileChannelImpl.map0(Native Method)
>   ... 43 more
> [2017-08-29 18:36:37,110] INFO [Group Metadata Manager on Broker 0]: Removed 
> 0 expired offsets in 0 milliseconds. 
> (kafka.coordinator.group.GroupMetadataManager)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5301) Improve exception handling on consumer path

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16167252#comment-16167252
 ] 

ASF GitHub Bot commented on KAFKA-5301:
---

Github user ConcurrencyPractitioner closed the pull request at:

https://github.com/apache/kafka/pull/3842


> Improve exception handling on consumer path
> ---
>
> Key: KAFKA-5301
> URL: https://issues.apache.org/jira/browse/KAFKA-5301
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 1.0.0
>
> Attachments: 5301.v1.patch
>
>
> Used in StreamsThread.java, mostly to .poll() but also to restore data.
> Used in StreamsTask.java, mostly to .pause(), .resume()
> All exceptions here are currently caught all the way up to the main running 
> loop in a broad catch(Exception e) statement in StreamThread.run().
> One main concern on the consumer path is handling deserialization errors that 
> happen before streams has even had a chance to look at the data: 
> https://issues.apache.org/jira/browse/KAFKA-5157  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5765) Move merge() from StreamsBuilder to KStream

2017-09-14 Thread Richard Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16167264#comment-16167264
 ] 

Richard Yu commented on KAFKA-5765:
---

Could I work on this?

Thanks.

> Move merge() from StreamsBuilder to KStream
> ---
>
> Key: KAFKA-5765
> URL: https://issues.apache.org/jira/browse/KAFKA-5765
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 1.0.0
>Reporter: Matthias J. Sax
>  Labels: needs-kip, newbie
> Fix For: 1.1.0
>
>
> Merging multiple {{KStream}}s is done via {{StreamsBuilder#merge()}} 
> (formerly {{KStreamBuilder#merge()}}). This is quite unnatural and should be 
> done via {{KStream#merge()}}.
> We need a KIP as we add a new method to a public {{KStreams}} API and 
> deprecate the old {{merge()}} method.
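
A sketch of the before/after shape of the API (topic names are placeholders; 
the instance method is the proposal, not something that exists yet):

{code:java}
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class MergeApiSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> a = builder.stream("topic-a");
        KStream<String, String> b = builder.stream("topic-b");
        // Today the merge lives on the builder (formerly KStreamBuilder#merge()).
        // Proposed, more natural form, subject to the KIP:
        // KStream<String, String> merged = a.merge(b);
    }
}
{code}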



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5407) MirrorMaker doesn't start after upgrade

2017-09-14 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16167363#comment-16167363
 ] 

Manikumar commented on KAFKA-5407:
--

[~fvegaucr] Can you upload the broker logs from the time of the error? 

> MirrorMaker doesn't start after upgrade
> 
>
> Key: KAFKA-5407
> URL: https://issues.apache.org/jira/browse/KAFKA-5407
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.2.1
> Environment: Operating system
> CentOS 6.8
> HW
> Board Mfg : HP
>  Board Product : ProLiant DL380p Gen8
> CPU's x2
> Product Manufacturer  : Intel
>  Product Name  :  Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
>  Memory Type   : DDR3 SDRAM
>  SDRAM Capacity: 2048 MB
>  Total Memory: : 64GB
> Hardrives size and layout:
> 9 drives using jbod
> drive size 3.6TB each
>Reporter: Fernando Vega
>Priority: Critical
> Attachments: mirrormaker-repl-sjc2-to-hkg1.log.8
>
>
> Currently I'm upgrading the cluster from 0.8.2-beta to 0.10.2.1, so I 
> followed the rolling upgrade procedure.
> Here are the config files:
> Consumer
> {noformat}
> #
> # Cluster: repl
> # Topic list(goes into command line): 
> REPL-ams1-global,REPL-atl1-global,REPL-sjc2-global,REPL-ams1-global-PN_HXIDMAP_.*,REPL-atl1-global-PN_HXIDMAP_.*,REPL-sjc2-global-PN_HXIDMAP_.*,REPL-ams1-global-PN_HXCONTEXTUALV2_.*,REPL-atl1-global-PN_HXCONTEXTUALV2_.*,REPL-sjc2-global-PN_HXCONTEXTUALV2_.*
> bootstrap.servers=app001:9092,app002:9092,app003:9092,app004:9092
> group.id=hkg1_cluster
> auto.commit.interval.ms=6
> partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor
> {noformat}
> Producer
> {noformat}
>  hkg1
> # # Producer
> # # hkg1
> bootstrap.servers=app001:9092,app002:9092,app003:9092,app004:9092
> compression.type=gzip
> acks=0
> {noformat}
> Broker
> {noformat}
> auto.leader.rebalance.enable=true
> delete.topic.enable=true
> socket.receive.buffer.bytes=1048576
> socket.send.buffer.bytes=1048576
> default.replication.factor=2
> auto.create.topics.enable=true
> num.partitions=1
> num.network.threads=8
> num.io.threads=40
> log.retention.hours=1
> log.roll.hours=1
> num.replica.fetchers=8
> zookeeper.connection.timeout.ms=3
> zookeeper.session.timeout.ms=3
> inter.broker.protocol.version=0.10.2
> log.message.format.version=0.8.2
> {noformat}
> I also tried using the stock configuration, with no luck.
> The error that I get is this:
> {noformat}
> 2017-06-07 12:24:45,476] INFO ConsumerConfig values:
>   auto.commit.interval.ms = 6
>   auto.offset.reset = latest
>   bootstrap.servers = [app454.sjc2.mytest.com:9092, 
> app455.sjc2.mytest.com:9092, app456.sjc2.mytest.com:9092, 
> app457.sjc2.mytest.com:9092, app458.sjc2.mytest.com:9092, 
> app459.sjc2.mytest.com:9092]
>   check.crcs = true
>   client.id = MirrorMaker_hkg1-1
>   connections.max.idle.ms = 54
>   enable.auto.commit = false
>   exclude.internal.topics = true
>   fetch.max.bytes = 52428800
>   fetch.max.wait.ms = 500
>   fetch.min.bytes = 1
>   group.id = MirrorMaker_hkg1
>   heartbeat.interval.ms = 3000
>   interceptor.classes = null
>   key.deserializer = class 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
>   max.partition.fetch.bytes = 1048576
>   max.poll.interval.ms = 30
>   max.poll.records = 500
>   metadata.max.age.ms = 30
>   metric.reporters = []
>   metrics.num.samples = 2
>   metrics.recording.level = INFO
>   metrics.sample.window.ms = 3
>   partition.assignment.strategy = 
> [org.apache.kafka.clients.consumer.RoundRobinAssignor]
>   receive.buffer.bytes = 65536
>   reconnect.backoff.ms = 50
>   request.timeout.ms = 305000
>   retry.backoff.ms = 100
>   sasl.jaas.config = null
>   sasl.kerberos.kinit.cmd = /usr/bin/kinit
>   sasl.kerberos.min.time.before.relogin = 6
>   sasl.kerberos.service.name = null
>   sasl.kerberos.ticket.renew.jitter = 0.05
>   sasl.kerberos.ticket.renew.window.factor = 0.8
>   sasl.mechanism = GSSAPI
>   security.protocol = PLAINTEXT
>   send.buffer.bytes = 131072
>   session.timeout.ms = 1
>   ssl.cipher.suites = null
>   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>   ssl.endpoint.identification.algorithm = null
>   ssl.key.password = null
>   ssl.keymanager.algorithm = SunX509
>   ssl.keystore.location = null
>   ssl.keystore.password = null
>   ssl.keystore.type = JKS
>   ssl.protocol = TLS
>   ssl.provider = null
>   ssl.secure.random.implementation = null
>   ssl.trustmanager.algorithm = PKIX
>   ssl.truststore.location = null
>   ssl.truststore.password = null
>   ssl.tru

[jira] [Created] (KAFKA-5906) Change metric.reporters configuration key to metrics.reporters to be consistent

2017-09-14 Thread Kevin Lu (JIRA)
Kevin Lu created KAFKA-5906:
---

 Summary: Change metric.reporters configuration key to 
metrics.reporters to be consistent
 Key: KAFKA-5906
 URL: https://issues.apache.org/jira/browse/KAFKA-5906
 Project: Kafka
  Issue Type: Improvement
  Components: config, metrics
Reporter: Kevin Lu
Priority: Minor


The "metric.reporters" configuration key should be consistent with the actual 
classes. Clients have a MetricsReporter.class while the broker has a 
KafkaMetricsReporter.class. 

We have seen quite a few people configure this field incorrectly by setting it 
as "metrics.reporters".

The configuration key could be renamed to "metrics.reporters" to match the 
classes, or the classes can be renamed to MetricReporter.class and 
KafkaMetricReporter.class.
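
A quick sketch of the correct usage today (the reporter class name is 
hypothetical):

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerConfig;

public class MetricReportersKeySketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The client constant resolves to "metric.reporters" (singular "metric"),
        // even though the interface is org.apache.kafka.common.metrics.MetricsReporter.
        props.put(ProducerConfig.METRIC_REPORTER_CLASSES_CONFIG,
                "org.example.MyMetricsReporter");
        // A "metrics.reporters" (plural) key would be silently ignored.
    }
}
{code}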




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5906) Change metric.reporters configuration key to metrics.reporters to be consistent

2017-09-14 Thread Kevin Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Lu updated KAFKA-5906:

Description: 
The "metric.reporters" configuration key should be consistent with the actual 
classes. Clients have a MetricsReporter.class while the broker has a 
KafkaMetricsReporter.class. 

We have seen quite a few people configure this field incorrectly by setting it 
as "metrics.reporters".

The configuration key could be renamed to "metrics.reporters" to match the 
classes, or the classes can be renamed to MetricReporter.class and 
KafkaMetricReporter.class.

The broker configuration description for "metric.reporters" also mentions 
MetricReporter, but the actual interface to implement is KafkaMetricsReporter. 

  was:
The "metric.reporters" configuration key should be consistent with the actual 
classes. Clients have a MetricsReporter.class while the broker has a 
KafkaMetricsReporter.class. 

We have seen quite a few people configure this field incorrectly by setting it 
as "metrics.reporters".

The configuration key could be renamed to "metrics.reporters" to match the 
classes, or the classes can be renamed to MetricReporter.class and 
KafkaMetricReporter.class.



> Change metric.reporters configuration key to metrics.reporters to be 
> consistent
> ---
>
> Key: KAFKA-5906
> URL: https://issues.apache.org/jira/browse/KAFKA-5906
> Project: Kafka
>  Issue Type: Improvement
>  Components: config, metrics
>Reporter: Kevin Lu
>Priority: Minor
>  Labels: usability
>
> The "metric.reporters" configuration key should be consistent with the actual 
> classes. Clients have a MetricsReporter.class while the broker has a 
> KafkaMetricsReporter.class. 
> We have seen quite a few people configure this field incorrectly by setting 
> it as "metrics.reporters".
> The configuration key could be renamed to "metrics.reporters" to match the 
> classes, or the classes can be renamed to MetricReporter.class and 
> KafkaMetricReporter.class.
> The broker configuration description for "metric.reporters" also mentions 
> MetricReporter, but the actual interface to implement is 
> KafkaMetricsReporter. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5906) Change metric.reporters configuration key to metrics.reporters to be consistent

2017-09-14 Thread Kevin Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Lu updated KAFKA-5906:

Description: 
The "metric.reporters" configuration key should be consistent with the actual 
classes. Clients have a MetricsReporter.class while the broker has a 
KafkaMetricsReporter.class. 

We have seen quite a few people configure this field incorrectly by setting it 
as "metrics.reporters".

The configuration key could be renamed to "metrics.reporters" to match the 
classes, or the classes can be renamed to MetricReporter.class and 
KafkaMetricReporter.class.

The broker configuration description for "metric.reporters" also mentions 
MetricReporter, but the actual interface to implement is KafkaMetricsReporter. 

There also seems to be a discrepancy with "MetricReporter" in the description 
as the class name is actually "MetricsReporter". 
https://github.com/apache/kafka/pull/3867

  was:
The "metric.reporters" configuration key should be consistent with the actual 
classes. Clients have a MetricsReporter.class while the broker has a 
KafkaMetricsReporter.class. 

We have seen quite a few people configure this field incorrectly by setting it 
as "metrics.reporters".

The configuration key could be renamed to "metrics.reporters" to match the 
classes, or the classes can be renamed to MetricReporter.class and 
KafkaMetricReporter.class.

The broker configuration description for "metric.reporters" also mentions 
MetricReporter, but the actual interface to implement is KafkaMetricsReporter. 


> Change metric.reporters configuration key to metrics.reporters to be 
> consistent
> ---
>
> Key: KAFKA-5906
> URL: https://issues.apache.org/jira/browse/KAFKA-5906
> Project: Kafka
>  Issue Type: Improvement
>  Components: config, metrics
>Reporter: Kevin Lu
>Priority: Minor
>  Labels: usability
>
> The "metric.reporters" configuration key should be consistent with the actual 
> classes. Clients have a MetricsReporter.class while the broker has a 
> KafkaMetricsReporter.class. 
> We have seen quite a few people configure this field incorrectly by setting 
> it as "metrics.reporters".
> The configuration key could be renamed to "metrics.reporters" to match the 
> classes, or the classes can be renamed to MetricReporter.class and 
> KafkaMetricReporter.class.
> The broker configuration description for "metric.reporters" also mentions 
> MetricReporter, but the actual interface to implement is 
> KafkaMetricsReporter. 
> There also seems to be a discrepancy with "MetricReporter" in the description 
> as the class name is actually "MetricsReporter". 
> https://github.com/apache/kafka/pull/3867



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5906) Change metric.reporters configuration key to metrics.reporters to be consistent

2017-09-14 Thread Kevin Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Lu updated KAFKA-5906:

Description: 
The "metric.reporters" configuration key should be consistent with the actual 
classes. Clients have a MetricsReporter.class while the broker has a 
KafkaMetricsReporter.class. 

We have seen quite a few people configure this field incorrectly by setting it 
as "metrics.reporters".

The configuration key could be renamed to "metrics.reporters" to match the 
classes, or the classes can be renamed to MetricReporter.class and 
KafkaMetricReporter.class.

The broker configuration description for "metric.reporters" also mentions 
MetricReporter, but KafkaMetricsReporter is the actual interface to implement. 

There also seems to be a discrepancy with "MetricReporter" in the description 
as the class name is actually "MetricsReporter". 
https://github.com/apache/kafka/pull/3867

  was:
The "metric.reporters" configuration key should be consistent with the actual 
classes. Clients have a MetricsReporter.class while the broker has a 
KafkaMetricsReporter.class. 

We have seen quite a few people configure this field incorrectly by setting it 
as "metrics.reporters".

The configuration key could be renamed to "metrics.reporters" to match the 
classes, or the classes can be renamed to MetricReporter.class and 
KafkaMetricReporter.class.

The broker configuration description for "metric.reporters" also mentions 
MetricReporter, but the actual interface to implement is KafkaMetricsReporter. 

There also seems to be a discrepancy with "MetricReporter" in the description 
as the class name is actually "MetricsReporter". 
https://github.com/apache/kafka/pull/3867


> Change metric.reporters configuration key to metrics.reporters to be 
> consistent
> ---
>
> Key: KAFKA-5906
> URL: https://issues.apache.org/jira/browse/KAFKA-5906
> Project: Kafka
>  Issue Type: Improvement
>  Components: config, metrics
>Reporter: Kevin Lu
>Priority: Minor
>  Labels: usability
>
> The "metric.reporters" configuration key should be consistent with the actual 
> classes. Clients have a MetricsReporter.class while the broker has a 
> KafkaMetricsReporter.class. 
> We have seen quite a few people configure this field incorrectly by setting 
> it as "metrics.reporters".
> The configuration key could be renamed to "metrics.reporters" to match the 
> classes, or the classes can be renamed to MetricReporter.class and 
> KafkaMetricReporter.class.
> The broker configuration description for "metric.reporters" also mentions 
> MetricReporter, but KafkaMetricsReporter is the actual interface to 
> implement. 
> There also seems to be a discrepancy with "MetricReporter" in the description 
> as the class name is actually "MetricsReporter". 
> https://github.com/apache/kafka/pull/3867



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)