[jira] [Commented] (KAFKA-5818) KafkaStreams state transitions not correct

2017-09-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151516#comment-16151516
 ] 

ASF GitHub Bot commented on KAFKA-5818:
---

GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/3779

KAFKA-5818: KafkaStreams state transitions not correct



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka 
kafka-5818-kafkaStreams-state-transition-01101

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3779.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3779


commit 50ae6bcfd36dd9aa98f808c476dfdeb9fd8655e3
Author: Matthias J. Sax 
Date:   2017-09-02T14:49:12Z

KAFKA-5818: KafkaStreams state transitions not correct




> KafkaStreams state transitions not correct
> --
>
> Key: KAFKA-5818
> URL: https://issues.apache.org/jira/browse/KAFKA-5818
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1, 1.0.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.1, 1.0.0
>
>
> There is a race condition revealed by failing test 
> {{KafkaStreamsTest#testCannotStartTwice}}. It fails with:
> {noformat}
> java.lang.Exception: Unexpected exception, 
> expected<java.lang.IllegalStateException> but 
> was<java.lang.IllegalThreadStateException>
> Caused by: java.lang.IllegalThreadStateException
>   at java.lang.Thread.start(Thread.java:705)
>   at org.apache.kafka.streams.KafkaStreams.start(KafkaStreams.java:590)
>   at 
> org.apache.kafka.streams.KafkaStreamsTest.testCannotStartTwice(KafkaStreamsTest.java:251)
> {noformat}
> The race condition is as follows:
> 1) The test calls {{KafkaStreams#start()}} for the first time and the state 
> transitions from CREATED -> RUNNING.
> 2) The first poll triggers a rebalance and {{StreamThread}} puts {{KafkaStreams}} 
> into state REBALANCING.
> 3) Before REBALANCING completes, the main test thread calls 
> {{KafkaStreams#start()}} again. As the current state is REBALANCING, the 
> transition to RUNNING is valid and {{start()}} does not fail with 
> {{IllegalStateException}} but resumes. When it tries to start the internal 
> stream threads, we get {{IllegalThreadStateException}} as the threads are already 
> running.
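
A minimal sketch, not the actual KafkaStreams code, of the guard this fix calls 
for: start() must only succeed from the CREATED state, so a second call fails 
fast with IllegalStateException instead of reaching Thread.start() twice.

{code:java}
public class StateGuardSketch {
    enum State { CREATED, REBALANCING, RUNNING }

    private State state = State.CREATED;
    private final Thread worker = new Thread(() -> { /* stream processing */ });

    public synchronized void start() {
        // Only CREATED -> RUNNING is a legal transition for start();
        // REBALANCING must not be accepted as a starting state.
        if (state != State.CREATED) {
            throw new IllegalStateException("Cannot start again, state is " + state);
        }
        state = State.RUNNING;
        worker.start(); // reachable at most once
    }

    public static void main(String[] args) {
        StateGuardSketch streams = new StateGuardSketch();
        streams.start();
        try {
            streams.start(); // the race in step 3 now fails predictably
        } catch (IllegalStateException expected) {
            System.out.println("Caught: " + expected.getMessage());
        }
    }
}
{code}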



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-3131) Inappropriate logging level for SSL Problem

2017-08-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147122#comment-16147122
 ] 

ASF GitHub Bot commented on KAFKA-3131:
---

GitHub user omkreddy opened a pull request:

https://github.com/apache/kafka/pull/3758

KAFKA-3131: enable error level for SSLException logs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/omkreddy/kafka KAFKA-3131

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3758.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3758


commit e471714320b2303388dfa5d14d8202788d5c8fc9
Author: Manikumar Reddy 
Date:   2017-08-30T11:22:38Z

KAFKA-3131: enable error level for SSLException logs




> Inappropriate logging level for SSL Problem
> ---
>
> Key: KAFKA-3131
> URL: https://issues.apache.org/jira/browse/KAFKA-3131
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Jake Robb
>Assignee: Sriharsha Chintalapani
>Priority: Minor
> Attachments: kafka-ssl-error-debug-log.txt
>
>
> I didn't have my truststore set up correctly. The Kafka producer waited until 
> the connection timed out (60 seconds in my case) and then threw this 
> exception:
> {code}
> Exception in thread "main" java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
> after 60000 ms.
>   at 
> org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:706)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:453)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:339)
> {code}
> I changed my log level to DEBUG and found this, less than two seconds after 
> startup:
> {code}
> [DEBUG] @ 2016-01-22 10:10:34,095 
> [User: ; Server: ; Client: ; URL: ; ChangeGroup: ]
>  org.apache.kafka.common.network.Selector  - Connection with kafka02/10.0.0.2 
> disconnected 
> javax.net.ssl.SSLHandshakeException: General SSLEngine problem
>   at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1364)
>   at 
> sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:529)
>   at 
> sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1194)
>   at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1166)
>   at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshakeWrap(SslTransportLayer.java:377)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:242)
>   at 
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:68)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:281)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
>   at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1708)
>   at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:303)
>   at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:295)
>   at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1369)
>   at 
> sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:156)
>   at sun.security.ssl.Handshaker.processLoop(Handshaker.java:925)
>   at sun.security.ssl.Handshaker$1.run(Handshaker.java:865)
>   at sun.security.ssl.Handshaker$1.run(Handshaker.java:862)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1302)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:335)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:413)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:269)
>   ... 6 more
> Caused by: sun.security.validator.ValidatorException: PKIX path building 
> failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to 
> find valid certification path to requested target
>   at 

[jira] [Commented] (KAFKA-5804) ChangeLoggingWindowBytesStore needs to retain duplicates when writing to the log

2017-08-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147175#comment-16147175
 ] 

ASF GitHub Bot commented on KAFKA-5804:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3754


> ChangeLoggingWindowBytesStore needs to retain duplicates when writing to the 
> log
> 
>
> Key: KAFKA-5804
> URL: https://issues.apache.org/jira/browse/KAFKA-5804
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 1.0.0
>
>
> The {{ChangeLoggingWindowBytesStore}} needs to have the same duplicate 
> retaining logic as {{RocksDBWindowStore}} otherwise data loss may occur when 
> performing windowed joins. 
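
For context, a hedged sketch of the duplicate-retention idea (class and key 
scheme are illustrative, not the actual store code): when duplicates must be 
retained, each write gets a unique sequence number folded into the store key, 
so two values for the same windowed key do not overwrite each other.

{code:java}
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DuplicateRetainingStoreSketch {
    private final Map<String, String> store = new LinkedHashMap<>();
    private final boolean retainDuplicates;
    private int seqnum = 0;

    DuplicateRetainingStoreSketch(final boolean retainDuplicates) {
        this.retainDuplicates = retainDuplicates;
    }

    void put(final String windowedKey, final String value) {
        // Without the seqnum suffix, a second put for the same windowed key
        // silently replaces the first: the data loss KAFKA-5804 describes.
        final String storeKey = retainDuplicates ? windowedKey + "." + (seqnum++) : windowedKey;
        store.put(storeKey, value);
    }

    List<String> fetch(final String windowedKey) {
        final List<String> result = new ArrayList<>();
        for (final Map.Entry<String, String> e : store.entrySet()) {
            if (e.getKey().equals(windowedKey) || e.getKey().startsWith(windowedKey + ".")) {
                result.add(e.getValue());
            }
        }
        return result;
    }

    public static void main(final String[] args) {
        final DuplicateRetainingStoreSketch s = new DuplicateRetainingStoreSketch(true);
        s.put("k@window1", "a");
        s.put("k@window1", "b");
        System.out.println(s.fetch("k@window1")); // prints [a, b]
    }
}
{code}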



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5804) ChangeLoggingWindowBytesStore needs to retain duplicates when writing to the log

2017-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16145467#comment-16145467
 ] 

ASF GitHub Bot commented on KAFKA-5804:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3754

KAFKA-5804: retain duplicates in ChangeLoggingWindowBytesStore

`ChangeLoggingWindowBytesStore` needs to have the same `retainDuplicates` 
functionality as `RocksDBWindowStore` else data could be lost upon 
failover/restoration.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka hotfix-changelog-window-store

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3754.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3754


commit d63358a67a98d2fbd53ea47f9b7d54c5dd65b937
Author: Damian Guy 
Date:   2017-08-29T15:18:31Z

retain duplicates in ChangeLoggingWindowBytesStore




> ChangeLoggingWindowBytesStore needs to retain duplicates when writing to the 
> log
> 
>
> Key: KAFKA-5804
> URL: https://issues.apache.org/jira/browse/KAFKA-5804
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 1.0.0
>
>
> The {{ChangeLoggingWindowBytesStore}} needs to have the same duplicate 
> retaining logic as {{RocksDBWindowStore}} otherwise data loss may occur when 
> performing windowed joins. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5379) ProcessorContext.appConfigs() should return parsed/validated values

2017-08-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16149150#comment-16149150
 ] 

ASF GitHub Bot commented on KAFKA-5379:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3757


> ProcessorContext.appConfigs() should return parsed/validated values
> ---
>
> Key: KAFKA-5379
> URL: https://issues.apache.org/jira/browse/KAFKA-5379
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
>Reporter: Tommy Becker
>Assignee: Tommy Becker
>Priority: Minor
> Fix For: 1.0.0
>
>
> As part of KAFKA-5334, it was decided that the current behavior of 
> {{ProcessorContext.appConfigs()}} is sub-optimal in that it returns the 
> original unparsed config values. Alternatively, the parsed values could be 
> returned which would allow callers to know what they are getting as well 
> avoid duplicating type conversions (e.g. className -> class).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5816) Add Produced class and new to and through overloads to KStream

2017-08-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16149303#comment-16149303
 ] 

ASF GitHub Bot commented on KAFKA-5816:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3770

KAFKA-5816: add Produced class, KStream#to(topic, Produced), and 
KStream#through(topic, Produced)

Add the `Produced` class and `KStream` overloads that use it:
`KStream#to(String, Produced)`
`KStream#through(String, Produced)`
Deprecate all other to and through methods except the single param methods 
that take a topic param

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-5652-produced

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3770.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3770


commit b50c2c9f2ab4d0a7968889426ccd64a52147c73f
Author: Damian Guy 
Date:   2017-08-31T17:30:19Z

add Produced class, KStream#to(topic, Produced), and KStream#through(topic, 
Produced)




> Add Produced class and new to and through overloads to KStream
> --
>
> Key: KAFKA-5816
> URL: https://issues.apache.org/jira/browse/KAFKA-5816
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
>
> Add the {{Produced}} and {{KStream}} overloads that use it:
> {{KStream#to(String, Produced)}}
> {{KStream#through(String, Produced)}}
> Deprecate all other {{to}} and {{through}} methods except the single param 
> methods that take a {{topic}} param
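
A hedged usage sketch of the new overloads (topic names are illustrative; 
`Produced.with` is the KIP-182 factory method):

{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class ProducedUsageSketch {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        final KStream<String, Long> stream = builder.stream("input-topic");

        // Serdes travel in a single Produced config object instead of
        // separate keySerde/valueSerde positional parameters.
        stream.through("intermediate-topic", Produced.with(Serdes.String(), Serdes.Long()))
              .to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));
    }
}
{code}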



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5817) Add Serialized class and KStream groupBy and groupByKey overloads

2017-08-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16149394#comment-16149394
 ] 

ASF GitHub Bot commented on KAFKA-5817:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3772

KAFKA-5817: Add Serialized class and overloads to KStream#groupBy and 
KStream#groupByKey

Part of KIP-182
- Add the `Serialized` class
- implement overloads of `KStream#groupByKey` and `KStream#groupBy`
- deprecate existing methods that take more than the default arguments

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-5817

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3772.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3772


commit 2ccec406738a91ffbd091632221299d794e8d035
Author: Damian Guy 
Date:   2017-08-31T18:24:10Z

Add Serialized class and overloads to KStream#groupBy and KStream#groupByKey




> Add Serialized class and KStream groupBy and groupByKey overloads
> -
>
> Key: KAFKA-5817
> URL: https://issues.apache.org/jira/browse/KAFKA-5817
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
>
> Add the following classes and methods to {{KStream}}
> {{KGroupedStream<K, V> groupByKey(final Serialized<K, V> serialized)}}
>  
> {{<KR> KGroupedStream<KR, V> groupBy(final KeyValueMapper<? super K, ? super 
> V, KR> selector, final Serialized<KR, V> serialized)}}
> {code}
> public class Serialized<K, V> {
>  
> public static <K, V> Serialized<K, V> with(final Serde<K> keySerde, final 
> Serde<V> valueSerde)
>  
> public Serialized<K, V> withKeySerde(final Serde<K> keySerde)
>  
> public Serialized<K, V> withValueSerde(final Serde<V> valueSerde)
> }
> {code}
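
A hedged usage sketch of these methods (topic name and mapper are illustrative):

{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KGroupedStream;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Serialized;

public class SerializedUsageSketch {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        final KStream<String, String> stream = builder.stream("input-topic");

        // groupByKey keeps the key; Serialized carries both serdes.
        final KGroupedStream<String, String> byKey =
                stream.groupByKey(Serialized.with(Serdes.String(), Serdes.String()));

        // groupBy re-keys the stream; Serialized describes the new key/value serdes.
        final KGroupedStream<String, String> byValue =
                stream.groupBy((key, value) -> value,
                               Serialized.with(Serdes.String(), Serdes.String()));
    }
}
{code}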



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5806) Fix transient unit test failure in trogdor coordinator shutdown

2017-08-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16149542#comment-16149542
 ] 

ASF GitHub Bot commented on KAFKA-5806:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3755


> Fix transient unit test failure in trogdor coordinator shutdown
> ---
>
> Key: KAFKA-5806
> URL: https://issues.apache.org/jira/browse/KAFKA-5806
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 1.0.0
>
>
> Fix transient unit test failure in trogdor coordinator shutdown



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5818) KafkaStreams state transitions not correct

2017-08-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16149726#comment-16149726
 ] 

ASF GitHub Bot commented on KAFKA-5818:
---

GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/3775

KAFKA-5818: KafkaStreams state transitions not correct

- need to check that state is CREATED at startup
- some minor test cleanup

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka 
kafka-5818-kafkaStreams-state-transition

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3775.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3775


commit 0ab4a2a15bf895ea7fdbfc476897dafa0717a1ae
Author: Matthias J. Sax 
Date:   2017-08-31T22:38:48Z

KAFKA-5818: KafkaStreams state transitions not correct




> KafkaStreams state transitions not correct
> --
>
> Key: KAFKA-5818
> URL: https://issues.apache.org/jira/browse/KAFKA-5818
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1, 1.0.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>
> There is a race condition revealed by failing test 
> {{KafkaStreamsTest#testCannotStartTwice}}. It fails with:
> {noformat}
> java.lang.Exception: Unexpected exception, 
> expected<java.lang.IllegalStateException> but 
> was<java.lang.IllegalThreadStateException>
> Caused by: java.lang.IllegalThreadStateException
>   at java.lang.Thread.start(Thread.java:705)
>   at org.apache.kafka.streams.KafkaStreams.start(KafkaStreams.java:590)
>   at 
> org.apache.kafka.streams.KafkaStreamsTest.testCannotStartTwice(KafkaStreamsTest.java:251)
> {noformat}
> The race condition is as follows:
> 1) The test calls {{KafkaStreams#start()}} for the first time and the state 
> transitions from CREATED -> RUNNING.
> 2) The first poll triggers a rebalance and {{StreamThread}} puts {{KafkaStreams}} 
> into state REBALANCING.
> 3) Before REBALANCING completes, the main test thread calls 
> {{KafkaStreams#start()}} again. As the current state is REBALANCING, the 
> transition to RUNNING is valid and {{start()}} does not fail with 
> {{IllegalStateException}} but resumes. When it tries to start the internal 
> stream threads, we get {{IllegalThreadStateException}} as the threads are already 
> running.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5819) Add Joined class and relevant KStream join overloads

2017-09-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150295#comment-16150295
 ] 

ASF GitHub Bot commented on KAFKA-5819:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3776

KAFKA-5819: Add Joined class and relevant KStream join overloads

Add the `Joined` class and the overloads to `KStream` that use it.
Deprecate existing methods that have `Serde` params

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kip-182-stream-join

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3776.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3776


commit 5be7a49c245f762479ec02c7e1625a1da882bde9
Author: Damian Guy 
Date:   2017-09-01T08:44:16Z

add Joined class and overloads for KStream#join

commit d3991b13e5fd6f9c926c1d6cf3d4e15e77eea7ba
Author: Damian Guy 
Date:   2017-09-01T09:05:39Z

KStream#leftJoin(KStream...)

commit d8b007304e462b7d0e59e3fb4d4f2fa10a78d05b
Author: Damian Guy 
Date:   2017-09-01T09:25:13Z

stream table leftJoin

commit eeed855e153df766f1341469132f45fff62a13ce
Author: Damian Guy 
Date:   2017-09-01T09:40:01Z

outerJoin




> Add Joined class and relevant KStream join overloads
> 
>
> Key: KAFKA-5819
> URL: https://issues.apache.org/jira/browse/KAFKA-5819
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 1.0.0
>
>
> Add the {{Joined}} class as defined in KIP-182 and the following overloads to 
> {{KStream}}
> {code}
> <VO, VR> KStream<K, VR> join(final KStream<K, VO> other, final ValueJoiner<? 
> super V, ? super VO, ? extends VR> joiner, final JoinWindows windows, final 
> Joined<K, V, VO> options);
>  
> <VT, VR> KStream<K, VR> join(final KTable<K, VT> other, final ValueJoiner<? 
> super V, ? super VT, ? extends VR> joiner, final Joined<K, V, VT> options);
>  
> <VO, VR> KStream<K, VR> leftJoin(final KStream<K, VO> other, final 
> ValueJoiner<? super V, ? super VO, ? extends VR> joiner, final JoinWindows 
> windows, final Joined<K, V, VO> options);
>  
> <VT, VR> KStream<K, VR> leftJoin(final KTable<K, VT> other, final 
> ValueJoiner<? super V, ? super VT, ? extends VR> joiner, final Joined<K, V, 
> VT> options);
>  
> <VO, VR> KStream<K, VR> outerJoin(final KStream<K, VO> other, final 
> ValueJoiner<? super V, ? super VO, ? extends VR> joiner, final JoinWindows 
> windows, final Joined<K, V, VO> options);
> {code}
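
A hedged usage sketch of a windowed stream-stream join with these overloads 
(topics and the join window size are illustrative):

{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;

public class JoinedUsageSketch {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        final KStream<String, Long> left = builder.stream("left-topic");
        final KStream<String, Double> right = builder.stream("right-topic");

        // Key, this-value, and other-value serdes travel in one Joined object
        // instead of three positional Serde parameters.
        final KStream<String, String> joined = left.join(
                right,
                (leftValue, rightValue) -> leftValue + "/" + rightValue,
                JoinWindows.of(60_000L),
                Joined.with(Serdes.String(), Serdes.Long(), Serdes.Double()));

        joined.to("output-topic");
    }
}
{code}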



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5642) Use async ZookeeperClient in Controller

2017-08-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148549#comment-16148549
 ] 

ASF GitHub Bot commented on KAFKA-5642:
---

GitHub user onurkaraman opened a pull request:

https://github.com/apache/kafka/pull/3765

KAFKA-5642 [WIP]: Use async ZookeeperClient in Controller

Synchronous zookeeper writes mean that we wait an entire round trip before 
doing the next write. These synchronous writes are happening at a per-partition 
granularity in several places, so partition-heavy clusters suffer from the 
controller doing many sequential round trips to zookeeper:
- PartitionStateMachine.electLeaderForPartition updates leaderAndIsr in 
zookeeper on transition to OnlinePartition. This gets triggered per-partition 
sequentially with synchronous writes during controlled shutdown of the shutting 
down broker's replicas for which it is the leader.
- ReplicaStateMachine updates leaderAndIsr in zookeeper on transition to 
OfflineReplica when calling KafkaController.removeReplicaFromIsr. This gets 
triggered per-partition sequentially with synchronous writes for failed or 
controlled shutdown brokers.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/onurkaraman/kafka KAFKA-5642

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3765.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3765


commit 4b3e4e8b1d403bf817aeaa9196fb69a15b1a13a0
Author: Onur Karaman 
Date:   2017-07-27T18:29:35Z

KAFKA-5642: Use async ZookeeperClient in Controller

Synchronous zookeeper writes mean that we wait an entire round trip before 
doing the next write. These synchronous writes are happening at a per-partition 
granularity in several places, so partition-heavy clusters suffer from the 
controller doing many sequential round trips to zookeeper:
- PartitionStateMachine.electLeaderForPartition updates leaderAndIsr in 
zookeeper on transition to OnlinePartition. This gets triggered per-partition 
sequentially with synchronous writes during controlled shutdown of the shutting 
down broker's replicas for which it is the leader.
- ReplicaStateMachine updates leaderAndIsr in zookeeper on transition to 
OfflineReplica when calling KafkaController.removeReplicaFromIsr. This gets 
triggered per-partition sequentially with synchronous writes for failed or 
controlled shutdown brokers.




> Use async ZookeeperClient in Controller
> ---
>
> Key: KAFKA-5642
> URL: https://issues.apache.org/jira/browse/KAFKA-5642
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>
> Synchronous zookeeper writes mean that we wait an entire round trip before 
> doing the next write. These synchronous writes are happening at a 
> per-partition granularity in several places, so partition-heavy clusters 
> suffer from the controller doing many sequential round trips to zookeeper.
> * PartitionStateMachine.electLeaderForPartition updates leaderAndIsr in 
> zookeeper on transition to OnlinePartition. This gets triggered per-partition 
> sequentially with synchronous writes during controlled shutdown of the 
> shutting down broker's replicas for which it is the leader.
> * ReplicaStateMachine updates leaderAndIsr in zookeeper on transition to 
> OfflineReplica when calling KafkaController.removeReplicaFromIsr. This gets 
> triggered per-partition sequentially with synchronous writes for failed or 
> controlled shutdown brokers.
> KAFKA-5501 introduced an async ZookeeperClient that encourages pipelined 
> requests to zookeeper. We should replace ZkClient's usage with this client.
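
A hedged sketch of why pipelining helps, using the stock ZooKeeper client API: 
the synchronous loop pays one round trip per partition, while the async loop 
issues all writes up front and waits once.

{code:java}
import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.ZooKeeper;

public class PipelinedZkWritesSketch {
    // Synchronous: N partitions cost N sequential round trips.
    static void updateSync(ZooKeeper zk, List<String> paths, byte[] data) throws Exception {
        for (String path : paths) {
            zk.setData(path, data, -1); // blocks until the server replies
        }
    }

    // Asynchronous: N in-flight writes, one wait for all responses.
    static void updateAsync(ZooKeeper zk, List<String> paths, byte[] data) throws Exception {
        CountDownLatch done = new CountDownLatch(paths.size());
        for (String path : paths) {
            zk.setData(path, data, -1, (rc, p, ctx, stat) -> done.countDown(), null);
        }
        done.await(); // total latency is roughly one round trip, not N
    }
}
{code}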



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4574) Transient failure in ZooKeeperSecurityUpgradeTest.test_zk_security_upgrade with security_protocol = SASL_PLAINTEXT, SSL

2017-10-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16197694#comment-16197694
 ] 

ASF GitHub Bot commented on KAFKA-4574:
---

Github user apurvam closed the pull request at:

https://github.com/apache/kafka/pull/2677


> Transient failure in ZooKeeperSecurityUpgradeTest.test_zk_security_upgrade 
> with security_protocol = SASL_PLAINTEXT, SSL
> ---
>
> Key: KAFKA-4574
> URL: https://issues.apache.org/jira/browse/KAFKA-4574
> Project: Kafka
>  Issue Type: Test
>  Components: system tests
>Reporter: Shikhar Bhushan
>Assignee: Apurva Mehta
>  Labels: kip-101
> Fix For: 0.11.0.0
>
>
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-12-29--001.1483003056--apache--trunk--dc55025/report.html
> {{ZooKeeperSecurityUpgradeTest.test_zk_security_upgrade}} failed with these 
> {{security_protocol}} parameters 
> {noformat}
> 
> test_id:
> kafkatest.tests.core.zookeeper_security_upgrade_test.ZooKeeperSecurityUpgradeTest.test_zk_security_upgrade.security_protocol=SASL_PLAINTEXT
> status: FAIL
> run time:   3 minutes 44.094 seconds
> 1 acked message did not make it to the Consumer. They are: [5076]. We 
> validated that the first 1 of these missing messages correctly made it into 
> Kafka's data files. This suggests they were lost on their way to the consumer.
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 123, in run
> data = self.run_test()
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 176, in run_test
> return self.test_context.function(self.test)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/mark/_mark.py",
>  line 321, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/tests/core/zookeeper_security_upgrade_test.py",
>  line 117, in test_zk_security_upgrade
> self.run_produce_consume_validate(self.run_zk_migration)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 101, in run_produce_consume_validate
> self.validate()
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 163, in validate
> assert success, msg
> AssertionError: 1 acked message did not make it to the Consumer. They are: 
> [5076]. We validated that the first 1 of these missing messages correctly 
> made it into Kafka's data files. This suggests they were lost on their way to 
> the consumer.
> {noformat}
> {noformat}
> 
> test_id:
> kafkatest.tests.core.zookeeper_security_upgrade_test.ZooKeeperSecurityUpgradeTest.test_zk_security_upgrade.security_protocol=SSL
> status: FAIL
> run time:   3 minutes 50.578 seconds
> 1 acked message did not make it to the Consumer. They are: [3559]. We 
> validated that the first 1 of these missing messages correctly made it into 
> Kafka's data files. This suggests they were lost on their way to the consumer.
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 123, in run
> data = self.run_test()
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 176, in run_test
> return self.test_context.function(self.test)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/mark/_mark.py",
>  line 321, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/tests/core/zookeeper_security_upgrade_test.py",
>  line 117, in test_zk_security_upgrade
> self.run_produce_consume_validate(self.run_zk_migration)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 101, in 

[jira] [Commented] (KAFKA-6026) KafkaFuture timeout fails to fire if a narrow race condition is hit

2017-10-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16197970#comment-16197970
 ] 

ASF GitHub Bot commented on KAFKA-6026:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4044


> KafkaFuture timeout fails to fire if a narrow race condition is hit
> ---
>
> Key: KAFKA-6026
> URL: https://issues.apache.org/jira/browse/KAFKA-6026
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.11.0.0
>Reporter: Bart De Vylder
>Priority: Blocker
> Fix For: 1.0.0
>
>
> I would expect the following code to raise an Exception, either in the 
> adminClient creation or a TimeoutException when getting the future (there is 
> no kafka running on localhost on that port). 
> {code:java}
> Properties config = new Properties();
> config.setProperty("bootstrap.servers", "localhost:1234");
> AdminClient admin = AdminClient.create(config);
> admin.listTopics().names().get(1, TimeUnit.SECONDS);
> {code}
> The code however seems to hang forever in the last step.
> A possible cause of this behavior is a bug in the KafkaFutureImpl class:
> {code:java}
> private static class SingleWaiter<R> extends BiConsumer<R, Throwable> {
>[...]
> R await(long timeout, TimeUnit unit)
> throws InterruptedException, ExecutionException, 
> TimeoutException {
> long startMs = System.currentTimeMillis();
> long waitTimeMs = (unit.toMillis(timeout) > 0) ? 
> unit.toMillis(timeout) : 1;
> long delta = 0;
> synchronized (this) {
> while (true) {
> if (exception != null)
> wrapAndThrow(exception);
> if (done)
> return value;
> if (delta > waitTimeMs) {
> throw new TimeoutException();
> }
> this.wait(waitTimeMs - delta);
> delta = System.currentTimeMillis() - startMs;
> }
> }
> }
> {code}
> While debugging I observed {{waitTimeMs}} and {{delta}} to become equal after 
> one iteration, giving a {{this.wait(0)}} in the next iteration, which 
> according to the documentation 
> http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html#wait-long- 
> results in an indefinite wait.
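
A hedged sketch of a fix, mirroring the quoted method (the fields {{exception}}, 
{{done}}, {{value}} and the helper {{wrapAndThrow}} belong to the quoted class): 
recompute the remaining time before waiting and compare with >=, so that 
delta == waitTimeMs can no longer produce a this.wait(0), which Object.wait 
interprets as wait-forever.

{code:java}
R await(long timeout, TimeUnit unit)
        throws InterruptedException, ExecutionException, TimeoutException {
    long startMs = System.currentTimeMillis();
    long waitTimeMs = Math.max(unit.toMillis(timeout), 1);
    synchronized (this) {
        while (true) {
            if (exception != null)
                wrapAndThrow(exception);
            if (done)
                return value;
            long delta = System.currentTimeMillis() - startMs;
            if (delta >= waitTimeMs)          // >= closes the wait(0) hole
                throw new TimeoutException();
            this.wait(waitTimeMs - delta);    // argument is now always positive
        }
    }
}
{code}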



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5212) Consumer ListOffsets request can starve group heartbeats

2017-10-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198120#comment-16198120
 ] 

ASF GitHub Bot commented on KAFKA-5212:
---

GitHub user ConcurrencyPractitioner opened a pull request:

https://github.com/apache/kafka/pull/4049

[KAFKA-5212] Consumer ListOffsets request can starve group heartbeats

Having identified the poll method in ConsumerCoordinator as the place where the 
heartbeat is sent, I modified the metadata wait so that it polls for a heartbeat 
periodically, once every interval.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ConcurrencyPractitioner/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4049.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4049


commit 431bc8011ce3f8aaa677e2f6743c9b069a1eec8c
Author: Richard Yu 
Date:   2017-10-10T03:45:23Z

[KAFKA-5212] Consumer ListOffsets request can starve group heartbeats




> Consumer ListOffsets request can starve group heartbeats
> 
>
> Key: KAFKA-5212
> URL: https://issues.apache.org/jira/browse/KAFKA-5212
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Jason Gustafson
> Fix For: 1.1.0, 1.0.1
>
>
> The consumer is not able to send heartbeats while it is awaiting a 
> ListOffsets response. Typically this is not a problem because ListOffsets 
> requests are handled quickly, but in the worst case if the request takes 
> longer than the session timeout, the consumer will fall out of the group.
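
A hedged sketch of one way to avoid the starvation (names and the heartbeat 
callback are illustrative, not the consumer's actual internals): never block on 
the response for longer than one heartbeat interval, and let the coordinator 
poll between slices.

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedWaitSketch {
    static <T> T awaitWithHeartbeats(Future<T> response, long timeoutMs,
                                     long heartbeatIntervalMs, Runnable pollHeartbeat)
            throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0)
                throw new TimeoutException("ListOffsets response not received in time");
            try {
                // Wait for at most one heartbeat interval per slice.
                return response.get(Math.min(remaining, heartbeatIntervalMs),
                                    TimeUnit.MILLISECONDS);
            } catch (TimeoutException slice) {
                pollHeartbeat.run(); // keep the group session alive between slices
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Future<String> response = CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(250); } catch (InterruptedException ignored) { }
            return "offsets";
        });
        System.out.println(awaitWithHeartbeats(response, 1000, 100,
                () -> System.out.println("heartbeat")));
    }
}
{code}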



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5835) CommitFailedException message is misleading and cause is swallowed

2017-10-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16196462#comment-16196462
 ] 

ASF GitHub Bot commented on KAFKA-5835:
---

GitHub user rekhajoshm opened a pull request:

https://github.com/apache/kafka/pull/4042

[KAFKA-5835][CLIENTS] CommitFailedException exception versus KafkaConsumer 
flow

The invalid offset / out-of-range offset is captured in 
IllegalArgumentException/RuntimeException.
The CommitFailedException only happens as called out in the 
CommitFailedException javadoc.
Hence I made that exception flow explicit, and updated the KafkaConsumer javadoc.

Please let me know if it makes sense. Thanks.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rekhajoshm/kafka KAFKA-5835

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4042.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4042


commit c15beda320475e0b243ac444ab262c93012e44ba
Author: rekhajoshm 
Date:   2017-10-09T03:16:19Z

[KAFKA-5835][CLIENTS] CommitFailedException exception versus KafkaConsumer 
flow




> CommitFailedException message is misleading and cause is swallowed
> --
>
> Key: KAFKA-5835
> URL: https://issues.apache.org/jira/browse/KAFKA-5835
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
>Reporter: Stevo Slavic
>Priority: Trivial
>
> {{CommitFailedException}}'s message suggests that it can only be thrown as a 
> consequence of rebalancing. The JavaDoc of {{CommitFailedException}} suggests 
> differently: in general it can be thrown for any kind of unrecoverable 
> failure from a {{KafkaConsumer#commitSync()}} call (e.g. if the offset being 
> committed is invalid / outside of range).
> {{CommitFailedException}}'s message is misleading in that one can just 
> see the message in the logs and, without consulting the JavaDoc or source code, 
> assume that the message is correct and that rebalancing is the only potential 
> cause, so one can waste time proceeding with the debugging in the wrong direction.
> Additionally, since {{CommitFailedException}} can be thrown for different 
> reasons, the cause should not be swallowed. This makes it impossible to handle 
> each potential cause in a specific way. If the cause is another exception, 
> please pass it as the cause, or construct an appropriate exception hierarchy 
> with a specific exception for every failure cause and make 
> {{CommitFailedException}} abstract.
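
A hedged sketch of the reporter's suggestion (a standalone stand-in class, not 
the real client code): keep the underlying failure as the exception cause so 
callers can branch on it.

{code:java}
public class CommitFailedCauseSketch {
    static class CommitFailedException extends RuntimeException {
        CommitFailedException(String message, Throwable cause) {
            super(message, cause); // preserve the cause instead of swallowing it
        }
    }

    public static void main(String[] args) {
        try {
            throw new CommitFailedException(
                    "Commit cannot be completed",
                    new IllegalArgumentException("offset out of range"));
        } catch (CommitFailedException e) {
            // The cause is now available for cause-specific handling.
            System.out.println("cause: " + e.getCause());
        }
    }
}
{code}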



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6023) ThreadCache#sizeBytes() should check overflow

2017-10-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16196431#comment-16196431
 ] 

ASF GitHub Bot commented on KAFKA-6023:
---

GitHub user shivsantham opened a pull request:

https://github.com/apache/kafka/pull/4041

KAFKA-6023 ThreadCache#sizeBytes() should check overflow

long sizeBytes() {
    long sizeInBytes = 0;
    for (final NamedCache namedCache : caches.values()) {
        sizeInBytes += namedCache.sizeInBytes();
    }
    return sizeInBytes;
}
The summation w.r.t. sizeInBytes may overflow.
A check similar to what is done in size() should be performed.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shivsantham/kafkaImprovements kafka-6023

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4041.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4041


commit b3517bed4909da5d4bc5eae9a66daf681d2b4cb4
Author: siva santhalingam 
Date:   2017-10-09T01:46:56Z

KAFKA-6023 ThreadCache#sizeBytes() should check overflow




> ThreadCache#sizeBytes() should check overflow
> -
>
> Key: KAFKA-6023
> URL: https://issues.apache.org/jira/browse/KAFKA-6023
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: siva santhalingam
>Priority: Minor
>
> {code}
> long sizeBytes() {
>     long sizeInBytes = 0;
>     for (final NamedCache namedCache : caches.values()) {
>         sizeInBytes += namedCache.sizeInBytes();
>     }
>     return sizeInBytes;
> }
> {code}
> The summation w.r.t. sizeInBytes may overflow.
> A check similar to what is done in size() should be performed.
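
A hedged sketch of an overflow-checked summation, assuming a saturating policy 
(cap at Long.MAX_VALUE); the real fix may choose a different policy.

{code:java}
import java.util.Arrays;
import java.util.List;

public class OverflowCheckedSumSketch {
    static long sizeBytes(List<Long> cacheSizes) {
        long sizeInBytes = 0;
        for (long s : cacheSizes) {
            sizeInBytes += s;
            // Adding two non-negative longs overflows to a negative value,
            // so a sign check detects the wraparound.
            if (sizeInBytes < 0) {
                return Long.MAX_VALUE; // saturate instead of going negative
            }
        }
        return sizeInBytes;
    }

    public static void main(String[] args) {
        System.out.println(sizeBytes(Arrays.asList(Long.MAX_VALUE, 10L))); // saturates
    }
}
{code}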



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5746) Add new metrics to support health checks

2017-10-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193480#comment-16193480
 ] 

ASF GitHub Bot commented on KAFKA-5746:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/4026

KAFKA-5746: Document new broker metrics added for health checks



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka MINOR-KIP-188-metrics-docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4026.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4026


commit 924e3d2f56a6c24a42018f396612c95f02cc5fe1
Author: Rajini Sivaram 
Date:   2017-10-05T19:12:15Z

KAFKA-5746: Document new broker metrics added for health checks




> Add new metrics to support health checks
> 
>
> Key: KAFKA-5746
> URL: https://issues.apache.org/jira/browse/KAFKA-5746
> Project: Kafka
>  Issue Type: New Feature
>  Components: metrics
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 1.0.0
>
>
> It will be useful to have some additional metrics to support health checks.
> Details are in 
> [KIP-188|https://cwiki.apache.org/confluence/display/KAFKA/KIP-188+-+Add+new+metrics+to+support+health+checks]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5989) disableLogging() causes partitions to not be consumed

2017-10-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193554#comment-16193554
 ] 

ASF GitHub Bot commented on KAFKA-5989:
---

Github user dguy closed the pull request at:

https://github.com/apache/kafka/pull/4025


> disableLogging() causes partitions to not be consumed
> -
>
> Key: KAFKA-5989
> URL: https://issues.apache.org/jira/browse/KAFKA-5989
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Tuan Nguyen
>Assignee: Damian Guy
>Priority: Blocker
> Fix For: 1.0.0
>
> Attachments: App2.java, App.java
>
>
> Using {{disableLogging()}} for either of the built-in state store types 
> causes an initialization loop in the StreamThread.
> Case A - this works just fine:
> {code}
>   final StateStoreSupplier testStore = Stores.create(topic)
>   .withStringKeys()
>   .withStringValues()
>   .inMemory()
> //.disableLogging() 
>   .maxEntries(10)
>   .build();
> {code}
> Case B - this does not:
> {code}
>   final StateStoreSupplier testStore = Stores.create(topic)
>   .withStringKeys()
>   .withStringValues()
>   .inMemory()
>   .disableLogging() 
>   .maxEntries(10)
>   .build();
> {code}
> A brief debugging dive shows that in Case B, 
> {{AssignedTasks.allTasksRunning()}} never returns true, because of a remnant 
> entry in {{AssignedTasks#restoring}} that never gets properly restored.
> See [^App.java] for a working test (requires ZK + Kafka ensemble, and at 
> least one keyed message produced to the "test" topic)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6008) Kafka Connect: Unsanitized workerID causes exception during startup

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191827#comment-16191827
 ] 

ASF GitHub Bot commented on KAFKA-6008:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4012


> Kafka Connect: Unsanitized workerID causes exception during startup
> ---
>
> Key: KAFKA-6008
> URL: https://issues.apache.org/jira/browse/KAFKA-6008
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0
> Environment: MacOS, Java 1.8.0_77-b03
>Reporter: Jakub Scholz
>Assignee: Jakub Scholz
> Fix For: 1.0.0
>
>
> When Kafka Connect starts, it seems to use the unsanitized workerId for creating 
> Metrics. As a result it throws the following exception:
> {code}
> [2017-10-04 13:16:08,886] WARN Error registering AppInfo mbean 
> (org.apache.kafka.common.utils.AppInfoParser:66)
> javax.management.MalformedObjectNameException: Invalid character ':' in value 
> part of property
>   at javax.management.ObjectName.construct(ObjectName.java:618)
>   at javax.management.ObjectName.<init>(ObjectName.java:1382)
>   at 
> org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:60)
>   at 
> org.apache.kafka.connect.runtime.ConnectMetrics.<init>(ConnectMetrics.java:77)
>   at org.apache.kafka.connect.runtime.Worker.<init>(Worker.java:88)
>   at 
> org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:81)
> {code}
> It looks like in my case the generated workerId is :. The 
> workerId should be sanitized before creating the metric.
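
A hedged sketch of the sanitization (the regex and MBean name are illustrative): 
JMX ObjectName property values may not contain an unquoted ':', so a host:port 
workerId must be sanitized (or quoted) before use.

{code:java}
import javax.management.ObjectName;

public class WorkerIdSanitizeSketch {
    static String sanitize(String workerId) {
        return workerId.replaceAll("[:,=*?]", "_"); // replace characters illegal in values
    }

    public static void main(String[] args) throws Exception {
        String workerId = "127.0.0.1:8083"; // illustrative host:port
        ObjectName name = new ObjectName(
                "kafka.connect:type=app-info,id=" + sanitize(workerId));
        System.out.println(name); // constructs without MalformedObjectNameException
    }
}
{code}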



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5990) Add generated documentation for Connect metrics

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191734#comment-16191734
 ] 

ASF GitHub Bot commented on KAFKA-5990:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3987


> Add generated documentation for Connect metrics
> ---
>
> Key: KAFKA-5990
> URL: https://issues.apache.org/jira/browse/KAFKA-5990
> Project: Kafka
>  Issue Type: Sub-task
>  Components: documentation, KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 1.0.0
>
>
> KAFKA-5191 recently added a new {{MetricNameTemplate}} that is used to create 
> the {{MetricName}} objects in the producer and consumer, as well as in the 
> newly-added generation of metric documentation. The {{Metrics.toHtmlTable}} 
> method then takes these templates and generates HTML documentation for the 
> metrics.
> Change the Connect metrics to use these templates and update the build to 
> generate these metrics and include them in the Kafka documentation.
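
A hedged sketch of the template-based generation described above, assuming the 
`MetricNameTemplate` constructor and `Metrics.toHtmlTable` added by KAFKA-5191 
(the metric name, group, and tags are illustrative):

{code:java}
import java.util.Arrays;
import org.apache.kafka.common.MetricNameTemplate;
import org.apache.kafka.common.metrics.Metrics;

public class ConnectMetricDocsSketch {
    public static void main(String[] args) {
        MetricNameTemplate template = new MetricNameTemplate(
                "source-record-poll-rate",                      // metric name
                "connector-task-metrics",                       // group
                "Average per-second number of records polled",  // description
                "connector", "task");                           // tag names

        // Render the templates as the HTML table embedded in the Kafka docs.
        System.out.println(Metrics.toHtmlTable("kafka.connect", Arrays.asList(template)));
    }
}
{code}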



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4972) Kafka 0.10.0 Found a corrupted index file during Kafka broker startup

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191321#comment-16191321
 ] 

ASF GitHub Bot commented on KAFKA-4972:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/4016

MINOR: Simplify log cleaner and fix compiler warnings

- Simplify LogCleaner.cleanSegments and add comment regarding thread
unsafe usage of `LogSegment.append`. This was a result of investigating
KAFKA-4972.
- Fix compiler warnings.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
simplify-log-cleaner-and-fix-warnings

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4016.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4016


commit 3b26b21c4a41b9857d48a09a63a560228924df4f
Author: Ismael Juma 
Date:   2017-10-04T13:57:03Z

Simplify LogCleaner.cleanSegments and add comment regarding thread unsafe 
usage of `LogSegment.append`

commit a1e50d8fbffc977646397f0446efeaa798816d87
Author: Ismael Juma 
Date:   2017-10-04T13:57:20Z

Fix compiler warnings




> Kafka 0.10.0  Found a corrupted index file during Kafka broker startup
> --
>
> Key: KAFKA-4972
> URL: https://issues.apache.org/jira/browse/KAFKA-4972
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0
> Environment: JDK: HotSpot  x64  1.7.0_80
> Tag: 0.10.0
>Reporter: fangjinuo
>Priority: Critical
> Fix For: 1.0.0, 0.11.0.2
>
> Attachments: Snap3.png
>
>
> After force-shutting down all kafka brokers one by one and restarting them 
> one by one, one broker failed to start up.
> The following WARN level log was found in the log file:
> found a corrupted index file,  .index , delete it  ...
> You can view details in the attached screenshot.
> ~I looked up some code in the core module and found out:
> the nonthreadsafe method LogSegment.append(offset, messages) has two callers:
> 1) Log.append(messages)  // here has a synchronized 
> lock 
> 2) LogCleaner.cleanInto(topicAndPartition, source, dest, map, retainDeletes, 
> messageFormatVersion)   // here has no lock 
> So I guess this may be the reason for the repeated offsets in the 0xx.log file 
> (logsegment's .log)~
> Although this is just my inference, I hope that this problem can be 
> quickly repaired.
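
A hedged illustration of the hazard the reporter infers, with stand-in types 
(not the actual Log/LogCleaner code): when a non-thread-safe append has two 
callers, both must synchronize on the same lock, or their writes can interleave.

{code:java}
public class SharedLockAppendSketch {
    private final Object lock = new Object();
    private final StringBuilder segment = new StringBuilder(); // stand-in for LogSegment

    // Caller 1: the normal write path, already synchronized (as in Log.append).
    void append(String record) {
        synchronized (lock) {
            segment.append(record).append('\n');
        }
    }

    // Caller 2: the cleaner's write path must take the same lock; without it,
    // records from both callers can interleave and corrupt the segment/index.
    void cleanInto(String record) {
        synchronized (lock) {
            segment.append(record).append('\n');
        }
    }
}
{code}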



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6010) Transient failure: MemoryRecordsBuilderTest.convertToV1WithMixedV0AndV2Data

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192081#comment-16192081
 ] 

ASF GitHub Bot commented on KAFKA-6010:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4018


> Transient failure: MemoryRecordsBuilderTest.convertToV1WithMixedV0AndV2Data
> ---
>
> Key: KAFKA-6010
> URL: https://issues.apache.org/jira/browse/KAFKA-6010
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>  Labels: transient-unit-test-failure
> Fix For: 1.0.0
>
>
> The issue happens with various tests that call verifyRecordsProcessingStats. 
> One example:
> {code}
> Stacktrace
> java.lang.AssertionError: Processing time not recorded
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilderTest.verifyRecordsProcessingStats(MemoryRecordsBuilderTest.java:651)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilderTest.convertToV1WithMixedV0AndV2Data(MemoryRecordsBuilderTest.java:515)
> {code}
> https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/2102/tests
> cc [~rsivaram]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5970) Deadlock due to locking of DelayedProduce and group

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191797#comment-16191797
 ] 

ASF GitHub Bot commented on KAFKA-5970:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3956


> Deadlock due to locking of DelayedProduce and group
> ---
>
> Key: KAFKA-5970
> URL: https://issues.apache.org/jira/browse/KAFKA-5970
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 1.0.0
>
> Attachments: jstack.txt
>
>
> From a local run of TransactionsBounceTest. It looks like we hold the group lock 
> while completing DelayedProduce, which in turn may acquire the group lock.
> {quote}
> Found one Java-level deadlock:
> =
> "kafka-request-handler-7":
>   waiting to lock monitor 0x7fe08891fb08 (object 0x00074a9fbc50, a 
> kafka.coordinator.group.GroupMetadata),
>   which is held by "kafka-request-handler-4"
> "kafka-request-handler-4":
>   waiting to lock monitor 0x7fe0869e4408 (object 0x000749be7bb8, a 
> kafka.server.DelayedProduce),
>   which is held by "kafka-request-handler-3"
> "kafka-request-handler-3":
>   waiting to lock monitor 0x7fe08891fb08 (object 0x00074a9fbc50, a 
> kafka.coordinator.group.GroupMetadata),
>   which is held by "kafka-request-handler-4"
> Java stack information for the threads listed above:
> ===
> "kafka-request-handler-7":
> at 
> kafka.coordinator.group.GroupMetadataManager$$anonfun$handleTxnCompletion$1.apply(GroupMetadataManager.scala:752)
>   waiting to lock <0x00074a9fbc50> (a 
> kafka.coordinator.group.GroupMetadata)
> at 
> kafka.coordinator.group.GroupMetadataManager$$anonfun$handleTxnCompletion$1.apply(GroupMetadataManager.scala:750)
> at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
> at 
> kafka.coordinator.group.GroupMetadataManager.handleTxnCompletion(GroupMetadataManager.scala:750)
> at 
> kafka.coordinator.group.GroupCoordinator.handleTxnCompletion(GroupCoordinator.scala:439)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$maybeSendResponseCallback$1(KafkaApis.scala:1556)
> at 
> kafka.server.KafkaApis$$anonfun$handleWriteTxnMarkersRequest$1$$anonfun$apply$20.apply(KafkaApis.scala:1614)
> at 
> kafka.server.KafkaApis$$anonfun$handleWriteTxnMarkersRequest$1$$anonfun$apply$20.apply(KafkaApis.scala:1614)
> at kafka.server.DelayedProduce.onComplete(DelayedProduce.scala:134)
> at 
> kafka.server.DelayedOperation.forceComplete(DelayedOperation.scala:66)
> at kafka.server.DelayedProduce.tryComplete(DelayedProduce.scala:116)
> at 
> kafka.server.DelayedProduce.safeTryComplete(DelayedProduce.scala:76)
>   locked <0x00074b21c968> (a kafka.server.DelayedProduce)
> at 
> kafka.server.DelayedOperationPurgatory$Watchers.tryCompleteWatched(DelayedOperation.scala:338)
> at 
> kafka.server.DelayedOperationPurgatory.checkAndComplete(DelayedOperation.scala:244)
> at 
> kafka.server.ReplicaManager.tryCompleteDelayedProduce(ReplicaManager.scala:284)
> at 
> kafka.cluster.Partition.tryCompleteDelayedRequests(Partition.scala:434)
> at 
> kafka.cluster.Partition.updateReplicaLogReadResult(Partition.scala:285)
> at 
> kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:1290)
> at 
> kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:1286)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at 
> kafka.server.ReplicaManager.updateFollowerLogReadResults(ReplicaManager.scala:1286)
> at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:786)
> at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:598)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:100)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:65)
> at java.lang.Thread.run(Thread.java:748)
> "kafka-request-handler-4":
> at 
> kafka.server.DelayedProduce.safeTryComplete(DelayedProduce.scala:75)
>   waiting to lock <0x000749be7bb8> (a kafka.server.DelayedProduce)
> at 
> kafka.server.DelayedOperationPurgatory$Watchers.tryCompleteWatched(DelayedOperation.scala:338)
> at 
> kafka.server.DelayedOperationPurgatory.checkAndComplete(DelayedOperation.scala:244)
> at 
> 

[jira] [Commented] (KAFKA-6010) Transient failure: MemoryRecordsBuilderTest.convertToV1WithMixedV0AndV2Data

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191858#comment-16191858
 ] 

ASF GitHub Bot commented on KAFKA-6010:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/4018

KAFKA-6010: Relax record conversion time test to avoid build failure

For record conversion tests, check time >=0 since conversion times may be 
too small to be measured accurately. Since default value is -1, the test is 
still useful. Also increase message size in 
SslTransportLayerTest#testNetworkThreadTimeRecorded to avoid failures when 
processing time is too small.
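
A hedged sketch of the relaxed assertion described above (the helper name is 
illustrative): accept any non-negative recorded time and reject only the 
unrecorded default of -1.

{code:java}
public class RelaxedTimingAssertSketch {
    static void assertProcessingTimeRecorded(long conversionTimeNanos) {
        // -1 means "never recorded"; 0 is legal when the conversion finishes
        // faster than the clock's resolution can measure.
        if (conversionTimeNanos < 0)
            throw new AssertionError("Processing time not recorded");
    }

    public static void main(String[] args) {
        assertProcessingTimeRecorded(0);  // passes: measured but below resolution
        assertProcessingTimeRecorded(42); // passes
    }
}
{code}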

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka 
KAFKA-6010-MemoryRecordsBuilderTest

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4018.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4018


commit de88403f7a77e67c77f7ad36dabcccdfff661fe4
Author: Rajini Sivaram 
Date:   2017-10-04T18:44:31Z

KAFKA-6010: Relax record conversion time test to avoid build failure




> Transient failure: MemoryRecordsBuilderTest.convertToV1WithMixedV0AndV2Data
> ---
>
> Key: KAFKA-6010
> URL: https://issues.apache.org/jira/browse/KAFKA-6010
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>  Labels: transient-unit-test-failure
>
> The issue happens with various tests that call verifyRecordsProcessingStats. 
> One example:
> {code}
> Stacktrace
> java.lang.AssertionError: Processing time not recorded
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilderTest.verifyRecordsProcessingStats(MemoryRecordsBuilderTest.java:651)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilderTest.convertToV1WithMixedV0AndV2Data(MemoryRecordsBuilderTest.java:515)
> {code}
> https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/2102/tests
> cc [~rsivaram]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6016) Use the idempotent producer in the reassign_partitions_test

2017-10-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193718#comment-16193718
 ] 

ASF GitHub Bot commented on KAFKA-6016:
---

GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/4029

KAFKA-6016: Make the reassign partitions system test use the idempotent 
producer

With these changes, we are ensuring that the partitions being reassigned 
are from non-zero offsets. We also ensure that every message in the log has 
producerId and sequence number. 

This means that it successfully reproduces 
https://issues.apache.org/jira/browse/KAFKA-6003, as can be seen below:

```

[2017-10-05 20:57:00,466] ERROR [ReplicaFetcher replicaId=1, leaderId=4, 
fetcherId=0] Error due to (kafka.server.ReplicaFetcherThread)
kafka.common.KafkaException: Error processing data for partition 
test_topic-16 offset 682
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:203)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:171)
at scala.Option.foreach(Option.scala:257)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:171)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:168)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at 
scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply$mcV$sp(AbstractFetcherThread.scala:168)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:168)
at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:168)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:218)
at 
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:166)
at 
kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:109)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
Caused by: org.apache.kafka.common.errors.UnknownProducerIdException: Found 
no record of producerId=1000 on the broker. It is possible that the last 
message with the producerId=1000 has been removed due to hitting the retention 
limit.
```
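
For reference, enabling idempotence on the Java producer is a single config
switch. A minimal sketch (broker address and topic assumed, not the system-test
code itself):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Idempotence stamps each batch with a producerId and sequence number --
        // exactly the metadata the reassignment test now expects on every message.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test_topic", "key", "value"));
        }
    }
}
```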

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka 
KAFKA-6016-add-idempotent-producer-to-reassign-partitions

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4029.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4029


commit af48d74be4f2c4473d8f97664ff0f3e450bfe3ec
Author: Apurva Mehta 
Date:   2017-10-05T05:27:23Z

Initial commit trying to create the scenario where we are creating a
replica from scratch but starting from a non zero sequence when doing
so.

commit 9566f91b00a5a7c249823107e4792b844809ccca
Author: Apurva Mehta 
Date:   2017-10-05T05:52:24Z

Use retention bytes to force segment deletion

commit 6087b3ed01472d24677623c9b3ef92a3678da96f
Author: Apurva Mehta 
Date:   2017-10-05T21:16:47Z

Configure the log so that we can reproduce the case where we are building 
producer state from a non zero sequence




> Use the idempotent producer in the reassign_partitions_test
> ---
>
> Key: KAFKA-6016
> URL: https://issues.apache.org/jira/browse/KAFKA-6016
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.1, 1.0.0
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
> Fix For: 1.1.0
>
>
> Currently, the reassign partitions test doesn't use the idempotent producer. 
> This means that bugs like KAFKA-6003 have gone unnoticed. We should update 
> the test to use the idempotent producer and recreate that bug on a regular 
> basis so that we are fully testing all code paths.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5953) Connect classloader isolation may be broken for JDBC drivers

2017-10-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193739#comment-16193739
 ] 

ASF GitHub Bot commented on KAFKA-5953:
---

GitHub user kkonstantine opened a pull request:

https://github.com/apache/kafka/pull/4030

KAFKA-5953: Register all jdbc drivers available in plugin and class paths
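
A rough sketch of the idea in the title (hypothetical helper, not the actual
patch): instantiating each `Driver` through `ServiceLoader` under a plugin's
classloader triggers the driver's static registration with `DriverManager`
under that same loader:

```java
import java.sql.Driver;
import java.util.ServiceLoader;

public class JdbcDriverScan {
    // Hypothetical helper: iterating instantiates every Driver implementation
    // visible to the given classloader, and each driver's static initializer
    // registers it with DriverManager under that loader.
    public static void registerAll(ClassLoader pluginLoader) {
        for (Driver driver : ServiceLoader.load(Driver.class, pluginLoader)) {
            System.out.println("Registered JDBC driver: " + driver.getClass().getName());
        }
    }
}
```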



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kkonstantine/kafka 
KAFKA-5953-Connect-classloader-isolation-may-be-broken-for-JDBC-drivers

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4030.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4030






> Connect classloader isolation may be broken for JDBC drivers
> 
>
> Key: KAFKA-5953
> URL: https://issues.apache.org/jira/browse/KAFKA-5953
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Jiri Pechanec
>Priority: Critical
>
> Let's suppose there are two connectors deployed:
> # one using the MySQL JDBC driver (Debezium MySQL connector)
> # one using the PostgreSQL JDBC driver (JDBC sink).
> Connector 1 is started first - it executes a statement
> {code:java}
> Connection conn = DriverManager.getConnection(url, props);
> {code}
> As a result, {{DriverManager}} calls {{ServiceLoader}} and searches for all 
> JDBC drivers. The postgres driver from connector 2) is found and associated 
> with the classloader from connector 1).
> Connector 2 is started after that - it executes a statement
> {code:java}
> connection = DriverManager.getConnection(url, username, password);
> {code}
> DriverManager finds the driver that was loaded in the step before, but because 
> the classloader is different - we now use classloader 2) - it refuses to 
> load the class and no JDBC driver is found.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6011) AppInfoParser should only use metrics API and should not register JMX mbeans directly

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191965#comment-16191965
 ] 

ASF GitHub Bot commented on KAFKA-6011:
---

GitHub user tedyu opened a pull request:

https://github.com/apache/kafka/pull/4019

KAFKA-6011 AppInfoParser should only use metrics API and should not 
register JMX mbeans directly

Added app ID to metrics API.

The JMX registration can be dropped post 1.0.0
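
A rough sketch of what "metrics API only" means here (metric names assumed, not
the actual patch):

```java
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.Metrics;

public class AppInfoMetricSketch {
    public static void main(String[] args) {
        Metrics metrics = new Metrics();
        // A metric registered through the metrics API is visible to every
        // configured MetricsReporter; an MBean registered directly with the
        // platform MBeanServer is invisible to custom reporters.
        MetricName name = metrics.metricName("start-time-ms", "app-info", "Process start time");
        metrics.addMetric(name, (config, now) -> System.currentTimeMillis());
        metrics.close();
    }
}
```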

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tedyu/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4019.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4019


commit 28b6a5868327fe827d57328c03e086eb3bb5c19c
Author: tedyu 
Date:   2017-10-04T20:10:22Z

KAFKA-6011 AppInfoParser should only use metrics API and should not 
register JMX mbeans directly




> AppInfoParser should only use metrics API and should not register JMX mbeans 
> directly
> -
>
> Key: KAFKA-6011
> URL: https://issues.apache.org/jira/browse/KAFKA-6011
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Reporter: Ewen Cheslack-Postava
>Priority: Minor
>
> AppInfoParser collects info about the app ID, version, and commit ID and logs 
> them + exposes corresponding metrics. For some reason we ended up with the 
> app ID metric being registered directly to JMX while the version and commit 
> ID use the metrics API. This means the app ID would not be accessible to 
> custom metrics reporters.
> This isn't a huge loss as this is probably a rarely used metric, but we 
> should really only be using the metrics API. Only using the metrics API would 
> also reduce and centralize the places we need to do name mangling to handle 
> characters that might not be valid for metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6003) Replication Fetcher thread for a partition with no data fails to start

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192028#comment-16192028
 ] 

ASF GitHub Bot commented on KAFKA-6003:
---

GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/4020

KAFKA-6003: Accept appends on replicas and when rebuilding the log 
unconditionally

This is a port of #4004 for the 0.11.0 branch.

With this patch, we _only_ validate appends which originate
from the client. In general, once the append is validated and written to
the leader the first time, revalidating it is undesirable since we can't
do anything if validation fails, and also because it is hard to maintain
the correct assumptions during validation, leading to spurious
validation failures.

For example, when we have compacted topics, it is possible for batches
to be compacted on the follower but not on the leader. This case would
also lead to an OutOfOrderSequenceException during replication. The same
applies when we rebuild state from compacted topics: we would get
gaps in the sequence numbers, causing the OutOfOrderSequenceException.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka 
KAKFA-6003-0.11.0-handle-unknown-producer-on-replica

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4020.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4020


commit 0a6a0213c091c8e6b6a9c5ce7655b7e0d06c9db0
Author: Apurva Mehta 
Date:   2017-10-04T20:42:17Z

KAFKA-6003: Accept appends on replicas and when rebuilding state from
the log unconditionally.

With this patch, we _only_ validate appends which originate
from the client. In general, once the append is validated and written to
the leader the first time, revalidating it is undesirable since we can't
do anything if validation fails, and also because it is hard to maintain
the correct assumptions during validation, leading to spurious
validation failures.

For example, when we have compacted topics, it is possible for batches
to be compacted on the follower but not on the leader. This case would
also lead to an OutOfOrderSequenceException during replication. The same
applies when we rebuild state from compacted topics: we would get
gaps in the sequence numbers, causing the OutOfOrderSequenceException.




> Replication Fetcher thread for a partition with no data fails to start
> --
>
> Key: KAFKA-6003
> URL: https://issues.apache.org/jira/browse/KAFKA-6003
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.11.0.1
>Reporter: Stanislav Chizhov
>Assignee: Apurva Mehta
>Priority: Blocker
> Fix For: 1.0.0, 0.11.0.2
>
>
> If a partition of a topic written by an idempotent producer has no data on one 
> of the brokers, but the data does exist on others and some of the segments for 
> this partition have already been deleted, the replication thread responsible 
> for this partition on the broker which has no data for it fails to start with 
> an out-of-order sequence exception:
> {code}
> [2017-10-02 09:44:23,825] ERROR [ReplicaFetcherThread-2-4]: Error due to 
> (kafka.server.ReplicaFetcherThread)
> kafka.common.KafkaException: error processing data for partition 
> [stage.data.adevents.v2,20] offset 1660336429
> at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:203)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:174)
> at scala.Option.foreach(Option.scala:257)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:174)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:171)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply$mcV$sp(AbstractFetcherThread.scala:171)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:171)
> at 
> 

[jira] [Commented] (KAFKA-6001) Remove <Bytes, byte[]> from usages of Materialized in Streams

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192029#comment-16192029
 ] 

ASF GitHub Bot commented on KAFKA-6001:
---

Github user dguy closed the pull request at:

https://github.com/apache/kafka/pull/4001
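
For context, this is the signature noise the issue targets. A sketch of a
typical DSL call site (topic and store names assumed):

```java
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class MaterializedBoilerplate {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic"); // assumed topic
        // The inner store is always KeyValueStore<Bytes, byte[]>, yet the DSL
        // currently forces callers to restate it in the type parameters.
        KTable<String, Long> counts = input
                .groupByKey()
                .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-store"));
    }
}
```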


> Remove <Bytes, byte[]> from usages of Materialized in Streams
> -
>
> Key: KAFKA-6001
> URL: https://issues.apache.org/jira/browse/KAFKA-6001
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 1.0.0
>
>
> We can remove `<Bytes, byte[]>` from usages of `Materialized` in the DSL. 
> This will make the API a little nicer to work with. `<Bytes, byte[]>` is 
> already enforced.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5972) Flatten SMT does not work with null values

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192178#comment-16192178
 ] 

ASF GitHub Bot commented on KAFKA-5972:
---

GitHub user shivsantham opened a pull request:

https://github.com/apache/kafka/pull/4021

KAFKA-5972 Flatten SMT does not work with null values

A bug in the Flatten SMT, found while testing the different SMTs that are 
provided out-of-the-box: Flatten does not work as expected with schemaless JSON 
that has properties with null values.

Example json:
  {A={D=dValue, B=null, C=cValue}}

The issue is in the if statement that checks for a null value: it returns from 
the whole method instead of continuing with the next entry.

CURRENT VERSION:

    for (Map.Entry<String, Object> entry : originalRecord.entrySet()) {
        final String fieldName = fieldName(fieldNamePrefix, entry.getKey());
        Object value = entry.getValue();
        if (value == null) {
            newRecord.put(fieldName(fieldNamePrefix, entry.getKey()), null);
            return;
        }

PROPOSED VERSION:

    for (Map.Entry<String, Object> entry : originalRecord.entrySet()) {
        final String fieldName = fieldName(fieldNamePrefix, entry.getKey());
        Object value = entry.getValue();
        if (value == null) {
            newRecord.put(fieldName(fieldNamePrefix, entry.getKey()), null);
            continue;
        }

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shivsantham/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4021.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4021


commit ff377759a943c7bfb89a56ad721e7ba1b3b0b24c
Author: siva santhalingam 
Date:   2017-09-28T23:37:47Z

KAFKA-5967 Ineffective check of negative value in 
CompositeReadOnlyKeyValueStore#approximateNumEntries()

long total = 0;
for (ReadOnlyKeyValueStore<K, V> store : stores) {
    total += store.approximateNumEntries();
}

return total < 0 ? Long.MAX_VALUE : total;

The check for negative value seems to account for wrapping. However, 
wrapping can happen within the for loop. So the check should be performed 
inside the loop.

commit 3ea736ac17a4a8ce799b1214f6c0b167b44ee977
Author: siva santhalingam 
Date:   2017-09-29T03:16:29Z

 KAFKA-5967 Ineffective check of negative value in 
CompositeReadOnlyKeyValueStore

Adding a test for #KAFKA-5967

commit 921664384a7d6f53e2cc76cf5699021cdca73893
Author: siva santhalingam 
Date:   2017-09-30T08:22:41Z

 KAFKA-5967 Ineffective check of negative value in 
CompositeReadOnlyKeyValueStore

-Fixing test

commit 48e50992139c03099b3249195efc316a84d6bba1
Author: siva santhalingam 
Date:   2017-10-03T17:41:18Z

Flatten SMT does not work with null values

A bug in the Flatten SMT, found while testing the different SMTs that are 
provided out-of-the-box: Flatten does not work as expected with schemaless JSON 
that has properties with null values.

Example json:
  {A={D=dValue, B=null, C=cValue}}

The issue is in the if statement that checks for a null value: it returns from 
the whole method instead of continuing with the next entry.

CURRENT VERSION:

    for (Map.Entry<String, Object> entry : originalRecord.entrySet()) {
        final String fieldName = fieldName(fieldNamePrefix, entry.getKey());
        Object value = entry.getValue();
        if (value == null) {
            newRecord.put(fieldName(fieldNamePrefix, entry.getKey()), null);
            return;
        }

PROPOSED VERSION:

    for (Map.Entry<String, Object> entry : originalRecord.entrySet()) {
        final String fieldName = fieldName(fieldNamePrefix, entry.getKey());
        Object value = entry.getValue();
        if (value == null) {
            newRecord.put(fieldName(fieldNamePrefix, entry.getKey()), null);
            continue;
        }

commit 7abd3d5c76febc3e3f90b140aaf4c8bec7e08e8a
Author: siva santhalingam 
Date:   2017-10-03T18:07:07Z

Revert "Flatten SMT does not work with null values"

This reverts commit 48e50992139c03099b3249195efc316a84d6bba1.

commit 6f9c11726d1baf9eb2446867899218a4f46a77a4
Author: shivsantham 
Date:   2017-10-04T22:53:49Z

Merge branch 'trunk' of https://github.com/shivsantham/kafka into trunk

commit 084466bfd1f8bb9c9a07ec4f8255a42dfc6b8768
Author: shivsantham 
Date:   2017-10-04T22:59:27Z

KAFKA-5972 Flatten SMT does not work with null 

[jira] [Commented] (KAFKA-4818) Implement transactional clients

2017-10-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195742#comment-16195742
 ] 

ASF GitHub Bot commented on KAFKA-4818:
---

GitHub user jaceklaskowski opened a pull request:

https://github.com/apache/kafka/pull/4038

[KAFKA-4818][FOLLOW-UP] Include isolationLevel in toString of FetchRequest

Include `isolationLevel` in `toString` of `FetchRequest`

This is a follow-up to https://issues.apache.org/jira/browse/KAFKA-4818.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jaceklaskowski/kafka KAFKA-4818-isolationLevel

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4038.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4038


commit 12e6367952c93326b687d22771882d732fd41cf3
Author: Jacek Laskowski 
Date:   2017-10-07T14:48:19Z

[KAFKA-4818][FOLLOW-UP] Include isolationLevel in toString of FetchRequest




> Implement transactional clients
> ---
>
> Key: KAFKA-4818
> URL: https://issues.apache.org/jira/browse/KAFKA-4818
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Apurva Mehta
> Fix For: 0.11.0.0
>
>
> This covers the implementation of the producer and consumer to support 
> transactions, as described in KIP-98: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+Exactly+Once+Delivery+and+Transactional+Messaging



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6055) Running tools on Windows fail due to misconfigured JVM config

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201126#comment-16201126
 ] 

ASF GitHub Bot commented on KAFKA-6055:
---

GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/4062

KAFKA-6055: Fix a JVM misconfiguration that affects Windows tools



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-6055

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4062.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4062


commit 7954c3db11fb2547ba66ba177de99393bdaf4fb6
Author: Vahid Hashemian 
Date:   2017-10-11T22:52:14Z

KAFKA-6055: Fix a JVM misconfiguration that affects Windows tools




> Running tools on Windows fail due to misconfigured JVM config
> -
>
> Key: KAFKA-6055
> URL: https://issues.apache.org/jira/browse/KAFKA-6055
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Blocker
> Fix For: 1.0.0
>
>
> This affects the current trunk and 1.0.0 RC0.
> When running any of the Windows commands under {{bin/windows}} the following 
> error is returned:
> {code}
> Missing +/- setting for VM option 'ExplicitGCInvokesConcurrent'
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.
> {code}
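
For reference, boolean HotSpot options must carry an explicit +/- prefix; the
scripts were evidently passing the bare name. A sketch of the correction (the
exact line in bin/windows may differ):

{code}
wrong: -XX:ExplicitGCInvokesConcurrent    (no +/- prefix, the JVM refuses to start)
right: -XX:+ExplicitGCInvokesConcurrent
{code}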



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6032) Unit Tests should be independent of locale settings

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201005#comment-16201005
 ] 

ASF GitHub Bot commented on KAFKA-6032:
---

GitHub user gilles-degols opened a pull request:

https://github.com/apache/kafka/pull/4061

KAFKA-6032: Unit Tests should be independent of locale settings



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gilles-degols/kafka 
KAFKA-6032-Unit-Tests-should-be-independent-of-locale-settings

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4061.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4061


commit d80207ec7a74cf5cffacdf2cf9cd7d43f60880a5
Author: Gilles Degols 
Date:   2017-10-11T21:30:06Z

Unit Tests should be independent of locale settings




> Unit Tests should be independent of locale settings
> ---
>
> Key: KAFKA-6032
> URL: https://issues.apache.org/jira/browse/KAFKA-6032
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 0.11.0.0
> Environment: Centos 7, Windows 10
> Locale = "fr_BE.UTF-8"
>Reporter: Gilles Degols
>Assignee: Gilles Degols
>Priority: Minor
>  Labels: newbie
>
> If the system has locale settings like "fr_BE.UTF-8", 3 unit tests will fail:
> 1. org.apache.kafka.common.utils.ShellTest > testRunProgramWithErrorReturn
> 2. org.apache.kafka.common.utils.ShellTest > 
> testAttemptToRunNonExistentProgram
> 3. org.apache.kafka.common.utils.UtilsTest > testFormatBytes
> They rely on string comparisons which will not work if the system throws an 
> error in another language, or if the float format is different ("," instead 
> of "."). 
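
The usual remedy is to pin the locale in the formatting code or the test rather
than relying on the platform default. A minimal sketch:

{code:java}
import java.util.Locale;

public class LocaleSafeFormat {
    public static void main(String[] args) {
        // Pin the locale so the decimal separator is always "."; under
        // fr_BE.UTF-8 the default-locale version would print "1,5 MB" and
        // break assertions that expect "1.5 MB".
        System.out.println(String.format(Locale.ROOT, "%.1f MB", 1536 / 1024.0));
    }
}
{code}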



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6053) NoSuchMethodError when creating ProducerRecord in upgrade system tests

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200593#comment-16200593
 ] 

ASF GitHub Bot commented on KAFKA-6053:
---

GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/4057

KAFKA-6053: Fix NoSuchMethodError when creating ProducerRecords with older 
client versions



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka 
KAFKA-6053-fix-no-such-method-error-in-producer-record

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4057.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4057


commit 221a96da1dc6221dd3f61786d5fc1119b6848a7f
Author: Apurva Mehta 
Date:   2017-10-11T16:58:44Z

Fix NoSuchMethodError when creating ProducerRecords with older client 
versions




> NoSuchMethodError when creating ProducerRecord in upgrade system tests
> --
>
> Key: KAFKA-6053
> URL: https://issues.apache.org/jira/browse/KAFKA-6053
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
>
> This patch https://github.com/apache/kafka/pull/4029 used a new constructor 
> for {{ProducerRecord}} which doesn't exist in older clients. Hence system 
> tests which use older clients fail with: 
> {noformat}
> Exception in thread "main" java.lang.NoSuchMethodError: 
> org.apache.kafka.clients.producer.ProducerRecord.(Ljava/lang/String;Ljava/lang/Integer;Ljava/lang/Long;Ljava/lang/Object;Ljava/lang/Object;)V
> at 
> org.apache.kafka.tools.VerifiableProducer.send(VerifiableProducer.java:232)
> at 
> org.apache.kafka.tools.VerifiableProducer.run(VerifiableProducer.java:462)
> at 
> org.apache.kafka.tools.VerifiableProducer.main(VerifiableProducer.java:500)
> {"timestamp":1507711495458,"name":"shutdown_complete"}
> {"timestamp":1507711495459,"name":"tool_data","sent":0,"acked":0,"target_throughput":1,"avg_throughput":0.0}
> amehta-macbook-pro:worker6 apurva$
> {noformat}
> A trivial fix is to only use the new constructor if a message create time is 
> explicitly passed to the VerifiableProducer, since older versions of the test 
> would never use it.
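
A sketch of that guard (names assumed, not the actual patch):

{code:java}
import org.apache.kafka.clients.producer.ProducerRecord;

public class RecordFactorySketch {
    // Only reach for the five-argument constructor when a create time was
    // explicitly supplied; older clients lack that overload entirely.
    static ProducerRecord<String, String> create(String topic, Integer partition,
                                                 Long createTime, String key, String value) {
        if (createTime != null) {
            return new ProducerRecord<>(topic, partition, createTime, key, value);
        }
        return new ProducerRecord<>(topic, partition, key, value);
    }
}
{code}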



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6057) Users forget `--execute` in the offset reset tool

2017-10-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202713#comment-16202713
 ] 

ASF GitHub Bot commented on KAFKA-6057:
---

GitHub user gilles-degols opened a pull request:

https://github.com/apache/kafka/pull/4069

KAFKA-6057: Users forget `--execute` in the offset reset tool



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gilles-degols/kafka 
kafka-6057-Users-forget-execute-in-the-offset-reset-tool

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4069.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4069


commit be80d959494460ea140b10ad83ec098488ce7d79
Author: Gilles Degols 
Date:   2017-10-12T22:14:33Z

Users forget --execute in the offset reset tool




> Users forget `--execute` in the offset reset tool
> -
>
> Key: KAFKA-6057
> URL: https://issues.apache.org/jira/browse/KAFKA-6057
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer, core, tools
>Reporter: Yeva Byzek
>Assignee: Gilles Degols
>  Labels: newbie
>
> Sometimes users try to reset offsets using {{kafka-consumer-groups}} but 
> forget the {{--execute}} parameter. If this is omitted, no action is 
> performed, but this is not conveyed to users. 
> This JIRA is to augment the tool such that, if the parameter is omitted, it 
> tells users that no action was performed because {{--execute}} was not 
> provided.
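
A sketch of the requested feedback (Java for illustration; the actual tool is
Scala, and the message wording is assumed):

{code:java}
public class ResetOffsetsFeedback {
    // Called after the reset plan has been computed but not applied.
    static void warnIfDryRun(boolean hasExecuteOption) {
        if (!hasExecuteOption) {
            System.out.println("WARN: No offsets were changed (dry run). " +
                    "Pass --execute to apply the new offsets.");
        }
    }
}
{code}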



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6023) ThreadCache#sizeBytes() should check overflow

2017-10-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16208994#comment-16208994
 ] 

ASF GitHub Bot commented on KAFKA-6023:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4041


> ThreadCache#sizeBytes() should check overflow
> -
>
> Key: KAFKA-6023
> URL: https://issues.apache.org/jira/browse/KAFKA-6023
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Ted Yu
>Assignee: siva santhalingam
>Priority: Minor
> Fix For: 1.1.0
>
>
> {code}
> long sizeBytes() {
> long sizeInBytes = 0;
> for (final NamedCache namedCache : caches.values()) {
> sizeInBytes += namedCache.sizeInBytes();
> }
> return sizeInBytes;
> }
> {code}
> The summation w.r.t. sizeInBytes may overflow.
> A check similar to the one done in size() should be performed.
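
A minimal sketch of the overflow-safe variant (same fields assumed; not
necessarily the merged fix):

{code}
long sizeBytes() {
    long sizeInBytes = 0;
    for (final NamedCache namedCache : caches.values()) {
        sizeInBytes += namedCache.sizeInBytes();
        if (sizeInBytes < 0) {
            return Long.MAX_VALUE; // clamp as soon as the running sum overflows
        }
    }
    return sizeInBytes;
}
{code}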



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6083) The Fetcher should add the InvalidRecordException as a cause to the KafkaException when invalid record is found.

2017-10-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16210374#comment-16210374
 ] 

ASF GitHub Bot commented on KAFKA-6083:
---

GitHub user efeg opened a pull request:

https://github.com/apache/kafka/pull/4093

[KAFKA-6083] The Fetcher should add the InvalidRecordException as a cause 
to the KafkaException when invalid record is found.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/efeg/kafka bug/KAFKA-6083

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4093.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4093


commit 88e07d3a3115c4342ad3714d4397ff39b326f12a
Author: Adem Efe Gencer 
Date:   2017-10-19T00:20:04Z

Add the InvalidRecordException as a cause to the KafkaException when 
invalid record is found.




> The Fetcher should add the InvalidRecordException as a cause to the 
> KafkaException when invalid record is found.
> 
>
> Key: KAFKA-6083
> URL: https://issues.apache.org/jira/browse/KAFKA-6083
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, consumer
>Affects Versions: 1.0.0
>Reporter: Jiangjie Qin
>Assignee: Adem Efe Gencer
>  Labels: newbie++
> Fix For: 1.0.1
>
>
> In the Fetcher, when an InvalidRecordException is thrown, we 
> convert it to a KafkaException; we should also add the InvalidRecordException 
> to it as the cause.
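
A sketch of the change (message text and exception package assumed):

{code:java}
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.record.InvalidRecordException;

public class FetcherConversionSketch {
    static void parseNextRecord() {
        try {
            // ... deserialize the next record from the fetched batch ...
        } catch (InvalidRecordException e) {
            // Keep the original exception as the cause instead of swallowing it.
            throw new KafkaException("Received invalid record", e);
        }
    }
}
{code}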



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6087) Scanning plugin.path needs to support relative symlinks

2017-10-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16210333#comment-16210333
 ] 

ASF GitHub Bot commented on KAFKA-6087:
---

GitHub user kkonstantine opened a pull request:

https://github.com/apache/kafka/pull/4092

KAFKA-6087: Scanning plugin.path needs to support relative symlinks.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kkonstantine/kafka 
KAFKA-6087-Scanning-plugin.path-needs-to-support-relative-symlinks

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4092.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4092


commit 1348ea60fcdf4a10ff2b27039b2dc7d5ad16141e
Author: Konstantine Karantasis 
Date:   2017-10-18T23:49:19Z

KAFKA-6087: Scanning plugin.path needs to support relative symlinks.




> Scanning plugin.path needs to support relative symlinks
> ---
>
> Key: KAFKA-6087
> URL: https://issues.apache.org/jira/browse/KAFKA-6087
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
> Fix For: 1.0.0, 0.11.0.2
>
>   Original Estimate: 6h
>  Remaining Estimate: 6h
>
> Discovery of Kafka Connect plugins supports symbolic links from within the 
> {{plugin.path}} locations, but this ability is restricted to absolute 
> symbolic links.
> It's essential to support relative symbolic links, as this is the most common 
> use case from within the plugin locations. 
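
The crux is that a relative link target must be resolved against the link's
parent directory. A minimal sketch using java.nio (paths assumed):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class RelativeSymlinkSketch {
    public static void main(String[] args) throws IOException {
        Path link = Paths.get("/opt/connect/plugins/my-connector"); // assumed layout
        if (Files.isSymbolicLink(link)) {
            Path target = Files.readSymbolicLink(link); // may be relative, e.g. "../shared/my-connector"
            // Resolve against the link's parent; otherwise a relative target is
            // interpreted against the process working directory.
            Path resolved = link.getParent().resolve(target).normalize();
            System.out.println("Scanning plugin location: " + resolved);
        }
    }
}
{code}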



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6083) The Fetcher should add the InvalidRecordException as a cause to the KafkaException when invalid record is found.

2017-10-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16210435#comment-16210435
 ] 

ASF GitHub Bot commented on KAFKA-6083:
---

Github user efeg closed the pull request at:

https://github.com/apache/kafka/pull/4093


> The Fetcher should add the InvalidRecordException as a cause to the 
> KafkaException when invalid record is found.
> 
>
> Key: KAFKA-6083
> URL: https://issues.apache.org/jira/browse/KAFKA-6083
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, consumer
>Affects Versions: 1.0.0
>Reporter: Jiangjie Qin
>Assignee: Adem Efe Gencer
>  Labels: newbie++
> Fix For: 1.0.1
>
>
> In the Fetcher, when an InvalidRecordException is thrown, we 
> convert it to a KafkaException; we should also add the InvalidRecordException 
> to it as the cause.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6083) The Fetcher should add the InvalidRecordException as a cause to the KafkaException when invalid record is found.

2017-10-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16210436#comment-16210436
 ] 

ASF GitHub Bot commented on KAFKA-6083:
---

GitHub user efeg reopened a pull request:

https://github.com/apache/kafka/pull/4093

KAFKA-6083: The Fetcher should add the InvalidRecordException as a cause to 
the KafkaException when invalid record is found.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/efeg/kafka bug/KAFKA-6083

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4093.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4093


commit 88e07d3a3115c4342ad3714d4397ff39b326f12a
Author: Adem Efe Gencer 
Date:   2017-10-19T00:20:04Z

Add the InvalidRecordException as a cause to the KafkaException when 
invalid record is found.




> The Fetcher should add the InvalidRecordException as a cause to the 
> KafkaException when invalid record is found.
> 
>
> Key: KAFKA-6083
> URL: https://issues.apache.org/jira/browse/KAFKA-6083
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, consumer
>Affects Versions: 1.0.0
>Reporter: Jiangjie Qin
>Assignee: Adem Efe Gencer
>  Labels: newbie++
> Fix For: 1.0.1
>
>
> In the Fetcher, when an InvalidRecordException is thrown, we 
> convert it to a KafkaException; we should also add the InvalidRecordException 
> to it as the cause.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6085) Streams rebalancing may cause a first batch of fetched records to be dropped

2017-10-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16210579#comment-16210579
 ] 

ASF GitHub Bot commented on KAFKA-6085:
---

Github user guozhangwang closed the pull request at:

https://github.com/apache/kafka/pull/4086


> Streams rebalancing may cause a first batch of fetched records to be dropped
> 
>
> Key: KAFKA-6085
> URL: https://issues.apache.org/jira/browse/KAFKA-6085
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.1
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 1.0.0
>
>
> This is a regression introduced in KAFKA-5152:
> Assuming you have one task without any state stores (and hence no restoration 
> needed for that task), and a rebalance happened in a {{records = 
> pollRequests(pollTimeMs);}} call:
> 1. We name this `pollRequests` call A. Within call A the rebalance will 
> happen, which puts the thread state from RUNNING to PARTITION_REVOKED, and 
> then from PARTITION_REVOKED to PARTITION_ASSIGNED. Assuming the same task gets 
> assigned again, this task will be in the initialized set of tasks but NOT in 
> the running tasks yet.
> 2. Within the same call A, a fetch request may be sent and a response with a 
> batch of records could be returned, and it will be returned from 
> `pollRequests`. At this time the thread state becomes PARTITION_ASSIGNED and 
> the task is not "running" yet.
> 3. Now the bug comes in this line:
> {{!records.isEmpty() && taskManager.hasActiveRunningTasks()}}
> Since the task is not in the active running set yet, this returned set of 
> records would be skipped. Effectively these records are dropped on the floor 
> and would never be consumed again.
> 4. In the next run loop, the same `pollRequest()` will be called again. Let's 
> call it B. After B is called we will set the thread state to RUNNING and put 
> the task in the running task set. But at this point the previous batch of 
> records will not be returned any more.
> So the bug lies in the fact that within a single run loop of the stream 
> thread, we may complete a rebalance with tasks assigned but not yet 
> initialized, AND we can fetch a bunch of records for that not-initialized 
> task and drop them on the floor.
> With further investigation I can confirm that the root cause of the new flaky 
> test https://issues.apache.org/jira/browse/KAFKA-5140 is also this bug. And a 
> recent PR https://github.com/apache/kafka/pull/4086 exposed this bug by 
> failing the reset integration test more frequently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6085) Streams rebalancing may cause a first batch of fetched records to be dropped

2017-10-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16210580#comment-16210580
 ] 

ASF GitHub Bot commented on KAFKA-6085:
---

GitHub user guozhangwang reopened a pull request:

https://github.com/apache/kafka/pull/4086

[WIP] KAFKA-6085: Pause all partitions before tasks are initialized

Mirror of #4085 against trunk. This PR contains two fixes (one major and 
one minor):

Major: on rebalance, pause all partitions instead of the partitions for 
tasks with state stores only, so that no records will be returned in the same 
`pollRecords()` call.

Minor: during the restoration phase, when thread state is still 
PARTITION_ASSIGNED, call consumer.poll with hard-coded pollMs = 0.
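
A sketch of the major fix in public-consumer terms (names assumed):

```java
import java.util.Set;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;

public class PauseOnRebalanceSketch {
    // Pause everything on assignment so the same poll cannot hand back records
    // for tasks that are assigned but not yet initialized.
    static void onPartitionsAssigned(Consumer<byte[], byte[]> consumer) {
        consumer.pause(consumer.assignment());
    }

    // Resume only the partitions whose tasks have been initialized and are running.
    static void onTasksRunning(Consumer<byte[], byte[]> consumer, Set<TopicPartition> running) {
        consumer.resume(running);
    }
}
```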

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KHotfix-restore-only

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4086.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4086


commit 62bf4784779f7379e849289c4456363f352cb850
Author: Guozhang Wang 
Date:   2017-10-12T21:18:46Z

dummy

commit 5726e39cba8a79e6858e8b932c5116b60f2ae314
Author: Guozhang Wang 
Date:   2017-10-12T21:18:46Z

dummy

fix issues

Remove debugging information

commit 8214a3ee340791eb18f7e5fa77f2510470cf977a
Author: Matthias J. Sax 
Date:   2017-10-17T00:38:31Z

MINOR: update exception message for KIP-120

Author: Matthias J. Sax 

Reviewers: Guozhang Wang 

Closes #4078 from mjsax/hotfix-streams

commit 637b76342801cf4a32c9e65aa89bfe0bf76c24a7
Author: Jason Gustafson 
Date:   2017-10-17T00:49:35Z

MINOR: A few javadoc fixes

Author: Jason Gustafson 

Reviewers: Guozhang Wang 

Closes #4076 from hachikuji/javadoc-fixes

commit f57c505f6e714b891a6d30e5501b463f14316708
Author: Damian Guy 
Date:   2017-10-17T01:01:32Z

MINOR: add equals to SessionWindows

Author: Damian Guy 

Reviewers: Guozhang Wang , Matthias J. 
Sax, Bill Bejeck 

Closes #4074 from dguy/minor-session-window-equals

commit 2f1dd0d4da24eee352f20436902d825d7851c45b
Author: Guozhang Wang 
Date:   2017-10-18T01:27:35Z

normal poll with zero during restoration

commit 043f28ac89b50f9145ac719449f03a427376dcde
Author: Guozhang Wang 
Date:   2017-10-19T04:58:36Z

recheck state change




> Streams rebalancing may cause a first batch of fetched records to be dropped
> 
>
> Key: KAFKA-6085
> URL: https://issues.apache.org/jira/browse/KAFKA-6085
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.1
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 1.0.0
>
>
> This is a regression introduced in KAFKA-5152:
> Assuming you have one task without any state stores (and hence no restoration 
> needed for that task), and a rebalance happened in a {{records = 
> pollRequests(pollTimeMs);}} call:
> 1. We name this `pollRequests` call A. Within call A the rebalance will 
> happen, which puts the thread state from RUNNING to PARTITION_REVOKED, and 
> then from PARTITION_REVOKED to PARTITION_ASSIGNED. Assuming the same task gets 
> assigned again, this task will be in the initialized set of tasks but NOT in 
> the running tasks yet.
> 2. Within the same call A, a fetch request may be sent and a response with a 
> batch of records could be returned, and it will be returned from 
> `pollRequests`. At this time the thread state becomes PARTITION_ASSIGNED and 
> the task is not "running" yet.
> 3. Now the bug comes in this line:
> {{!records.isEmpty() && taskManager.hasActiveRunningTasks()}}
> Since the task is not in the active running set yet, this returned set of 
> records would be skipped. Effectively these records are dropped on the floor 
> and would never be consumed again.
> 4. In the next run loop, the same `pollRequest()` will be called again. Let's 
> call it B. After B is called we will set the thread state to RUNNING and put 
> the task in the running task set. But at this point the previous batch of 
> records will not be returned any more.
> So the bug lies in the fact that within a single run loop of the stream 
> thread, we may complete a rebalance with tasks assigned but not yet 
> 

[jira] [Commented] (KAFKA-6085) Streams rebalancing may cause a first batch of fetched records to be dropped

2017-10-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16210581#comment-16210581
 ] 

ASF GitHub Bot commented on KAFKA-6085:
---

Github user guozhangwang closed the pull request at:

https://github.com/apache/kafka/pull/4086


> Streams rebalancing may cause a first batch of fetched records to be dropped
> 
>
> Key: KAFKA-6085
> URL: https://issues.apache.org/jira/browse/KAFKA-6085
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.1
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 1.0.0
>
>
> This is a regression introduced in KAFKA-5152:
> Assuming you have one task without any state stores (and hence no restoration 
> needed for that task), and a rebalance happened in a {{records = 
> pollRequests(pollTimeMs);}} call:
> 1. We name this `pollRequests` call A. Within call A the rebalance will 
> happen, which puts the thread state from RUNNING to PARTITION_REVOKED, and 
> then from PARTITION_REVOKED to PARTITION_ASSIGNED. Assuming the same task gets 
> assigned again, this task will be in the initialized set of tasks but NOT in 
> the running tasks yet.
> 2. Within the same call A, a fetch request may be sent and a response with a 
> batch of records could be returned, and it will be returned from 
> `pollRequests`. At this time the thread state becomes PARTITION_ASSIGNED and 
> the task is not "running" yet.
> 3. Now the bug comes in this line:
> {{!records.isEmpty() && taskManager.hasActiveRunningTasks()}}
> Since the task is not in the active running set yet, this returned set of 
> records would be skipped. Effectively these records are dropped on the floor 
> and would never be consumed again.
> 4. In the next run loop, the same `pollRequest()` will be called again. Let's 
> call it B. After B is called we will set the thread state to RUNNING and put 
> the task in the running task set. But at this point the previous batch of 
> records will not be returned any more.
> So the bug lies in the fact that within a single run loop of the stream 
> thread, we may complete a rebalance with tasks assigned but not yet 
> initialized, AND we can fetch a bunch of records for that not-initialized 
> task and drop them on the floor.
> With further investigation I can confirm that the root cause of the new flaky 
> test https://issues.apache.org/jira/browse/KAFKA-5140 is also this bug. And a 
> recent PR https://github.com/apache/kafka/pull/4086 exposed this bug by 
> failing the reset integration test more frequently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5163) Support replicas movement between log directories (KIP-113)

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207849#comment-16207849
 ] 

ASF GitHub Bot commented on KAFKA-5163:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3874


> Support replicas movement between log directories (KIP-113)
> ---
>
> Key: KAFKA-5163
> URL: https://issues.apache.org/jira/browse/KAFKA-5163
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
> Fix For: 1.1.0
>
>
> See 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-113%3A+Support+replicas+movement+between+log+directories
>  for motivation and design.
> Note that part 1 was merged via KAFKA-5694.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6070) ducker-ak: add ipaddress and enum34 dependencies to docker image

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16208476#comment-16208476
 ] 

ASF GitHub Bot commented on KAFKA-6070:
---

GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/4084

KAFKA-6070: add ipaddress and enum34 dependencies to docker image
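
A plausible Dockerfile addition (hedged; the actual file and any version pins
may differ):

```
RUN pip install ipaddress enum34
```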



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-6070

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4084.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4084


commit 8882bfe65e49bdae9b5b52f4fa5bdc3bc46eaa4f
Author: Colin P. Mccabe 
Date:   2017-10-17T21:55:57Z

KAFKA-6070: add ipaddress and enum34 dependencies to docker image




> ducker-ak: add ipaddress and enum34 dependencies to docker image
> 
>
> Key: KAFKA-6070
> URL: https://issues.apache.org/jira/browse/KAFKA-6070
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>
> ducker-ak: add ipaddress and enum34 dependencies to docker image



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5829) Speedup broker startup after unclean shutdown by reducing unnecessary snapshot files deletion

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16208715#comment-16208715
 ] 

ASF GitHub Bot commented on KAFKA-5829:
---

Github user lindong28 closed the pull request at:

https://github.com/apache/kafka/pull/3782


> Speedup broker startup after unclean shutdown by reducing unnecessary 
> snapshot files deletion
> -
>
> Key: KAFKA-5829
> URL: https://issues.apache.org/jira/browse/KAFKA-5829
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Ismael Juma
>Priority: Blocker
> Fix For: 1.0.0
>
>
> The current Kafka implementation causes slow startup after an unclean 
> shutdown. The time to load a partition can be 10X or more of what it 
> actually needs. Here is the explanation with an example:
> - Say we have a partition of 20 segments, where each segment has 250 messages 
> starting with offset 0, and each message is 1 MB.
> - The broker experiences a hard kill and the index file of the first segment 
> is corrupted.
> - When the broker starts up and loads the first segment, it realizes that the 
> index of the first segment is corrupted, so it calls `log.recoverSegment(...)` 
> to recover this segment. This method calls 
> `stateManager.truncateAndReload(...)`, which deletes the snapshot files whose 
> offset is larger than the base offset of the first segment. Thus all snapshot 
> files are deleted.
> - To rebuild the snapshot files, `log.loadSegmentFiles(...)` will have to 
> read every message in this partition even if their log and index files are 
> not corrupted. This will increase the time to load this partition by more 
> than an order of magnitude.
> In order to address this issue, one simple solution is not to delete the 
> snapshot files when only the index files need to be re-built. More 
> specifically, we should not need to re-build the producer state offset file 
> unless the log file itself is corrupted or truncated.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6051) ReplicaFetcherThread should close the ReplicaFetcherBlockingSend earlier on shutdown

2017-10-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16209685#comment-16209685
 ] 

ASF GitHub Bot commented on KAFKA-6051:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4056


> ReplicaFetcherThread should close the ReplicaFetcherBlockingSend earlier on 
> shutdown
> 
>
> Key: KAFKA-6051
> URL: https://issues.apache.org/jira/browse/KAFKA-6051
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0, 0.10.0.1, 0.10.1.0, 0.10.1.1, 0.10.2.0, 
> 0.10.2.1, 0.11.0.0
>Reporter: Maytee Chinavanichkit
> Fix For: 1.1.0
>
>
> The ReplicaFetcherBlockingSend works as designed and will block until it is 
> able to get data. This becomes a problem when we are gracefully shutting down 
> a broker. The controller will attempt to shut down the fetchers and elect new 
> leaders. When the last fetched partition is removed as part of the 
> {{replicaManager.becomeLeaderOrFollower}} call, it will proceed to shut down 
> any idle ReplicaFetcherThread. The shutdown process here can block until 
> the last fetch request completes. This blocking delay is a big problem 
> because the {{replicaStateChangeLock}} and {{mapLock}} in 
> {{AbstractFetcherManager}} are still held, causing latency spikes on multiple 
> brokers.
> At this point in time, we do not need the last response as the fetcher is 
> shutting down. We should close the leaderEndpoint early during 
> {{initiateShutdown()}} instead of after {{super.shutdown()}}.
> For example we see here the shutdown blocked the broker from processing more 
> replica changes for ~500 ms 
> {code}
> [2017-09-01 18:11:42,879] INFO [ReplicaFetcherThread-0-2], Shutting down 
> (kafka.server.ReplicaFetcherThread) 
> [2017-09-01 18:11:43,314] INFO [ReplicaFetcherThread-0-2], Stopped 
> (kafka.server.ReplicaFetcherThread) 
> [2017-09-01 18:11:43,314] INFO [ReplicaFetcherThread-0-2], Shutdown completed 
> (kafka.server.ReplicaFetcherThread)
> {code}
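
A sketch of the proposed ordering (Java rendering of the Scala thread; names
taken from the issue text, shape assumed):

{code:java}
public class ReplicaFetcherShutdownSketch {
    private final AutoCloseable leaderEndpoint; // the ReplicaFetcherBlockingSend

    ReplicaFetcherShutdownSketch(AutoCloseable leaderEndpoint) {
        this.leaderEndpoint = leaderEndpoint;
    }

    // Close the blocking send while initiating shutdown so any in-flight fetch
    // wakes up immediately, instead of closing it only after the thread has
    // fully stopped (which could block for the duration of the last fetch).
    boolean initiateShutdown() throws Exception {
        boolean justShutdown = true; // stands in for super.initiateShutdown()
        if (justShutdown) {
            leaderEndpoint.close();
        }
        return justShutdown;
    }
}
{code}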



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6071) Use ZookeeperClient in LogManager

2017-10-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16209796#comment-16209796
 ] 

ASF GitHub Bot commented on KAFKA-6071:
---

GitHub user omkreddy opened a pull request:

https://github.com/apache/kafka/pull/4089

KAFKA-6071: Use ZookeeperClient in LogManager



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/omkreddy/kafka KAFKA-6071-ZK-LOGMANAGER

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4089.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4089


commit 0f19b38f702094f3543a490f22104a6f275c38a7
Author: Manikumar Reddy 
Date:   2017-10-18T18:24:37Z

KAFKA-6071: Use ZookeeperClient in LogManager




> Use ZookeeperClient in LogManager 
> --
>
> Key: KAFKA-6071
> URL: https://issues.apache.org/jira/browse/KAFKA-6071
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 1.1.0
>Reporter: Jun Rao
>Assignee: Manikumar
>
> We want to replace the usage of ZkUtils in LogManager with ZookeeperClient.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6042) Kafka Request Handler deadlocks and brings down the cluster.

2017-10-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212596#comment-16212596
 ] 

ASF GitHub Bot commented on KAFKA-6042:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/4103

KAFKA-6042: Avoid deadlock between two groups with delayed operations



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-6042-group-deadlock

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4103.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4103


commit 80039370b71bc0efd77d1f4cb46b855bfea45362
Author: Rajini Sivaram 
Date:   2017-10-20T12:21:46Z

KAFKA-6042: Avoid deadlock between two groups with delayed operations




> Kafka Request Handler deadlocks and brings down the cluster.
> 
>
> Key: KAFKA-6042
> URL: https://issues.apache.org/jira/browse/KAFKA-6042
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0, 0.11.0.1, 1.0.0
> Environment: kafka version: 0.11.0.1
> client versions: 0.8.2.1-0.10.2.1
> platform: aws (eu-west-1a)
> nodes: 36 x r4.xlarge
> disk storage: 2.5 tb per node (~73% usage per node)
> topics: 250
> number of partitions: 48k (approx)
> os: ubuntu 14.04
> jvm: Java(TM) SE Runtime Environment (build 1.8.0_131-b11), Java HotSpot(TM) 
> 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Ben Corlett
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 1.0.0, 0.11.0.2
>
> Attachments: thread_dump.txt.gz
>
>
> We have been experiencing a deadlock that happens on one particular server 
> within our cluster. This happens multiple times a week currently. It first 
> started happening when we upgraded to 0.11.0.0. Sadly 0.11.0.1 failed to 
> resolve the issue.
> Sequence of events:
> At a seemingly random time broker 125 goes into a deadlock. As soon as it is 
> deadlocked it will remove all the ISRs for any partition it is the leader 
> for.
> [2017-10-10 00:06:10,061] INFO Partition [XX,24] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,073] INFO Partition [XX,974] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,079] INFO Partition [XX,64] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,081] INFO Partition [XX,21] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,084] INFO Partition [XX,12] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,085] INFO Partition [XX,61] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,086] INFO Partition [XX,53] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,088] INFO Partition [XX,27] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,090] INFO Partition [XX,182] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,091] INFO Partition [XX,16] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> 
> The other nodes fail to connect to node 125: 
> [2017-10-10 00:08:42,318] WARN [ReplicaFetcherThread-0-125]: Error in fetch 
> to broker 125, request (type=FetchRequest, replicaId=101, maxWait=500, 
> minBytes=1, maxBytes=10485760, fetchData={XX-94=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-22=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-58=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-11=(offset=78932482, 
> logStartOffset=50881481, maxBytes=1048576), XX-55=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-19=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-91=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-5=(offset=903857106, 
> logStartOffset=0, maxBytes=1048576), XX-80=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-88=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-34=(offset=308, 
> logStartOffset=308, maxBytes=1048576), XX-7=(offset=369990, 
> logStartOffset=369990, maxBytes=1048576), XX-0=(offset=57965795, 
> logStartOffset=0, maxBytes=1048576)}) (kafka.server.ReplicaFetcherThread)
> java.io.IOException: 
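The pattern behind a report like this is two threads acquiring the same pair of 
locks in opposite order: each group coordinator thread holds its own group's lock 
while completing a delayed operation that needs the other group's lock. A minimal 
sketch of that pattern (hypothetical names, not the coordinator's actual code):

{code}
// Two threads lock the same pair of objects in opposite order - the
// shape of the deadlock this PR avoids. Hypothetical names throughout.
object DeadlockSketch extends App {
  val groupA = new Object
  val groupB = new Object

  def completeDelayedOp(own: Object, other: Object): Unit =
    own.synchronized {
      Thread.sleep(100)      // widen the race window for the demo
      other.synchronized {
        // e.g. purge the other group's delayed operations while holding both locks
      }
    }

  new Thread(() => completeDelayedOp(groupA, groupB)).start()
  new Thread(() => completeDelayedOp(groupB, groupA)).start()
  // Both threads block forever: each holds the lock the other needs.
}
{code}

The usual remedies are a global lock order or completing delayed operations 
without holding the first group's lock.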

[jira] [Commented] (KAFKA-5140) Flaky ResetIntegrationTest

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215724#comment-16215724
 ] 

ASF GitHub Bot commented on KAFKA-5140:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4095


> Flaky ResetIntegrationTest
> --
>
> Key: KAFKA-5140
> URL: https://issues.apache.org/jira/browse/KAFKA-5140
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 0.10.2.0
>Reporter: Matthias J. Sax
>Assignee: Guozhang Wang
> Fix For: 0.11.0.0
>
>
> {noformat}
> org.apache.kafka.streams.integration.ResetIntegrationTest > 
> testReprocessingFromScratchAfterResetWithIntermediateUserTopic FAILED
> java.lang.AssertionError: 
> Expected: <[KeyValue(2986681642095, 1), KeyValue(2986681642055, 1), 
> KeyValue(2986681642075, 1), KeyValue(2986681642035, 1), 
> KeyValue(2986681642095, 1), KeyValue(2986681642055, 1), 
> KeyValue(2986681642115, 1), KeyValue(2986681642075, 1), 
> KeyValue(2986681642075, 2), KeyValue(2986681642095, 2), 
> KeyValue(2986681642115, 1), KeyValue(2986681642135, 1), 
> KeyValue(2986681642095, 2), KeyValue(2986681642115, 2), 
> KeyValue(2986681642155, 1), KeyValue(2986681642135, 1), 
> KeyValue(2986681642115, 2), KeyValue(2986681642135, 2), 
> KeyValue(2986681642155, 1), KeyValue(2986681642175, 1), 
> KeyValue(2986681642135, 2), KeyValue(2986681642155, 2), 
> KeyValue(2986681642175, 1), KeyValue(2986681642195, 1), 
> KeyValue(2986681642135, 3), KeyValue(2986681642155, 2), 
> KeyValue(2986681642175, 2), KeyValue(2986681642195, 1), 
> KeyValue(2986681642155, 3), KeyValue(2986681642175, 2), 
> KeyValue(2986681642195, 2), KeyValue(2986681642155, 3), 
> KeyValue(2986681642175, 3), KeyValue(2986681642195, 2), 
> KeyValue(2986681642155, 4), KeyValue(2986681642175, 3), 
> KeyValue(2986681642195, 3)]>
>  but: was <[KeyValue(2986681642095, 1), KeyValue(2986681642055, 1), 
> KeyValue(2986681642075, 1), KeyValue(2986681642035, 1), 
> KeyValue(2986681642095, 1), KeyValue(2986681642055, 1), 
> KeyValue(2986681642115, 1), KeyValue(2986681642075, 1), 
> KeyValue(2986681642075, 2), KeyValue(2986681642095, 2), 
> KeyValue(2986681642115, 1), KeyValue(2986681642135, 1), 
> KeyValue(2986681642095, 2), KeyValue(2986681642115, 2), 
> KeyValue(2986681642155, 1), KeyValue(2986681642135, 1), 
> KeyValue(2986681642115, 2), KeyValue(2986681642135, 2), 
> KeyValue(2986681642155, 1), KeyValue(2986681642175, 1), 
> KeyValue(2986681642135, 2), KeyValue(2986681642155, 2), 
> KeyValue(2986681642175, 1), KeyValue(2986681642195, 1), 
> KeyValue(2986681642135, 3), KeyValue(2986681642155, 2), 
> KeyValue(2986681642175, 2), KeyValue(2986681642195, 1), 
> KeyValue(2986681642155, 3), KeyValue(2986681642175, 2), 
> KeyValue(2986681642195, 2), KeyValue(2986681642155, 3)]>
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
> at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterResetWithIntermediateUserTopic(ResetIntegrationTest.java:190)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6096) Add concurrent tests to exercise all paths in group/transaction managers

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215941#comment-16215941
 ] 

ASF GitHub Bot commented on KAFKA-6096:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/4122

KAFKA-6096: Add multi-threaded tests for group coordinator, txn manager



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-6096-deadlock-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4122.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4122


commit 5e39e73da84c0057ae4815b066cbc6e9113bc608
Author: Rajini Sivaram 
Date:   2017-10-23T20:59:04Z

KAFKA-6096: Add multi-threaded tests for group coordinator, txn manager




> Add concurrent tests to exercise all paths in group/transaction managers
> 
>
> Key: KAFKA-6096
> URL: https://issues.apache.org/jira/browse/KAFKA-6096
> Project: Kafka
>  Issue Type: Test
>  Components: core
>Reporter: Rajini Sivaram
>Assignee: Jason Gustafson
> Fix For: 1.1.0
>
>
> We don't have enough tests to test locking/deadlocks in GroupMetadataManager 
> and TransactionManager. Since we have had a lot of deadlocks (KAFKA-5970, 
> KAFKA-6042 etc.) which were not detected during testing, we should add more 
> mock tests with concurrency to verify the locking.
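As a rough illustration of what such a test looks like, a sketch of a harness 
(hypothetical, not the PR's code) that fires operations at a shared manager from 
several threads and turns a latent deadlock into a test failure instead of a hang:

{code}
import java.util.concurrent.{Executors, TimeUnit}

object ConcurrencyHarness {
  // Run all ops in parallel; if they deadlock, the pool never drains and
  // the assertion fails after the timeout instead of hanging the build.
  def runConcurrently(ops: Seq[() => Unit], timeoutSeconds: Long = 15): Unit = {
    val pool = Executors.newFixedThreadPool(ops.size)
    ops.foreach(op => pool.submit(new Runnable { def run(): Unit = op() }))
    pool.shutdown()
    assert(pool.awaitTermination(timeoutSeconds, TimeUnit.SECONDS), "possible deadlock")
  }
}
{code}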



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6105) group.id is not picked by kafka.tools.EndToEndLatency

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214757#comment-16214757
 ] 

ASF GitHub Bot commented on KAFKA-6105:
---

GitHub user cnZach opened a pull request:

https://github.com/apache/kafka/pull/4115

KAFKA-6105: load client properties in proper order for 
kafka.tools.EndToEndLatency

Currently, the property file is loaded first, and later an auto-generated 
group.id is used:
consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group-" + 
System.currentTimeMillis())
so even if the user gives the group.id in a property file, it is not picked up.

We need to load client properties in the proper order, so that the user can 
specify group.id and other properties; only the properties provided in the 
argument list are excluded.
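A minimal sketch of that ordering (assuming a propsFile option and the client 
Utils.loadProps helper): put the tool's defaults in first, then overlay the 
user's file so user-supplied keys such as group.id win:

{code}
// Sketch only: defaults first, user file second, so the file overrides them.
val consumerProps = new java.util.Properties()
consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group-" + System.currentTimeMillis())
propsFile.foreach(file => consumerProps.putAll(Utils.loadProps(file)))
{code}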

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cnZach/kafka cnZach_KAFKA-6105

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4115.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4115


commit d83f84e14c556fffddde2a74469e5be43fc99b10
Author: Yuexin Zhang 
Date:   2017-10-23T07:13:10Z

load client properties in the proper order, so that the user can specify 
group.id and other properties; only the properties provided in the 
argument list are excluded




> group.id is not picked by kafka.tools.EndToEndLatency
> -
>
> Key: KAFKA-6105
> URL: https://issues.apache.org/jira/browse/KAFKA-6105
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.11.0.0
>Reporter: Yuexin Zhang
>
> As per these lines:
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/EndToEndLatency.scala#L64-L67
> the property file is loaded first, and later an auto-generated group.id is 
> used:
> consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group-" + 
> System.currentTimeMillis())
> so even if the user gives the group.id in a property file, it is not picked up.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5574) kafka-consumer-perf-test.sh report header has one less column in show-detailed-stats mode

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214758#comment-16214758
 ] 

ASF GitHub Bot commented on KAFKA-5574:
---

Github user cnZach closed the pull request at:

https://github.com/apache/kafka/pull/3512


> kafka-consumer-perf-test.sh report header has one less column in 
> show-detailed-stats mode
> -
>
> Key: KAFKA-5574
> URL: https://issues.apache.org/jira/browse/KAFKA-5574
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0, 0.10.0.0
>Reporter: Yuexin Zhang
>
> time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec
> 2017-07-09 21:40:40:369, 0, 0.1492, 2.6176, 5000, 87719.2982
> 2017-07-09 21:40:40:386, 0, 0.2983, 149.0479, 1, 500.
> 2017-07-09 21:40:40:387, 0, 0.4473, 149.0812, 15000, 500.
> there's one more column between "time" and "data.consumed.in.MB"; it's 
> currently set to 0:
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/ConsumerPerformance.scala#L158
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/ConsumerPerformance.scala#L175
> is it a thread id? what is this id used for?
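Whatever that field turns out to be, the immediate fix is to print one header 
label per data column so they line up; a sketch, assuming the extra field is an 
id (it is printed as a hard-coded 0 today):

{code}
// Sketch: six header labels for six printed fields ("id" is an assumed name).
println("time, id, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec")
{code}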



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6101) Reconnecting to broker does not exponentially backoff

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215148#comment-16215148
 ] 

ASF GitHub Bot commented on KAFKA-6101:
---

Github user tedyu closed the pull request at:

https://github.com/apache/kafka/pull/4108


> Reconnecting to broker does not exponentially backoff
> -
>
> Key: KAFKA-6101
> URL: https://issues.apache.org/jira/browse/KAFKA-6101
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
>Reporter: Sean Rohead
> Attachments: 6101.v2.txt, 6101.v3.txt, text.html
>
>
> I am using com.typesafe.akka:akka-stream-kafka:0.17 which relies on 
> kafka-clients:0.11.0.0.
> I have set the reconnect.backoff.max.ms property to 6.
> When I start the application without kafka running, I see a flood of the 
> following log message:
> [warn] o.a.k.c.NetworkClient - Connection to node -1 could not be 
> established. Broker may not be available.
> The log messages occur several times a second and the frequency of these 
> messages does not decrease over time as would be expected if exponential 
> backoff was working properly.
> I set a breakpoint in the debugger in ClusterConnectionStates:188 and noticed 
> that every time this breakpoint is hit, nodeState.failedAttempts is always 0. 
> This is why the delay does not increase exponentially. It also appears that 
> every time the breakpoint is hit, it is on a different instance, so even 
> though the number of failedAttempts is incremented, we never get the 
> breakpoint for the same instance more than one time.
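For reference, the expected behaviour is the standard exponential formula: the 
delay doubles per failed attempt up to reconnect.backoff.max.ms, so a counter 
stuck at 0 means the delay never grows. A simplified sketch (the real client 
also applies jitter):

{code}
// Sketch of exponential reconnect backoff (simplified, no jitter).
// With failedAttempts always 0 this always returns baseMs - hence the flood.
def reconnectBackoffMs(baseMs: Long, maxMs: Long, failedAttempts: Int): Long =
  math.min(maxMs, baseMs * (1L << math.min(failedAttempts, 30)))

// Example values: reconnectBackoffMs(50, 60000, 0) == 50
//                 reconnectBackoffMs(50, 60000, 5) == 1600
{code}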



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6101) Reconnecting to broker does not exponentially backoff

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215160#comment-16215160
 ] 

ASF GitHub Bot commented on KAFKA-6101:
---

GitHub user tedyu opened a pull request:

https://github.com/apache/kafka/pull/4118

KAFKA-6101 Reconnecting to broker does not exponentially backoff



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tedyu/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4118.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4118


commit cd928d3867e42774bb167b3aaf11ca6a8dd8d48f
Author: tedyu 
Date:   2017-10-23T13:49:27Z

KAFKA-6101 Reconnecting to broker does not exponentially backoff




> Reconnecting to broker does not exponentially backoff
> -
>
> Key: KAFKA-6101
> URL: https://issues.apache.org/jira/browse/KAFKA-6101
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
>Reporter: Sean Rohead
> Attachments: 6101.v2.txt, 6101.v3.txt, text.html
>
>
> I am using com.typesafe.akka:akka-stream-kafka:0.17 which relies on 
> kafka-clients:0.11.0.0.
> I have set the reconnect.backoff.max.ms property to 6.
> When I start the application without kafka running, I see a flood of the 
> following log message:
> [warn] o.a.k.c.NetworkClient - Connection to node -1 could not be 
> established. Broker may not be available.
> The log messages occur several times a second and the frequency of these 
> messages does not decrease over time as would be expected if exponential 
> backoff was working properly.
> I set a breakpoint in the debugger in ClusterConnectionStates:188 and noticed 
> that every time this breakpoint is hit, nodeState.failedAttempts is always 0. 
> This is why the delay does not increase exponentially. It also appears that 
> every time the breakpoint is hit, it is on a different instance, so even 
> though the number of failedAttempts is incremented, we never get the 
> breakpoint for the same instance more than one time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6084) ReassignPartitionsCommand should propagate JSON parsing failures

2017-10-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16210101#comment-16210101
 ] 

ASF GitHub Bot commented on KAFKA-6084:
---

GitHub user viktorsomogyi opened a pull request:

https://github.com/apache/kafka/pull/4090

[KAFKA-6084] Propagate JSON parsing errors in ReassignPartitionsCommand



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/viktorsomogyi/kafka KAFKA-6084

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4090.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4090


commit 3f1a24014e022ad351ea669ba73efa645ccca5f3
Author: Viktor Somogyi 
Date:   2017-10-14T11:16:35Z

[KAFKA-6084] Propagate JSON parsing errors in ReassignPartitionsCommand




> ReassignPartitionsCommand should propagate JSON parsing failures
> 
>
> Key: KAFKA-6084
> URL: https://issues.apache.org/jira/browse/KAFKA-6084
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Affects Versions: 0.11.0.0
>Reporter: Viktor Somogyi
>Assignee: Viktor Somogyi
>Priority: Minor
>  Labels: easyfix, newbie
> Attachments: Screen Shot 2017-10-18 at 23.31.22.png
>
>
> Basically looking at Json.scala it will always swallow any parsing errors:
> {code}
>   def parseFull(input: String): Option[JsonValue] =
> try Option(mapper.readTree(input)).map(JsonValue(_))
> catch { case _: JsonProcessingException => None }
> {code}
> However, sometimes it is easy to figure out the problem by simply looking at 
> the JSON, while in other cases it is not trivial: some invisible characters 
> (like a byte order mark) won't be displayed by most text editors, and people 
> can spend a long time figuring out what the problem is.
> As Jackson provides a really detailed exception about what failed and how, it 
> is easy to propagate the failure to the user.
> As an example I attached a BOM prefixed JSON which fails with the following 
> error which is very counterintuitive:
> {noformat}
> [root@localhost ~]# kafka-reassign-partitions --zookeeper localhost:2181 
> --reassignment-json-file /root/increase-replication-factor.json --execute
> Partitions reassignment failed due to Partition reassignment data file 
> /root/increase-replication-factor.json is empty
> kafka.common.AdminCommandFailedException: Partition reassignment data file 
> /root/increase-replication-factor.json is empty
> at 
> kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:120)
> at 
> kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:52)
> at kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)
> ...
> {noformat}
> In case of the above error it would be much better to see what fails exactly:
> {noformat}
> kafka.common.AdminCommandFailedException: Admin command failed
>   at 
> kafka.admin.ReassignPartitionsCommand$.parsePartitionReassignmentData(ReassignPartitionsCommand.scala:267)
>   at 
> kafka.admin.ReassignPartitionsCommand$.parseAndValidate(ReassignPartitionsCommand.scala:275)
>   at 
> kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:197)
>   at 
> kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:193)
>   at 
> kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:64)
>   at 
> kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)
> Caused by: com.fasterxml.jackson.core.JsonParseException: Unexpected 
> character ('' (code 65279 / 0xfeff)): expected a valid value (number, 
> String, array, object, 'true', 'false' or 'null')
>  at [Source: (String)"{"version":1,
>   "partitions":[
>{"topic": "test1", "partition": 0, "replicas": [1,2]},
>{"topic": "test2", "partition": 1, "replicas": [2,3]}
> ]}"; line: 1, column: 2]
>   at 
> com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1798)
>   at 
> com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:663)
>   at 
> com.fasterxml.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:561)
>   at 
> com.fasterxml.jackson.core.json.ReaderBasedJsonParser._handleOddValue(ReaderBasedJsonParser.java:1892)
>   at 
> com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:747)
>   at 
> com.fasterxml.jackson.databind.ObjectMapper._readTreeAndClose(ObjectMapper.java:4030)
>   at 
> 
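A sketch of the propagating variant the ticket asks for, next to the 
swallow-everything parseFull quoted above (the method name tryParseFull and the 
Either shape are assumptions, not the merged API):

{code}
// Sketch: surface Jackson's detailed JsonProcessingException instead of
// collapsing every failure to None. Empty-input handling is elided; in
// older Jackson versions readTree can return null for empty input.
def tryParseFull(input: String): Either[JsonProcessingException, JsonValue] =
  try Right(JsonValue(mapper.readTree(input)))
  catch { case e: JsonProcessingException => Left(e) }
{code}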

[jira] [Commented] (KAFKA-6087) Scanning plugin.path needs to support relative symlinks

2017-10-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211764#comment-16211764
 ] 

ASF GitHub Bot commented on KAFKA-6087:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4092


> Scanning plugin.path needs to support relative symlinks
> ---
>
> Key: KAFKA-6087
> URL: https://issues.apache.org/jira/browse/KAFKA-6087
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
>Priority: Blocker
> Fix For: 1.0.0, 0.11.0.2, 1.1.0
>
>   Original Estimate: 6h
>  Remaining Estimate: 6h
>
> Discovery of Kafka Connect plugins supports symbolic links from within the 
> {{plugin.path}} locations, but this ability is restricted to absolute 
> symbolic links.
> It's essential to support relative symbolic links, as this is the most common 
> use case from within the plugin locations. 
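The resolution the fix needs is small: a relative symlink target must be 
resolved against the link's parent directory before it is scanned. A sketch 
using the Java NIO calls involved (the helper name is illustrative):

{code}
import java.nio.file.{Files, Path}

// Sketch: follow a plugin-path symlink whether its target is absolute or relative.
def resolveSymlink(link: Path): Path = {
  val target = Files.readSymbolicLink(link)
  if (target.isAbsolute) target
  else link.getParent.resolve(target).normalize() // relative targets are relative to the link's directory
}
{code}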



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6083) The Fetcher should add the InvalidRecordException as a cause to the KafkaException when invalid record is found.

2017-10-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211716#comment-16211716
 ] 

ASF GitHub Bot commented on KAFKA-6083:
---

Github user efeg closed the pull request at:

https://github.com/apache/kafka/pull/4093


> The Fetcher should add the InvalidRecordException as a cause to the 
> KafkaException when invalid record is found.
> 
>
> Key: KAFKA-6083
> URL: https://issues.apache.org/jira/browse/KAFKA-6083
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, consumer
>Affects Versions: 1.0.0
>Reporter: Jiangjie Qin
>Assignee: Adem Efe Gencer
>  Labels: newbie++
> Fix For: 1.0.1
>
>
> In the Fetcher, when an InvalidRecordException is thrown, we 
> convert it to a KafkaException; we should also add the InvalidRecordException 
> to it as the cause.
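The requested change is a one-liner at the conversion site; a sketch (the 
surrounding parse call and the message text are illustrative):

{code}
// Sketch: pass the original exception as the cause instead of dropping it.
try parseRecord(buffer)
catch {
  case e: InvalidRecordException =>
    throw new KafkaException("Invalid record found", e) // cause preserved for callers
}
{code}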



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6083) The Fetcher should add the InvalidRecordException as a cause to the KafkaException when invalid record is found.

2017-10-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211717#comment-16211717
 ] 

ASF GitHub Bot commented on KAFKA-6083:
---

GitHub user efeg reopened a pull request:

https://github.com/apache/kafka/pull/4093

KAFKA-6083: The Fetcher should add the InvalidRecordException as a cause to 
the KafkaException when invalid record is found.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/efeg/kafka bug/KAFKA-6083

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4093.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4093


commit 92ab51d085d8ac9b86981371457195155a955380
Author: Adem Efe Gencer 
Date:   2017-10-19T18:50:15Z

Use CorruptedRecordException (public) rather than InvalidRecordException 
(not public) as a cause to the KafkaException.




> The Fetcher should add the InvalidRecordException as a cause to the 
> KafkaException when invalid record is found.
> 
>
> Key: KAFKA-6083
> URL: https://issues.apache.org/jira/browse/KAFKA-6083
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, consumer
>Affects Versions: 1.0.0
>Reporter: Jiangjie Qin
>Assignee: Adem Efe Gencer
>  Labels: newbie++
> Fix For: 1.0.1
>
>
> In the Fetcher, when an InvalidRecordException is thrown, we 
> convert it to a KafkaException; we should also add the InvalidRecordException 
> to it as the cause.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6083) The Fetcher should add the InvalidRecordException as a cause to the KafkaException when invalid record is found.

2017-10-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211720#comment-16211720
 ] 

ASF GitHub Bot commented on KAFKA-6083:
---

Github user efeg closed the pull request at:

https://github.com/apache/kafka/pull/4093


> The Fetcher should add the InvalidRecordException as a cause to the 
> KafkaException when invalid record is found.
> 
>
> Key: KAFKA-6083
> URL: https://issues.apache.org/jira/browse/KAFKA-6083
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, consumer
>Affects Versions: 1.0.0
>Reporter: Jiangjie Qin
>Assignee: Adem Efe Gencer
>  Labels: newbie++
> Fix For: 1.0.1
>
>
> In the Fetcher, when an InvalidRecordException is thrown, we 
> convert it to a KafkaException; we should also add the InvalidRecordException 
> to it as the cause.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6083) The Fetcher should add the InvalidRecordException as a cause to the KafkaException when invalid record is found.

2017-10-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211731#comment-16211731
 ] 

ASF GitHub Bot commented on KAFKA-6083:
---

GitHub user efeg reopened a pull request:

https://github.com/apache/kafka/pull/4093

KAFKA-6083: The Fetcher should add the InvalidRecordException as a cause to 
the KafkaException when invalid record is found.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/efeg/kafka bug/KAFKA-6083

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4093.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4093


commit 92ab51d085d8ac9b86981371457195155a955380
Author: Adem Efe Gencer 
Date:   2017-10-19T18:50:15Z

Use CorruptedRecordException (public) rather than InvalidRecordException 
(not public) as a cause to the KafkaException.




> The Fetcher should add the InvalidRecordException as a cause to the 
> KafkaException when invalid record is found.
> 
>
> Key: KAFKA-6083
> URL: https://issues.apache.org/jira/browse/KAFKA-6083
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, consumer
>Affects Versions: 1.0.0
>Reporter: Jiangjie Qin
>Assignee: Adem Efe Gencer
>  Labels: newbie++
> Fix For: 1.0.1
>
>
> In the Fetcher, when an InvalidRecordException is thrown, we 
> convert it to a KafkaException; we should also add the InvalidRecordException 
> to it as the cause.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6105) group.id is not picked by kafka.tools.EndToEndLatency

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16216355#comment-16216355
 ] 

ASF GitHub Bot commented on KAFKA-6105:
---

Github user cnZach closed the pull request at:

https://github.com/apache/kafka/pull/4116


> group.id is not picked by kafka.tools.EndToEndLatency
> -
>
> Key: KAFKA-6105
> URL: https://issues.apache.org/jira/browse/KAFKA-6105
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.11.0.0
>Reporter: Yuexin Zhang
>
> As per these lines:
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/EndToEndLatency.scala#L64-L67
> the property file is loaded first, and later an auto-generated group.id is 
> used:
> consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group-" + 
> System.currentTimeMillis())
> so even if the user gives the group.id in a property file, it is not picked up.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6074) Use ZookeeperClient in ReplicaManager and Partition

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16216308#comment-16216308
 ] 

ASF GitHub Bot commented on KAFKA-6074:
---

GitHub user tedyu opened a pull request:

https://github.com/apache/kafka/pull/4124

KAFKA-6074 Use ZookeeperClient in ReplicaManager and Partition



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tedyu/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4124.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4124


commit 0164ff44b0e67cbec9e8b56efe6e139ef87e5d69
Author: tedyu 
Date:   2017-10-24T04:51:59Z

KAFKA-6074 Use ZookeeperClient in ReplicaManager and Partition




> Use ZookeeperClient in ReplicaManager and Partition
> ---
>
> Key: KAFKA-6074
> URL: https://issues.apache.org/jira/browse/KAFKA-6074
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 1.1.0
>Reporter: Jun Rao
> Fix For: 1.1.0
>
> Attachments: 6074.v1.txt
>
>
> We want to replace the usage of ZkUtils with ZookeeperClient in 
> ReplicaManager and Partition.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6105) group.id is not picked by kafka.tools.EndToEndLatency

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16216356#comment-16216356
 ] 

ASF GitHub Bot commented on KAFKA-6105:
---

GitHub user cnZach opened a pull request:

https://github.com/apache/kafka/pull/4125

KAFKA-6105: load client properties in proper order for EndToEndLatency tool

Currently, the property file is loaded first, and later an auto-generated 
group.id is used:
consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group-" + 
System.currentTimeMillis())

so even if the user gives the group.id in a property file, it is not picked up.

Change it to load client properties in the proper order: set default values 
first, then try to load the custom values set in the client.properties file.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cnZach/kafka cnZach_KAFKA-6105

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4125.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4125


commit 448ea9df1f735da5362eb3204e9bd7a133516fb2
Author: Yuexin Zhang 
Date:   2017-10-24T05:48:04Z

load client properties in the proper order: set default values first, then try 
to load the custom values set in the client.properties file




> group.id is not picked by kafka.tools.EndToEndLatency
> -
>
> Key: KAFKA-6105
> URL: https://issues.apache.org/jira/browse/KAFKA-6105
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.11.0.0
>Reporter: Yuexin Zhang
>
> As per these lines:
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/EndToEndLatency.scala#L64-L67
> the property file is loaded first, and later an auto-generated group.id is 
> used:
> consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group-" + 
> System.currentTimeMillis())
> so even if the user gives the group.id in a property file, it is not picked up.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-2858) Clarify usage of `Principal` in the authentication layer

2017-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247274#comment-16247274
 ] 

ASF GitHub Bot commented on KAFKA-2858:
---

Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/551


> Clarify usage of `Principal` in the authentication layer
> 
>
> Key: KAFKA-2858
> URL: https://issues.apache.org/jira/browse/KAFKA-2858
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Minor
>
> We currently use `KafkaPrincipal` at the authentication and authorization 
> layer. But there is an implicit assumption that we always use a 
> `KafkaPrincipal` with principalType == USER_TYPE as we ignore the 
> principalType of the `KafkaPrincipal` when we create `RequestChannel.Session`.
> I think it would be clearer if we used a separate `Principal` implementation 
> in the authentication layer.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6215) KafkaStreamsTest fails in trunk

2017-11-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16254030#comment-16254030
 ] 

ASF GitHub Bot commented on KAFKA-6215:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4221


> KafkaStreamsTest fails in trunk
> ---
>
> Key: KAFKA-6215
> URL: https://issues.apache.org/jira/browse/KAFKA-6215
> Project: Kafka
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Matthias J. Sax
> Fix For: 1.1.0, 1.0.1
>
>
> Two subtests fail.
> https://builds.apache.org/job/kafka-trunk-jdk9/193/testReport/junit/org.apache.kafka.streams/KafkaStreamsTest/testCannotCleanupWhileRunning/
> {code}
> org.apache.kafka.streams.errors.StreamsException: 
> org.apache.kafka.streams.errors.ProcessorStateException: state directory 
> [/tmp/kafka-streams/testCannotCleanupWhileRunning] doesn't exist and couldn't 
> be created
>   at org.apache.kafka.streams.KafkaStreams.(KafkaStreams.java:618)
>   at org.apache.kafka.streams.KafkaStreams.(KafkaStreams.java:505)
>   at 
> org.apache.kafka.streams.KafkaStreamsTest.testCannotCleanupWhileRunning(KafkaStreamsTest.java:462)
> {code}
> testCleanup fails in similar manner.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6167) Timestamp on streams directory contains a colon, which is an illegal character

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16252189#comment-16252189
 ] 

ASF GitHub Bot commented on KAFKA-6167:
---

GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/4210

KAFKA-6167: Timestamp on streams directory contains a colon, which is an 
illegal character

 - change segment delimiter to . (see the sketch after this list)
 - added upgrade path
 - added test for old and new upgrade path
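A minimal sketch of the delimiter change, reusing the segment name from the 
linked issue (illustrative only, not the PR's actual code):

{code}
// Segment directories are named storeName + delimiter + segment start.
// ":" is an illegal character in Windows file names; "." is not.
def segmentName(storeName: String, segmentStart: Long, delimiter: String): String =
  s"$storeName$delimiter$segmentStart"

// old: KSTREAM-JOINTHIS-04-store:150962400   (fails on Windows)
// new: KSTREAM-JOINTHIS-04-store.150962400
{code}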


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka kafka-6167-windows-issue

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4210.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4210


commit 83b46536787c7efb5e4aa93214fd4da668124aa1
Author: Matthias J. Sax 
Date:   2017-11-14T21:12:50Z

KAFKA-6167: Timestamp on streams directory contains a colon, which is an 
illegal character




> Timestamp on streams directory contains a colon, which is an illegal character
> --
>
> Key: KAFKA-6167
> URL: https://issues.apache.org/jira/browse/KAFKA-6167
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
> Environment: AK 1.0.0
> Kubernetes
> CoreOS
> JDK 1.8
> Windows
>Reporter: Justin Manchester
>Assignee: Matthias J. Sax
>Priority: Blocker
>  Labels: user-experience
> Fix For: 1.0.1
>
>
> Problem:
> This occurs during development on Windows, which is not fully supported, but 
> it is still a bug that should be corrected.
> It looks like a timestamp was added to the streams directory using a colon as 
> a separator. I believe this is an illegal character and potentially the cause 
> of the exception below.
> Error Stack:
> 2017-11-02 16:06:41 ERROR 
> [StreamDeduplicatorAcceptanceTest1-a3ae0ac6-a024-4006-bcb1-01ff0f433f6e-StreamThread-1]
>  org.apache.kafka.streams.processor.internals.AssignedTasks:301 - 
> stream-thread 
> [StreamDeduplicatorAcceptanceTest1-a3ae0ac6-a024-4006-bcb1-01ff0f433f6e-StreamThread-1]
>  Failed to process stream task 0_0 due to the following error: 
> org.apache.kafka.streams.errors.StreamsException: Exception caught in 
> process. taskId=0_0, processor=KSTREAM-SOURCE-00, topic=input-a_1, 
> partition=0, offset=0 
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:232)
>  ~[kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.process(AssignedTasks.java:403)
>  [kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:317)
>  [kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:942)
>  [kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:822)
>  [kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
>  [kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
>  [kafka-streams-1.0.0.jar:?] 
> Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error 
> opening store KSTREAM-JOINTHIS-04-store:150962400 at location 
> C:\Users\ADRIAN~1.MCC\AppData\Local\Temp\kafka3548813472740086814\StreamDeduplicatorAcceptanceTest1\0_0\KSTREAM-JOINTHIS-04-store\KSTREAM-JOINTHIS-04-store:150962400
>  
> at 
> org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:204)
>  ~[kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:174)
>  ~[kafka-streams-1.0.0.jar:?] 
> at org.apache.kafka.streams.state.internals.Segment.openDB(Segment.java:40) 
> ~[kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.state.internals.Segments.getOrCreateSegment(Segments.java:89)
>  ~[kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStore.put(RocksDBSegmentedBytesStore.java:81)
>  ~[kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:43)
>  ~[kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:34)
>  ~[kafka-streams-1.0.0.jar:?] 
> at 
> 

[jira] [Commented] (KAFKA-6170) Add the AdminClient in Streams' KafkaClientSupplier

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16252236#comment-16252236
 ] 

ASF GitHub Bot commented on KAFKA-6170:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/4211

[WIP] KAFKA-6170: Add AdminClient to Streams

1. Add the AdminClient into Kafka Streams, which is shared among all the 
threads.
2. Refactored mutual dependency between StreamPartitionAssignor / 
StreamTread to TaskManager as discussed in 
https://github.com/apache/kafka/pull/3624#discussion_r132614639.


### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K6170-admin-client

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4211.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4211


commit 6c5b20ea34323a101118286e9282568f428b8e34
Author: Guozhang Wang 
Date:   2017-11-07T00:14:53Z

add AdminClient

commit fc908e06d80816db1e28e0f1d05e1d10fa1d0379
Author: Guozhang Wang 
Date:   2017-11-13T22:13:37Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K6170-admin-client

commit d1be566efe65c71c068a6e948c59f7bd980d6bd8
Author: Guozhang Wang 
Date:   2017-11-14T21:41:20Z

refactor thread / assignor dependency

commit d1a778fff0cbaeb8ea00421d89fcd50552b93eba
Author: Guozhang Wang 
Date:   2017-11-14T21:44:09Z

revert TaskManager APIs




> Add the AdminClient in Streams' KafkaClientSupplier
> ---
>
> Key: KAFKA-6170
> URL: https://issues.apache.org/jira/browse/KAFKA-6170
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>
> We will add Java AdminClient to Kafka Streams, in order to replace the 
> internal StreamsKafkaClient. More details can be found in KIP-220 
> (https://cwiki.apache.org/confluence/display/KAFKA/KIP-220%3A+Add+AdminClient+into+Kafka+Streams%27+ClientSupplier)
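For context, KIP-220 lets applications inject the admin client the same way 
they already inject producers and consumers through KafkaClientSupplier. A 
sketch of the shape (method name and signature assumed from the KIP, not 
verified against the final API):

{code}
import java.util.{Map => JMap}
import org.apache.kafka.clients.admin.AdminClient

// Sketch: a supplier hook for the Java AdminClient, shared among all threads.
trait AdminClientSupplierSketch {
  def getAdminClient(config: JMap[String, AnyRef]): AdminClient =
    AdminClient.create(config)
}
{code}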



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4115) Grow default heap settings for distributed Connect from 256M to 1G

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16252696#comment-16252696
 ] 

ASF GitHub Bot commented on KAFKA-4115:
---

GitHub user wicknicks opened a pull request:

https://github.com/apache/kafka/pull/4213

KAFKA-4115: Increasing the heap settings for connect-distributed script

Signed-off-by: Arjun Satish 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wicknicks/kafka KAFKA-4115

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4213.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4213


commit 51e6404ec090de6d6f8919e3abe78a955ac60d7a
Author: Arjun Satish 
Date:   2017-11-14T23:33:37Z

Increasing the heap settings for connect-distributed script

Signed-off-by: Arjun Satish 




> Grow default heap settings for distributed Connect from 256M to 1G
> --
>
> Key: KAFKA-4115
> URL: https://issues.apache.org/jira/browse/KAFKA-4115
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Shikhar Bhushan
>Assignee: Arjun Satish
>  Labels: newbie
>
> Currently, both {{connect-standalone.sh}} and {{connect-distributed.sh}} 
> start the Connect JVM with the default heap settings from 
> {{kafka-run-class.sh}} of {{-Xmx256M}}.
> At least for distributed connect, we should default to a much higher limit 
> like 1G. While the 'correct' sizing is workload dependent, with a system 
> where you can run arbitrary connector plugins which may perform buffering of 
> data, we should provide for more headroom.
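Until the script defaults change, the standard override is the KAFKA_HEAP_OPTS 
environment variable that kafka-run-class.sh honors, for example:

{noformat}
KAFKA_HEAP_OPTS="-Xms1G -Xmx1G" bin/connect-distributed.sh config/connect-distributed.properties
{noformat}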



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5859) Avoid retaining AbstractRequest in RequestChannel.Request

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16252935#comment-16252935
 ] 

ASF GitHub Bot commented on KAFKA-5859:
---

GitHub user seglo opened a pull request:

https://github.com/apache/kafka/pull/4216

KAFKA-5859: Avoid retaining AbstractRequest in RequestChannel.Response

This PR removes the need to keep a reference to the parsed 
`AbstractRequest` after it's been handled in `KafkaApis`.  A reference to 
`RequestAndSize` which holds the parsed `AbstractRequest` in  
`RequestChannel.Request` was kept in scope as a consequence of being passed 
into the `RequestChannel.Response` after being handled.  

The Jira ticket 
[KAFKA-5859](https://issues.apache.org/jira/browse/KAFKA-5859) suggests 
removing this reference as soon as it's no longer needed.  I considered several 
implementations and I settled on creating a new type that contains all the 
relevant information of the Request that is required after it has been handled. 
 I think this approach allows for the least amount of invasive changes in the 
Request/Response lifecycle while retaining the immutability of the 
`RequestChannel.Request`.

A new type called `RequestChannel.RequestSummary` now contains much of the 
information that was in `RequestChannel.Request` before.  The 
`RequestChannel.Request` now generates a `RequestChannel.RequestSummary` that 
is passed into the `RequestChannel.Response` after being handled in 
`KafkaApis`.  `RequestChannel.RequestSummary` contains information such as:

* A detailed and non-detailed description of the request
* Metrics associated with the request
* Helper methods to update various Request metrics
* A special case describing whether or not the original Request was a 
`FetchRequest` and whether it was from a follower.  This information is 
required in the `updateRequestMetrics` metrics helper method.
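A sketch of what such a summary type might look like (field names are guesses 
from the list above, and the metrics plumbing is omitted; this is not the PR's 
actual code):

{code}
// Sketch: keep only what is needed after KafkaApis.handle, so the parsed
// AbstractRequest and its buffers can be garbage-collected earlier.
case class RequestSummary(
  shortDescription: String,     // non-detailed form, for request logging
  detailedDescription: String,  // detailed form, built only when enabled
  isFetchFromFollower: Boolean  // special case used by updateRequestMetrics
)
{code}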

This change does not make any behaviour changes so no additional tests were 
added.  I've verified that all unit and integration tests pass and no 
regressions were introduced.  I'm interested in seeing the before and after 
results of the Confluent Kafka system tests as described in step 11 of the 
[Contributing Code 
Changes](https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes#ContributingCodeChanges-PullRequest)
 section.  I would like to request access to kick off this system tests suite 
if you agree that it's relevant to this ticket.

This is my first contribution to this project.  I picked up this issue 
because it was marked with the newbie flag and it seemed like a good 
opportunity to learn more about the request and response lifecycle in the 
Kafka broker.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/seglo/kafka to-request-summary-5859

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4216.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4216


commit 9bd67ea7cf16077e20db8e1a87330176eb3772de
Author: seglo 
Date:   2017-11-12T02:42:55Z

Use RequestSummary in RequestChannel.Response




> Avoid retaining AbstractRequest in RequestChannel.Request
> -
>
> Key: KAFKA-5859
> URL: https://issues.apache.org/jira/browse/KAFKA-5859
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Sean Glover
>Priority: Minor
>  Labels: newbie
>
> We currently store AbstractRequest in RequestChannel.Request.bodyAndSize. 
> RequestChannel.Request is, in turn, stored in RequestChannel.Response. We 
> keep the latter until the response is sent to the client.
> However, after KafkaApis.handle, we no longer need AbstractRequest apart from 
> its string representation for logging. We could potentially replace 
> AbstractRequest with a String representation (if the relevant logging is 
> enabled). The String representation is generally small while some 
> AbstractRequest subclasses can be pretty large. The largest one is 
> ProduceRequest and we clear the underlying ByteBuffer explicitly in 
> KafkaApis.handleProduceRequest. We could potentially remove that special case 
> if AbstractRequest subclasses were not retained.
> This was originally suggested by [~hachikuji] in the following PR 
> https://github.com/apache/kafka/pull/3801#discussion_r137592277



--
This message was sent 

[jira] [Commented] (KAFKA-6170) Add the AdminClient in Streams' KafkaClientSupplier

2017-11-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16254514#comment-16254514
 ] 

ASF GitHub Bot commented on KAFKA-6170:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/4224

[WIP] KAFKA-6170; KIP-220 Part 2: Break dependency of Assignor on 
StreamThread

This refactoring is discussed in 
https://github.com/apache/kafka/pull/3624#discussion_r132614639. More 
specifically:

1. Moved the access of `StreamThread` in `StreamPartitionAssignor` to 
`TaskManager`, removed any fields stored in `StreamThread` such as `processId` 
and `clientId` that are only to be used in `StreamPartitionAssignor`, and pass 
them to `TaskManager` if necessary.
2. Moved any in-memory states, `metadataWithInternalTopics`, 
`partitionsByHostState`, `standbyTasks`, `activeTasks` to `TaskManager` so that 
`StreamPartitionAssignor` becomes a stateless thin layer that access 
TaskManager directly.
3. Remove the reference to `StreamPartitionAssignor` in `StreamThread`; 
instead, consolidate all related functionality such as `cachedTasksIds` in 
`TaskManager`, which can be accessed by the `StreamThread` and the 
`StreamPartitionAssignor` directly.
4. Finally, removed the two interfaces used for `StreamThread` and 
`StreamPartitionAssignor`.

5. Some minor fixes on logPrefixes, etc.

Future work: when replacing the StreamsKafkaClient, we would let 
`StreamPartitionAssignor` retrieve it from `TaskManager` directly, and its 
close call would no longer be needed (`KafkaStreams` will be responsible 
for closing it).

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K6170-refactor-assignor

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4224.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4224


commit 6c5b20ea34323a101118286e9282568f428b8e34
Author: Guozhang Wang 
Date:   2017-11-07T00:14:53Z

add AdminClient

commit fc908e06d80816db1e28e0f1d05e1d10fa1d0379
Author: Guozhang Wang 
Date:   2017-11-13T22:13:37Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K6170-admin-client

commit d1be566efe65c71c068a6e948c59f7bd980d6bd8
Author: Guozhang Wang 
Date:   2017-11-14T21:41:20Z

refactor thread / assignor dependency

commit d1a778fff0cbaeb8ea00421d89fcd50552b93eba
Author: Guozhang Wang 
Date:   2017-11-14T21:44:09Z

revert TaskManager APIs

commit f9e5fbff4c18764bc64793dc9b5c376d956cd67c
Author: Guozhang Wang 
Date:   2017-11-15T02:04:55Z

move logic of assignor to task manager

commit bfd08c45cab067035d4980d85d6e7ff9cd5a6e36
Author: Guozhang Wang 
Date:   2017-11-15T02:16:37Z

minor fix

commit f95dc0bb9849356ab721c4f7e042a813fcb34330
Author: Guozhang Wang 
Date:   2017-11-15T02:22:34Z

extract delegating restore listener

commit 10ceff07c23ea555bd25ea74baa4b995ea0f3a83
Author: Guozhang Wang 
Date:   2017-11-15T02:26:59Z

add admin configs in streams config

commit 41dc2b0790866bb5f8325191102622bdbd5fbe23
Author: Guozhang Wang 
Date:   2017-11-15T19:39:29Z

add AdminClient to stream thread

commit 3592206eb7c06313a7f553242329f6eb578b4cbd
Author: Guozhang Wang 
Date:   2017-11-15T22:49:39Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K6170-admin-client

commit 03e64d0bb4a6581d4105f9faa4d95cd6e20f45f3
Author: Guozhang Wang 
Date:   2017-11-15T23:49:16Z

add admin prefix

commit 035b3a6a04025d397fec8abb535d9b148f722792
Author: Guozhang Wang 
Date:   2017-11-16T00:07:19Z

merge from K6170-admin-client




> Add the AdminClient in Streams' KafkaClientSupplier
> ---
>
> Key: KAFKA-6170
> URL: https://issues.apache.org/jira/browse/KAFKA-6170
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>
> We will add Java AdminClient to Kafka Streams, in order to replace the 
> internal StreamsKafkaClient. More details can be found in KIP-220 
> (https://cwiki.apache.org/confluence/display/KAFKA/KIP-220%3A+Add+AdminClient+into+Kafka+Streams%27+ClientSupplier)


[jira] [Commented] (KAFKA-6210) IllegalArgumentException if 1.0.0 is used for inter.broker.protocol.version or log.message.format.version

2017-11-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16254596#comment-16254596
 ] 

ASF GitHub Bot commented on KAFKA-6210:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4220


> IllegalArgumentException if 1.0.0 is used for inter.broker.protocol.version 
> or log.message.format.version
> -
>
> Key: KAFKA-6210
> URL: https://issues.apache.org/jira/browse/KAFKA-6210
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 1.1.0, 1.0.1
>
>
> The workaround is trivial (use 1.0), but this is sure to trip many users. 
> Also, if you have automatic restarts on crashes, it may not be obvious what's 
> going on. Reported by Brett Rahn.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6218) Optimize condition in if statement to reduce the number of comparisons

2017-11-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16254692#comment-16254692
 ] 

ASF GitHub Bot commented on KAFKA-6218:
---

GitHub user sachinbhalekar opened a pull request:

https://github.com/apache/kafka/pull/4225

KAFKA-6218 : Optimize condition in if statement to reduce the number of 
comparisons


Changed the condition in the **if** statement
**(schema.name() == null || !(schema.name().equals(LOGICAL_NAME)))**, which
requires two comparisons in the worst case, to
**(!LOGICAL_NAME.equals(schema.name()))**, which requires a single comparison
in all cases and avoids a null pointer exception.

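The two forms are equivalent whenever LOGICAL_NAME is a non-null constant, 
because constant-first equals simply returns false for a null argument; a 
small demonstration (the LOGICAL_NAME value is illustrative):

{code}
val LOGICAL_NAME = "Decimal" // illustrative value

def mismatchOld(name: String): Boolean =
  name == null || !name.equals(LOGICAL_NAME) // two checks; NPE risk if reordered

def mismatchNew(name: String): Boolean =
  !LOGICAL_NAME.equals(name) // one call; equals(null) is simply false

assert(mismatchOld(null) == mismatchNew(null))           // both true
assert(mismatchOld("Decimal") == mismatchNew("Decimal")) // both false
assert(mismatchOld("Other") == mismatchNew("Other"))     // both true
{code}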

### Committer Checklist (excluded from commit message)
- [+ ] Verify design and implementation 
- [+ ] Verify test coverage and CI build status
- [+ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sachinbhalekar/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4225.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4225


commit 0a0058cb680ce319794d26dcabc8c6c4415d2684
Author: sachinbhalekar <32234013+sachinbhale...@users.noreply.github.com>
Date:   2017-11-16T02:40:59Z

Merge pull request #2 from apache/trunk

get latest code

commit 2281e2fbab8d0ff65f56414d0bbb1d0cb2a2a9ad
Author: sachinbhalekar 
Date:   2017-11-16T02:49:42Z

Changed the condition in if statement
(schema.name() == null || !(schema.name().equals(LOGICAL_NAME))) which
requires two comparisons in worst case with
(!LOGICAL_NAME.equals(schema.name()))  which requires single comparison
in all cases and avoids null pointer exception.




> Optimize condition in if statement to reduce the number of comparisons
> --
>
> Key: KAFKA-6218
> URL: https://issues.apache.org/jira/browse/KAFKA-6218
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0, 0.10.0.1, 0.10.1.0, 
> 0.10.1.1, 0.10.2.0, 0.10.2.1, 0.11.0.0, 0.11.0.1, 1.0.0
>Reporter: sachin bhalekar
>Priority: Trivial
>  Labels: newbie
> Attachments: kafka_optimize_if.JPG
>
>
> Optimizing the condition in the *if* statement 
> *(schema.name() == null || !(schema.name().equals(LOGICAL_NAME)))*, which
> requires two comparisons in the worst case, to
> *(!LOGICAL_NAME.equals(schema.name()))*, which requires a single comparison
> in all cases and _avoids a null pointer exception_.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6167) Timestamp on streams directory contains a colon, which is an illegal character

2017-11-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16254617#comment-16254617
 ] 

ASF GitHub Bot commented on KAFKA-6167:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4210


> Timestamp on streams directory contains a colon, which is an illegal character
> --
>
> Key: KAFKA-6167
> URL: https://issues.apache.org/jira/browse/KAFKA-6167
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
> Environment: AK 1.0.0
> Kubernetes
> CoreOS
> JDK 1.8
> Windows
>Reporter: Justin Manchester
>Assignee: Matthias J. Sax
>Priority: Blocker
>  Labels: user-experience
> Fix For: 1.1.0, 1.0.1
>
>
> Problem:
> This occurs during development on Windows, which is not fully supported, but 
> it is still a bug that should be corrected.
> It looks like a timestamp was added to the streams directory using a colon as 
> a separator. I believe this is an illegal character and potentially the cause 
> of the exception below.
> Error Stack:
> 2017-11-02 16:06:41 ERROR 
> [StreamDeduplicatorAcceptanceTest1-a3ae0ac6-a024-4006-bcb1-01ff0f433f6e-StreamThread-1]
>  org.apache.kafka.streams.processor.internals.AssignedTasks:301 - 
> stream-thread 
> [StreamDeduplicatorAcceptanceTest1-a3ae0ac6-a024-4006-bcb1-01ff0f433f6e-StreamThread-1]
>  Failed to process stream task 0_0 due to the following error: 
> org.apache.kafka.streams.errors.StreamsException: Exception caught in 
> process. taskId=0_0, processor=KSTREAM-SOURCE-00, topic=input-a_1, 
> partition=0, offset=0 
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:232)
>  ~[kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.process(AssignedTasks.java:403)
>  [kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:317)
>  [kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:942)
>  [kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:822)
>  [kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
>  [kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
>  [kafka-streams-1.0.0.jar:?] 
> Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error 
> opening store KSTREAM-JOINTHIS-04-store:150962400 at location 
> C:\Users\ADRIAN~1.MCC\AppData\Local\Temp\kafka3548813472740086814\StreamDeduplicatorAcceptanceTest1\0_0\KSTREAM-JOINTHIS-04-store\KSTREAM-JOINTHIS-04-store:150962400
>  
> at 
> org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:204)
>  ~[kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:174)
>  ~[kafka-streams-1.0.0.jar:?] 
> at org.apache.kafka.streams.state.internals.Segment.openDB(Segment.java:40) 
> ~[kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.state.internals.Segments.getOrCreateSegment(Segments.java:89)
>  ~[kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStore.put(RocksDBSegmentedBytesStore.java:81)
>  ~[kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:43)
>  ~[kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:34)
>  ~[kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.state.internals.ChangeLoggingWindowBytesStore.put(ChangeLoggingWindowBytesStore.java:67)
>  ~[kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.state.internals.ChangeLoggingWindowBytesStore.put(ChangeLoggingWindowBytesStore.java:33)
>  ~[kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.state.internals.MeteredWindowStore.put(MeteredWindowStore.java:96)
>  ~[kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.state.internals.MeteredWindowStore.put(MeteredWindowStore.java:89)
>  ~[kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.kstream.internals.KStreamJoinWindow$KStreamJoinWindowProcessor.process(KStreamJoinWindow.java:64)
>  ~[kafka-streams-1.0.0.jar:?] 
> at 
> org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
>  ~[kafka-streams-1.0.0.jar:?] 
> at 
> 
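
The offending path component in the stack above is the segment name, which joins the store name and the segment start timestamp with a colon; ':' is reserved on Windows (drive letters, NTFS alternate data streams) and cannot appear in file names. A minimal sketch of the shape of such a fix, with illustrative names rather than the actual Streams internals:

```java
public final class SegmentNames {
    // Broken on Windows: produces e.g. "my-store:1509624000000".
    static String colonSeparated(String storeName, long segmentStartMs) {
        return storeName + ":" + segmentStartMs;
    }

    // Portable: '.' is legal on every supported filesystem,
    // producing e.g. "my-store.1509624000000".
    static String dotSeparated(String storeName, long segmentStartMs) {
        return storeName + "." + segmentStartMs;
    }

    private SegmentNames() {}
}
```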

[jira] [Commented] (KAFKA-5117) Kafka Connect REST endpoints reveal Password typed values

2017-11-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16267400#comment-16267400
 ] 

ASF GitHub Bot commented on KAFKA-5117:
---

GitHub user qiao-meng-zefr opened a pull request:

https://github.com/apache/kafka/pull/4269

KAFKA-5117: Add password masking for kafka connect REST endpoint

Mask all password-type config parameters with "*" instead of displaying the 
plain text in the Kafka Connect REST endpoint.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/qiao-meng-zefr/kafka mask_password

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4269.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4269


commit 51665eae52eadb9e6db4107dffac12e0bd40585a
Author: Vincent Meng 
Date:   2017-11-27T19:41:11Z

Add password masking




> Kafka Connect REST endpoints reveal Password typed values
> -
>
> Key: KAFKA-5117
> URL: https://issues.apache.org/jira/browse/KAFKA-5117
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Thomas Holmes
>
> A Kafka Connect connector can specify ConfigDef keys as type of Password. 
> This type was added to prevent logging the values (instead "[hidden]" is 
> logged).
> This change does not apply to the values returned by executing a GET on 
> {{connectors/\{connector-name\}}} and 
> {{connectors/\{connector-name\}/config}}. This creates an easily accessible 
> way for an attacker who has infiltrated your network to gain access to 
> potential secrets that should not be available.
> I have started on a code change that addresses this issue by parsing the 
> config values through the ConfigDef for the connector and returning their 
> output instead (which leads to the masking of Password typed configs as 
> [hidden]).
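
A sketch of that masking step using the public ConfigDef API (the helper class here is hypothetical; the real change lives in Connect's REST resources):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.common.config.ConfigDef;

public final class ConfigMasker {
    /** Returns a copy of rawConfig with password-typed values replaced by a placeholder. */
    static Map<String, String> mask(ConfigDef def, Map<String, String> rawConfig) {
        Map<String, String> masked = new HashMap<>(rawConfig);
        for (ConfigDef.ConfigKey key : def.configKeys().values()) {
            if (key.type == ConfigDef.Type.PASSWORD && masked.containsKey(key.name))
                masked.put(key.name, "[hidden]");   // same placeholder used when logging
        }
        return masked;
    }

    private ConfigMasker() {}
}
```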



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6150) Make Repartition Topics Transient

2017-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269162#comment-16269162
 ] 

ASF GitHub Bot commented on KAFKA-6150:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/4270

[WIP] KAFKA-6150: Purge repartition topics


### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka 
K6150-purge-repartition-topics

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4270.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4270






> Make Repartition Topics Transient
> -
>
> Key: KAFKA-6150
> URL: https://issues.apache.org/jira/browse/KAFKA-6150
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: operability
>
> Unlike changelog topics, repartition topics could just be short-lived. Today 
> users have different ways to configure them with short retention, such as 
> enforcing a short retention period or using AppendTime for repartition topics. 
> All of these are cumbersome, and Streams should just do this for the users.
> One way to do it is to use the "purgeData" admin API (KIP-107): after the 
> offsets of the input topics are committed, if the input topics are actually 
> repartition topics, we would purge the data immediately.
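
A sketch of that purge step against the public admin API, assuming the committed offsets for the repartition partitions are already in hand:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public final class RepartitionPurger {
    /** After input offsets are committed, drop everything below them on repartition topics. */
    static void purge(AdminClient admin, Map<TopicPartition, Long> committedOffsets)
            throws Exception {
        Map<TopicPartition, RecordsToDelete> request = new HashMap<>();
        for (Map.Entry<TopicPartition, Long> e : committedOffsets.entrySet())
            request.put(e.getKey(), RecordsToDelete.beforeOffset(e.getValue()));
        admin.deleteRecords(request).all().get();   // blocks until the brokers finish the purge
    }

    private RepartitionPurger() {}
}
```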



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6170) Add the AdminClient in Streams' KafkaClientSupplier

2017-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269121#comment-16269121
 ] 

ASF GitHub Bot commented on KAFKA-6170:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4224


> Add the AdminClient in Streams' KafkaClientSupplier
> ---
>
> Key: KAFKA-6170
> URL: https://issues.apache.org/jira/browse/KAFKA-6170
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 1.1.0
>
>
> We will add Java AdminClient to Kafka Streams, in order to replace the 
> internal StreamsKafkaClient. More details can be found in KIP-220 
> (https://cwiki.apache.org/confluence/display/KAFKA/KIP-220%3A+Add+AdminClient+into+Kafka+Streams%27+ClientSupplier)
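
As a rough sketch of the KIP-220 extension point, the supplier interface gains an AdminClient factory next to the existing producer/consumer factories; the method name and signature below follow the KIP draft and should be treated as an assumption, not the final API:

```java
import java.util.Map;

import org.apache.kafka.clients.admin.AdminClient;

// Hypothetical slice of the extended supplier: custom suppliers can now control
// how Streams builds its AdminClient (e.g. to inject mocks in tests).
interface AdminClientSupplier {
    AdminClient getAdminClient(Map<String, Object> config);
}

class DefaultAdminClientSupplier implements AdminClientSupplier {
    @Override
    public AdminClient getAdminClient(final Map<String, Object> config) {
        return AdminClient.create(config);   // default: plain AdminClient from the given config
    }
}
```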



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5526) KIP-175: ConsumerGroupCommand no longer shows output for consumer groups which have not committed offsets

2017-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269410#comment-16269410
 ] 

ASF GitHub Bot commented on KAFKA-5526:
---

GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/4271

KAFKA-5526: Additional `--describe` views for ConsumerGroupCommand (KIP-175)

The `--describe` option of ConsumerGroupCommand is expanded to support:
* `--describe` or `--describe --offsets`: listing of current group offsets
* `--describe --members` or `--describe --members --verbose`: listing of 
group members
* `--describe --state`: group status

Example: With a single-partition topic `test1` and a two-partition topic 
`test2`, consumers `consumer1` and `consumer11` subscribed to `test1`, consumers 
`consumer2`, `consumer22`, and `consumer222` subscribed to `test2`, and all 
consumers belonging to group `test-group`, this is an example of the output of the 
new options above for `test-group`:

```
--describe, or --describe --offsets:

TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID                                      HOST        CLIENT-ID
test2  0          0               0               0    consumer2-bad9496d-0889-47ab-98ff-af17d9460382   /127.0.0.1  consumer2
test2  1          0               0               0    consumer22-c45e6ee2-0c7d-44a3-94a8-9627f63fb411  /127.0.0.1  consumer22
test1  0          0               0               0    consumer1-d51b0345-3194-4305-80db-81a68fa6c5bf   /127.0.0.1  consumer1
```

```
--describe --members

CONSUMER-ID                                       HOST        CLIENT-ID    #PARTITIONS
consumer2-bad9496d-0889-47ab-98ff-af17d9460382    /127.0.0.1  consumer2    1
consumer222-ed2108cd-d368-41f1-8514-5b72aa835bcc  /127.0.0.1  consumer222  0
consumer11-dc8295d7-8f3f-4438-9b11-7270bab46760   /127.0.0.1  consumer11   0
consumer22-c45e6ee2-0c7d-44a3-94a8-9627f63fb411   /127.0.0.1  consumer22   1
consumer1-d51b0345-3194-4305-80db-81a68fa6c5bf    /127.0.0.1  consumer1    1
```

```
--describe --members --verbose

CONSUMER-ID                                       HOST        CLIENT-ID    #PARTITIONS  ASSIGNMENT
consumer2-bad9496d-0889-47ab-98ff-af17d9460382    /127.0.0.1  consumer2    1            test2(0)
consumer222-ed2108cd-d368-41f1-8514-5b72aa835bcc  /127.0.0.1  consumer222  0            -
consumer11-dc8295d7-8f3f-4438-9b11-7270bab46760   /127.0.0.1  consumer11   0            -
consumer22-c45e6ee2-0c7d-44a3-94a8-9627f63fb411   /127.0.0.1  consumer22   1            test2(1)
consumer1-d51b0345-3194-4305-80db-81a68fa6c5bf    /127.0.0.1  consumer1    1            test1(0)
```

```
--describe --state

ASSIGNMENT-STRATEGY   STATE#MEMBERS
range Stable   5
```

Note that this PR also addresses the issue reported in 
[KAFKA-6158](https://issues.apache.org/jira/browse/KAFKA-6158) by dynamically 
setting the width of the `TOPIC`, `CONSUMER-ID`, `HOST`, and `CLIENT-ID` columns. 
This avoids truncation of column values when they exceed the current fixed 
width of these columns.

The code has been restructured to better support testing of individual 
values and also the console output. Unit tests have been updated and extended 
to take advantage of this restructuring.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-5526

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4271.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4271


commit 1973db23d2f29191ec56b56a3040c1a2b0c00ef4
Author: Vahid Hashemian 
Date:   2017-11-28T20:08:37Z

KAFKA-5526: Additional `--describe` views for ConsumerGroupCommand (KIP-175)

The `--describe` option of ConsumerGroupCommand is expanded to support:
* `--describe` or `--describe --offsets`: listing of current group offsets
* `--describe --members` or `--describe --members --verbose`: listing of 
group members
* `--describe --state`: group status

Example: With a single partition topic `test1` and a double partition topic 
`test2`, consumers `consumer1` and 

[jira] [Commented] (KAFKA-5526) KIP-175: ConsumerGroupCommand no longer shows output for consumer groups which have not committed offsets

2017-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269369#comment-16269369
 ] 

ASF GitHub Bot commented on KAFKA-5526:
---

Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/3448


> KIP-175: ConsumerGroupCommand no longer shows output for consumer groups 
> which have not committed offsets
> -
>
> Key: KAFKA-5526
> URL: https://issues.apache.org/jira/browse/KAFKA-5526
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ryan P
>Assignee: Vahid Hashemian
>  Labels: kip
>
> It would appear that the latest iteration of the ConsumerGroupCommand no 
> longer outputs information about group membership when no offsets have been 
> committed. It would be nice if the output generated by these tools maintained 
> some form of consistency across versions as some users have grown to depend 
> on them. 
> 0.9.x output:
> bin/kafka-consumer-groups --bootstrap-server localhost:9092 --new-consumer 
> --describe --group console-consumer-34885
> GROUP, TOPIC, PARTITION, CURRENT OFFSET, LOG END OFFSET, LAG, OWNER
> console-consumer-34885, test, 0, unknown, 0, unknown, consumer-1_/192.168.1.64
> 0.10.2 output:
> bin/kafka-consumer-groups --bootstrap-server localhost:9092 --new-consumer 
> --describe --group console-consumer-34885
> Note: This will only show information about consumers that use the Java 
> consumer API (non-ZooKeeper-based consumers).
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6274) Improve KTable Source state store auto-generated names

2017-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269295#comment-16269295
 ] 

ASF GitHub Bot commented on KAFKA-6274:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4268


> Improve KTable Source state store auto-generated names
> --
>
> Key: KAFKA-6274
> URL: https://issues.apache.org/jira/browse/KAFKA-6274
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 1.1.0, 1.0.1
>
>
> When the source KTable is generated without the store name specified, the 
> auto-generated store name uses the {{topic}} as the store name prefix. This 
> generates a store name such as
> {code}
> Processor: KTABLE-SOURCE-31 (stores: 
> [windowed-node-countsSTATE-STORE-29])
>   --> none
>   <-- KSTREAM-SOURCE-30
> {code}
> We should improve the auto-generated store name to 
> {{[topic-name]-STATE-STORE-suffix}}.
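
Illustratively, the improvement amounts to putting a separator between the topic prefix and the generated suffix (hypothetical helper, not the actual Streams name generator):

```java
public final class StoreNames {
    // Before: topic and suffix run together, e.g. "windowed-node-countsSTATE-STORE-0000000029".
    // After:  "windowed-node-counts-STATE-STORE-0000000029".
    static String stateStoreName(String topic, int index) {
        return topic + "-STATE-STORE-" + String.format("%010d", index);
    }

    private StoreNames() {}
}
```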



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6255) Add ProduceBench to Trogdor

2017-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269569#comment-16269569
 ] 

ASF GitHub Bot commented on KAFKA-6255:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4245


> Add ProduceBench to Trogdor
> ---
>
> Key: KAFKA-6255
> URL: https://issues.apache.org/jira/browse/KAFKA-6255
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 1.1.0
>
>
> Add ProduceBench, a benchmark of producer latency, to Trogdor.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6065) Add zookeeper metrics to ZookeeperClient as in KIP-188

2017-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16265892#comment-16265892
 ] 

ASF GitHub Bot commented on KAFKA-6065:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/4265

KAFKA-6065: Latency metric for KafkaZkClient

Measures the latency of each request batch.

Updated existing `ZkUtils` test to use `KafkaZkClient`
instead.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-6065-async-zk-metrics

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4265.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4265


commit 900cc6f1e4a075ec8c87ac61ab8aa5fb58daa6f8
Author: Ismael Juma 
Date:   2017-11-26T01:32:43Z

KAFKA-6065: Latency metric for KafkaZkClient




> Add zookeeper metrics to ZookeeperClient as in KIP-188
> --
>
> Key: KAFKA-6065
> URL: https://issues.apache.org/jira/browse/KAFKA-6065
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Onur Karaman
>Assignee: Prasanna Gautam
> Fix For: 1.1.0
>
>
> Among other things, KIP-188 added latency metrics to ZkUtils. We should add 
> the same metrics to ZookeeperClient.
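
The measurement itself is simple; a sketch that times one request batch and hands the result to whatever metric sink is registered (the actual patch wires this into the client's metrics group):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.LongConsumer;

public final class BatchLatency {
    /** Runs one request batch and reports its wall-clock latency in milliseconds. */
    static void timed(Runnable sendAndAwaitBatch, LongConsumer recordLatencyMs) {
        long startNs = System.nanoTime();
        sendAndAwaitBatch.run();   // send the batch and block until all responses arrive
        recordLatencyMs.accept(TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNs));
    }

    private BatchLatency() {}
}
```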



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5563) Clarify handling of connector name in config

2017-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16265896#comment-16265896
 ] 

ASF GitHub Bot commented on KAFKA-5563:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4230


> Clarify handling of connector name in config 
> -
>
> Key: KAFKA-5563
> URL: https://issues.apache.org/jira/browse/KAFKA-5563
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Sönke Liebau
>Assignee: Sönke Liebau
>Priority: Minor
> Fix For: 1.1.0
>
>
> The connector name is currently being stored in two places, once at the root 
> level of the connector and once in the config:
> {code:java}
> {
>   "name": "test",
>   "config": {
>   "connector.class": 
> "org.apache.kafka.connect.tools.MockSinkConnector",
>   "tasks.max": "3",
>   "topics": "test-topic",
>   "name": "test"
>   },
>   "tasks": [
>   {
>   "connector": "test",
>   "task": 0
>   }
>   ]
> }
> {code}
> If no name is provided in the "config" element, then the name from the root 
> level is [copied there when the connector is being 
> created|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/resources/ConnectorsResource.java#L95].
>  If however a name is provided in the config then it is not touched, which 
> means it is possible to create a connector with a different name at the root 
> level and in the config like this:
> {code:java}
> {
>   "name": "test1",
>   "config": {
>   "connector.class": 
> "org.apache.kafka.connect.tools.MockSinkConnector",
>   "tasks.max": "3",
>   "topics": "test-topic",
>   "name": "differentname"
>   },
>   "tasks": [
>   {
>   "connector": "test1",
>   "task": 0
>   }
>   ]
> }
> {code}
> I am not aware of any issues that this currently causes, but it is at least 
> confusing, probably not intended behavior, and definitely bears potential 
> for bugs if different functions take the name from different places.
> Would it make sense to add a check to reject requests that provide different 
> names in the request and the config section?
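
The suggested check is small; a sketch mirroring the copy-if-absent behavior linked above (hypothetical helper, not the actual ConnectorsResource code):

```java
import java.util.Map;

public final class ConnectorNameCheck {
    /** Copies the top-level name into the config if absent; rejects a mismatch. */
    static void reconcileName(String topLevelName, Map<String, String> config) {
        String configName = config.get("name");
        if (configName == null) {
            config.put("name", topLevelName);   // current behavior on creation
        } else if (!configName.equals(topLevelName)) {
            throw new IllegalArgumentException("Connector name '" + topLevelName
                + "' does not match name '" + configName + "' in the config section");
        }
    }

    private ConnectorNameCheck() {}
}
```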



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6074) Use ZookeeperClient in ReplicaManager and Partition

2017-11-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16265191#comment-16265191
 ] 

ASF GitHub Bot commented on KAFKA-6074:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4254


> Use ZookeeperClient in ReplicaManager and Partition
> ---
>
> Key: KAFKA-6074
> URL: https://issues.apache.org/jira/browse/KAFKA-6074
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 1.1.0
>Reporter: Jun Rao
>Assignee: Ted Yu
> Fix For: 1.1.0
>
> Attachments: 6074.v1.txt, 6074.v10.txt
>
>
> We want to replace the usage of ZkUtils with ZookeeperClient in 
> ReplicaManager and Partition.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6261) Request logging throws exception if acks=0

2017-11-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16265502#comment-16265502
 ] 

ASF GitHub Bot commented on KAFKA-6261:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4250


> Request logging throws exception if acks=0
> --
>
> Key: KAFKA-6261
> URL: https://issues.apache.org/jira/browse/KAFKA-6261
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 1.1.0, 1.0.1
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6241) Enable dynamic reconfiguration of SSL keystores

2017-11-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16265348#comment-16265348
 ] 

ASF GitHub Bot commented on KAFKA-6241:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/4263

KAFKA-6241: Enable dynamic updates of broker SSL keystore

Enable dynamic broker configuration (see KIP-226 for details). Includes
 - Base implementation to allow specific broker configs and custom configs 
to be dynamically updated
 - Extend DescribeConfigsRequest/Response to return all synonym configs and 
their sources in the order of precedence
 - Extend AdminClient to alter dynamic broker configs
 - Dynamic update of SSL keystores

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka 
KAFKA-6241-dynamic-keystore

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4263.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4263


commit 48a12506384d753a1ece6a92fcc59c3f60a35a81
Author: Rajini Sivaram 
Date:   2017-11-20T17:06:31Z

KAFKA-6241: Enable dynamic updates of broker SSL keystore




> Enable dynamic reconfiguration of SSL keystores
> ---
>
> Key: KAFKA-6241
> URL: https://issues.apache.org/jira/browse/KAFKA-6241
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 1.1.0
>
>
> See 
> [KIP-226|https://cwiki.apache.org/confluence/display/KAFKA/KIP-226+-+Dynamic+Broker+Configuration]
>  for details.
> This will include the base implementation to enable dynamic updates of broker 
> configs.
> SSL keystore update will be implemented as part of this task to enable 
> testing.
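
Once this lands, a keystore update becomes an ordinary config alteration rather than a broker restart; a sketch through the AdminClient (the listener-prefixed config key is an assumption based on KIP-226, not a confirmed name):

```java
import java.util.Collections;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public final class DynamicKeystoreUpdate {
    static void updateKeystore(AdminClient admin, String brokerId, String keystorePath)
            throws Exception {
        ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, brokerId);
        Config update = new Config(Collections.singleton(
            new ConfigEntry("listener.name.external.ssl.keystore.location", keystorePath)));
        admin.alterConfigs(Collections.singletonMap(broker, update)).all().get();
    }

    private DynamicKeystoreUpdate() {}
}
```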



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6258) SSLTransportLayer should keep reading from socket until either the buffer is full or the socket has no more data

2017-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16262082#comment-16262082
 ] 

ASF GitHub Bot commented on KAFKA-6258:
---

GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/4248

KAFKA-6258; SSLTransportLayer should keep reading from socket until either 
the buffer is full or the socket has no more data



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-6258

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4248.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4248


commit 12907b1a13a34f1b71c073b0a662e4c23eba20c0
Author: Dong Lin 
Date:   2017-11-22T04:08:50Z

KAFKA-6258; SSLTransportLayer should keep reading from socket until either 
the buffer is full or the socket has no more data




> SSLTransportLayer should keep reading from socket until either the buffer is 
> full or the socket has no more data
> 
>
> Key: KAFKA-6258
> URL: https://issues.apache.org/jira/browse/KAFKA-6258
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dong Lin
>Assignee: Dong Lin
>
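
The title states the whole change: keep draining the socket until one of the two stop conditions holds, rather than returning after a single read. A self-contained sketch of that loop shape (not the actual SSLTransportLayer code, which also has to unwrap TLS records):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public final class DrainingRead {
    /** Reads until the buffer is full or the socket has no more data; -1 on immediate EOF. */
    static int readFully(SocketChannel channel, ByteBuffer buffer) throws IOException {
        int total = 0;
        while (buffer.hasRemaining()) {
            int n = channel.read(buffer);
            if (n < 0) return total > 0 ? total : -1;   // end of stream
            if (n == 0) break;                          // no more data available right now
            total += n;
        }
        return total;
    }

    private DrainingRead() {}
}
```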




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6250) Kafka Connect requires permission to create internal topics even if they exist

2017-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16262060#comment-16262060
 ] 

ASF GitHub Bot commented on KAFKA-6250:
---

GitHub user gavrie opened a pull request:

https://github.com/apache/kafka/pull/4247

KAFKA-6250: Use existing internal topics without requiring ACL

When using Kafka Connect with a cluster that doesn't allow the user to
create topics (due to ACL configuration), Connect fails when trying to
create its internal topics, even if these topics already exist. This is
incorrect behavior according to the documentation, which mentions that
R/W access should be enough.

This happens specifically when using Aiven Kafka, which does not permit
creation of topics via the Kafka Admin Client API.

The patch ignores the returned error, similar to the behavior for older
brokers that don't support the API.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gavrie/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4247.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4247


commit 0b17d56257784d9def1418ab87650cd240892227
Author: Gavrie Philipson 
Date:   2017-11-22T06:56:28Z

KAFKA-6250: Use existing internal topics without requiring ACL

When using Kafka Connect with a cluster that doesn't allow the user to
create topics (due to ACL configuration), Connect fails when trying to
create its internal topics, even if these topics already exist. This is
incorrect behavior according to the documentation, which mentions that
R/W access should be enough.

This happens specifically when using Aiven Kafka, which does not permit
creation of topics via the Kafka Admin Client API.

The patch ignores the returned error, similar to the behavior for older
brokers that don't support the API.




> Kafka Connect requires permission to create internal topics even if they exist
> --
>
> Key: KAFKA-6250
> URL: https://issues.apache.org/jira/browse/KAFKA-6250
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.1, 1.0.0
>Reporter: Gavrie Philipson
>
> When using Kafka Connect with a cluster that doesn't allow the user to create 
> topics (due to ACL configuration), Connect fails when trying to create its 
> internal topics, even if these topics already exist.
> This happens specifically when using hosted [Aiven 
> Kafka|https://aiven.io/kafka], which does not permit creation of topics via 
> the Kafka Admin Client API.
> The problem is that Connect tries to create the topics, and ignores some 
> specific errors such as topics that already exist, but not authorization 
> errors.
> This is what happens:
> {noformat}
> 2017-11-21 15:57:24,176 [DistributedHerder] ERROR DistributedHerder:206 - 
> Uncaught exception in herder work thread, exiting:
> org.apache.kafka.connect.errors.ConnectException: Error while attempting to 
> create/find topic(s) 'connect-offsets'
>   at 
> org.apache.kafka.connect.util.TopicAdmin.createTopics(TopicAdmin.java:245)
>   at 
> org.apache.kafka.connect.storage.KafkaOffsetBackingStore$1.run(KafkaOffsetBackingStore.java:99)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:126)
>   at 
> org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:109)
>   at org.apache.kafka.connect.runtime.Worker.start(Worker.java:146)
>   at 
> org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:99)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:194)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster 
> authorization failed.
>   at 
> 
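
The patch's approach, sketched against the public admin API (the real change lives in Connect's TopicAdmin; this helper is illustrative):

```java
import java.util.Collections;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.ClusterAuthorizationException;
import org.apache.kafka.common.errors.TopicExistsException;

public final class LenientTopicCreation {
    /** Creates the topic, tolerating "already exists" and "not authorized to create". */
    static void ensureTopic(AdminClient admin, NewTopic topic) throws Exception {
        try {
            admin.createTopics(Collections.singleton(topic)).all().get();
        } catch (ExecutionException e) {
            Throwable cause = e.getCause();
            if (cause instanceof TopicExistsException) return;
            if (cause instanceof ClusterAuthorizationException) return; // assume it exists; R/W access is exercised later
            throw e;
        }
    }

    private LenientTopicCreation() {}
}
```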

[jira] [Commented] (KAFKA-6074) Use ZookeeperClient in ReplicaManager and Partition

2017-11-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16263468#comment-16263468
 ] 

ASF GitHub Bot commented on KAFKA-6074:
---

GitHub user tedyu opened a pull request:

https://github.com/apache/kafka/pull/4254

KAFKA-6074 Use ZookeeperClient in ReplicaManager and Partition

Replace ZkUtils with KafkaZkClient in ReplicaManager and Partition

Utilize existing unit tests

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tedyu/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4254.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4254


commit f4991b2bd55ba1d0674928b92e525bb40af781e1
Author: tedyu 
Date:   2017-11-22T22:27:33Z

KAFKA-6074 Use ZookeeperClient in ReplicaManager and Partition




> Use ZookeeperClient in ReplicaManager and Partition
> ---
>
> Key: KAFKA-6074
> URL: https://issues.apache.org/jira/browse/KAFKA-6074
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 1.1.0
>Reporter: Jun Rao
>Assignee: Ted Yu
> Fix For: 1.1.0
>
> Attachments: 6074.v1.txt, 6074.v10.txt
>
>
> We want to replace the usage of ZkUtils with ZookeeperClient in 
> ReplicaManager and Partition.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4499) Add "getAllKeys" API for querying windowed KTable stores

2017-11-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16263795#comment-16263795
 ] 

ASF GitHub Bot commented on KAFKA-4499:
---

GitHub user ConcurrencyPractitioner opened a pull request:

https://github.com/apache/kafka/pull/4258

[KAFKA-4499] Add all() and fetchAll() API for querying


### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ConcurrencyPractitioner/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4258.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4258


commit 0dfd5a62395705d11ed73ed382aa0d3247c6da7c
Author: RichardYuSTUG 
Date:   2017-11-23T04:18:38Z

[KAFKA-4499] Add all() and fetchAll() API for querying




> Add "getAllKeys" API for querying windowed KTable stores
> 
>
> Key: KAFKA-4499
> URL: https://issues.apache.org/jira/browse/KAFKA-4499
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>  Labels: needs-kip
> Attachments: 4499-All-test-v1.patch, 4499-CachingWindowStore-v1.patch
>
>
> Currently, both {{KTable}} and windowed-{{KTable}} stores can be queried via 
> the IQ feature. While {{ReadOnlyKeyValueStore}} (for {{KTable}} stores) provides 
> the method {{all()}} to scan the whole store (i.e., it returns an iterator over 
> all stored key-value pairs), there is no similar API for {{ReadOnlyWindowStore}} 
> (for windowed-{{KTable}} stores).
> This limits the usage of a windowed store, because the user needs to know 
> what keys are stored in order to query it. It would be useful to provide 
> possible APIs like this (only a rough sketch):
>  - {{keys()}}: returns all keys available in the store (maybe together with 
> the available time ranges)
>  - {{all(long timeFrom, long timeTo)}}: returns all windows for a specific 
> time range
>  - {{allLatest()}}: returns the latest window for each key
> Because this feature would require scanning multiple segments (i.e., RocksDB 
> instances), it would be quite inefficient with the current store design. Thus, 
> this feature also requires a redesign of the underlying window store itself.
> Because this is a major change, a KIP 
> (https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals)
>  is required. The KIP should cover the actual API design as well as the store 
> refactoring.
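
The PR linked above adds the first two scans in spirit, as `all()` and `fetchAll()`. A sketch of that query surface (signatures are illustrative, not the final API):

```java
import org.apache.kafka.streams.KeyValue;

// Sketch of the proposed additions to a ReadOnlyWindowStore-style interface.
// Windowed pairs a key with its window; callers must close the iterators.
public interface WindowedScans<K, V> {
    /** All windows for all keys currently in the store. */
    CloseableIterator<KeyValue<Windowed<K>, V>> all();

    /** All windows whose start time falls in [timeFrom, timeTo], for all keys. */
    CloseableIterator<KeyValue<Windowed<K>, V>> fetchAll(long timeFrom, long timeTo);

    interface CloseableIterator<T> extends java.util.Iterator<T>, AutoCloseable {}
    interface Windowed<K> { /* key plus window bounds */ }
}
```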



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6259) Make KafkaStreams.cleanup() clean global state directory

2017-11-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16263514#comment-16263514
 ] 

ASF GitHub Bot commented on KAFKA-6259:
---

GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/4255

KAFKA-6259: Make KafkaStreams.cleanup() clean global state directory


### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka kafka-6259-clean-global-state-dir

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4255.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4255


commit 6f910fbe3fea9e49501309f977441b756ead7bc0
Author: Matthias J. Sax 
Date:   2017-11-22T22:53:16Z

KAFKA-6259: Make KafkaStreams.cleanup() clean global state directory




> Make KafkaStreams.cleanup() clean global state directory
> 
>
> Key: KAFKA-6259
> URL: https://issues.apache.org/jira/browse/KAFKA-6259
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.2, 1.0, 1.0.1
>Reporter: Damian Guy
>Assignee: Matthias J. Sax
>
> We have {{KafkaStreams#cleanUp}} so that developers can remove all local 
> state during development, i.e., so they can start from a clean slate. 
> However, this presently doesn't clean up the global state directory.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6074) Use ZookeeperClient in ReplicaManager and Partition

2017-11-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16263463#comment-16263463
 ] 

ASF GitHub Bot commented on KAFKA-6074:
---

Github user tedyu closed the pull request at:

https://github.com/apache/kafka/pull/4166


> Use ZookeeperClient in ReplicaManager and Partition
> ---
>
> Key: KAFKA-6074
> URL: https://issues.apache.org/jira/browse/KAFKA-6074
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 1.1.0
>Reporter: Jun Rao
>Assignee: Ted Yu
> Fix For: 1.1.0
>
> Attachments: 6074.v1.txt, 6074.v10.txt
>
>
> We want to replace the usage of ZkUtils with ZookeeperClient in 
> ReplicaManager and Partition.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6174) Add methods in Options classes to keep binary compatibility with 0.11

2017-11-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16263646#comment-16263646
 ] 

ASF GitHub Bot commented on KAFKA-6174:
---

GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/4257

KAFKA-6174; Add methods in Options classes to keep binary compatibility 
with 0.11

From 0.11 to 1.0, we moved `DescribeClusterOptions timeoutMs(Integer 
timeoutMs)` from DescribeClusterOptions to AbstractOptions. A user reports that 
code compiled against 0.11.0 fails when it is executed with the 1.0 kafka-clients 
jar. This patch adds those methods back to keep binary compatibility with 0.11.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-6174

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4257.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4257


commit be3069d99e1da91d7bb2aa00be1c4e0b2050ad39
Author: Dong Lin 
Date:   2017-11-23T01:14:02Z

KAFKA-6174; Add methods in Options classes to keep binary compatibility 
with 0.11




> Add methods in Options classes to keep binary compatibility with 0.11
> -
>
> Key: KAFKA-6174
> URL: https://issues.apache.org/jira/browse/KAFKA-6174
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
> Fix For: 1.0.1
>
>
> From 0.11 to 1.0, we moved `DescribeClusterOptions timeoutMs(Integer 
> timeoutMs)` from DescribeClusterOptions to AbstractOptions. A user reports that 
> code compiled against 0.11.0 fails when it is executed with the 1.0 kafka-clients 
> jar.
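
The failure mode is binary, not source, compatibility: bytecode compiled against 0.11 embeds the exact declaring class and return type of `timeoutMs`, so once the method moves to `AbstractOptions` the old descriptor no longer resolves and callers hit `NoSuchMethodError` at runtime even though recompiling would succeed. A sketch of the shim (simplified class shapes, not the actual package layout):

```java
// Parent keeps the shared logic with a self-typed return.
class AbstractOptions<T extends AbstractOptions<T>> {
    protected Integer timeoutMs = null;

    @SuppressWarnings("unchecked")
    public T timeoutMs(Integer timeoutMs) {
        this.timeoutMs = timeoutMs;
        return (T) this;
    }
}

class DescribeClusterOptions extends AbstractOptions<DescribeClusterOptions> {
    // Re-added covariant override: preserves the exact method shape that
    // 0.11-compiled callers reference, while delegating to the parent.
    @Override
    public DescribeClusterOptions timeoutMs(Integer timeoutMs) {
        super.timeoutMs(timeoutMs);
        return this;
    }
}
```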



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5631) Use Jackson for serialising to JSON

2017-11-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16263894#comment-16263894
 ] 

ASF GitHub Bot commented on KAFKA-5631:
---

GitHub user umesh9794 opened a pull request:

https://github.com/apache/kafka/pull/4259

KAFKA-5631 : Use Jackson for serialising to JSON

This PR replaces the existing `Json.encode` with Jackson serialization. 
Since the change spans more than one module, it relies on the existing 
tests written for those modules.

If required, I can write new tests as well. Please suggest. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/umesh9794/kafka KAFKA-5631

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4259.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4259


commit 87b2377e1262d09ebfdeb5237bf4b47e9f2a8cdc
Author: umesh chaudhary 
Date:   2017-11-23T06:45:42Z

KAFKA-5631




> Use Jackson for serialising to JSON
> ---
>
> Key: KAFKA-5631
> URL: https://issues.apache.org/jira/browse/KAFKA-5631
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Umesh Chaudhary
>  Labels: newbie
> Fix For: 1.1.0
>
>
> We currently serialise to JSON via a manually written method `Json.encode`. 
> The implementation is naive: it does a lot of unnecessary String 
> concatenation and it doesn't handle escaping well.
> KAFKA-1595 switches to Jackson for parsing, so it would make sense to do this 
> after that one is merged.
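
For contrast, a minimal Jackson-based encoder gets escaping right for free (a standalone Java sketch; the real `Json` object lives in Kafka's Scala core):

```java
import java.util.Collections;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

public final class JsonEncode {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    /** Serialises a map/list/primitive tree to JSON with correct escaping. */
    static String encode(Object value) {
        try {
            return MAPPER.writeValueAsString(value);   // handles quotes, control chars, unicode
        } catch (JsonProcessingException e) {
            throw new IllegalArgumentException("Value cannot be encoded as JSON", e);
        }
    }

    public static void main(String[] args) {
        // Prints {"note":"say \"hi\""} — the quoting a hand-rolled encoder gets wrong.
        System.out.println(encode(Collections.singletonMap("note", "say \"hi\"")));
    }

    private JsonEncode() {}
}
```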



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6238) Issues with protocol version when applying a rolling upgrade to 1.0.0

2017-11-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16264421#comment-16264421
 ] 

ASF GitHub Bot commented on KAFKA-6238:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4256


> Issues with protocol version when applying a rolling upgrade to 1.0.0
> -
>
> Key: KAFKA-6238
> URL: https://issues.apache.org/jira/browse/KAFKA-6238
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.0
>Reporter: Diego Louzán
>Assignee: Jason Gustafson
>
> Hello,
> I am trying to perform a rolling upgrade from 0.10.0.1 to 1.0.0, and 
> according to the instructions in the documentation, I should only have to 
> upgrade the "inter.broker.protocol.version" parameter in the first step. But 
> after setting the value to "0.10.0" or "0.10.0.1" (tried both), the broker 
> refuses to start with the following error:
> {code}
> [2017-11-20 08:28:46,620] FATAL  (kafka.Kafka$)
> java.lang.IllegalArgumentException: requirement failed: 
> log.message.format.version 1.0-IV0 cannot be used when 
> inter.broker.protocol.version is set to 0.10.0.1
> at scala.Predef$.require(Predef.scala:224)
> at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:1205)
> at kafka.server.KafkaConfig.(KafkaConfig.scala:1170)
> at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:881)
> at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:878)
> at 
> kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
> at kafka.Kafka$.main(Kafka.scala:82)
> at kafka.Kafka.main(Kafka.scala)
> {code}
> I checked the instructions for rolling upgrades to previous versions (namely 
> 0.11.0.0), and there it is stated that the "log.message.format.version" 
> parameter also needs to be upgraded in two stages. I have tried that and 
> the upgrade worked. It seems this still applies to version 1.0.0, so I'm not 
> sure whether this is wrong documentation or an actual issue with Kafka, since 
> it should work as stated in the docs.
> Regards,
> Diego Louzán
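
In server.properties terms, the two-stage procedure that worked for the reporter pins both values during the first rolling bounce (versions match the reporter's starting point; adjust to your own):

```
# Stage 1: bounce each broker onto the 1.0.0 binaries with both versions pinned.
inter.broker.protocol.version=0.10.0
log.message.format.version=0.10.0

# Stage 2, once every broker runs 1.0.0: raise both in a second rolling bounce.
# inter.broker.protocol.version=1.0
# log.message.format.version=1.0
```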



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-1044) change log4j to slf4j

2017-11-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16262766#comment-16262766
 ] 

ASF GitHub Bot commented on KAFKA-1044:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3477


> change log4j to slf4j 
> --
>
> Key: KAFKA-1044
> URL: https://issues.apache.org/jira/browse/KAFKA-1044
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.0
>Reporter: shijinkui
>Assignee: Viktor Somogyi
>Priority: Minor
>  Labels: newbie
>
> Can you change log4j to slf4j? In my project I use logback, and it conflicts 
> with log4j.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4827) Kafka connect: error with special characters in connector name

2017-11-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16263010#comment-16263010
 ] 

ASF GitHub Bot commented on KAFKA-4827:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4205


> Kafka connect: error with special characters in connector name
> --
>
> Key: KAFKA-4827
> URL: https://issues.apache.org/jira/browse/KAFKA-4827
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.0
>Reporter: Aymeric Bouvet
>Assignee: Arjun Satish
>Priority: Minor
> Fix For: 1.1.0, 1.0.1
>
>
> When creating a connector, if the connector name (and possibly other 
> properties) ends with a carriage return, kafka-connect will create the config 
> but report an error.
> {code}
> cat << EOF > file-connector.json
> {
>   "name": "file-connector\r",
>   "config": {
> "topic": "kafka-connect-logs\r",
> "tasks.max": "1",
> "file": "/var/log/ansible-confluent/connect.log",
> "connector.class": 
> "org.apache.kafka.connect.file.FileStreamSourceConnector"
>   }
> }
> EOF
> curl -X POST -H "Content-Type: application/json" -H "Accept: 
> application/json" -d @file-connector.json localhost:8083/connectors 
> {code}
> returns an error 500 and logs the following
> {code}
> [2017-03-01 18:25:23,895] WARN  (org.eclipse.jetty.servlet.ServletHandler)
> javax.servlet.ServletException: java.lang.IllegalArgumentException: Illegal 
> character in path at index 27: /connectors/file-connector4
> at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489)
> at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:587)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at 
> org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:159)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalArgumentException: Illegal character in path at 
> index 27: /connectors/file-connector4
> at java.net.URI.create(URI.java:852)
> at 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource.createConnector(ConnectorsResource.java:100)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
> at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)
> at 
> 

[jira] [Commented] (KAFKA-6261) Request logging throws exception if acks=0

2017-11-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16262852#comment-16262852
 ] 

ASF GitHub Bot commented on KAFKA-6261:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/4250

KAFKA-6261: Fix exception thrown by request logging if acks=0

Only expect responseAsString to be set if request logging is
enabled _and_ responseSend is defined.

Also fixed a couple of issues that would manifest themselves
if trace logging is enabled:

- `MemoryRecords.toString` should not throw exception if data is corrupted
- Generate `responseString` correctly if unsupported api versions request is
received.

Unit tests were added for every issue fixed. Also changed
SocketServerTest to run with trace logging enabled as
request logging breakage has been a common issue.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
fix-issues-when-trace-logging-is-enabled

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4250.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4250


commit 0d63d0100cfacfb76f7f28190084d28c5e468422
Author: Ismael Juma 
Date:   2017-11-22T16:00:29Z

Generate responseString correctly if unsupported api versions request is 
received

commit 3973541214e8f442dcf29a947610ca0bd31b42a9
Author: Ismael Juma 
Date:   2017-11-22T16:01:13Z

MemoryRecords.toString should not throw exception if data is corrupted

commit a563bac859bdc7f210f6f61f104d5b93fb129878
Author: Ismael Juma 
Date:   2017-11-22T16:03:18Z

Don't throw exception on no op responses if request logging is enabled

commit 90993c573c52a0b675b54053f635165a3f6c4854
Author: Ismael Juma 
Date:   2017-11-22T16:07:06Z

Fix nit




> Request logging throws exception if acks=0
> --
>
> Key: KAFKA-6261
> URL: https://issues.apache.org/jira/browse/KAFKA-6261
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 1.1.0, 1.0.1
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5647) Use async ZookeeperClient for Admin operations

2017-11-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16264621#comment-16264621
 ] 

ASF GitHub Bot commented on KAFKA-5647:
---

GitHub user omkreddy opened a pull request:

https://github.com/apache/kafka/pull/4260

KAFKA-5647: Use KafkaZkClient in ReassignPartitionsCommand and 
PreferredReplicaLeaderElectionCommand

*  Use KafkaZkClient in ReassignPartitionsCommand
*  Use KafkaZkClient in PreferredReplicaLeaderElectionCommand
*  Updated test classes to use new methods
*  All existing tests should pass

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/omkreddy/kafka KAFKA-5647-ADMINCOMMANDS

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4260.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4260


commit b0dcf9fde0754f17e576186c47889b621b3b9769
Author: Manikumar Reddy 
Date:   2017-11-17T15:38:52Z

KAFKA-5647: Use KafkaZkClient in ReassignPartitionsCommand and 
PreferredReplicaLeaderElectionCommand




> Use async ZookeeperClient for Admin operations
> --
>
> Key: KAFKA-5647
> URL: https://issues.apache.org/jira/browse/KAFKA-5647
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Manikumar
> Fix For: 1.1.0
>
>
> Since we will be removing the ZK dependency in most of the admin clients, we 
> only need to change the admin operations used on the server side. This 
> includes converting AdminManager and the remaining usage of zkUtils in 
> KafkaApis to use ZookeeperClient/KafkaZkClient. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4827) Kafka connect: error with special characters in connector name

2017-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16270061#comment-16270061
 ] 

ASF GitHub Bot commented on KAFKA-4827:
---

GitHub user wicknicks opened a pull request:

https://github.com/apache/kafka/pull/4273

KAFKA-4827: Porting fix for KAFKA-4827 to v0.10 and v0.11



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wicknicks/kafka KAFKA-4827-0.10.2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4273.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4273


commit c6e8dc1edb074a2b677de43031d8b59fec4a5e1e
Author: Arjun Satish 
Date:   2017-11-10T17:59:42Z

Correctly encode special chars while creating URI objects

Signed-off-by: Arjun Satish 

commit 02ddc84850acdc5dc55dc146e48744be2346a929
Author: Arjun Satish 
Date:   2017-11-10T23:48:13Z

Replace URLEncoder with URIBuilder

Also, update the tests to pass some additional characters in the connector
name along with adding a decode step using the URI class.

Signed-off-by: Arjun Satish 

commit a1cd574cf1987e59cb89fc33da12aa1ced8bf9f5
Author: Arjun Satish 
Date:   2017-11-28T22:33:16Z

Porting fix for KAFKA-4827 to v0.10 and v0.11

Signed-off-by: Arjun Satish 
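
In miniature, the bug and the fix: building the Location header by string concatenation lets raw characters such as '\r' into the path, while an encoding constructor percent-escapes them (the commits above use URIBuilder; plain java.net.URI shows the same idea):

```java
import java.net.URI;
import java.net.URISyntaxException;

public final class ConnectorLocation {
    // Throws IllegalArgumentException for names containing '\r', spaces, etc.
    static URI unsafe(String connectorName) {
        return URI.create("/connectors/" + connectorName);
    }

    // The multi-argument constructor quotes illegal characters instead of
    // throwing, e.g. a trailing '\r' becomes "%0D".
    static URI safe(String connectorName) throws URISyntaxException {
        return new URI(null, null, "/connectors/" + connectorName, null);
    }

    private ConnectorLocation() {}
}
```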




> Kafka connect: error with special characters in connector name
> --
>
> Key: KAFKA-4827
> URL: https://issues.apache.org/jira/browse/KAFKA-4827
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.0
>Reporter: Aymeric Bouvet
>Assignee: Arjun Satish
>Priority: Minor
> Fix For: 1.1.0, 1.0.1
>
>
> When creating a connector, if the connector name (and possibly other 
> properties) ends with a carriage return, kafka-connect will create the config 
> but report an error.
> {code}
> cat << EOF > file-connector.json
> {
>   "name": "file-connector\r",
>   "config": {
> "topic": "kafka-connect-logs\r",
> "tasks.max": "1",
> "file": "/var/log/ansible-confluent/connect.log",
> "connector.class": 
> "org.apache.kafka.connect.file.FileStreamSourceConnector"
>   }
> }
> EOF
> curl -X POST -H "Content-Type: application/json" -H "Accept: 
> application/json" -d @file-connector.json localhost:8083/connectors 
> {code}
> returns an error 500 and logs the following
> {code}
> [2017-03-01 18:25:23,895] WARN  (org.eclipse.jetty.servlet.ServletHandler)
> javax.servlet.ServletException: java.lang.IllegalArgumentException: Illegal 
> character in path at index 27: /connectors/file-connector4
> at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489)
> at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:587)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at 
> org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:159)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> 
