[jira] [Commented] (KAFKA-5038) running multiple kafka streams instances causes one or more instance to get into file contention

2017-04-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15966069#comment-15966069
 ] 

ASF GitHub Bot commented on KAFKA-5038:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2848


> running multiple kafka streams instances causes one or more instance to get 
> into file contention
> 
>
> Key: KAFKA-5038
> URL: https://issues.apache.org/jira/browse/KAFKA-5038
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
> Environment: 3 Kafka broker machines and 3 kafka streams machines.
> Each machine is Linux 64 bit, CentOS 6.5 with 64GB memory, 8 vCPUs running in 
> AWS
> 31GB java heap space allocated to each KafkaStreams instance and 4GB 
> allocated to each Kafka broker.
>Reporter: Bharad Tirumala
>Assignee: Eno Thereska
> Fix For: 0.11.0.0, 0.10.2.1
>
>
> Having multiple kafka streams application instances causes one or more 
> instances to get into file lock contention, and the instance(s) become 
> unresponsive with an uncaught exception.
> The exception is below:
> 22:14:37.621 [StreamThread-7] WARN  o.a.k.s.p.internals.StreamThread - 
> Unexpected state transition from RUNNING to NOT_RUNNING
> 22:14:37.621 [StreamThread-13] WARN  o.a.k.s.p.internals.StreamThread - 
> Unexpected state transition from RUNNING to NOT_RUNNING
> 22:14:37.623 [StreamThread-18] WARN  o.a.k.s.p.internals.StreamThread - 
> Unexpected state transition from RUNNING to NOT_RUNNING
> 22:14:37.625 [StreamThread-7] ERROR n.a.a.k.t.KStreamTopologyBase - Uncaught 
> Exception:org.apache.kafka.streams.errors.ProcessorStateException: task 
> directory [/data/kafka-streams/rtp-kstreams-metrics/0_119] doesn't exist and 
> couldn't be created
>   at 
> org.apache.kafka.streams.processor.internals.StateDirectory.directoryForTask(StateDirectory.java:75)
>   at 
> org.apache.kafka.streams.processor.internals.StateDirectory.lock(StateDirectory.java:102)
>   at 
> org.apache.kafka.streams.processor.internals.StateDirectory.cleanRemovedTasks(StateDirectory.java:205)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeClean(StreamThread.java:753)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:664)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368)
> This happens within a couple of minutes after the instances are up, while there is 
> NO data being sent to the broker yet and the streams app is started with 
> auto.offset.reset set to "latest".
> Please note that there are no permissions or capacity issues. This may have 
> nothing to do with the number of instances, but I could easily reproduce it when 
> I have 3 stream instances running. This is similar to (and may be the same as) 
> [KAFKA-3758].
> Here is some relevant configuration info:
> 3 kafka brokers have one topic with 128 partitions and 1 replication
> 3 kafka streams applications (running on 3 machines) have a single processor 
> topology and this processor is not doing anything (the process() method just 
> returns and the punctuate method just commits)
> There is no data flowing yet, so the process() and punctuate() methods are not 
> even called yet.
> The 3 kafka stream instances have 43, 43 and 42 threads respectively 
> (totaling 128 threads, so one task per thread distributed across 
> three streams instances on 3 machines).
> Here are the configurations that I'd played around with:
> session.timeout.ms=30
> heartbeat.interval.ms=6
> max.poll.records=100
> num.standby.replicas=1
> commit.interval.ms=1
> poll.ms=100
> When punctuate is scheduled to be called every 1000ms or 3000ms, the problem 
> happens every time. If punctuate is scheduled for 5000ms, I didn't see the 
> problem in my test scenario (described above), but it happened in my real 
> application. But this may have nothing to do with the issue, since punctuate 
> is not even called as there are no messages streaming through yet.
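The failure above comes down to non-blocking file locks on per-task state directories: one instance's cleanup can delete a directory another thread is about to lock, so directoryForTask fails. A minimal, stdlib-only sketch of that locking pattern (class and method names are illustrative, not the actual StateDirectory code):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.OverlappingFileLockException;

public class TaskDirLockSketch {
    // Try to take an exclusive lock on a task directory's lock file without
    // blocking; returns false when another holder (process, or thread in this
    // JVM) already owns it. The channel is deliberately kept open while the
    // lock is held, so the lock stays live for the demo.
    public static boolean tryLockTaskDir(File taskDir) throws IOException {
        if (!taskDir.exists() && !taskDir.mkdirs()) {
            // matches the failure mode in the report: the directory vanished
            // (e.g. removed by another instance's cleanup) and can't be recreated
            throw new IOException("task directory [" + taskDir
                    + "] doesn't exist and couldn't be created");
        }
        FileChannel channel =
                new RandomAccessFile(new File(taskDir, ".lock"), "rw").getChannel();
        try {
            return channel.tryLock() != null; // null = held by another process
        } catch (OverlappingFileLockException e) {
            return false; // already held by another thread in this JVM
        }
    }

    // First acquisition succeeds; a second attempt on the same directory
    // contends with the still-held lock and fails.
    public static boolean demo() throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"),
                "kstreams-lock-demo/0_119");
        boolean first = tryLockTaskDir(dir);
        boolean second = tryLockTaskDir(dir);
        return first && !second;
    }
}
```

With many threads per instance plus a background cleaner taking these locks, one thread deleting a directory between another thread's existence check and lock attempt produces exactly the ProcessorStateException in the log above.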



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5059) Implement Transactional Coordinator

2017-04-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15966321#comment-15966321
 ] 

ASF GitHub Bot commented on KAFKA-5059:
---

Github user dguy closed the pull request at:

https://github.com/apache/kafka/pull/2846


> Implement Transactional Coordinator
> ---
>
> Key: KAFKA-5059
> URL: https://issues.apache.org/jira/browse/KAFKA-5059
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Reporter: Damian Guy
>Assignee: Damian Guy
>
> This covers the implementation of the transaction coordinator to support 
> transactions, as described in KIP-98: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+Exactly+Once+Delivery+and+Transactional+Messaging



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5059) Implement Transactional Coordinator

2017-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15967233#comment-15967233
 ] 

ASF GitHub Bot commented on KAFKA-5059:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2849

KAFKA-5059: Implement Transactional Coordinator



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka exactly-once-tc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2849.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2849


commit 4d17b7c96293ca8f9735049070512be9707aba27
Author: Guozhang Wang 
Date:   2017-03-02T01:42:49Z

Transaction log message format (#134)

* add transaction log message format
* add transaction timeout to initPid request
* collapse to one message type

commit af926510d2fd455a0ea4e82da83e10cde65db4e9
Author: Apurva Mehta 
Date:   2017-03-15T20:47:25Z

Fix build and test errors due to rebase onto idempotent-producer branch

commit fc3544bf6b55c48d487ef2b7877280d3ac90debb
Author: Guozhang Wang 
Date:   2017-03-17T05:40:49Z

Transaction log partition Immigration and Emigration (#142)

* sub-package transaction and group classes within coordinator
* add loading and cleaning up logic
* add transaction configs

commit fc5fe9226dd4374018f6b5fe3c182158530af193
Author: Guozhang Wang 
Date:   2017-03-21T04:38:35Z

Add transactions broker configs (#146)

* add all broker-side configs
* check for transaction timeout value
* added one more exception type

commit ef390df0eacc8d1f32f96b2db792326a053a5db1
Author: Guozhang Wang 
Date:   2017-03-31T22:20:05Z

Handle addPartitions and addOffsets on TC (#147)

* handling add offsets to txn
* add a pending state with prepareTransition / completeTransaction / 
abortTransition of state
* refactor handling logic for multiple in-flight requests

commit 2a6526a861546eb4102b900d1da703fd2914bd43
Author: Apurva Mehta 
Date:   2017-04-07T19:49:19Z

Fix build errors after rebase onto trunk and dropping out the request stubs 
and client changes.

commit 4d18bb178cd48364bf610e615b176ad8f0d8385f
Author: Apurva Mehta 
Date:   2017-04-03T21:17:25Z

Fix test errors after rebase:

 1. Notable conflicts are with the small API changes to
DelayedOperation and the newly introduced purgeDataBefore PR.

 2. Jason's update to support streaming decompression required a bit of
an overhaul to the way we handle aborted transactions on the consumer.

commit f639b962e8ba618baaef47611e21e2b85b5e5725
Author: Guozhang Wang 
Date:   2017-03-24T22:42:53Z

fix unit tests

commit 853c5e8abffdb723c6f6b818fdeeab94da8667ed
Author: Guozhang Wang 
Date:   2017-03-24T22:52:37Z

add sender thread

commit 879c01c3b5b305485cfd26cb8ceedf453b984067
Author: Guozhang Wang 
Date:   2017-03-28T01:04:53Z

rename TC Send Thread to general inter-broker send thread

commit 239e7f733f8b814ca2d966a80359d8d0de5dee50
Author: Guozhang Wang 
Date:   2017-03-29T21:58:45Z

add tc channel manager

commit b1561da6e2893fad7bcfacba76db4e4df6414577
Author: Guozhang Wang 
Date:   2017-03-29T21:59:26Z

missing files

commit 62685c7269fc648a2401fc7a71f31b9536d7c08a
Author: Guozhang Wang 
Date:   2017-03-31T22:15:37Z

add the txn marker channel manager

commit 298790154c9bfe46f8e4a6b2e0372297fb19896a
Author: Damian Guy 
Date:   2017-04-05T16:09:27Z

fix compilation errors

commit 4f5c23d051453d27f3179a442fe3d822b77d4e12
Author: Damian Guy 
Date:   2017-04-10T10:58:43Z

integrate EndTxnRequest

commit e5f25f31e85fd8104c3df8f8195ccb60694610bc
Author: Damian Guy 
Date:   2017-04-10T13:43:40Z

add test for InterBrokerSendThread. Refactor to use delegation rather than 
inheritance

commit 8bbd7a07be28585cd329a1fc769fcc340f866af2
Author: Damian Guy 
Date:   2017-04-10T16:24:24Z

refactor TransactionMarkerChannelManager. Add some test

commit 195bccf8c3945696e6e15cc093072ba83e706eec
Author: Damian Guy 
Date:   2017-04-10T18:25:57Z

more tests

commit c28eb5a0b339cce023e278d7eafcf3e8a98fa8e2
Author: Damian Guy 
Date:   2017-04-11T09:23:36Z

remove some answered TODOs

commit 4346c4d36f242e2480e4a808bed0ef19df6a2335
Author: Damian Guy 
Date:   2017-04-11T15:46:37Z

update to WriteTxnMarkersRequest/Response from Trunk

commit 46880d78eae7d2e7853c404bd1d9b19b8ec4e569
Author: Damian Guy 
Date:   2017-04-11T16:19:01Z

add missing @Test annotation

commit cbcd55e0d046d8c6d88ddfa5bbdfbc230b171e13
Author: Damian Guy 
Date:   2017-04-12T19:59:19Z

fixes after rebase
Add tests for TransactionMarkerRequestCompletionHandler

commit b307e5d395afb4fafaa4546d1284b9e5bc73c146
Author: Damian G

[jira] [Commented] (KAFKA-5065) AbstractCoordinator.ensureCoordinatorReady() stuck in loop if absent any bootstrap servers

2017-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15967548#comment-15967548
 ] 

ASF GitHub Bot commented on KAFKA-5065:
---

GitHub user porshkevich opened a pull request:

https://github.com/apache/kafka/pull/2850

KAFKA-5065; AbstractCoordinator.ensureCoordinatorReady() stuck in loop if 
absent any bootstrap servers

add a consumer config: "max.block.ms",
defaulting to 6 ms;
when specified, the default ensureCoordinatorReady call will be
limited by "max.block.ms"

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/porshkevich/kafka KAFKA-5065

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2850.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2850


commit 99004de30a5400b2d8554b4a4469039498e033d4
Author: Vladimir Porshkevich 
Date:   2017-04-13T12:41:31Z

Add max.block.ms to allow timing out ensureCoordinatorReady check.




> AbstractCoordinator.ensureCoordinatorReady() stuck in loop if absent any 
> bootstrap servers 
> ---
>
> Key: KAFKA-5065
> URL: https://issues.apache.org/jira/browse/KAFKA-5065
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.0, 0.10.0.1, 0.10.1.0, 0.10.1.1, 0.10.2.0
>Reporter: Vladimir Porshkevich
>  Labels: newbie
>   Original Estimate: 4m
>  Remaining Estimate: 4m
>
> If the Consumer is started with wrong bootstrap servers, or no valid servers are 
> reachable, and a thread calls Consumer.poll(timeout) with any timeout, the thread 
> gets stuck in a loop with debug logs like
> {noformat}
> org.apache.kafka.common.network.Selector - Connection with /172.31.1.100 
> disconnected
> java.net.ConnectException: Connection timed out: no further information
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
>   at 
> org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:51)
>   at 
> org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:81)
>   at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:335)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:203)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:138)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:216)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:275)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
>   at 
> com.example.SccSpringCloudDemoApplication.main(SccSpringCloudDemoApplication.java:46)
> {noformat}
> The problem is in the AbstractCoordinator.ensureCoordinatorReady() method:
> it uses Long.MAX_VALUE as the timeout.
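The proposed fix amounts to replacing that effectively infinite wait with a caller-supplied bound. A hedged sketch of the loop shape, where coordinatorReady is a hypothetical probe standing in for the real client internals (not the actual Kafka consumer API):

```java
import java.util.function.BooleanSupplier;

public class BoundedReadySketch {
    // Wait for the coordinator to become ready, but give up after maxBlockMs
    // instead of looping forever on an unreachable bootstrap server.
    public static boolean ensureCoordinatorReady(long maxBlockMs,
                                                 BooleanSupplier coordinatorReady) {
        long deadline = System.currentTimeMillis() + maxBlockMs;
        while (System.currentTimeMillis() < deadline) {
            if (coordinatorReady.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(10); // brief backoff between retries
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false; // timed out: caller's poll() can now return
    }
}
```

A bounded return value lets poll(timeout) honor its own timeout instead of inheriting the Long.MAX_VALUE wait described above.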



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4346) Add foreachValue method to KStream

2017-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15968177#comment-15968177
 ] 

ASF GitHub Bot commented on KAFKA-4346:
---

Github user xvrl closed the pull request at:

https://github.com/apache/kafka/pull/2063


> Add foreachValue method to KStream
> --
>
> Key: KAFKA-4346
> URL: https://issues.apache.org/jira/browse/KAFKA-4346
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>Priority: Minor
>  Labels: needs-kip, newbie
>
> This would be the value-only counterpart to foreach, similar to mapValues.
> Adding this method would enhance readability and allow for Java 8 syntactic 
> sugar using method references without having to wrap existing methods that 
> only operate on the value type.
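The proposed foreachValue is essentially an adapter from a value-only action to the key-value action foreach takes, the same relationship mapValues has to map. A sketch using plain java.util.function types (not the real KStream API):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;
import java.util.function.Consumer;

public class ForeachValueSketch {
    // Wrap a value-only action as a key-value action that ignores the key.
    public static <K, V> BiConsumer<K, V> foreachValue(Consumer<? super V> action) {
        return (key, value) -> action.accept(value);
    }

    // A method reference on the value type plugs in directly, which is the
    // readability win described above.
    public static List<String> demo() {
        Map<Integer, String> records = new LinkedHashMap<>();
        records.put(1, "a");
        records.put(2, "b");
        List<String> seen = new ArrayList<>();
        records.forEach(foreachValue(seen::add));
        return seen;
    }
}
```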



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5069) add controller integration tests

2017-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15968497#comment-15968497
 ] 

ASF GitHub Bot commented on KAFKA-5069:
---

GitHub user onurkaraman opened a pull request:

https://github.com/apache/kafka/pull/2853

KAFKA-5069: add controller integration tests

Test the various controller protocols by observing zookeeper and broker 
state.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/onurkaraman/kafka KAFKA-5069

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2853.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2853


commit 55544d2375fa267762bc5ecc233f7a296202922d
Author: Onur Karaman 
Date:   2017-04-14T01:54:43Z

KAFKA-5069: add controller integration tests

Test the various controller protocols by observing zookeeper and broker 
state.




> add controller integration tests
> 
>
> Key: KAFKA-5069
> URL: https://issues.apache.org/jira/browse/KAFKA-5069
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>
> Test the various controller protocols by observing zookeeper and broker state.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4986) Add producer per task support

2017-04-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15969086#comment-15969086
 ] 

ASF GitHub Bot commented on KAFKA-4986:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2773


> Add producer per task support
> -
>
> Key: KAFKA-4986
> URL: https://issues.apache.org/jira/browse/KAFKA-4986
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>
> Add a new config parameter {{processing_guarantee}} and enable "producer per 
> task" initialization if the new config is set to {{exactly_once}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4986) Add producer per task support

2017-04-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15969243#comment-15969243
 ] 

ASF GitHub Bot commented on KAFKA-4986:
---

GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/2854

KAFKA-4986: Adding producer per task (follow-up)

 - addressing open Github comments from #2773
 - test clean-up

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka 
kafka-4986-producer-per-task-follow-up

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2854.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2854


commit 76e12832b718ab674129dab21ef97e675bf4ac37
Author: Matthias J. Sax 
Date:   2017-04-13T23:30:27Z

KAFKA-4986: Adding producer per task (follow-up)
 - addressing open Github comments from #2773
 - test clean-up




> Add producer per task support
> -
>
> Key: KAFKA-4986
> URL: https://issues.apache.org/jira/browse/KAFKA-4986
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>
> Add a new config parameter {{processing_guarantee}} and enable "producer per 
> task" initialization if the new config is set to {{exactly_once}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5073) Kafka Streams stuck rebalancing after exception thrown in rebalance listener

2017-04-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15969505#comment-15969505
 ] 

ASF GitHub Bot commented on KAFKA-5073:
---

GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/2856

KAFKA-5073: Kafka Streams stuck rebalancing after exception thrown in 
rebalance listener



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka kafka-5073

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2856.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2856


commit 3d47b2327f6f519616ed4471fae5dbb3b7b64251
Author: Matthias J. Sax 
Date:   2017-04-14T20:20:45Z

KAFKA-5073: Kafka Streams stuck rebalancing after exception thrown in 
rebalance listener




> Kafka Streams stuck rebalancing after exception thrown in rebalance listener
> 
>
> Key: KAFKA-5073
> URL: https://issues.apache.org/jira/browse/KAFKA-5073
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Xavier Léauté
>Assignee: Matthias J. Sax
>
> An exception thrown in the Streams rebalance listener will cause the Kafka 
> consumer coordinator to log an error, but the streams app will not bubble the 
> exception up to the uncaught exception handler.
> This will leave the app stuck in rebalancing state if for instance an 
> exception is thrown by the consumer during state restore.
> Here is an example log that shows the error when the consumer throws a CRC 
> error during state restore.
> {code}
> [2017-04-13 14:46:41,409] ERROR [XXX-StreamThread-1] User provided listener 
> org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener 
> for group XXX failed on partition assignment 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:269)
> org.apache.kafka.common.KafkaException: Record batch for partition 
> _my_topic-0 at offset 42 is invalid, cause: Record is corrupt (stored crc = 
> 1982353474, computed crc = 1572524932)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.maybeEnsureValid(Fetcher.java:904)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.nextFetchedRecord(Fetcher.java:936)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:960)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1200(Fetcher.java:864)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:517)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:482)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1069)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1002)
> at 
> org.apache.kafka.streams.processor.internals.StoreChangelogReader.restore(StoreChangelogReader.java:145)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned(StreamThread.java:1329)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:265)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:363)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:310)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:296)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1037)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1002)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:546)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:702)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:326)
> {code}
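One way to surface such a callback failure is the "record in the callback, rethrow on the next poll" pattern: the exception is captured rather than swallowed by the coordinator, and raised to the polling thread so it can reach the uncaught exception handler. A minimal sketch of the pattern (not the actual fix in the PR):

```java
import java.util.concurrent.atomic.AtomicReference;

public class DeferredCallbackErrorSketch {
    private final AtomicReference<RuntimeException> pending = new AtomicReference<>();

    // Runs the rebalance work (e.g. state restore, which may hit a CRC error)
    // and remembers the first failure instead of letting it vanish in a log line.
    public void onPartitionsAssigned(Runnable restoreStores) {
        try {
            restoreStores.run();
        } catch (RuntimeException e) {
            pending.compareAndSet(null, e);
        }
    }

    // Called at the top of each poll cycle: rethrows any deferred failure
    // exactly once, making it visible to the caller of poll().
    public void maybeThrowOnPoll() {
        RuntimeException e = pending.getAndSet(null);
        if (e != null) {
            throw e;
        }
    }

    // Demonstrates the flow: the callback failure surfaces only on the next poll,
    // and is cleared after being thrown.
    public static boolean demo() {
        DeferredCallbackErrorSketch listener = new DeferredCallbackErrorSketch();
        listener.onPartitionsAssigned(() -> {
            throw new IllegalStateException("Record is corrupt");
        });
        try {
            listener.maybeThrowOnPoll();
            return false; // should have thrown
        } catch (IllegalStateException expected) {
            listener.maybeThrowOnPoll(); // cleared: must not throw twice
            return true;
        }
    }
}
```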



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5049) Chroot check should be done for each ZkUtils instance

2017-04-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15969834#comment-15969834
 ] 

ASF GitHub Bot commented on KAFKA-5049:
---

GitHub user anukin opened a pull request:

https://github.com/apache/kafka/pull/2857

KAFKA-5049 

This PR is for 
https://issues.apache.org/jira/browse/KAFKA-5049?jql=project%20%3D%20KAFKA%20AND%20labels%20%3D%20newbie%20AND%20status%20%3D%20Open
It enables the chroot check for each ZkUtils instance.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/anukin/kafka KAFKA_5049_zkroot_check

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2857.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2857


commit 557332b0a4fbb7730334439deed417b0be849a96
Author: anukin 
Date:   2017-04-15T06:55:11Z

made zkpath an instance variable in zkutils




> Chroot check should be done for each ZkUtils instance
> -
>
> Key: KAFKA-5049
> URL: https://issues.apache.org/jira/browse/KAFKA-5049
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> In KAFKA-1994, the check for ZK chroot was moved to ZkPath. However, ZkPath 
> is a JVM singleton and we may use multiple ZkClient instances with multiple 
> ZooKeeper ensembles in the same JVM (for cluster info, authorizer and 
> pluggable code provided by users).
> The right way to do this is to make ZkPath an instance variable in ZkUtils so 
> that we do the check once per ZkUtils instance.
> cc [~gwenshap] [~junrao], who reviewed KAFKA-1994, in case I am missing 
> something.
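The refactor described above moves the check from a JVM-wide singleton (where a second ZooKeeper ensemble would silently skip its own verification) onto each instance. A hedged sketch of the shape; pathExists stands in for the real ZooKeeper client call, and the class is illustrative rather than the actual ZkUtils code:

```java
import java.util.function.Predicate;

public class ChrootCheckSketch {
    private final String chroot;
    private boolean chrootChecked = false;

    public ChrootCheckSketch(String chroot) {
        this.chroot = chroot;
    }

    // At most one check per instance, not one per JVM, so two instances
    // pointing at different ensembles each verify their own chroot.
    public void ensureChroot(Predicate<String> pathExists) {
        if (chrootChecked) {
            return;
        }
        if (!pathExists.test(chroot)) {
            throw new IllegalStateException("chroot " + chroot + " does not exist");
        }
        chrootChecked = true;
    }

    // Two instances check independently: A passing does not let B skip its check.
    public static boolean demo() {
        new ChrootCheckSketch("/clusterA").ensureChroot(path -> true);
        try {
            new ChrootCheckSketch("/clusterB").ensureChroot(path -> false);
            return false;
        } catch (IllegalStateException expected) {
            return true;
        }
    }
}
```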



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-438) Code cleanup in MessageTest

2017-04-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15970161#comment-15970161
 ] 

ASF GitHub Bot commented on KAFKA-438:
--

GitHub user jozi-k opened a pull request:

https://github.com/apache/kafka/pull/2858

KAFKA-438: Code cleanup in MessageTest



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jozi-k/kafka MessageTest-code-cleanup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2858.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2858


commit f5975183306e42e2881fd01de80dad65df628d33
Author: Jozef Koval 
Date:   2017-04-15T21:43:24Z

KAFKA-438: Code cleanup in MessageTest




> Code cleanup in MessageTest
> ---
>
> Key: KAFKA-438
> URL: https://issues.apache.org/jira/browse/KAFKA-438
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.7.1
>Reporter: Jim Plush
>Priority: Trivial
> Attachments: KAFKA-438
>
>
> While exploring the unit tests, this class had an unused import statement, 
> some ambiguity about which HashMap implementation was being used, and assignments 
> of function return values where not required. 
> Trivial stuff.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5075) Defer exception to the next pollOnce() if consumer's fetch position has already increased

2017-04-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15970219#comment-15970219
 ] 

ASF GitHub Bot commented on KAFKA-5075:
---

GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/2859

KAFKA-5075; Defer exception to the next pollOnce() if consumer's fetch 
position has already increased



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-5075

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2859.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2859


commit a0d358396c94c6252790920fc80d42d60caf6289
Author: Dong Lin 
Date:   2017-04-15T22:42:26Z

KAFKA-5075; Defer exception to the next pollOnce() if consumer's fetch 
position has already increased




> Defer exception to the next pollOnce() if consumer's fetch position has 
> already increased
> -
>
> Key: KAFKA-5075
> URL: https://issues.apache.org/jira/browse/KAFKA-5075
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 0.10.2.0
>Reporter: Jiangjie Qin
>Assignee: Dong Lin
> Fix For: 0.11.0.0
>
>
> In Fetcher.fetchRecords() we iterate over the partition data to collect the 
> ConsumerRecords; after we collect some consumer records from a partition, we 
> advance the position of that partition and then move on to the next partition. If 
> the next partition throws an exception (e.g. OffsetOutOfRangeException), the 
> messages that have already been read out of the buffer will not be delivered 
> to the users. Since the positions of the previous partitions have already been 
> updated, those messages will not be consumed again either.
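The idea behind deferring is: when a partition's data turns out to be bad mid-iteration, first hand back everything already collected (whose positions have already advanced), and only raise the error on the next call, so no fetched records are silently lost. A simplified sketch where integers stand in for records and a null partition simulates one that throws (not the actual Fetcher code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class FetchDeferSketch {
    private RuntimeException deferred;

    public List<Integer> fetchRecords(Iterator<List<Integer>> partitions) {
        if (deferred != null) {
            RuntimeException e = deferred;
            deferred = null;
            throw e; // surfaced now, after the earlier records were delivered
        }
        List<Integer> collected = new ArrayList<>();
        while (partitions.hasNext()) {
            List<Integer> partition = partitions.next();
            if (partition == null) {
                // stand-in for e.g. OffsetOutOfRangeException
                deferred = new IllegalStateException("offset out of range");
                break; // return what we have; rethrow on the next call
            }
            collected.addAll(partition); // position advanced for this partition
        }
        return collected;
    }

    // First call delivers the records collected before the failure;
    // the second call raises the deferred exception.
    public static boolean demo() {
        FetchDeferSketch fetcher = new FetchDeferSketch();
        List<List<Integer>> data =
                Arrays.asList(Arrays.asList(1, 2), null, Arrays.asList(3));
        if (!fetcher.fetchRecords(data.iterator()).equals(Arrays.asList(1, 2))) {
            return false;
        }
        try {
            fetcher.fetchRecords(data.iterator());
            return false;
        } catch (IllegalStateException expected) {
            return true;
        }
    }
}
```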



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5075) Defer exception to the next pollOnce() if consumer's fetch position has already increased

2017-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15970727#comment-15970727
 ] 

ASF GitHub Bot commented on KAFKA-5075:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2859


> Defer exception to the next pollOnce() if consumer's fetch position has 
> already increased
> -
>
> Key: KAFKA-5075
> URL: https://issues.apache.org/jira/browse/KAFKA-5075
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 0.10.2.0
>Reporter: Jiangjie Qin
>Assignee: Dong Lin
> Fix For: 0.11.0.0
>
>
> In Fetcher.fetchRecords() we iterate over the partition data to collect the 
> ConsumerRecords; after we collect some consumer records from a partition, we 
> advance the position of that partition and then move on to the next partition. If 
> the next partition throws an exception (e.g. OffsetOutOfRangeException), the 
> messages that have already been read out of the buffer will not be delivered 
> to the users. Since the positions of the previous partitions have already been 
> updated, those messages will not be consumed again either.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4862) Kafka client connect to a shutdown node will block for a long time

2017-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15971112#comment-15971112
 ] 

ASF GitHub Bot commented on KAFKA-4862:
---

GitHub user pengwei-li opened a pull request:

https://github.com/apache/kafka/pull/2861

KAFKA-4862: Kafka client connect to a shutdown node will block for a long 
time

Author: pengwei 

Reviewers: Jiangjie Qin 
@becketqin 

Modify: Add a connect timeout for the kafka client to avoid long blocking if the
network is down

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pengwei-li/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2861.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2861


commit a920d4e9807add634cc44e4b7cf9e156edd515cf
Author: pengwei-li 
Date:   2016-07-10T00:31:56Z

 KAFKA-1429: Yet another deadlock in controller shutdown

 Author: pengwei 

 Reviewers: NA

commit 2a5a4322c8ac359587f05b459588cd2b5843a2ac
Author: pengwei-li 
Date:   2016-11-20T11:31:21Z

Merge branch 'trunk' of https://github.com/apache/kafka into trunk

commit b827a8b4f249050ca40db9f14e8e10b01650a6b8
Author: pengwei-li 
Date:   2016-11-20T12:18:49Z

Merge branch 'trunk' of https://github.com/apache/kafka into trunk

commit 43e186f223dee1e24177a87ee6888eaae91547d9
Author: pengwei-li 
Date:   2016-11-27T01:54:00Z

Merge branch 'trunk' of https://github.com/apache/kafka into trunk

commit febe4f433452a2ad8849a329bc5c9f4d1507a317
Author: pengwei-li 
Date:   2016-11-27T03:31:26Z

issue: KAFKA-4229
reason: controller can't start after several ZooKeeper session-expiry events

commit f6791b29998a49dffbefdf5414584b7849bfbd3c
Author: c00353482 
Date:   2017-01-04T02:25:55Z

Merge branch 'trunk' of https://github.com/apache/kafka into trunk

commit 2e7090567aa4cd2ffe02fa927bcfdd062029087b
Author: c00353482 
Date:   2017-04-01T07:58:09Z

Merge https://github.com/apache/kafka into trunk

commit fb77b2e7a32d1a678b45aa035211597793ff1fd0
Author: c00353482 
Date:   2017-04-14T08:16:04Z

Merge branch 'trunk' of https://github.com/apache/kafka into trunk

commit 270e0b45c08893463c8bf4f553d5a506d6603d41
Author: c00353482 
Date:   2017-04-17T08:08:35Z

Merge branch 'trunk' of https://github.com/apache/kafka into trunk

commit 13c6134dfb2313f946e37d86985ef8b4336706ed
Author: c00353482 
Date:   2017-04-17T14:15:52Z

KAFKA-4862: Kafka client connect to a shutdown node will block for a long
time
Add a connection timeout to the Kafka client to avoid long blocking if the
network is down

Author: pengwei 

Reviewers: Ismael Juma 




> Kafka client connect to a shutdown node will block for a long time
> --
>
> Key: KAFKA-4862
> URL: https://issues.apache.org/jira/browse/KAFKA-4862
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.10.2.0
>Reporter: Pengwei
>Assignee: Pengwei
> Fix For: 0.11.0.0
>
>
> Currently in our test env, we found that after one of the broker nodes crashes 
> (reboot or OS crash), the client may keep connecting to the crashed node to send 
> metadata or other requests, and it takes several minutes for it to notice that 
> the connection has timed out before it tries another node. As a result, the 
> client may remain unaware of metadata changes for several minutes.
> The network client has no connection timeout; we should add a 
> connection timeout for the client
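The issue above boils down to bounding how long a client blocks while connecting. A minimal sketch of a bounded connect (the helper is hypothetical, not the actual NetworkClient change; 192.0.2.1 in the usage note is a TEST-NET address chosen to be unreachable):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

class BoundedConnect {
    // Attempt a TCP connection but give up after timeoutMs, instead of
    // blocking for the OS-level connect timeout (which can be minutes).
    static boolean tryConnect(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false; // caller can move on and try another broker
        }
    }
}
```

With a 200 ms bound, `tryConnect("192.0.2.1", 9092, 200)` returns false quickly instead of hanging the metadata path.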





[jira] [Commented] (KAFKA-5076) Remove usage of java.xml.bind.* classes hidden by default in JDK9

2017-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15971321#comment-15971321
 ] 

ASF GitHub Bot commented on KAFKA-5076:
---

GitHub user xvrl opened a pull request:

https://github.com/apache/kafka/pull/2862

KAFKA-5076: remove usage of java.xml.bind.* classes

* replaces base64 from DatatypeConverter with Base64 introduced in JDK 8

Given that we plan to stop supporting Java 7 in 0.11 this should be fine, 
but merging this would depend on when we are comfortable introducing 
backwards-incompatible changes.
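The swap described above is mechanical: `DatatypeConverter` lives in `java.xml.bind`, which JDK 9 hides by default, while `java.util.Base64` has been in the core JDK since 8. A minimal sketch:

```java
import java.util.Base64;

class Base64Migration {
    // Before (javax.xml.bind, hidden by default in JDK 9):
    //   DatatypeConverter.printBase64Binary(data)
    // After: java.util.Base64, in the JDK since 8, no extra module needed.
    static String encode(byte[] data) {
        return Base64.getEncoder().encodeToString(data);
    }

    static byte[] decode(String encoded) {
        return Base64.getDecoder().decode(encoded);
    }
}
```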

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xvrl/kafka KAFKA-5076

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2862.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2862


commit b610071924ea46a1792a03fcbe4d4871e518ecdc
Author: Xavier Léauté 
Date:   2017-04-14T23:20:42Z

KAFKA-5076: remove usage of java.xml.bind.* classes

* replaces base64 from DatatypeConverter with Base64 introduced in jdk 8




> Remove usage of java.xml.bind.* classes hidden by default in JDK9
> -
>
> Key: KAFKA-5076
> URL: https://issues.apache.org/jira/browse/KAFKA-5076
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>Priority: Minor
>






[jira] [Commented] (KAFKA-5077) Make server start script work against Java 9

2017-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15971406#comment-15971406
 ] 

ASF GitHub Bot commented on KAFKA-5077:
---

GitHub user xvrl opened a pull request:

https://github.com/apache/kafka/pull/2863

KAFKA-5077 fix GC logging arguments for Java 9



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xvrl/kafka fix-jdk9-gc-logs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2863.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2863


commit cf024348133e61383a2a67587389163d21b172a6
Author: Xavier Léauté 
Date:   2017-04-17T17:47:39Z

KAFKA-5077 fix GC logging arguments for Java 9




> Make server start script work against Java 9
> 
>
> Key: KAFKA-5077
> URL: https://issues.apache.org/jira/browse/KAFKA-5077
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.11.0.0
>Reporter: Xavier Léauté
>Priority: Minor
>
> Current start script fails with {{Unrecognized VM option 
> 'PrintGCDateStamps'}} using Java 9-ea
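The failure above comes from JDK 9's unified GC logging (JEP 271), which dropped the old `-XX:+PrintGC*` flags. A sketch of the substitution the start script needs; the file name and rotation settings below are illustrative, not the merged patch:

```shell
# Pre-Java 9 GC logging flags (rejected by the Java 9 VM with
# "Unrecognized VM option"):
#   -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps
# Java 9+ unified-logging equivalent:
KAFKA_GC_LOG_OPTS="-Xlog:gc*:file=kafkaServer-gc.log:time,tags:filecount=10,filesize=102400"
```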





[jira] [Commented] (KAFKA-5036) Followups from KIP-101

2017-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15971884#comment-15971884
 ] 

ASF GitHub Bot commented on KAFKA-5036:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2831


> Followups from KIP-101
> --
>
> Key: KAFKA-5036
> URL: https://issues.apache.org/jira/browse/KAFKA-5036
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.11.0.0
>Reporter: Jun Rao
>Assignee: Jun Rao
> Fix For: 0.11.0.0
>
>
> 1. It would be safer to hold onto the leader lock in Partition while serving 
> an OffsetForLeaderEpoch request.
> 2. Currently, we update the leader epoch in epochCache after log append in 
> the follower but before log append in the leader. It would be more consistent 
> to always do this after log append. This also avoids issues related to 
> failure in log append.
> 3. OffsetsForLeaderEpochRequest/OffsetsForLeaderEpochResponse:
> The code that does grouping can probably be replaced by calling 
> CollectionUtils.groupDataByTopic(). Done: 
> https://github.com/apache/kafka/commit/359a68510801a22630a7af275c9935fb2d4c8dbf
> 4. The following line in LeaderEpochFileCache is hit several times when 
> LogTest is executed:
> {code}
>if (cachedLatestEpoch == None) error("Attempt to assign log end offset 
> to epoch before epoch has been set. This should never happen.")
> {code}
> This should be an assert (with the tests fixed up)
> 5. The constructor of LeaderEpochFileCache has the following:
> {code}
> lock synchronized { ListBuffer(checkpoint.read(): _*) }
> {code}
> But everywhere else uses a read or write lock. We should use consistent 
> locking.
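Point 5 above asks for one locking discipline everywhere, including the constructor. A minimal Java sketch of the read/write-lock pattern applied consistently (the cache here is a simplified stand-in, not the real LeaderEpochFileCache):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class EpochCache {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final List<Integer> epochs = new ArrayList<>();

    // The constructor takes the write lock too, instead of a bare
    // `lock synchronized { ... }` that bypasses the read/write discipline.
    EpochCache(List<Integer> checkpointed) {
        lock.writeLock().lock();
        try {
            epochs.addAll(checkpointed);
        } finally {
            lock.writeLock().unlock();
        }
    }

    int latestEpoch() {
        lock.readLock().lock();
        try {
            return epochs.isEmpty() ? -1 : epochs.get(epochs.size() - 1);
        } finally {
            lock.readLock().unlock();
        }
    }
}
```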





[jira] [Commented] (KAFKA-4850) RocksDb cannot use Bloom Filters

2017-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15971888#comment-15971888
 ] 

ASF GitHub Bot commented on KAFKA-4850:
---

GitHub user bharatviswa504 opened a pull request:

https://github.com/apache/kafka/pull/2865

KAFKA-4850: RocksDB using Bloomfilters



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bharatviswa504/kafka KAFKA-4850

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2865.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2865


commit fc9ef87a9bae3e2eba6a71f9af8f18d3678386b0
Author: Bharat Viswanadham 
Date:   2017-04-18T00:28:12Z

KAFKA-4850: RocksDB using Bloomfilters




> RocksDb cannot use Bloom Filters
> 
>
> Key: KAFKA-4850
> URL: https://issues.apache.org/jira/browse/KAFKA-4850
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> Bloom Filters would speed up RocksDb lookups. However they currently do not 
> work in RocksDb 5.0.2. This has been fixed in trunk, but we'll have to wait 
> until that is released and tested. 
> Then we can add the line in RocksDbStore.java in openDb:
> tableConfig.setFilter(new BloomFilter(10));





[jira] [Commented] (KAFKA-4850) RocksDb cannot use Bloom Filters

2017-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15971991#comment-15971991
 ] 

ASF GitHub Bot commented on KAFKA-4850:
---

Github user bharatviswa504 closed the pull request at:

https://github.com/apache/kafka/pull/2865


> RocksDb cannot use Bloom Filters
> 
>
> Key: KAFKA-4850
> URL: https://issues.apache.org/jira/browse/KAFKA-4850
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> Bloom Filters would speed up RocksDb lookups. However they currently do not 
> work in RocksDb 5.0.2. This has been fixed in trunk, but we'll have to wait 
> until that is released and tested. 
> Then we can add the line in RocksDbStore.java in openDb:
> tableConfig.setFilter(new BloomFilter(10));





[jira] [Commented] (KAFKA-5069) add controller integration tests

2017-04-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15973748#comment-15973748
 ] 

ASF GitHub Bot commented on KAFKA-5069:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2853


> add controller integration tests
> 
>
> Key: KAFKA-5069
> URL: https://issues.apache.org/jira/browse/KAFKA-5069
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Onur Karaman
>Assignee: Onur Karaman
> Fix For: 0.11.0.0
>
>
> Test the various controller protocols by observing zookeeper and broker state.





[jira] [Commented] (KAFKA-5088) some spelling error in code comment

2017-04-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15974182#comment-15974182
 ] 

ASF GitHub Bot commented on KAFKA-5088:
---

GitHub user auroraxlh opened a pull request:

https://github.com/apache/kafka/pull/2871

KAFKA-5088: some spelling error in code comment

fix some spelling errors

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/auroraxlh/kafka fix_spellingerror

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2871.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2871


commit 9ad468b401765060c8d116001b73c1c9db0c6e56
Author: xinlihua 
Date:   2017-04-19T06:49:53Z

KAFKA-5088: some spelling error in code comment




> some spelling error in code comment 
> 
>
> Key: KAFKA-5088
> URL: https://issues.apache.org/jira/browse/KAFKA-5088
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, tools
>Affects Versions: 0.10.2.0
>Reporter: Xin
>Priority: Trivial
>
> some spelling errors in code comments:
> metatdata ==> metadata
> metatadata ==> metadata
> propogated ==> propagated





[jira] [Commented] (KAFKA-5049) Chroot check should be done for each ZkUtils instance

2017-04-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15974435#comment-15974435
 ] 

ASF GitHub Bot commented on KAFKA-5049:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2857


> Chroot check should be done for each ZkUtils instance
> -
>
> Key: KAFKA-5049
> URL: https://issues.apache.org/jira/browse/KAFKA-5049
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> In KAFKA-1994, the check for ZK chroot was moved to ZkPath. However, ZkPath 
> is a JVM singleton and we may use multiple ZkClient instances with multiple 
> ZooKeeper ensembles in the same JVM (for cluster info, authorizer and 
> pluggable code provided by users).
> The right way to do this is to make ZkPath an instance variable in ZkUtils so 
> that we do the check once per ZkUtils instance.
> cc [~gwenshap] [~junrao], who reviewed KAFKA-1994, in case I am missing 
> something.





[jira] [Commented] (KAFKA-5072) Kafka topics should allow custom metadata configs within some config namespace

2017-04-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15974878#comment-15974878
 ] 

ASF GitHub Bot commented on KAFKA-5072:
---

GitHub user soumabrata-chakraborty opened a pull request:

https://github.com/apache/kafka/pull/2873

KAFKA-5072[WIP]: Kafka topics should allow custom metadata configs within 
some config namespace

@benstopford @ijuma @granthenke @junrao 

This change allows one to define any topic property within the namespace 
"metadata.*" - e.g. metadata.description, metadata.project, 
metadata.contact.info, etc. (More details on the JIRA)

Raising a PR with the [WIP] tag since I am not sure how to add this to the 
documentation, given that the list of topic properties in the documentation is 
auto-generated.

This contribution is my original work and I license the work to the Kafka 
project under the project's open source license

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/soumabrata-chakraborty/kafka KAFKA-5072

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2873.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2873


commit 6d7abaf33d1d3754b49c599f55c505f3b1929237
Author: Soumabrata Chakraborty 
Date:   2017-04-19T02:55:38Z

Allow custom metadata configs within the namespace "metadata"




> Kafka topics should allow custom metadata configs within some config namespace
> --
>
> Key: KAFKA-5072
> URL: https://issues.apache.org/jira/browse/KAFKA-5072
> Project: Kafka
>  Issue Type: Improvement
>  Components: config
>Affects Versions: 0.10.2.0
>Reporter: Soumabrata Chakraborty
>Assignee: Soumabrata Chakraborty
>Priority: Minor
>
> Kafka topics should allow custom metadata configs
> Such config properties may have some fixed namespace e.g. metadata* or custom*
> This is handy for governance.  For example, in large organizations sharing a 
> kafka cluster - it might be helpful to be able to configure properties like 
> metadata.contact.info, metadata.project, metadata.description on a topic.
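The proposal above reserves a `metadata.` prefix that brokers pass through without interpreting. A minimal validation sketch (the namespace rule is from the PR description; the helper and the abbreviated known-config set are hypothetical):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

class TopicConfigValidator {
    // Abbreviated stand-in for the broker's known topic-level configs.
    private static final Set<String> KNOWN =
            new HashSet<>(Arrays.asList("retention.ms", "cleanup.policy"));

    // Accept known broker configs plus anything in the proposed free-form
    // "metadata." namespace; reject unknown keys as before.
    static boolean isValid(String key) {
        return KNOWN.contains(key) || key.startsWith("metadata.");
    }
}
```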





[jira] [Commented] (KAFKA-5090) Kafka Streams SessionStore.findSessions javadoc broken

2017-04-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15974897#comment-15974897
 ] 

ASF GitHub Bot commented on KAFKA-5090:
---

GitHub user mihbor opened a pull request:

https://github.com/apache/kafka/pull/2874

KAFKA-5090 Kafka Streams SessionStore.findSessions javadoc broken



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mihbor/kafka patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2874.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2874


commit 0bcb6fba658826964589fe409f80511a31c3164b
Author: mihbor 
Date:   2017-04-19T15:18:04Z

KAFKA-5090 Kafka Streams SessionStore.findSessions javadoc broken




> Kafka Streams SessionStore.findSessions javadoc broken
> --
>
> Key: KAFKA-5090
> URL: https://issues.apache.org/jira/browse/KAFKA-5090
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Michal Borowiecki
>Priority: Trivial
>
> {code}
> /**
>  * Fetch any sessions with the matching key and the sessions end is &le 
> earliestEndTime and the sessions
>  * start is &ge latestStartTime
>  */
> KeyValueIterator<Windowed<K>, AGG> findSessions(final K key, long 
> earliestSessionEndTime, final long latestSessionStartTime);
> {code}
> The conditions in the javadoc comment are inverted (le should be ge and ge 
> should be le), since this is what the code does. They were correct in the 
> original KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-94+Session+Windows
> {code}
> /**
>  * Find any aggregated session values with the matching key and where the
>  * session’s end time is >= earliestSessionEndTime, i.e, the oldest 
> session to
>  * merge with, and the session’s start time is <= latestSessionStartTime, 
> i.e,
>  * the newest session to merge with.
>  */
>KeyValueIterator<Windowed<K>, AGG> findSessionsToMerge(final K key, final 
> long earliestSessionEndTime, final long latestSessionStartTime);
> {code}
> Also, the escaped html character references are missing the trailing 
> semicolon making them render as-is.
> Happy to have this assigned to me to fix as it seems trivial.
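Applying both fixes described above (swapping the inverted conditions and restoring the trailing semicolons on the HTML entities), the corrected javadoc would read roughly as follows; the surrounding types are stand-ins so the fragment compiles in isolation:

```java
// Stand-ins so the fragment compiles in isolation; the real types live
// in org.apache.kafka.streams.
interface KeyValueIterator<K, V> {}
class Windowed<T> {}

interface SessionStore<K, AGG> {
    /**
     * Fetch any sessions with the matching key where the session's end is
     * &ge; earliestSessionEndTime and the session's start is
     * &le; latestSessionStartTime.
     */
    KeyValueIterator<Windowed<K>, AGG> findSessions(final K key,
                                                    final long earliestSessionEndTime,
                                                    final long latestSessionStartTime);
}
```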





[jira] [Commented] (KAFKA-4808) send of null key to a compacted topic should throw error back to user

2017-04-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15975450#comment-15975450
 ] 

ASF GitHub Bot commented on KAFKA-4808:
---

GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/2875

KAFKA-4808 : Send of null key to a compacted topic should throw 
non-retriable error back to user



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka KAFKA-4808

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2875.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2875


commit 731f1d1fedda5e09a8bd7094baf2e1572a3ba06e
Author: MayureshGharat 
Date:   2017-04-19T17:45:08Z

Added non retriable exception for producing record with null key to a 
compacted topic




> send of null key to a compacted topic should throw error back to user
> -
>
> Key: KAFKA-4808
> URL: https://issues.apache.org/jira/browse/KAFKA-4808
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.2.0
>Reporter: Ismael Juma
>Assignee: Mayuresh Gharat
> Fix For: 0.11.0.0
>
>
> If a message with a null key is produced to a compacted topic, the broker 
> returns `CorruptRecordException`, which is a retriable exception. As such, 
> the producer keeps retrying until retries are exhausted or request.timeout.ms 
> expires and eventually throws a TimeoutException. This is confusing and not 
> user-friendly.
> We should throw a meaningful error back to the user. From an implementation 
> perspective, we would have to use a non retriable error code to avoid this 
> issue.
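The fix direction above hinges on the retriable/non-retriable distinction: the producer retries only exceptions in the retriable hierarchy and surfaces the rest immediately. A simplified sketch of that dispatch (the exception classes here are stand-ins mirroring Kafka's hierarchy, not its actual classes):

```java
class RetriableException extends RuntimeException {}
// Today: the broker answers a null-key record on a compacted topic with a
// retriable error, so the producer keeps retrying until it times out.
class CorruptRecordException extends RetriableException {}
// Proposed: a non-retriable error that surfaces immediately to the user.
class InvalidRecordException extends RuntimeException {}

class RetryPolicy {
    // The producer's retry loop keys off the exception hierarchy alone.
    static boolean shouldRetry(RuntimeException e) {
        return e instanceof RetriableException;
    }
}
```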





[jira] [Commented] (KAFKA-4814) ZookeeperLeaderElector not respecting zookeeper.set.acl

2017-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15976360#comment-15976360
 ] 

ASF GitHub Bot commented on KAFKA-4814:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2845


> ZookeeperLeaderElector not respecting zookeeper.set.acl
> ---
>
> Key: KAFKA-4814
> URL: https://issues.apache.org/jira/browse/KAFKA-4814
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.1.1
>Reporter: Stevo Slavic
>Assignee: Rajini Sivaram
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> By [migration 
> guide|https://kafka.apache.org/documentation/#zk_authz_migration] for 
> enabling ZooKeeper security on an existing Apache Kafka cluster, and [broker 
> configuration 
> documentation|https://kafka.apache.org/documentation/#brokerconfigs] for 
> {{zookeeper.set.acl}} configuration property, when this property is set to 
> false Kafka brokers should not be setting any ACLs on ZooKeeper nodes, even 
> when JAAS config file is provisioned to broker. 
> Problem is that there is broker side logic, like one in 
> {{ZookeeperLeaderElector}} making use of {{JaasUtils#isZkSecurityEnabled}}, 
> which does not respect this configuration property, resulting in ACLs being 
> set even when there's just JAAS config file provisioned to Kafka broker while 
> {{zookeeper.set.acl}} is set to {{false}}.
> Notice that {{JaasUtils}} is in {{org.apache.kafka.common.security}} package 
> of {{kafka-clients}} module, while {{zookeeper.set.acl}} is broker side only 
> configuration property.
> To make it possible to enable ZooKeeper authentication on an existing cluster 
> without downtime, all Kafka brokers in the cluster should first be able to 
> authenticate to the ZooKeeper ensemble without ACLs being set. Only once all 
> ZooKeeper clients (Kafka brokers and others) are authenticating to the 
> ZooKeeper ensemble can ACLs start being set.
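The bug above is the difference between "JAAS config present" and "broker told to set ACLs". A sketch of the gate the reporter asks for (names are illustrative, not Kafka's):

```java
class ZkAclPolicy {
    // ACLs should be set only when the broker both CAN authenticate (a
    // JAAS config is provisioned) and is CONFIGURED to secure paths
    // (zookeeper.set.acl=true); JAAS presence alone is not enough.
    static boolean shouldSetAcls(boolean jaasConfigured, boolean zookeeperSetAcl) {
        return jaasConfigured && zookeeperSetAcl;
    }
}
```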





[jira] [Commented] (KAFKA-5095) ThreadCacheTest.cacheOverheadsSmallValues fails intermittently

2017-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15976372#comment-15976372
 ] 

ASF GitHub Bot commented on KAFKA-5095:
---

GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/2877

KAFKA-5095: Adjust accepted overhead



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka KAFKA-5095-cacheOverheads

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2877.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2877


commit 3f15637fc6832baed6925c9653ba72bcc02d9fb8
Author: Eno Thereska 
Date:   2017-04-20T09:34:24Z

Adjust fudge factor




> ThreadCacheTest.cacheOverheadsSmallValues fails intermittently 
> ---
>
> Key: KAFKA-5095
> URL: https://issues.apache.org/jira/browse/KAFKA-5095
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Damian Guy
>Assignee: Eno Thereska
>
> {code}
> java.lang.AssertionError: Used memory size 249731736 greater than expected 
> 2.47212045E8
> Stacktrace
> java.lang.AssertionError: Used memory size 249731736 greater than expected 
> 2.47212045E8
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCacheTest.checkOverheads(ThreadCacheTest.java:98)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCacheTest.cacheOverheadsSmallValues(ThreadCacheTest.java:111)
> {code}
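The flakiness above is an exact-bound assertion meeting platform-dependent memory accounting; the fix widens the accepted overhead (the "fudge factor" in the commit). A sketch of a tolerance-based check, with the tolerance value illustrative:

```java
class OverheadCheck {
    // Accept measured usage up to (1 + tolerance) times the estimate;
    // JVM object layout varies across platforms and versions, so an
    // exact bound makes the test flaky.
    static boolean withinOverhead(long measuredBytes, double estimatedBytes,
                                  double tolerance) {
        return measuredBytes <= estimatedBytes * (1.0 + tolerance);
    }
}
```

With the numbers from the failure above, a 5% tolerance passes while a 0% tolerance reproduces the failure.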





[jira] [Commented] (KAFKA-3070) SASL unit tests dont work with IBM JDK

2017-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15976496#comment-15976496
 ] 

ASF GitHub Bot commented on KAFKA-3070:
---

GitHub user mimaison opened a pull request:

https://github.com/apache/kafka/pull/2878

KAFKA-3070: SASL unit tests dont work with IBM JDK

Use IBM Kerberos module for SASL tests if running on IBM JDK

Developed with @edoardocomar
Based on https://github.com/apache/kafka/pull/738 by @rajinisivaram

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mimaison/kafka KAFKA-3070

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2878.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2878


commit d64b569cfc652f0e15450f47999c9c07e97f2ab7
Author: Mickael Maison 
Date:   2017-04-19T17:56:54Z

KAFKA-3070: SASL unit tests dont work with IBM JDK

Use IBM Kerberos module for SASL tests if running on IBM JDK

Developed with @edoardocomar
Based on https://github.com/apache/kafka/pull/738 by @rajinisivaram




> SASL unit tests dont work with IBM JDK
> --
>
> Key: KAFKA-3070
> URL: https://issues.apache.org/jira/browse/KAFKA-3070
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> The jaas.conf used for SASL tests in core uses the Kerberos module 
> com.sun.security.auth.module.Krb5LoginModule and hence doesn't work with the 
> IBM JDK. The IBM JDK Kerberos module com.ibm.security.auth.module.Krb5LoginModule 
> should be used, along with the properties corresponding to this module, when 
> tests are run with the IBM JDK.





[jira] [Commented] (KAFKA-5094) Censor SCRAM config change logging

2017-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15976518#comment-15976518
 ] 

ASF GitHub Bot commented on KAFKA-5094:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/2879

KAFKA-5094: Replace SCRAM credentials in broker logs with tag hidden



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-5094

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2879.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2879


commit b5251fa1c07bce83b56e68063d4a3ef2cef20c2c
Author: Rajini Sivaram 
Date:   2017-04-20T11:08:13Z

KAFKA-5094: Replace SCRAM credentials in broker logs with tag hidden




> Censor SCRAM config change logging
> --
>
> Key: KAFKA-5094
> URL: https://issues.apache.org/jira/browse/KAFKA-5094
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Affects Versions: 0.10.2.0
>Reporter: Johan Ström
>Assignee: Rajini Sivaram
>
> (As mentioned in comment on KAFKA-4943):
> Another possibly bad thing is that Kafka logs the credentials in the clear 
> too (0.10.2.0):
> {code}
> [2017-04-05 16:29:00,266] INFO Processing notification(s) to /config/changes 
> (kafka.common.ZkNodeChangeNotificationListener)
> [2017-04-05 16:29:00,282] INFO Processing override for entityPath: 
> users/kafka with config: 
> {SCRAM-SHA-512=salt=ZGl6dnRzeWQ5ZjJhNWo1bWdxN2draG96Ng==,stored_key=BEdel+ChGSnpdpV0f8s8J/fWlwZJbUtAD1N6FygpPLK1AiVjg0yiHCvigq1R2x+o72QSvNkyFITuVZMlrj8hZg==,server_key=/RZ/EcGAaXwAKvFknVpsBHzC4tBXBLPJQnN4tM/s0wJpMcR9qvvJTGKM9Nx+zoXCc9buNoCd+/2LpL+yWde+/w==,iterations=4096}
>  (kafka.server.DynamicConfigManager)
> {code}
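The fix replaces credential values with a hidden tag before the config change is logged. A minimal sketch of such redaction (the regex and placeholder are illustrative, not the merged patch):

```java
import java.util.regex.Pattern;

class ScramRedactor {
    private static final Pattern SCRAM_VALUE =
            Pattern.compile("(SCRAM-SHA-(?:256|512))=[^}]*");

    // Replace everything after "SCRAM-SHA-xxx=" up to the closing brace
    // with a placeholder so salts, stored keys and server keys never
    // reach the log.
    static String redact(String logLine) {
        return SCRAM_VALUE.matcher(logLine).replaceAll("$1=[hidden]");
    }
}
```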





[jira] [Commented] (KAFKA-4937) Batch resetting offsets in Streams' StoreChangelogReader

2017-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15977466#comment-15977466
 ] 

ASF GitHub Bot commented on KAFKA-4937:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2769


> Batch resetting offsets in Streams' StoreChangelogReader
> 
>
> Key: KAFKA-4937
> URL: https://issues.apache.org/jira/browse/KAFKA-4937
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Damian Guy
>  Labels: newbie++, performance
> Fix For: 0.11.0.0
>
>
> Currently in {{StoreChangelogReader}} we are calling {{consumer.position()}} 
> when logging, as well as when setting the starting offset right after 
> {{seekToBeginning}}, which incurs a blocking round trip with an offset 
> request. We should consider batching those into a single round trip for all 
> partitions that need to seek to the beginning (i.e. need to reset offsets).
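The cost described above is one blocking round trip per partition; batching collapses it to one. A toy model counting round trips (the fetcher is a stand-in for the broker, not the consumer API):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

class OffsetResetModel {
    // Stand-in for the broker: each call counts as one blocking round trip.
    static Map<String, Long> fetchBeginningOffsets(List<String> partitions,
                                                   AtomicInteger roundTrips) {
        roundTrips.incrementAndGet();
        Map<String, Long> offsets = new HashMap<>();
        for (String p : partitions) offsets.put(p, 0L);
        return offsets;
    }

    // Unbatched: one round trip per partition, as with a position() call
    // issued immediately after seeking each partition.
    static int resetOneByOne(List<String> partitions) {
        AtomicInteger trips = new AtomicInteger();
        for (String p : partitions) {
            fetchBeginningOffsets(Collections.singletonList(p), trips);
        }
        return trips.get();
    }

    // Batched: a single round trip covering every partition to reset.
    static int resetBatched(List<String> partitions) {
        AtomicInteger trips = new AtomicInteger();
        fetchBeginningOffsets(partitions, trips);
        return trips.get();
    }
}
```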





[jira] [Commented] (KAFKA-5014) SSL Channel not ready but tcp is established and the server is hung will not sending metadata

2017-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15977533#comment-15977533
 ] 

ASF GitHub Bot commented on KAFKA-5014:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2813


> SSL Channel not ready but tcp is established and the server is hung will not 
> sending metadata
> -
>
> Key: KAFKA-5014
> URL: https://issues.apache.org/jira/browse/KAFKA-5014
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1, 0.10.2.0
>Reporter: Pengwei
>Priority: Minor
>  Labels: reliability
> Fix For: 0.11.0.0
>
>
> In our test env, QA hung one of the brokers the producer was connected to; the 
> producer then got stuck in the send method and threw the exception: fail to 
> update metadata after request timeout.
> I found the reason as follows: when the producer chose one of the brokers to 
> request metadata from, it connected to the broker, but the broker was hung; the 
> TCP connection was established and the network client marked the broker as 
> connected, but the SSL handshake had not completed, so the channel was not ready.
> The network client then chose this "connected" node in leastLoadedNode every 
> time it needed to send a metadata request, even though the node's channel was 
> not ready. So the producer was stuck fetching metadata and never tried another 
> node. The client should not get stuck when only one node is hung
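The root cause above is conflating "TCP connected" with "channel ready": for an SSL channel the TLS handshake must also complete. A sketch of the readiness check node selection should use (the types are illustrative stand-ins, not Kafka's NetworkClient):

```java
import java.util.List;

class NodeSelection {
    static final class Node {
        final boolean tcpConnected;
        final boolean handshakeComplete;
        Node(boolean tcpConnected, boolean handshakeComplete) {
            this.tcpConnected = tcpConnected;
            this.handshakeComplete = handshakeComplete;
        }
        // For plaintext, connected implies ready; for SSL, readiness also
        // requires the TLS handshake to have finished.
        boolean isReady() { return tcpConnected && handshakeComplete; }
    }

    // Prefer a node whose channel is actually ready over one that is
    // merely TCP-connected, so a hung broker cannot pin metadata requests.
    static Node pickNode(List<Node> nodes) {
        return nodes.stream().filter(Node::isReady).findFirst().orElse(null);
    }
}
```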





[jira] [Commented] (KAFKA-5047) NullPointerException while using GlobalKTable in KafkaStreams

2017-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15977568#comment-15977568
 ] 

ASF GitHub Bot commented on KAFKA-5047:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2834


> NullPointerException while using GlobalKTable in KafkaStreams
> -
>
> Key: KAFKA-5047
> URL: https://issues.apache.org/jira/browse/KAFKA-5047
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Ivan Ursul
>Assignee: Damian Guy
>
> Main.java : https://gist.github.com/ivanursul/dcd4bb382c05843606a96417561b4b31
> pom.xml : https://gist.github.com/ivanursul/25baebc584e57c1fa200335f2cd21b43
> Logs: https://gist.github.com/ivanursul/4d023c783a80ea745f6ebade88c6b810
> WordsWithCountsTopic messages: 
> https://gist.github.com/ivanursul/42b2312055ab200bd4bba1d4cab5791a
> I use kafka_2.10-0.10.2.0 downloaded from official website.
> Am I doing something wrong here? I tried to follow this example:
> https://github.com/confluentinc/examples/blob/3.2.x/kafka-streams/src/main/java/io/confluent/examples/streams/GlobalKTablesExample.java
> Most probably, the problem is that some messages are missing keys:
> https://gist.github.com/ivanursul/87949fd35b67c6a7b22ceef7af72ea1c
> https://gist.github.com/ivanursul/366689aed57ee9d861846dcf4ccdae7c
> So, to summarize: I mistakenly wrote raw text lines to the wrong topic, the 
> keyed topic that was connected to the GlobalKTable. Since those records had 
> null keys, the GlobalKTable failed to initialize. It would be nice if Kafka 
> could log a warning that the partition contains messages with null keys.





[jira] [Commented] (KAFKA-5073) Kafka Streams stuck rebalancing after exception thrown in rebalance listener

2017-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15977582#comment-15977582
 ] 

ASF GitHub Bot commented on KAFKA-5073:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2856


> Kafka Streams stuck rebalancing after exception thrown in rebalance listener
> 
>
> Key: KAFKA-5073
> URL: https://issues.apache.org/jira/browse/KAFKA-5073
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Xavier Léauté
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>
> An exception thrown in the Streams rebalance listener will cause the Kafka 
> consumer coordinator to log an error, but the streams app will not bubble the 
> exception up to the uncaught exception handler.
> This will leave the app stuck in rebalancing state if for instance an 
> exception is thrown by the consumer during state restore.
> Here is an example log that shows the error when the consumer throws a CRC 
> error during state restore.
> {code}
> [2017-04-13 14:46:41,409] ERROR [XXX-StreamThread-1] User provided listener 
> org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener 
> for group XXX failed on partition assignment 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:269)
> org.apache.kafka.common.KafkaException: Record batch for partition 
> _my_topic-0 at offset 42 is invalid, cause: Record is corrupt (stored crc = 
> 1982353474, computed crc = 1572524932)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.maybeEnsureValid(Fetcher.java:904)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.nextFetchedRecord(Fetcher.java:936)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:960)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1200(Fetcher.java:864)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:517)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:482)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1069)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1002)
> at 
> org.apache.kafka.streams.processor.internals.StoreChangelogReader.restore(StoreChangelogReader.java:145)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned(StreamThread.java:1329)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:265)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:363)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:310)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:296)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1037)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1002)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:546)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:702)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:326)
> {code}





[jira] [Commented] (KAFKA-5097) KafkaConsumer.poll throws IllegalStateException

2017-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15977590#comment-15977590
 ] 

ASF GitHub Bot commented on KAFKA-5097:
---

Github user enothereska closed the pull request at:

https://github.com/apache/kafka/pull/2876


> KafkaConsumer.poll throws IllegalStateException
> ---
>
> Key: KAFKA-5097
> URL: https://issues.apache.org/jira/browse/KAFKA-5097
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Eno Thereska
>Priority: Blocker
> Fix For: 0.10.2.1
>
>
> The backport of KAFKA-5075 to 0.10.2 seems to have introduced a regression: 
> If a fetch returns more data than `max.poll.records` and there is a rebalance 
> or the user changes the assignment/subscription after a `poll` that doesn't 
> return all the fetched data, the next call will throw an 
> `IllegalStateException`. More discussion in the following PR that includes a 
> fix:
> https://github.com/apache/kafka/pull/2876/files#r112413428
> This issue caused a Streams system test to fail, see KAFKA-4755.
> We should fix the regression before releasing 0.10.2.1.
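A toy model of the regression (plain Java, not the consumer internals; all names are illustrative): a fetch can buffer more records than one `poll` returns, so if the partition is then unassigned, the leftover buffer must be discarded with it, or the next call trips over records for a partition that no longer has a valid position.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

public class FetchBufferSketch {
    final Map<String, Queue<String>> buffered = new HashMap<>();
    final Set<String> assigned = new HashSet<>();

    void assign(String tp) { assigned.add(tp); }

    void unassign(String tp) {
        assigned.remove(tp);
        buffered.remove(tp); // the fix's shape: drop buffered data with the assignment
    }

    void onFetch(String tp, String... records) {
        Queue<String> q = buffered.computeIfAbsent(tp, k -> new ArrayDeque<>());
        for (String r : records) q.add(r);
    }

    String poll(String tp) {
        Queue<String> q = buffered.get(tp);
        if (q == null || q.isEmpty()) return null;
        if (!assigned.contains(tp)) // without the cleanup above, this fires
            throw new IllegalStateException("buffered records for unassigned " + tp);
        return q.poll();
    }

    public static void main(String[] args) {
        FetchBufferSketch c = new FetchBufferSketch();
        c.assign("t-0");
        c.onFetch("t-0", "r1", "r2", "r3"); // more than one poll returns
        System.out.println(c.poll("t-0"));  // r1
        c.unassign("t-0");                  // clears the leftover buffer
        System.out.println(c.poll("t-0"));  // null, not IllegalStateException
    }
}
```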





[jira] [Commented] (KAFKA-5095) ThreadCacheTest.cacheOverheadsSmallValues fails intermittently

2017-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15977628#comment-15977628
 ] 

ASF GitHub Bot commented on KAFKA-5095:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2877


> ThreadCacheTest.cacheOverheadsSmallValues fails intermittently 
> ---
>
> Key: KAFKA-5095
> URL: https://issues.apache.org/jira/browse/KAFKA-5095
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Damian Guy
>Assignee: Eno Thereska
> Fix For: 0.11.0.0, 0.10.2.1
>
>
> {code}
> java.lang.AssertionError: Used memory size 249731736 greater than expected 
> 2.47212045E8
> Stacktrace
> java.lang.AssertionError: Used memory size 249731736 greater than expected 
> 2.47212045E8
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCacheTest.checkOverheads(ThreadCacheTest.java:98)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCacheTest.cacheOverheadsSmallValues(ThreadCacheTest.java:111)
> {code}





[jira] [Commented] (KAFKA-5088) some spelling error in code comment

2017-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978059#comment-15978059
 ] 

ASF GitHub Bot commented on KAFKA-5088:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2871


> some spelling error in code comment 
> 
>
> Key: KAFKA-5088
> URL: https://issues.apache.org/jira/browse/KAFKA-5088
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, tools
>Affects Versions: 0.10.2.0
>Reporter: Xin
>Assignee: Xin
>Priority: Trivial
> Fix For: 0.11.0.0
>
>
> some spelling errors in code comments:
> metatdata ==> metadata
> metatadata ==> metadata
> propogated ==> propagated





[jira] [Commented] (KAFKA-5008) Kafka-Clients not OSGi ready

2017-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978165#comment-15978165
 ] 

ASF GitHub Bot commented on KAFKA-5008:
---

Github user lostiniceland closed the pull request at:

https://github.com/apache/kafka/pull/2807


> Kafka-Clients not OSGi ready
> 
>
> Key: KAFKA-5008
> URL: https://issues.apache.org/jira/browse/KAFKA-5008
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.2.0
>Reporter: Marc
>Priority: Minor
>
> The kafka-clients artifact does not provide OSGi metadata. This adds an 
> additional barrier for OSGi developers to use the artifact since it has to be 
> [wrapped|http://bnd.bndtools.org/chapters/390-wrapping.html].
> The metadata can automatically be created using bnd.





[jira] [Commented] (KAFKA-5008) Kafka-Clients not OSGi ready

2017-04-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978201#comment-15978201
 ] 

ASF GitHub Bot commented on KAFKA-5008:
---

GitHub user lostiniceland opened a pull request:

https://github.com/apache/kafka/pull/2882

KAFKA-5008: Provide OSGi metadata for Kafka-Clients

This change uses the bnd-gradle-plugin for the kafka-clients module in 
order to generate OSGi metadata.
The bnd.bnd file is used by the plugin for instructions. All 
clients packages are exported. Imports are
automatically calculated by bnd.

Signed-off-by: Marc Schlegel 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lostiniceland/kafka 
5008-OSGi-metadata-for-clients

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2882.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2882


commit a3ebcc3ec8d0f0df6751da28d9eaaa9084b0f805
Author: Marc Schlegel 
Date:   2017-04-21T07:09:52Z

KAFKA-5008: Provide OSGi metadata for Kafka-Clients

This change uses the bnd-gradle-plugin for the kafka-clients module in 
order to generate OSGi metadata.
The bnd.bnd file is used by the plugin for instructions. All 
clients packages are exported. Imports are
automatically calculated by bnd.

Signed-off-by: Marc Schlegel 




> Kafka-Clients not OSGi ready
> 
>
> Key: KAFKA-5008
> URL: https://issues.apache.org/jira/browse/KAFKA-5008
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.2.0
>Reporter: Marc
>Priority: Minor
>
> The kafka-clients artifact does not provide OSGi metadata. This adds an 
> additional barrier for OSGi developers to use the artifact since it has to be 
> [wrapped|http://bnd.bndtools.org/chapters/390-wrapping.html].
> The metadata can automatically be created using bnd.





[jira] [Commented] (KAFKA-5094) Censor SCRAM config change logging

2017-04-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978408#comment-15978408
 ] 

ASF GitHub Bot commented on KAFKA-5094:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2879


> Censor SCRAM config change logging
> --
>
> Key: KAFKA-5094
> URL: https://issues.apache.org/jira/browse/KAFKA-5094
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Affects Versions: 0.10.2.0
>Reporter: Johan Ström
>Assignee: Rajini Sivaram
>
> (As mentioned in comment on KAFKA-4943):
> Another possibly bad thing is that Kafka logs the credentials in the clear 
> too (0.10.2.0):
> {code}
> [2017-04-05 16:29:00,266] INFO Processing notification(s) to /config/changes 
> (kafka.common.ZkNodeChangeNotificationListener)
> [2017-04-05 16:29:00,282] INFO Processing override for entityPath: 
> users/kafka with config: 
> {SCRAM-SHA-512=salt=ZGl6dnRzeWQ5ZjJhNWo1bWdxN2draG96Ng==,stored_key=BEdel+ChGSnpdpV0f8s8J/fWlwZJbUtAD1N6FygpPLK1AiVjg0yiHCvigq1R2x+o72QSvNkyFITuVZMlrj8hZg==,server_key=/RZ/EcGAaXwAKvFknVpsBHzC4tBXBLPJQnN4tM/s0wJpMcR9qvvJTGKM9Nx+zoXCc9buNoCd+/2LpL+yWde+/w==,iterations=4096}
>  (kafka.server.DynamicConfigManager)
> {code}
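A minimal sketch of the requested censoring, assuming illustrative key names (not the actual DynamicConfigManager code): mask the values of credential-bearing config entries before they reach the log, instead of printing salts, stored keys, and server keys in the clear.

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class ConfigCensor {
    // Heuristic for this sketch: SCRAM mechanisms and password-like keys are sensitive.
    static boolean sensitive(String key) {
        return key.startsWith("SCRAM-") || key.toLowerCase().contains("password");
    }

    // Render the config for logging, hiding sensitive values.
    static String loggable(Map<String, String> config) {
        return new TreeMap<>(config).entrySet().stream()
                .map(e -> e.getKey() + "=" + (sensitive(e.getKey()) ? "[hidden]" : e.getValue()))
                .collect(Collectors.joining(", ", "{", "}"));
    }

    public static void main(String[] args) {
        Map<String, String> config = Map.of(
                "SCRAM-SHA-512", "salt=...,stored_key=...",
                "retention.ms", "604800000");
        System.out.println(loggable(config));
        // {SCRAM-SHA-512=[hidden], retention.ms=604800000}
    }
}
```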





[jira] [Commented] (KAFKA-5101) Remove KafkaController's incrementControllerEpoch method parameter

2017-04-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978659#comment-15978659
 ] 

ASF GitHub Bot commented on KAFKA-5101:
---

GitHub user baluchicken opened a pull request:

https://github.com/apache/kafka/pull/2886

KAFKA-5101 Remove KafkaController's incrementControllerEpoch method p…

@ijuma minor fix. can you please review.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/baluchicken/kafka-1 KAFKA-5101

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2886.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2886


commit e6b84b87aa39a1971758cd84d23ed2784f8953ac
Author: Balint Molnar 
Date:   2017-04-21T12:34:48Z

KAFKA-5101 Remove KafkaController's incrementControllerEpoch method 
parameter




> Remove KafkaController's incrementControllerEpoch method parameter 
> ---
>
> Key: KAFKA-5101
> URL: https://issues.apache.org/jira/browse/KAFKA-5101
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>Priority: Trivial
>
> KAFKA-4814 replaced the zkClient.createPersistent method with 
> zkUtils.createPersistentPath so the zkClient parameter is no longer required.





[jira] [Commented] (KAFKA-5097) KafkaConsumer.poll throws IllegalStateException

2017-04-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978766#comment-15978766
 ] 

ASF GitHub Bot commented on KAFKA-5097:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/2887

KAFKA-5097: Add testFetchAfterPartitionWithFetchedRecordsIsUnassigned



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-5097-unit-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2887.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2887


commit 29c9eb8f4ab0dccdfeafb67bc7aef5ce25891bba
Author: Ismael Juma 
Date:   2017-04-21T13:54:29Z

KAFKA-5097: Add testFetchAfterPartitionWithFetchedRecordsIsUnassigned




> KafkaConsumer.poll throws IllegalStateException
> ---
>
> Key: KAFKA-5097
> URL: https://issues.apache.org/jira/browse/KAFKA-5097
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Eno Thereska
>Priority: Blocker
> Fix For: 0.10.2.1
>
>
> The backport of KAFKA-5075 to 0.10.2 seems to have introduced a regression: 
> If a fetch returns more data than `max.poll.records` and there is a rebalance 
> or the user changes the assignment/subscription after a `poll` that doesn't 
> return all the fetched data, the next call will throw an 
> `IllegalStateException`. More discussion in the following PR that includes a 
> fix:
> https://github.com/apache/kafka/pull/2876/files#r112413428
> This issue caused a Streams system test to fail, see KAFKA-4755.
> We should fix the regression before releasing 0.10.2.1.





[jira] [Commented] (KAFKA-5103) Refactor AdminUtils to use zkUtils methods instad of zkUtils.zkClient

2017-04-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978786#comment-15978786
 ] 

ASF GitHub Bot commented on KAFKA-5103:
---

GitHub user baluchicken opened a pull request:

https://github.com/apache/kafka/pull/2888

KAFKA-5103 Refactor AdminUtils to use zkUtils methods instad of zkUti…

@ijuma  plz review.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/baluchicken/kafka-1 KAFKA-5103

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2888.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2888


commit a0ac0defa6560a1a2734f0b7c115ce4f6b5f61a1
Author: Balint Molnar 
Date:   2017-04-21T14:07:23Z

KAFKA-5103 Refactor AdminUtils to use zkUtils methods instad of 
zkUtils.zkClient




> Refactor AdminUtils to use zkUtils methods instad of zkUtils.zkClient
> -
>
> Key: KAFKA-5103
> URL: https://issues.apache.org/jira/browse/KAFKA-5103
> Project: Kafka
>  Issue Type: Sub-task
>  Components: admin
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>
> Replace zkUtils.zkClient.createPersistentSequential(seqNode, content) with 
> zkUtils.createSequentialPersistentPath(seqNode, content).
> The zkClient variant does not respect the ACLs.





[jira] [Commented] (KAFKA-4928) Add integration test for DumpLogSegments

2017-04-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978846#comment-15978846
 ] 

ASF GitHub Bot commented on KAFKA-4928:
---

GitHub user original-brownbear opened a pull request:

https://github.com/apache/kafka/pull/2889

KAFKA-4928: Add integration test for DumpLogSegments

Adding tests for `kafka.tools.DumpLogSegments`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/original-brownbear/kafka KAFKA-4928

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2889.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2889






> Add integration test for DumpLogSegments
> 
>
> Key: KAFKA-4928
> URL: https://issues.apache.org/jira/browse/KAFKA-4928
> Project: Kafka
>  Issue Type: Test
>Reporter: Ismael Juma
>Assignee: Armin Braun
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> DumpLogSegments is an important tool to analyse log files, but we have no 
> JUnit tests for it. It would be good to have some tests that verify that the 
> output is sane for a populated log.
> Our system tests call DumpLogSegments, but we should be able to detect 
> regressions via the JUnit test suite.





[jira] [Commented] (KAFKA-5100) ProducerPerformanceService failing due to parsing error

2017-04-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978977#comment-15978977
 ] 

ASF GitHub Bot commented on KAFKA-5100:
---

GitHub user junrao opened a pull request:

https://github.com/apache/kafka/pull/2890

KAFKA-5100: ProducerPerformanceService failing due to parsing error



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/junrao/kafka kafka-5100

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2890.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2890


commit c01c992644907582cb338fe3c1b3601b25b3e495
Author: Jun Rao 
Date:   2017-04-21T16:00:54Z

KAFKA-5100: ProducerPerformanceService failing due to parsing error




> ProducerPerformanceService failing due to parsing error
> ---
>
> Key: KAFKA-5100
> URL: https://issues.apache.org/jira/browse/KAFKA-5100
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> 16 tests that use ProducerPerformanceService failed with:
> {'ProducerPerformanceService-0-139930177129936-worker-1': Exception(u'Unable 
> to parse aggregate performance statistics on node 1: 150 records sent, 29.3 
> records/sec (0.08 MB/sec), 6027.2 ms avg latency, 9028.0 max latency.\n',)}
> https://jenkins.confluent.io/job/system-test-kafka/579/consoleFull
> Logs available via links in the following page:
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2017-04-21--001.1492775811--apache--trunk--f18a14a/report.html
> Looking at recent commits, the following seems like the most likely one:
> https://github.com/apache/kafka/commit/609e9b0b2f46ce72ed91965f7e43c512b26a609b
> cc [~huxi_2b] [~junrao]





[jira] [Commented] (KAFKA-4564) When the destination brokers are down or misconfigured in config, Streams should fail fast

2017-04-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979128#comment-15979128
 ] 

ASF GitHub Bot commented on KAFKA-4564:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2837


> When the destination brokers are down or misconfigured in config, Streams 
> should fail fast
> --
>
> Key: KAFKA-4564
> URL: https://issues.apache.org/jira/browse/KAFKA-4564
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> Today if Kafka is down or users misconfigure the bootstrap list, Streams may 
> just hang for a while without any error message, even with log4j enabled, 
> which is quite confusing.





[jira] [Commented] (KAFKA-5100) ProducerPerformanceService failing due to parsing error

2017-04-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979285#comment-15979285
 ] 

ASF GitHub Bot commented on KAFKA-5100:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2890


> ProducerPerformanceService failing due to parsing error
> ---
>
> Key: KAFKA-5100
> URL: https://issues.apache.org/jira/browse/KAFKA-5100
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> 16 tests that use ProducerPerformanceService failed with:
> {'ProducerPerformanceService-0-139930177129936-worker-1': Exception(u'Unable 
> to parse aggregate performance statistics on node 1: 150 records sent, 29.3 
> records/sec (0.08 MB/sec), 6027.2 ms avg latency, 9028.0 max latency.\n',)}
> https://jenkins.confluent.io/job/system-test-kafka/579/consoleFull
> Logs available via links in the following page:
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2017-04-21--001.1492775811--apache--trunk--f18a14a/report.html
> Looking at recent commits, the following seems like the most likely one:
> https://github.com/apache/kafka/commit/609e9b0b2f46ce72ed91965f7e43c512b26a609b
> cc [~huxi_2b] [~junrao]





[jira] [Commented] (KAFKA-5090) Kafka Streams SessionStore.findSessions javadoc broken

2017-04-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979324#comment-15979324
 ] 

ASF GitHub Bot commented on KAFKA-5090:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2874


> Kafka Streams SessionStore.findSessions javadoc broken
> --
>
> Key: KAFKA-5090
> URL: https://issues.apache.org/jira/browse/KAFKA-5090
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0, 0.10.2.1
>Reporter: Michal Borowiecki
>Assignee: Michal Borowiecki
>Priority: Trivial
> Fix For: 0.11.0.0
>
>
> {code}
> /**
>  * Fetch any sessions with the matching key and the sessions end is &le 
> earliestEndTime and the sessions
>  * start is &ge latestStartTime
>  */
> KeyValueIterator, AGG> findSessions(final K key, long 
> earliestSessionEndTime, final long latestSessionStartTime);
> {code}
> The conditions in the javadoc comment are inverted (le should be ge and ge 
> should be le), since this is what the code does. They were correct in the 
> original KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-94+Session+Windows
> {code}
> /**
>  * Find any aggregated session values with the matching key and where the
>  * session’s end time is >= earliestSessionEndTime, i.e, the oldest 
> session to
>  * merge with, and the session’s start time is <= latestSessionStartTime, 
> i.e,
>  * the newest session to merge with.
>  */
>KeyValueIterator, AGG> findSessionsToMerge(final K key, final 
> long earliestSessionEndTime, final long latestSessionStartTime);
> {code}
> Also, the escaped HTML character references are missing the trailing 
> semicolon, making them render as-is.
> Happy to have this assigned to me to fix as it seems trivial.





[jira] [Commented] (KAFKA-5110) ConsumerGroupCommand error handling improvement

2017-04-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979581#comment-15979581
 ] 

ASF GitHub Bot commented on KAFKA-5110:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/2892

KAFKA-5110: Check for errors when fetching the log end offset in 
ConsumerGroupCommand



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-5110

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2892.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2892


commit b1042a72ebd320246e74ad5ad5347c7845201e7f
Author: Jason Gustafson 
Date:   2017-04-21T23:32:14Z

KAFKA-5110: Check for errors when fetching the log end offset in 
ConsumerGroupCommand




> ConsumerGroupCommand error handling improvement
> ---
>
> Key: KAFKA-5110
> URL: https://issues.apache.org/jira/browse/KAFKA-5110
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.1.1
>Reporter: Dustin Cote
>Assignee: Jason Gustafson
>
> The ConsumerGroupCommand isn't handling partition errors properly. It throws 
> the following:
> {code}
> kafka-consumer-groups.sh --zookeeper 10.10.10.10:2181 --group mygroup 
> --describe
> GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG OWNER
> Error while executing consumer group command empty.head
> java.lang.UnsupportedOperationException: empty.head
> at scala.collection.immutable.Vector.head(Vector.scala:193)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService$$anonfun$getLogEndOffset$1.apply(ConsumerGroupCommand.scala:197)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService$$anonfun$getLogEndOffset$1.apply(ConsumerGroupCommand.scala:194)
> at scala.Option.map(Option.scala:146)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService.getLogEndOffset(ConsumerGroupCommand.scala:194)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$class.kafka$admin$ConsumerGroupCommand$ConsumerGroupService$$describePartition(ConsumerGroupCommand.scala:125)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$$anonfun$describeTopicPartition$2.apply(ConsumerGroupCommand.scala:107)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$$anonfun$describeTopicPartition$2.apply(ConsumerGroupCommand.scala:106)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$class.describeTopicPartition(ConsumerGroupCommand.scala:106)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService.describeTopicPartition(ConsumerGroupCommand.scala:134)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService.kafka$admin$ConsumerGroupCommand$ZkConsumerGroupService$$describeTopic(ConsumerGroupCommand.scala:181)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService$$anonfun$describeGroup$1.apply(ConsumerGroupCommand.scala:166)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService$$anonfun$describeGroup$1.apply(ConsumerGroupCommand.scala:166)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService.describeGroup(ConsumerGroupCommand.scala:166)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$class.describe(ConsumerGroupCommand.scala:89)
> at 
> kafka.admin.ConsumerGroupCommand$ZkConsumerGroupService.describe(ConsumerGroupCommand.scala:134)
> at kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:68)
> at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
> {code}
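The crash above comes from taking the head of an empty collection (Scala's `Vector.head`). A sketch of the fix's shape, in Java with hypothetical names: treat "no offset data for this partition" as an expected, representable condition rather than letting the lookup throw.

```java
import java.util.List;
import java.util.Optional;

public class LogEndOffsetSketch {
    // Buggy shape: taking the first element of an empty response throws,
    // just like empty.head in the stack trace above.
    static long endOffsetUnsafe(List<Long> offsets) {
        return offsets.get(0); // IndexOutOfBoundsException when empty
    }

    // Safer shape: an absent offset is surfaced as an empty Optional.
    static Optional<Long> endOffset(List<Long> offsets) {
        return offsets.isEmpty() ? Optional.empty() : Optional.of(offsets.get(0));
    }

    public static void main(String[] args) {
        System.out.println(endOffset(List.of(42L)).orElse(-1L)); // 42
        System.out.println(endOffset(List.of()).orElse(-1L));    // -1 instead of a crash
    }
}
```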





[jira] [Commented] (KAFKA-5112) Trunk compatibility tests should test against 0.10.2

2017-04-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979584#comment-15979584
 ] 

ASF GitHub Bot commented on KAFKA-5112:
---

GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/2893

KAFKA-5112: Trunk compatibility tests should test against 0.10.2



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-5112

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2893.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2893


commit 4338a85a822bfebb2f1dfce4b08d8e28d7cfe938
Author: Colin P. Mccabe 
Date:   2017-04-21T23:34:29Z

KAFKA-5112: Trunk compatibility tests should test against 0.10.2




> Trunk compatibility tests should test against 0.10.2
> 
>
> Key: KAFKA-5112
> URL: https://issues.apache.org/jira/browse/KAFKA-5112
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>
> Now that 0.10.2 has been released, our trunk compatibility tests should test 
> against it.  This will ensure that 0.11 clients are backwards compatible with 
> 0.10.2 brokers.





[jira] [Commented] (KAFKA-5092) KIP 141 - ProducerRecord Interface Improvements

2017-04-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979656#comment-15979656
 ] 

ASF GitHub Bot commented on KAFKA-5092:
---

GitHub user simplesteph opened a pull request:

https://github.com/apache/kafka/pull/2894

[KAFKA-5092] [WIP] changed ProducerRecord interface - KIP 141



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/simplesteph/kafka producer-record-changes

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2894.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2894


commit 362afaec5b4b514902b63b11a293635befb103d1
Author: simplesteph 
Date:   2017-04-22T00:30:28Z

changed ProducerRecord interface.




> KIP 141 - ProducerRecord Interface Improvements
> ---
>
> Key: KAFKA-5092
> URL: https://issues.apache.org/jira/browse/KAFKA-5092
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Stephane Maarek
>  Labels: kip
> Fix For: 0.11.0.0
>
>
> See KIP here: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP+141+-+ProducerRecord+Interface+Improvements





[jira] [Commented] (KAFKA-5111) Improve internal Task APIs

2017-04-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979768#comment-15979768
 ] 

ASF GitHub Bot commented on KAFKA-5111:
---

GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/2895

KAFKA-5111: Improve internal Task APIs

 Refactors Task with proper interface methods `init()`, `resume()`, 
`commit()`, `suspend()`, and `close()`. All other methods for task handling are 
now internal. This simplifies the `StreamThread` code, avoids code 
duplication, and makes the control flow easier to reason about.
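The lifecycle described above can be sketched as a minimal interface plus a driver. This is an illustrative toy, not the real Kafka Streams code: only the five method names come from the PR description; `LoggingTask` and everything else here is hypothetical.

```java
// Hypothetical sketch of a lifecycle-only Task interface; the five method
// names come from the PR description above, all other names are illustrative.
interface Task {
    void init();
    void resume();
    void commit();
    void suspend();
    void close();
}

public class TaskLifecycleDemo {
    // A toy implementation that records which lifecycle methods were called.
    static class LoggingTask implements Task {
        final StringBuilder log = new StringBuilder();
        public void init()    { log.append("init;"); }
        public void resume()  { log.append("resume;"); }
        public void commit()  { log.append("commit;"); }
        public void suspend() { log.append("suspend;"); }
        public void close()   { log.append("close;"); }
    }

    public static void main(String[] args) {
        LoggingTask task = new LoggingTask();
        // A StreamThread-style driver only ever sees the five public methods,
        // which is what makes the control flow easy to reason about.
        task.init();
        task.commit();
        task.suspend();
        task.resume();
        task.close();
        System.out.println(task.log);  // init;commit;suspend;resume;close;
    }
}
```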

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka kafka-5111-cleanup-task-code

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2895.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2895


commit d2a4593e967ca4804291c50baece5c66f3469f59
Author: Matthias J. Sax 
Date:   2017-04-21T23:24:05Z

Code Cleanup

commit 27a4d7da6716bff235557ae0ada89abddfdc0f39
Author: Matthias J. Sax 
Date:   2017-04-22T02:05:45Z

KAFKA-5111: Improve internal Task APIs

commit 45bbc171498814b49b2dd30638e10f4eb317
Author: Matthias J. Sax 
Date:   2017-04-22T04:57:08Z

Post code cleanup




> Improve internal Task APIs
> --
>
> Key: KAFKA-5111
> URL: https://issues.apache.org/jira/browse/KAFKA-5111
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>
> Currently, the internal interface for tasks is not very clean, and it's hard 
> to reason about the control flow when tasks get closed, suspended, resumed, 
> etc. This makes exception handling particularly hard.
> We want to refactor this part of the code to get a clean control flow and 
> interface.





[jira] [Commented] (KAFKA-4564) When the destination brokers are down or misconfigured in config, Streams should fail fast

2017-04-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15980091#comment-15980091
 ] 

ASF GitHub Bot commented on KAFKA-4564:
---

GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/2897

KAFKA-4564: follow up hotfix for system test



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka KAFKA-4564-follow-up

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2897.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2897


commit 3a8ca51fad7c8e9221c3709b354f37a072c5f71d
Author: Matthias J. Sax 
Date:   2017-04-22T19:12:01Z

KAFKA-4564: follow up hotfix for system test




> When the destination brokers are down or misconfigured in config, Streams 
> should fail fast
> --
>
> Key: KAFKA-4564
> URL: https://issues.apache.org/jira/browse/KAFKA-4564
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> Today, if Kafka is down or users misconfigure the bootstrap list, Streams may 
> just hang for a while without any error messages, even with log4j 
> enabled, which is quite confusing.





[jira] [Commented] (KAFKA-4692) Transient test failure in org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest

2017-04-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15980117#comment-15980117
 ] 

ASF GitHub Bot commented on KAFKA-4692:
---

Github user original-brownbear closed the pull request at:

https://github.com/apache/kafka/pull/2775


> Transient test failure in 
> org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest
> -
>
> Key: KAFKA-4692
> URL: https://issues.apache.org/jira/browse/KAFKA-4692
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Guozhang Wang
>
> Seen a couple of failures on at least the following two test cases:
> org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest.shouldReduce
> {code}
> Error Message
> java.lang.AssertionError: 
> Expected: is <[KeyValue(A, A:A), KeyValue(B, B:B), KeyValue(C, C:C), 
> KeyValue(D, D:D), KeyValue(E, E:E)]>
>  but: was <[KeyValue(A, A), KeyValue(A, A:A), KeyValue(B, B:B), 
> KeyValue(C, C:C), KeyValue(D, D:D), KeyValue(E, E:E)]>
> Stacktrace
> java.lang.AssertionError: 
> Expected: is <[KeyValue(A, A:A), KeyValue(B, B:B), KeyValue(C, C:C), 
> KeyValue(D, D:D), KeyValue(E, E:E)]>
>  but: was <[KeyValue(A, A), KeyValue(A, A:A), KeyValue(B, B:B), 
> KeyValue(C, C:C), KeyValue(D, D:D), KeyValue(E, E:E)]>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
>   at 
> org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest.shouldReduce(KStreamAggregationDedupIntegrationTest.java:138)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor4.inv

[jira] [Commented] (KAFKA-4564) When the destination brokers are down or misconfigured in config, Streams should fail fast

2017-04-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15980432#comment-15980432
 ] 

ASF GitHub Bot commented on KAFKA-4564:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2897


> When the destination brokers are down or misconfigured in config, Streams 
> should fail fast
> --
>
> Key: KAFKA-4564
> URL: https://issues.apache.org/jira/browse/KAFKA-4564
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> Today, if Kafka is down or users misconfigure the bootstrap list, Streams may 
> just hang for a while without any error messages, even with log4j 
> enabled, which is quite confusing.





[jira] [Commented] (KAFKA-5081) two versions of jackson-annotations-xxx.jar in distribution tgz

2017-04-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15980479#comment-15980479
 ] 

ASF GitHub Bot commented on KAFKA-5081:
---

GitHub user dejan2609 opened a pull request:

https://github.com/apache/kafka/pull/2899

KAFKA-5081: use gradle resolution strategy 'failOnVersionConflict' 

(in order to prevent redundant jars from being bundled into kafka 
distribution)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dejan2609/kafka KAFKA-5081

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2899.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2899


commit 9573ff6741b58603e4df935eb53de5fa59b535f2
Author: dstojadinovic 
Date:   2017-04-23T07:37:03Z

KAFKA-5081: use gradle resolution strategy 'failOnVersionConflict' (in 
order to prevent redundant jars from being bundled into kafka distribution)




> two versions of jackson-annotations-xxx.jar in distribution tgz
> ---
>
> Key: KAFKA-5081
> URL: https://issues.apache.org/jira/browse/KAFKA-5081
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Reporter: Edoardo Comar
>Priority: Minor
>
> git clone https://github.com/apache/kafka.git
> cd kafka
> gradle
> ./gradlew releaseTarGz
> then kafka/core/build/distributions/kafka-...-SNAPSHOT.tgz contains
> in the libs directory two versions of this jar
> jackson-annotations-2.8.0.jar
> jackson-annotations-2.8.5.jar





[jira] [Commented] (KAFKA-5081) two versions of jackson-annotations-xxx.jar in distribution tgz

2017-04-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15980482#comment-15980482
 ] 

ASF GitHub Bot commented on KAFKA-5081:
---

Github user dejan2609 closed the pull request at:

https://github.com/apache/kafka/pull/2899


> two versions of jackson-annotations-xxx.jar in distribution tgz
> ---
>
> Key: KAFKA-5081
> URL: https://issues.apache.org/jira/browse/KAFKA-5081
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Reporter: Edoardo Comar
>Priority: Minor
>
> git clone https://github.com/apache/kafka.git
> cd kafka
> gradle
> ./gradlew releaseTarGz
> then kafka/core/build/distributions/kafka-...-SNAPSHOT.tgz contains
> in the libs directory two versions of this jar
> jackson-annotations-2.8.0.jar
> jackson-annotations-2.8.5.jar





[jira] [Commented] (KAFKA-5081) two versions of jackson-annotations-xxx.jar in distribution tgz

2017-04-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15980483#comment-15980483
 ] 

ASF GitHub Bot commented on KAFKA-5081:
---

GitHub user dejan2609 opened a pull request:

https://github.com/apache/kafka/pull/2900

KAFKA-5081: use gradle resolution strategy 'failOnVersionConflict'

 (in order to prevent redundant jars from being bundled into kafka 
distribution)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dejan2609/kafka KAFKA-5081

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2900.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2900


commit 02f9c7ad5a56e0bca02908d74cf443721f96f80d
Author: dejan2609 
Date:   2017-04-23T17:36:45Z

KAFKA-5081: use gradle resolution strategy 'failOnVersionConflict' (in 
order to prevent redundant jars from being bundled into kafka distribution)




> two versions of jackson-annotations-xxx.jar in distribution tgz
> ---
>
> Key: KAFKA-5081
> URL: https://issues.apache.org/jira/browse/KAFKA-5081
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Reporter: Edoardo Comar
>Priority: Minor
>
> git clone https://github.com/apache/kafka.git
> cd kafka
> gradle
> ./gradlew releaseTarGz
> then kafka/core/build/distributions/kafka-...-SNAPSHOT.tgz contains
> in the libs directory two versions of this jar
> jackson-annotations-2.8.0.jar
> jackson-annotations-2.8.5.jar





[jira] [Commented] (KAFKA-5018) LogCleaner tests to verify behaviour of message format v2

2017-04-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15980527#comment-15980527
 ] 

ASF GitHub Bot commented on KAFKA-5018:
---

GitHub user original-brownbear opened a pull request:

https://github.com/apache/kafka/pull/2901

KAFKA-5018: LogCleaner tests to verify behaviour of message format v2

For https://issues.apache.org/jira/browse/KAFKA-5018:

* Added test for `baseOffset` behaviour after compaction
  * Added helper method for writing a multi-record batch
* Dried up handling of `LogConfig.SegmentIndexBytesProp` and added comments 
on the chosen magic values

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/original-brownbear/kafka KAFKA-5018

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2901.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2901


commit 9a036b934feec7c81ca34b49dfd58df680a2bb97
Author: Armin Braun 
Date:   2017-04-23T13:45:11Z

KAFKA-5018: LogCleaner tests to verify behaviour of message format v2

commit 959b5193e11ab87ca8a17eaff594408a57f44204
Author: Armin Braun 
Date:   2017-04-23T20:20:18Z

KAFKA-5018: LogCleaner tests to verify behaviour of message format v2




> LogCleaner tests to verify behaviour of message format v2
> -
>
> Key: KAFKA-5018
> URL: https://issues.apache.org/jira/browse/KAFKA-5018
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Ismael Juma
>Assignee: Armin Braun
> Fix For: 0.11.0.0
>
>
> It would be good to add LogCleaner tests to verify the behaviour of fields 
> like baseOffset after compaction.





[jira] [Commented] (KAFKA-4755) SimpleBenchmark test fails for streams

2017-04-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15981505#comment-15981505
 ] 

ASF GitHub Bot commented on KAFKA-4755:
---

Github user enothereska closed the pull request at:

https://github.com/apache/kafka/pull/2867


> SimpleBenchmark test fails for streams
> --
>
> Key: KAFKA-4755
> URL: https://issues.apache.org/jira/browse/KAFKA-4755
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> This occurred Feb 10th 2017:
> kafkatest.benchmarks.streams.streams_simple_benchmark_test.StreamsSimpleBenchmarkTest.test_simple_benchmark.test=consume.scale=1
> status: FAIL
> run time:   7 minutes 36.712 seconds
> Streams Test process on ubuntu@worker2 took too long to exit
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 123, in run
> data = self.run_test()
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 176, in run_test
> return self.test_context.function(self.test)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/mark/_mark.py",
>  line 321, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/benchmarks/streams/streams_simple_benchmark_test.py",
>  line 86, in test_simple_benchmark
> self.driver[num].wait()
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/services/streams.py",
>  line 102, in wait
> self.wait_node(node, timeout_sec)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/services/streams.py",
>  line 106, in wait_node
> wait_until(lambda: not node.account.alive(pid), timeout_sec=timeout_sec, 
> err_msg="Streams Test process on " + str(node.account) + " took too long to 
> exit")
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/utils/util.py",
>  line 36, in wait_until
> raise TimeoutError(err_msg)
> TimeoutError: Streams Test process on ubuntu@worker2 took too long to exit





[jira] [Commented] (KAFKA-4379) Remove caching of dirty and removed keys from StoreChangeLogger

2017-04-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982362#comment-15982362
 ] 

ASF GitHub Bot commented on KAFKA-4379:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/2908

KAFKA-4379 Followup: Remove eviction listener from InMemoryLRUCache



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K4379-remove-listener

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2908.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2908


commit 44e1cea3a300b0e05e9574284260b774b852cb4e
Author: Guozhang Wang 
Date:   2017-04-25T02:07:23Z

remove eviction listener

commit 6343783ec2ffc907ad2b1d25cd764a6e52acc299
Author: Guozhang Wang 
Date:   2017-04-25T04:50:02Z

fix unit test




> Remove caching of dirty and removed keys from StoreChangeLogger
> ---
>
> Key: KAFKA-4379
> URL: https://issues.apache.org/jira/browse/KAFKA-4379
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Minor
> Fix For: 0.10.1.1, 0.10.2.0
>
>
> The StoreChangeLogger currently keeps a cache of dirty and removed keys and 
> will batch the changelog records so that we don't send a record for each 
> update. However, with KIP-63 this is unnecessary, as the batching and 
> de-duping is done by the caching layer. Further, the StoreChangeLogger relies 
> on context.timestamp(), which is likely to be incorrect when caching is enabled.
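The de-duping the caching layer performs can be illustrated with a toy example. This is not Kafka code; it is only a sketch of the idea that repeated updates to the same key collapse to the latest value, so one changelog record per key is emitted on flush.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration (not Kafka code) of de-duping dirty keys: repeated updates
// to a key collapse to the latest value before anything is emitted.
public class DedupCacheDemo {
    public static void main(String[] args) {
        Map<String, Integer> dirty = new LinkedHashMap<>();
        dirty.put("a", 1);
        dirty.put("b", 2);
        dirty.put("a", 3);   // overwrites the earlier update to "a"

        // Flushing emits one record per key, carrying the last value seen.
        System.out.println(dirty);  // {a=3, b=2}
    }
}
```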





[jira] [Commented] (KAFKA-5121) Implement transaction index for KIP-98

2017-04-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982433#comment-15982433
 ] 

ASF GitHub Bot commented on KAFKA-5121:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/2910

KAFKA-5121 [WIP]: Implement transaction index for KIP-98



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka eos-txn-index

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2910.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2910


commit dfe308a58f179fc6f59d55f2c47392d2cf85f3ce
Author: Jason Gustafson 
Date:   2017-03-30T01:19:36Z

KAFKA-5121 [WIP]: Implement transaction index for KIP-98




> Implement transaction index for KIP-98
> --
>
> Key: KAFKA-5121
> URL: https://issues.apache.org/jira/browse/KAFKA-5121
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.11.0.0
>
>
> As documented in the KIP-98 proposal, the broker will maintain an index 
> containing all of the aborted transactions for each partition. This index is 
> used to respond to fetches with READ_COMMITTED isolation. This requires the 
> broker maintain the last stable offset (LSO).
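The role of the aborted-transaction index in READ_COMMITTED fetches can be modeled in a few lines. All names below are hypothetical stand-ins, not the broker's actual data structures: the point is only that offsets falling inside an aborted producer's range are filtered out of what the consumer sees.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model (all names hypothetical) of consulting an aborted-transaction
// index during a READ_COMMITTED fetch: offsets inside any aborted range are
// filtered out of the returned set.
public class AbortedTxnFilterDemo {
    record AbortedTxn(long firstOffset, long lastOffset) {}

    static List<Long> readCommitted(List<Long> offsets, List<AbortedTxn> index) {
        List<Long> visible = new ArrayList<>();
        for (long o : offsets) {
            boolean aborted = index.stream()
                    .anyMatch(t -> o >= t.firstOffset() && o <= t.lastOffset());
            if (!aborted) visible.add(o);
        }
        return visible;
    }

    public static void main(String[] args) {
        // One aborted transaction covering offsets 3..5.
        List<AbortedTxn> index = List.of(new AbortedTxn(3, 5));
        System.out.println(readCommitted(List.of(1L, 2L, 3L, 4L, 6L), index));
        // [1, 2, 6]
    }
}
```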





[jira] [Commented] (KAFKA-438) Code cleanup in MessageTest

2017-04-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982961#comment-15982961
 ] 

ASF GitHub Bot commented on KAFKA-438:
--

Github user jozi-k closed the pull request at:

https://github.com/apache/kafka/pull/2858


> Code cleanup in MessageTest
> ---
>
> Key: KAFKA-438
> URL: https://issues.apache.org/jira/browse/KAFKA-438
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.7.1
>Reporter: Jim Plush
>Priority: Trivial
> Attachments: KAFKA-438
>
>
> While exploring the unit tests, I found that this class had an unused import 
> statement, some ambiguity about which HashMap implementation was being used, 
> and assignments of function return values where not required. 
> Trivial stuff.





[jira] [Commented] (KAFKA-4942) Kafka Connect: Offset committing times out before expected

2017-04-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983038#comment-15983038
 ] 

ASF GitHub Bot commented on KAFKA-4942:
---

GitHub user 56quarters opened a pull request:

https://github.com/apache/kafka/pull/2912

KAFKA-4942 Fix commitTimeoutMs being set before the commit actually started

This fixes KAFKA-4942

This supersededs #2730 

/cc @simplesteph @gwenshap @ewencp

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/smarter-travel-media/kafka 
fix-connect-offset-commit

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2912.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2912


commit f93bd001a723c5e1402bf92474f43fb0b991a44c
Author: simplesteph 
Date:   2017-03-24T00:03:07Z

Fixed commitTimeoutMs being set before the commit actually started

Fixes KAFKA-4942

commit e7b704d97b8de35384f6d24ba48f050a0b20be01
Author: Nick Pillitteri 
Date:   2017-04-21T19:49:37Z

Test for commitTimeoutMs being set before commit started

See KAFKA-4942




> Kafka Connect: Offset committing times out before expected
> --
>
> Key: KAFKA-4942
> URL: https://issues.apache.org/jira/browse/KAFKA-4942
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Stephane Maarek
>
> On Kafka 0.10.2.0
> I run a connector that deals with a lot of data, in a kafka connect cluster
> When the offsets are getting committed, I get the following:
> {code}
> [2017-03-23 03:56:25,134] INFO WorkerSinkTask{id=MyConnector-1} Committing 
> offsets (org.apache.kafka.connect.runtime.WorkerSinkTask)
> [2017-03-23 03:56:25,135] WARN Commit of WorkerSinkTask{id=MyConnector-1} 
> offsets timed out (org.apache.kafka.connect.runtime.WorkerSinkTask)
> {code}
> If you look at the timestamps, they're 1 ms apart. My settings are the 
> following: 
> {code}
>   offset.flush.interval.ms = 12
>   offset.flush.timeout.ms = 6
>   offset.storage.topic = _connect_offsets
> {code}
> It seems the offset flush timeout setting is completely ignored, judging by 
> the logs. I would expect the timeout message to appear 60 seconds after 
> the commit offset INFO message, not 1 millisecond later.
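The class of bug behind symptoms like this can be sketched in a few lines. This is an illustrative model only, with hypothetical names, not the actual Kafka Connect code: if a flush deadline is computed long before the flush actually starts, it can already be in the past when the flush begins, producing the back-to-back INFO/WARN pair seen above.

```java
// Illustrative sketch (hypothetical names, not Kafka Connect internals):
// a deadline must be computed when the flush starts, not earlier, or an
// already-elapsed deadline makes the commit "time out" instantly.
public class CommitDeadlineDemo {
    static boolean flushTimedOut(long deadlineMs, long nowMs) {
        return nowMs >= deadlineMs;
    }

    public static void main(String[] args) {
        long timeoutMs = 60_000;
        long taskCreated = 0;            // simulated clock, in ms
        long flushStarted = 120_000;     // the flush begins two minutes later

        // Buggy ordering: deadline fixed at task creation.
        long stale = taskCreated + timeoutMs;
        // Fixed ordering: deadline fixed when the flush actually starts.
        long fresh = flushStarted + timeoutMs;

        System.out.println(flushTimedOut(stale, flushStarted));  // true: instant WARN
        System.out.println(flushTimedOut(fresh, flushStarted));  // false
    }
}
```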





[jira] [Commented] (KAFKA-4840) There are still cases where producer buffer pool will not remove waiters.

2017-04-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983300#comment-15983300
 ] 

ASF GitHub Bot commented on KAFKA-4840:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2659


> There are still cases where producer buffer pool will not remove waiters.
> -
>
> Key: KAFKA-4840
> URL: https://issues.apache.org/jira/browse/KAFKA-4840
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.2.0
>Reporter: Sean McCauliff
>
> There are several problems dealing with errors in  BufferPool.allocate(int 
> size, long maxTimeToBlockMs):
> * The accumulated bytes are not put back into the available pool 
> when an exception happens while a thread is waiting for bytes to become 
> available. This causes the capacity of the buffer pool to decrease over 
> time whenever a timeout is hit within this method.
> * If a Throwable other than InterruptedException is thrown out of await() for 
> some reason, or if an exception is thrown in the corresponding finally 
> block around the await() (for example, if waitTime.record(.) throws an 
> exception), then the waiters are not removed from the waiters deque.
> * On a timeout or other exception, waiters could be signaled but are not. If 
> no other buffers are freed, the next waiting thread will also time out, and 
> so on.
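The cleanup pattern these three bullets call for can be sketched as follows. This is not the real `BufferPool` implementation; every name here is hypothetical. The point is the shape of the error handling: on any exit from the wait, return the accumulated bytes, deregister the waiter, and signal the next one.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative-only sketch (hypothetical names, not the real BufferPool):
// on ANY exit from the wait (timeout or throwable), give back accumulated
// bytes, remove this thread's waiter, and signal the next waiter.
public class WaiterCleanupDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private final Deque<Condition> waiters = new ArrayDeque<>();
    private long available;

    WaiterCleanupDemo(long available) { this.available = available; }

    long allocate(long size, long maxTimeToBlockMs) throws InterruptedException {
        lock.lock();
        try {
            if (available >= size) {
                available -= size;
                return size;
            }
            long accumulated = available;    // grab what is there now
            available = 0;
            Condition moreMemory = lock.newCondition();
            waiters.addLast(moreMemory);
            try {
                if (!moreMemory.await(maxTimeToBlockMs, TimeUnit.MILLISECONDS))
                    throw new RuntimeException("allocation timed out");
                available -= (size - accumulated);   // simplified success path
                return size;
            } catch (RuntimeException | InterruptedException e) {
                // Give back what we grabbed so pool capacity is not lost...
                available += accumulated;
                throw e;
            } finally {
                // ...and always deregister, signalling the next waiter if
                // memory was freed in the meantime.
                waiters.remove(moreMemory);
                if (available > 0 && !waiters.isEmpty())
                    waiters.peekFirst().signal();
            }
        } finally {
            lock.unlock();
        }
    }

    long available() { lock.lock(); try { return available; } finally { lock.unlock(); } }

    public static void main(String[] args) throws Exception {
        WaiterCleanupDemo pool = new WaiterCleanupDemo(100);
        try {
            pool.allocate(200, 10);   // times out: the pool is too small
        } catch (RuntimeException expected) { }
        // Capacity is restored and no stale waiter is left behind.
        System.out.println(pool.available + " " + pool.waiters.size());  // 100 0
    }
}
```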





[jira] [Commented] (KAFKA-4994) Fix findbugs warning about OffsetStorageWriter#currentFlushId

2017-04-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983540#comment-15983540
 ] 

ASF GitHub Bot commented on KAFKA-4994:
---

Github user johnma14 closed the pull request at:

https://github.com/apache/kafka/pull/2913


> Fix findbugs warning about OffsetStorageWriter#currentFlushId
> -
>
> Key: KAFKA-4994
> URL: https://issues.apache.org/jira/browse/KAFKA-4994
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin P. McCabe
>Assignee: Mariam John
>  Labels: newbie
>
> We should fix the findbugs warning about 
> {{OffsetStorageWriter#currentFlushId}}
> {code}
> Multithreaded correctness Warnings
> IS2_INCONSISTENT_SYNC:   Inconsistent synchronization of 
> org.apache.kafka.connect.storage.OffsetStorageWriter.currentFlushId; locked 
> 83% of time
> IS2_INCONSISTENT_SYNC:   Inconsistent synchronization of 
> org.apache.kafka.connect.storage.OffsetStorageWriter.toFlush; locked 75% of 
> time
> {code}





[jira] [Commented] (KAFKA-4994) Fix findbugs warning about OffsetStorageWriter#currentFlushId

2017-04-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983594#comment-15983594
 ] 

ASF GitHub Bot commented on KAFKA-4994:
---

GitHub user johnma14 opened a pull request:

https://github.com/apache/kafka/pull/2914

KAFKA-4994: Fix findbug warnings about OffsetStorageWriter

OffsetStorageWriter is not a thread-safe class and should be accessed
only from a Task's processing thread. The WorkerSourceTask class calls
the different methods (offset, beginFlush, cancelFlush, handleFinishWrite)
within a synchronized block. Hence, the method definitions in
OffsetStorageWriter.java do not need to contain the synchronized keyword
again.

In the OffsetStorageWriter.java class, the doFlush() method is not explicitly
synchronized like the other methods in this class. This can lead to
inconsistent synchronization of variables like currentFlushId and toFlush,
which are set in the synchronized methods within this class.
- 
https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/storage/OffsetStorageWriter.java
- 
https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java#L295


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/johnma14/kafka bug/kafka-4994

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2914.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2914


commit d481c10818b59e67eaac0aa598e845773b07e51d
Author: Mariam John 
Date:   2017-04-25T20:47:04Z

KAFKA-4994: Fix findbug warnings about OffsetStorageWriter

OffsetStorageWriter is not a thread-safe class and should be accessed
only from a Task's processing thread. The WorkerSourceTask class calls
the different methods (offset, beginFlush, cancelFlush, handleFinishWrite)
within a synchronized block. Hence the method definitions in
OffsetStorageWriter.java do not need to be declared synchronized
as well.









[jira] [Commented] (KAFKA-5119) Transient test failure SocketServerTest.testMetricCollectionAfterShutdown

2017-04-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983801#comment-15983801
 ] 

ASF GitHub Bot commented on KAFKA-5119:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/2915

KAFKA-5119: Improve information in assert failure for 
testMetricCollectionAfterShutdown

This test is failing consistently in 
https://jenkins.confluent.io/job/kafka-trunk/,
but nowhere else. I ran this branch in a clone of that job several times 
and this
test didn't fail. I suggest we merge this PR, which improves the test, to 
help us
gather more information about the test failure.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
socket-server-test-metric-collection-after-shutdown

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2915.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2915


commit aba3ed482e41db011974a44de72e57bc2e3b1a7d
Author: Ismael Juma 
Date:   2017-04-24T22:12:18Z

MINOR: Improve information in assert failure for 
testMetricCollectionAfterShutdown




> Transient test failure SocketServerTest.testMetricCollectionAfterShutdown
> -
>
> Key: KAFKA-5119
> URL: https://issues.apache.org/jira/browse/KAFKA-5119
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Jason Gustafson
>
> From a recent build:
> {code}
> 20:04:15 kafka.network.SocketServerTest > testMetricCollectionAfterShutdown 
> FAILED
> 20:04:15 java.lang.AssertionError: expected:<0.0> but 
> was:<1.603886948862125>
> 20:04:15 at org.junit.Assert.fail(Assert.java:88)
> 20:04:15 at org.junit.Assert.failNotEquals(Assert.java:834)
> 20:04:15 at org.junit.Assert.assertEquals(Assert.java:553)
> 20:04:15 at org.junit.Assert.assertEquals(Assert.java:683)
> 20:04:15 at 
> kafka.network.SocketServerTest.testMetricCollectionAfterShutdown(SocketServerTest.scala:414)
> {code}





[jira] [Commented] (KAFKA-5111) Improve internal Task APIs

2017-04-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983955#comment-15983955
 ] 

ASF GitHub Bot commented on KAFKA-5111:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2895


> Improve internal Task APIs
> --
>
> Key: KAFKA-5111
> URL: https://issues.apache.org/jira/browse/KAFKA-5111
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>
> Currently, the internal interface for tasks is not very clean and it's hard 
> to reason about the control flow when tasks get closed, suspended, resumed 
> etc. This makes exception handling particularly hard.
> We want to refactor this part of the code to get a clean control flow and 
> interface.





[jira] [Commented] (KAFKA-1610) Local modifications to collections generated from mapValues will be lost

2017-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984398#comment-15984398
 ] 

ASF GitHub Bot commented on KAFKA-1610:
---

Github user jozi-k closed the pull request at:

https://github.com/apache/kafka/pull/2531


> Local modifications to collections generated from mapValues will be lost
> 
>
> Key: KAFKA-1610
> URL: https://issues.apache.org/jira/browse/KAFKA-1610
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Mayuresh Gharat
>  Labels: newbie
> Attachments: KAFKA-1610_2014-08-29_09:51:51.patch, 
> KAFKA-1610_2014-08-29_10:03:55.patch, KAFKA-1610_2014-09-03_11:27:50.patch, 
> KAFKA-1610_2014-09-16_13:08:17.patch, KAFKA-1610_2014-09-16_15:23:27.patch, 
> KAFKA-1610_2014-09-30_23:21:46.patch, KAFKA-1610_2014-10-02_12:07:01.patch, 
> KAFKA-1610_2014-10-02_12:09:46.patch, KAFKA-1610.patch
>
>
> In our current Scala code base we have 40+ usages of mapValues; however, it 
> has an important semantic difference from map: "map" creates a 
> new map collection instance, while "mapValues" just creates a map view of the 
> original map, and hence any further value changes to the view will be 
> effectively lost.
> Example code:
> {code}
> scala> case class Test(i: Int, var j: Int) {}
> defined class Test
> scala> val a = collection.mutable.Map(1 -> 1)
> a: scala.collection.mutable.Map[Int,Int] = Map(1 -> 1)
> scala> val b = a.mapValues(v => Test(v, v))
> b: scala.collection.Map[Int,Test] = Map(1 -> Test(1,1))
> scala> val c = a.map(v => v._1 -> Test(v._2, v._2))
> c: scala.collection.mutable.Map[Int,Test] = Map(1 -> Test(1,1))
> scala> b.foreach(kv => kv._2.j = kv._2.j + 1)
> scala> b
> res1: scala.collection.Map[Int,Test] = Map(1 -> Test(1,1))
> scala> c.foreach(kv => kv._2.j = kv._2.j + 1)
> scala> c
> res3: scala.collection.mutable.Map[Int,Test] = Map(1 -> Test(1,2))
> scala> a.put(1,3)
> res4: Option[Int] = Some(1)
> scala> b
> res5: scala.collection.Map[Int,Test] = Map(1 -> Test(3,3))
> scala> c
> res6: scala.collection.mutable.Map[Int,Test] = Map(1 -> Test(1,2))
> {code}
> We need to go through all these mapValues usages to see if they should be 
> changed to map.





[jira] [Commented] (KAFKA-5127) Replace pattern matching with foreach where the case None is unused

2017-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984837#comment-15984837
 ] 

ASF GitHub Bot commented on KAFKA-5127:
---

GitHub user baluchicken opened a pull request:

https://github.com/apache/kafka/pull/2919

KAFKA-5127 Replace pattern matching with foreach where the case None …

@ijuma please review. This one is not complete because KafkaController, 
AdminUtils and ReplicaStateMachine have some of these, but for those I will wait 
until KAFKA-5028 and KAFKA-5103 are merged.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/baluchicken/kafka-1 KAFKA-5127

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2919.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2919


commit 553ef1acd456799f9aa4871c0a1a764eb88fad51
Author: Balint Molnar 
Date:   2017-04-26T13:44:11Z

KAFKA-5127 Replace pattern matching with foreach where the case None is 
unused




> Replace pattern matching with foreach where the case None is unused 
> 
>
> Key: KAFKA-5127
> URL: https://issues.apache.org/jira/browse/KAFKA-5127
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>Priority: Minor
>
> There are various places where pattern matching is used to match only one 
> case while ignoring the None type; this can be replaced with foreach.
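The Scala cleanup described above has a close Java analogue in Optional.ifPresent; the sketch below is illustrative, with made-up names, and is not code from the Kafka patch:

```java
import java.util.Optional;

// Java analogue of the cleanup: a match/branch that handles only the
// present case, with an empty "absent" arm, collapses to one foreach-style
// call. Names here are hypothetical.
public class ForeachSketch {
    static StringBuilder log = new StringBuilder();

    // Before: explicit branch with an empty "absent" arm.
    static void verbose(Optional<String> topic) {
        if (topic.isPresent()) {
            log.append(topic.get());
        } else {
            // None case intentionally ignored
        }
    }

    // After: the absent case disappears entirely.
    static void concise(Optional<String> topic) {
        topic.ifPresent(log::append);
    }
}
```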





[jira] [Commented] (KAFKA-5124) shouldInnerLeftJoin unit test fails

2017-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984973#comment-15984973
 ] 

ASF GitHub Bot commented on KAFKA-5124:
---

GitHub user original-brownbear opened a pull request:

https://github.com/apache/kafka/pull/2921

KAFKA-5124: autocommit reset earliest fixes race condition

Fixes 
`org.apache.kafka.streams.integration.utils.IntegrationTestUtils#readKeyValues` 
potentially starting to `poll` for stream output after the stream finished 
sending the test data and hence missing it when working with `latest` offsets.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/original-brownbear/kafka KAFKA-5124

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2921.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2921


commit bdd06b97c57ec06714f1f654d5b2f2621143224f
Author: Armin Braun 
Date:   2017-04-26T15:10:44Z

KAFKA-5124: autocommit reset earliest fixes race condition




> shouldInnerLeftJoin unit test fails
> ---
>
> Key: KAFKA-5124
> URL: https://issues.apache.org/jira/browse/KAFKA-5124
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Armin Braun
> Fix For: 0.11.0.0
>
>
> Unit test on trunk gives occasional failure:
> org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
> shouldInnerLeftJoin FAILED
> java.lang.AssertionError: Condition not met within timeout 3. 
> Expecting 1 records from topic output- while only received 0: []
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:265)
> at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:207)
> at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:176)
> at 
> org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest.verifyKTableKTableJoin(KTableKTableJoinIntegrationTest.java:222)
> at 
> org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest.shouldInnerLeftJoin(KTableKTableJoinIntegrationTest.java:143)





[jira] [Commented] (KAFKA-5108) Add support for reading PID snapshot files to DumpLogSegments

2017-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985216#comment-15985216
 ] 

ASF GitHub Bot commented on KAFKA-5108:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/2922

KAFKA-5108: Add support for reading PID snapshot files to DumpLogSegments



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-5108

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2922.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2922


commit 467bb88ce7cbbf2aa89df4d222730a8a310600da
Author: Jason Gustafson 
Date:   2017-04-26T17:32:44Z

KAFKA-5108: Add support for reading PID snapshot files to DumpLogSegments




> Add support for reading PID snapshot files to DumpLogSegments
> -
>
> Key: KAFKA-5108
> URL: https://issues.apache.org/jira/browse/KAFKA-5108
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.11.0.0
>
>
> It is useful to be able to use the DumpLogSegments utility to read the PID 
> snapshot files introduced in KIP-98 for debugging purposes.





[jira] [Commented] (KAFKA-5124) shouldInnerLeftJoin unit test fails

2017-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985422#comment-15985422
 ] 

ASF GitHub Bot commented on KAFKA-5124:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2921







[jira] [Commented] (KAFKA-4994) Fix findbugs warning about OffsetStorageWriter#currentFlushId

2017-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985434#comment-15985434
 ] 

ASF GitHub Bot commented on KAFKA-4994:
---

Github user johnma14 closed the pull request at:

https://github.com/apache/kafka/pull/2914







[jira] [Commented] (KAFKA-4994) Fix findbugs warning about OffsetStorageWriter#currentFlushId

2017-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985435#comment-15985435
 ] 

ASF GitHub Bot commented on KAFKA-4994:
---

GitHub user johnma14 reopened a pull request:

https://github.com/apache/kafka/pull/2914

KAFKA-4994: Fix findbug warnings about OffsetStorageWriter

OffsetStorageWriter is not a thread-safe class and should be accessed
only from a Task's processing thread. The WorkerSourceTask class calls
the different methods (offset, beginFlush, cancelFlush, handleFinishWrite)
within a synchronized block. Hence the method definitions in
OffsetStorageWriter.java do not need to be declared synchronized
as well.

In the OffsetStorageWriter.java class, the doFlush() method is not explicitly
synchronized like the other methods in this class. Hence this can lead to
inconsistent synchronization of variables like currentFlushId and toFlush,
which are set in the synchronized methods within this class.
- 
https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/storage/OffsetStorageWriter.java
- 
https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java#L295


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/johnma14/kafka bug/kafka-4994

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2914.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2914


commit d481c10818b59e67eaac0aa598e845773b07e51d
Author: Mariam John 
Date:   2017-04-25T20:47:04Z

KAFKA-4994: Fix findbug warnings about OffsetStorageWriter

OffsetStorageWriter is not a thread-safe class and should be accessed
only from a Task's processing thread. The WorkerSourceTask class calls
the different methods (offset, beginFlush, cancelFlush, handleFinishWrite)
within a synchronized block. Hence the method definitions in
OffsetStorageWriter.java do not need to be declared synchronized
as well.









[jira] [Commented] (KAFKA-5108) Add support for reading PID snapshot files to DumpLogSegments

2017-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985518#comment-15985518
 ] 

ASF GitHub Bot commented on KAFKA-5108:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2922







[jira] [Commented] (KAFKA-5059) Implement Transactional Coordinator

2017-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985572#comment-15985572
 ] 

ASF GitHub Bot commented on KAFKA-5059:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2849


> Implement Transactional Coordinator
> ---
>
> Key: KAFKA-5059
> URL: https://issues.apache.org/jira/browse/KAFKA-5059
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Reporter: Damian Guy
>Assignee: Damian Guy
>
> This covers the implementation of the transaction coordinator to support 
> transactions, as described in KIP-98: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+Exactly+Once+Delivery+and+Transactional+Messaging





[jira] [Commented] (KAFKA-3248) AdminClient Blocks Forever in send Method

2017-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985884#comment-15985884
 ] 

ASF GitHub Bot commented on KAFKA-3248:
---

Github user WarrenGreen closed the pull request at:

https://github.com/apache/kafka/pull/946


> AdminClient Blocks Forever in send Method
> -
>
> Key: KAFKA-3248
> URL: https://issues.apache.org/jira/browse/KAFKA-3248
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.9.0.0
>Reporter: John Tylwalk
>Assignee: Warren Green
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> AdminClient will block forever when performing operations involving the 
> {{send()}} method, due to usage of 
> {{ConsumerNetworkClient.poll(RequestFuture)}} - which blocks indefinitely.
> Suggested fix is to use {{ConsumerNetworkClient.poll(RequestFuture, long 
> timeout)}} in {{AdminClient.send()}}
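The suggested fix boils down to replacing an unbounded wait with a bounded one. A minimal sketch of the bounded-wait pattern follows, using hypothetical names rather than the actual ConsumerNetworkClient/AdminClient API:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: waiting on a future-like completion handle with a
// timeout instead of blocking forever, in the spirit of the suggested
// poll(RequestFuture, timeout) fix. Not the real AdminClient code.
public class BoundedWait {
    // Returns the result if the request completed within timeoutMs,
    // otherwise null, so the caller can fail fast instead of hanging.
    public static <T> T awaitResult(CountDownLatch done, T result, long timeoutMs) {
        try {
            return done.await(timeoutMs, TimeUnit.MILLISECONDS) ? result : null;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```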





[jira] [Commented] (KAFKA-5136) Move coordinatorEpoch from WriteTxnMarkerRequest to TxnMarkerEntry

2017-04-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15986772#comment-15986772
 ] 

ASF GitHub Bot commented on KAFKA-5136:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2925

KAFKA-5136: move coordinatorEpoch from WriteTxnMarkerRequest to 
TxnMarkerEntry

Moving the coordinatorEpoch from WriteTxnMarkerRequest to TxnMarkerEntry 
will generate fewer broker send requests

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka tc-write-txn-request-follow-up

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2925.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2925


commit 2f8d54c8207311aed57a20b46a03c272cc62020b
Author: Damian Guy 
Date:   2017-04-27T14:58:33Z

move coordinatorEpoch from WriteTxnMarkerRequest to TxnMarkerEntry




> Move coordinatorEpoch from WriteTxnMarkerRequest to TxnMarkerEntry
> --
>
> Key: KAFKA-5136
> URL: https://issues.apache.org/jira/browse/KAFKA-5136
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Damian Guy
>Assignee: Damian Guy
>
> It makes more sense for the coordinatorEpoch to be on the TxnMarkerEntry 
> rather than the WriteTxnMarkerRequest. It will generate fewer requests per 
> broker.





[jira] [Commented] (KAFKA-5028) convert kafka controller to a single-threaded event queue model

2017-04-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15986858#comment-15986858
 ] 

ASF GitHub Bot commented on KAFKA-5028:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2816


> convert kafka controller to a single-threaded event queue model
> ---
>
> Key: KAFKA-5028
> URL: https://issues.apache.org/jira/browse/KAFKA-5028
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Onur Karaman
>Assignee: Onur Karaman
> Fix For: 0.11.0.0
>
>
> The goal of this ticket is to improve controller maintainability by 
> simplifying the controller's concurrency semantics. The controller code has a 
> lot of shared state between several threads using several concurrency 
> primitives. This makes the code hard to reason about.
> This ticket proposes we convert the controller to a single-threaded event 
> queue model. We add a new controller thread which processes events held in an 
> event queue. Note that this does not mean we get rid of all threads used by 
> the controller. We merely delegate all work that interacts with controller 
> local state to this single thread. With only a single thread accessing and 
> modifying the controller local state, we no longer need to worry about 
> concurrent access, which means we can get rid of the various concurrency 
> primitives used throughout the controller.
> Performance is expected to match existing behavior since the bulk of the 
> existing controller work today already happens sequentially in the ZkClient’s 
> single ZkEventThread.
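The single-threaded event queue model described above can be sketched as follows; the class is illustrative, with hypothetical names, and uses none of the actual controller code:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative single-threaded event queue: one worker thread drains the
// queue, so event handlers never run concurrently and the state they
// touch needs no concurrency primitives.
public class EventQueueSketch {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private final Thread worker;
    private volatile boolean running = true;

    public EventQueueSketch() {
        worker = new Thread(() -> {
            // Keep draining until shutdown is requested and the queue is empty.
            while (running || !queue.isEmpty()) {
                try {
                    queue.take().run();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        worker.start();
    }

    public void enqueue(Runnable event) {
        queue.offer(event);
    }

    public void shutdown() {
        running = false;
        queue.offer(() -> { }); // no-op event wakes a blocked worker so it can exit
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The no-op "poison" event on shutdown is one common way to unblock a worker parked in take(); all previously enqueued events are still processed before the thread exits.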





[jira] [Commented] (KAFKA-5131) WriteTxnMarkers and complete commit/abort on partition immigration

2017-04-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15987074#comment-15987074
 ] 

ASF GitHub Bot commented on KAFKA-5131:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2926

KAFKA-5131: WriteTxnMarkers and complete commit/abort on partition 
immigration

write txn markers and complete the commit/abort for transactions in 
PrepareXX state during partition immigration.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-5059

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2926.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2926


commit b8b986110482a08da46999c1fb167dea0d363e07
Author: Damian Guy 
Date:   2017-04-27T15:56:26Z

write txn markers for transactions in PrepareXX state during partition 
immigration




> WriteTxnMarkers and complete commit/abort on partition immigration
> --
>
> Key: KAFKA-5131
> URL: https://issues.apache.org/jira/browse/KAFKA-5131
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Damian Guy
>Assignee: Damian Guy
>
> When partitions immigrate we need to write the txn markers and complete the 
> commit/abort for any transactions in a PrepareXX state





[jira] [Commented] (KAFKA-5107) remove preferred replica election state from ControllerContext

2017-04-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15987184#comment-15987184
 ] 

ASF GitHub Bot commented on KAFKA-5107:
---

GitHub user onurkaraman opened a pull request:

https://github.com/apache/kafka/pull/2927

KAFKA-5107: remove preferred replica election state from ControllerContext

KAFKA-5028 moves the controller to a single-threaded model, so we would no 
longer have work interleaved with preferred replica leader election, meaning 
we don't need to keep its state.

This patch additionally addresses a bug from KAFKA-5028 where it made 
onPreferredReplicaElection keep the line calling 
topicDeletionManager.markTopicIneligibleForDeletion but removes the line 
calling topicDeletionManager.resumeDeletionForTopics

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/onurkaraman/kafka KAFKA-5107

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2927.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2927


commit 906f05c6ab9e8f7806cdfb3418b1dabfe0a4f5f1
Author: Onur Karaman 
Date:   2017-04-27T17:44:02Z

KAFKA-5107: remove preferred replica election state from ControllerContext

KAFKA-5028 moves the controller to a single-threaded model, so we would no 
longer have work interleaved with preferred replica leader election, meaning 
we don't need to keep its state.

This patch additionally addresses a bug from KAFKA-5028 where it made 
onPreferredReplicaElection keep the line calling 
topicDeletionManager.markTopicIneligibleForDeletion but removes the line 
calling topicDeletionManager.resumeDeletionForTopics




> remove preferred replica election state from ControllerContext
> --
>
> Key: KAFKA-5107
> URL: https://issues.apache.org/jira/browse/KAFKA-5107
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>
> KAFKA-5028 moves the controller to a single-threaded model, so we would no 
> longer have work interleaved with preferred replica leader election, 
> meaning we don't need to keep its state.





[jira] [Commented] (KAFKA-4818) Implement transactional clients

2017-04-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15987785#comment-15987785
 ] 

ASF GitHub Bot commented on KAFKA-4818:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2840


> Implement transactional clients
> ---
>
> Key: KAFKA-4818
> URL: https://issues.apache.org/jira/browse/KAFKA-4818
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Apurva Mehta
> Fix For: 0.11.0.0
>
>
> This covers the implementation of the producer and consumer to support 
> transactions, as described in KIP-98: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+Exactly+Once+Delivery+and+Transactional+Messaging





[jira] [Commented] (KAFKA-5086) Update topic expiry time in Metadata every time the topic metadata is requested

2017-04-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15987784#comment-15987784
 ] 

ASF GitHub Bot commented on KAFKA-5086:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2869


> Update topic expiry time in Metadata every time the topic metadata is 
> requested
> ---
>
> Key: KAFKA-5086
> URL: https://issues.apache.org/jira/browse/KAFKA-5086
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
> Fix For: 0.11.0.0
>
>
> In the current implementation, KafkaProducer.waitOnMetadata() first resets the 
> expiry time of the topic before repeatedly sending TopicMetadataRequest 
> and waiting for metadata response. However, if the metadata of the topic is 
> not available within Metadata.TOPIC_EXPIRY_MS, which is set to 5 minutes, 
> then the topic will be expired and removed from Metadata.topics. The 
> TopicMetadataRequest will no longer include the topic and the KafkaProducer 
> will never receive the metadata of this topic. It will enter an infinite loop 
> of sending TopicMetadataRequest and waiting for metadata response.
> This problem can be fixed by updating topic expiry time every time the topic 
> metadata is requested.
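The fix described above can be illustrated with a minimal sketch. The class and field names below (`TopicExpiryBook`, `expiryByTopic`) are illustrative, not the actual `Metadata` internals; the point is that refreshing the deadline on every metadata request keeps an awaited topic from being evicted mid-wait.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Sketch of the topic-expiry bookkeeping: deadlines are refreshed on every
// request, so a topic the producer is still waiting on is never expired.
class TopicExpiryBook {
    static final long TOPIC_EXPIRY_MS = 5 * 60 * 1000L; // 5 minutes, per the description
    private final Map<String, Long> expiryByTopic = new HashMap<>();

    // Called on every metadata request for the topic. The fix is to refresh
    // the deadline here each time, not only when the topic is first added.
    void requestMetadata(String topic, long nowMs) {
        expiryByTopic.put(topic, nowMs + TOPIC_EXPIRY_MS);
    }

    // Drop topics whose deadline has passed; a topic that keeps re-requesting
    // metadata keeps pushing its deadline forward and survives.
    void expireStale(long nowMs) {
        Iterator<Map.Entry<String, Long>> it = expiryByTopic.entrySet().iterator();
        while (it.hasNext()) {
            if (it.next().getValue() <= nowMs) {
                it.remove();
            }
        }
    }

    boolean contains(String topic) {
        return expiryByTopic.containsKey(topic);
    }
}
```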



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5119) Transient test failure SocketServerTest.testMetricCollectionAfterShutdown

2017-04-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15987849#comment-15987849
 ] 

ASF GitHub Bot commented on KAFKA-5119:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2915


> Transient test failure SocketServerTest.testMetricCollectionAfterShutdown
> -
>
> Key: KAFKA-5119
> URL: https://issues.apache.org/jira/browse/KAFKA-5119
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Jason Gustafson
>
> From a recent build:
> {code}
> 20:04:15 kafka.network.SocketServerTest > testMetricCollectionAfterShutdown 
> FAILED
> 20:04:15 java.lang.AssertionError: expected:<0.0> but 
> was:<1.603886948862125>
> 20:04:15 at org.junit.Assert.fail(Assert.java:88)
> 20:04:15 at org.junit.Assert.failNotEquals(Assert.java:834)
> 20:04:15 at org.junit.Assert.assertEquals(Assert.java:553)
> 20:04:15 at org.junit.Assert.assertEquals(Assert.java:683)
> 20:04:15 at 
> kafka.network.SocketServerTest.testMetricCollectionAfterShutdown(SocketServerTest.scala:414)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5101) Remove KafkaController's incrementControllerEpoch method parameter

2017-04-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15987859#comment-15987859
 ] 

ASF GitHub Bot commented on KAFKA-5101:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2886


> Remove KafkaController's incrementControllerEpoch method parameter 
> ---
>
> Key: KAFKA-5101
> URL: https://issues.apache.org/jira/browse/KAFKA-5101
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>Priority: Trivial
> Fix For: 0.11.0.0
>
>
> KAFKA-4814 replaced the zkClient.createPersistent method with 
> zkUtils.createPersistentPath so the zkClient parameter is no longer required.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4763) Handle disk failure for JBOD (KIP-112)

2017-04-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15987865#comment-15987865
 ] 

ASF GitHub Bot commented on KAFKA-4763:
---

GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/2929

KAFKA-4763; Handle disk failure for JBOD (KIP-112)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-4763

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2929.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2929


commit ab6302b82b6245d1bbf8d77d836e362b95750ca4
Author: Dong Lin 
Date:   2017-04-03T00:46:34Z

KAFKA-4763; Handle disk failure for JBOD (KIP-112)




> Handle disk failure for JBOD (KIP-112)
> --
>
> Key: KAFKA-4763
> URL: https://issues.apache.org/jira/browse/KAFKA-4763
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dong Lin
>Assignee: Dong Lin
>
> See 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-112%3A+Handle+disk+failure+for+JBOD
>  for motivation and design.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5005) JoinIntegrationTest fails occasionally

2017-04-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15987905#comment-15987905
 ] 

ASF GitHub Bot commented on KAFKA-5005:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2920


> JoinIntegrationTest fails occasionally
> --
>
> Key: KAFKA-5005
> URL: https://issues.apache.org/jira/browse/KAFKA-5005
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Matthias J. Sax
>Assignee: Armin Braun
> Fix For: 0.11.0.0
>
>
> testLeftKStreamKStream:
> {noformat}
> java.lang.AssertionError: Condition not met within timeout 3. Expecting 1 
> records from topic outputTopic while only received 0: []
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:265)
> at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinValuesRecordsReceived(IntegrationTestUtils.java:247)
> at 
> org.apache.kafka.streams.integration.JoinIntegrationTest.checkResult(JoinIntegrationTest.java:170)
> at 
> org.apache.kafka.streams.integration.JoinIntegrationTest.runTest(JoinIntegrationTest.java:192)
> at 
> org.apache.kafka.streams.integration.JoinIntegrationTest.testLeftKStreamKStream(JoinIntegrationTest.java:250)
> {noformat}
> testInnerKStreamKTable:
> {noformat}
> java.lang.AssertionError: Condition not met within timeout 3. Expecting 1 
> records from topic outputTopic while only received 0: []
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:265)
>   at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinValuesRecordsReceived(IntegrationTestUtils.java:248)
>   at 
> org.apache.kafka.streams.integration.JoinIntegrationTest.checkResult(JoinIntegrationTest.java:171)
>   at 
> org.apache.kafka.streams.integration.JoinIntegrationTest.runTest(JoinIntegrationTest.java:193)
>   at 
> org.apache.kafka.streams.integration.JoinIntegrationTest.testInnerKStreamKTable(JoinIntegrationTest.java:305)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5111) Improve internal Task APIs

2017-04-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15987931#comment-15987931
 ] 

ASF GitHub Bot commented on KAFKA-5111:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2917


> Improve internal Task APIs
> --
>
> Key: KAFKA-5111
> URL: https://issues.apache.org/jira/browse/KAFKA-5111
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>
> Currently, the internal interface for tasks is not very clean, and it's hard 
> to reason about the control flow when tasks get closed, suspended, resumed, 
> etc. This makes exception handling particularly hard.
> We want to refactor this part of the code to get a clean control flow and 
> interface.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4379) Remove caching of dirty and removed keys from StoreChangeLogger

2017-04-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15987946#comment-15987946
 ] 

ASF GitHub Bot commented on KAFKA-4379:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2908


> Remove caching of dirty and removed keys from StoreChangeLogger
> ---
>
> Key: KAFKA-4379
> URL: https://issues.apache.org/jira/browse/KAFKA-4379
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Minor
> Fix For: 0.10.1.1, 0.10.2.0
>
>
> The StoreChangeLogger currently keeps a cache of dirty and removed keys and 
> will batch the changelog records such that we don't send a record for each 
> update. However, with KIP-63 this is unnecessary as the batching and 
> de-duping is done by the caching layer. Further, the StoreChangeLogger relies 
> on context.timestamp(), which is likely to be incorrect when caching is 
> enabled.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5103) Refactor AdminUtils to use zkUtils methods instead of zkUtils.zkClient

2017-04-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15987984#comment-15987984
 ] 

ASF GitHub Bot commented on KAFKA-5103:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2888


> Refactor AdminUtils to use zkUtils methods instead of zkUtils.zkClient
> -
>
> Key: KAFKA-5103
> URL: https://issues.apache.org/jira/browse/KAFKA-5103
> Project: Kafka
>  Issue Type: Sub-task
>  Components: admin
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>
> Replace zkUtils.zkClient.createPersistentSequential(seqNode, content) with 
> zkUtils.createSequentialPersistentPath(seqNode, content).
> The zkClient variant does not respect ACLs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5140) Flaky ResetIntegrationTest

2017-04-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988155#comment-15988155
 ] 

ASF GitHub Bot commented on KAFKA-5140:
---

GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/2931

KAFKA-5140: Flaky ResetIntegrationTest



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka 
kafka-5140-flaky-reset-integration-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2931.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2931


commit 62b9bd6e72f28c78f6cd0f7f5d72ad38e97065e6
Author: Matthias J. Sax 
Date:   2017-04-28T03:57:26Z

KAFKA-5140: Flaky ResetIntegrationTest




> Flaky ResetIntegrationTest
> --
>
> Key: KAFKA-5140
> URL: https://issues.apache.org/jira/browse/KAFKA-5140
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 0.10.2.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>
> {noformat}
> org.apache.kafka.streams.integration.ResetIntegrationTest > 
> testReprocessingFromScratchAfterResetWithIntermediateUserTopic FAILED
> java.lang.AssertionError: 
> Expected: <[KeyValue(2986681642095, 1), KeyValue(2986681642055, 1), 
> KeyValue(2986681642075, 1), KeyValue(2986681642035, 1), 
> KeyValue(2986681642095, 1), KeyValue(2986681642055, 1), 
> KeyValue(2986681642115, 1), KeyValue(2986681642075, 1), 
> KeyValue(2986681642075, 2), KeyValue(2986681642095, 2), 
> KeyValue(2986681642115, 1), KeyValue(2986681642135, 1), 
> KeyValue(2986681642095, 2), KeyValue(2986681642115, 2), 
> KeyValue(2986681642155, 1), KeyValue(2986681642135, 1), 
> KeyValue(2986681642115, 2), KeyValue(2986681642135, 2), 
> KeyValue(2986681642155, 1), KeyValue(2986681642175, 1), 
> KeyValue(2986681642135, 2), KeyValue(2986681642155, 2), 
> KeyValue(2986681642175, 1), KeyValue(2986681642195, 1), 
> KeyValue(2986681642135, 3), KeyValue(2986681642155, 2), 
> KeyValue(2986681642175, 2), KeyValue(2986681642195, 1), 
> KeyValue(2986681642155, 3), KeyValue(2986681642175, 2), 
> KeyValue(2986681642195, 2), KeyValue(2986681642155, 3), 
> KeyValue(2986681642175, 3), KeyValue(2986681642195, 2), 
> KeyValue(2986681642155, 4), KeyValue(2986681642175, 3), 
> KeyValue(2986681642195, 3)]>
>  but: was <[KeyValue(2986681642095, 1), KeyValue(2986681642055, 1), 
> KeyValue(2986681642075, 1), KeyValue(2986681642035, 1), 
> KeyValue(2986681642095, 1), KeyValue(2986681642055, 1), 
> KeyValue(2986681642115, 1), KeyValue(2986681642075, 1), 
> KeyValue(2986681642075, 2), KeyValue(2986681642095, 2), 
> KeyValue(2986681642115, 1), KeyValue(2986681642135, 1), 
> KeyValue(2986681642095, 2), KeyValue(2986681642115, 2), 
> KeyValue(2986681642155, 1), KeyValue(2986681642135, 1), 
> KeyValue(2986681642115, 2), KeyValue(2986681642135, 2), 
> KeyValue(2986681642155, 1), KeyValue(2986681642175, 1), 
> KeyValue(2986681642135, 2), KeyValue(2986681642155, 2), 
> KeyValue(2986681642175, 1), KeyValue(2986681642195, 1), 
> KeyValue(2986681642135, 3), KeyValue(2986681642155, 2), 
> KeyValue(2986681642175, 2), KeyValue(2986681642195, 1), 
> KeyValue(2986681642155, 3), KeyValue(2986681642175, 2), 
> KeyValue(2986681642195, 2), KeyValue(2986681642155, 3)]>
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
> at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterResetWithIntermediateUserTopic(ResetIntegrationTest.java:190)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5137) Controlled shutdown timeout message improvement

2017-04-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988230#comment-15988230
 ] 

ASF GitHub Bot commented on KAFKA-5137:
---

GitHub user umesh9794 opened a pull request:

https://github.com/apache/kafka/pull/2932

KAFKA-5137 : Controlled shutdown timeout message improvement

This PR improves the warning message by adding correct config details. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/umesh9794/kafka local

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2932.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2932


commit f2b35a8e56d83301a42a09fef61a9ba752acce70
Author: umesh9794 
Date:   2017-04-28T05:14:04Z

KAFKA-5137 : Controlled shutdown timeout message improvement




> Controlled shutdown timeout message improvement
> ---
>
> Key: KAFKA-5137
> URL: https://issues.apache.org/jira/browse/KAFKA-5137
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.2.0
>Reporter: Dustin Cote
>Priority: Minor
>  Labels: newbie
>
> Currently if you fail during controlled shutdown, you can get a message that 
> says the socket.timeout.ms has expired. This config actually doesn't exist on 
> the broker. Instead, we should explicitly say if we've hit the 
> controller.socket.timeout.ms or the request.timeout.ms as it's confusing to 
> take action given the current message. I believe the relevant code is here:
> https://github.com/apache/kafka/blob/0.10.2/core/src/main/scala/kafka/server/KafkaServer.scala#L428-L454
> I'm also not sure if there's another timeout that could be hit here or 
> another reason why IOException might be thrown. In the least we should call 
> out those two configs instead of the non-existent one but if we can direct to 
> the proper one that would be even better.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4986) Add producer per task support

2017-04-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988288#comment-15988288
 ] 

ASF GitHub Bot commented on KAFKA-4986:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2854


> Add producer per task support
> -
>
> Key: KAFKA-4986
> URL: https://issues.apache.org/jira/browse/KAFKA-4986
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>
> Add new config parameter {{processing_guarantee}} and enable "producer per 
> task" initialization if the new config is set to {{exactly_once}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5097) KafkaConsumer.poll throws IllegalStateException

2017-04-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988301#comment-15988301
 ] 

ASF GitHub Bot commented on KAFKA-5097:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2887


> KafkaConsumer.poll throws IllegalStateException
> ---
>
> Key: KAFKA-5097
> URL: https://issues.apache.org/jira/browse/KAFKA-5097
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Eno Thereska
>Priority: Blocker
> Fix For: 0.10.2.1
>
>
> The backport of KAFKA-5075 to 0.10.2 seems to have introduced a regression: 
> If a fetch returns more data than `max.poll.records` and there is a rebalance 
> or the user changes the assignment/subscription after a `poll` that doesn't 
> return all the fetched data, the next call will throw an 
> `IllegalStateException`. More discussion in the following PR that includes a 
> fix:
> https://github.com/apache/kafka/pull/2876/files#r112413428
> This issue caused a Streams system test to fail, see KAFKA-4755.
> We should fix the regression before releasing 0.10.2.1.
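The regression described above is buffered fetch data outliving an assignment change. A toy model of the failure mode and its guard is sketched below; the class and method names (`BufferedFetcher`, `assign`, `addFetched`) are hypothetical and do not reflect the actual KafkaConsumer internals, where the fix lives in the fetcher/subscription code.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Toy model: records fetched beyond max.poll.records stay buffered between
// polls; if the assignment changes, stale buffered records must be discarded
// rather than letting the next poll() trip over them.
class BufferedFetcher {
    private final Deque<String> buffered = new ArrayDeque<>(); // "partition:offset" records
    private final Set<String> assignedPartitions = new HashSet<>();

    void assign(Set<String> partitions) {
        assignedPartitions.clear();
        assignedPartitions.addAll(partitions);
        // The guard: drop buffered records for partitions we no longer own.
        buffered.removeIf(r -> !assignedPartitions.contains(r.split(":")[0]));
    }

    void addFetched(String partition, long offset) {
        buffered.add(partition + ":" + offset);
    }

    int bufferedCount() {
        return buffered.size();
    }
}
```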



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5059) Implement Transactional Coordinator

2017-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988540#comment-15988540
 ] 

ASF GitHub Bot commented on KAFKA-5059:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2934

KAFKA-5059: [Follow Up] remove broken locking. Fix handleAddPartitions

remove broken locking. fix handleAddPartitions after complete commit/abort
respond with CONCURRENT_TRANSACTIONS in initPid

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka follow-up-tc-work

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2934.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2934


commit 4986eef8468094a809ca1334486629043ffa34f2
Author: Damian Guy 
Date:   2017-04-28T09:19:03Z

remove broken locking. fix handleAddPartitions after complete commit/abort
respond with CONCURRENT_TRANSACTIONS in initPid




> Implement Transactional Coordinator
> ---
>
> Key: KAFKA-5059
> URL: https://issues.apache.org/jira/browse/KAFKA-5059
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Reporter: Damian Guy
>Assignee: Damian Guy
>
> This covers the implementation of the transaction coordinator to support 
> transactions, as described in KIP-98: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+Exactly+Once+Delivery+and+Transactional+Messaging



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-3761) Controller has RunningAsBroker instead of RunningAsController state

2017-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988745#comment-15988745
 ] 

ASF GitHub Bot commented on KAFKA-3761:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/2935

MINOR: onControllerResignation should be invoked if triggerControllerMove 
is called

This fixes a transient test failure due to a NPE in 
ControllerFailoverTest.testMetadataUpdate:

```text
Caused by: java.lang.NullPointerException
at 
kafka.controller.ControllerBrokerRequestBatch.addUpdateMetadataRequestForBrokers(ControllerChannelManager.scala:338)
at 
kafka.controller.KafkaController.sendUpdateMetadataRequest(KafkaController.scala:975)
at 
kafka.controller.ControllerFailoverTest.testMetadataUpdate(ControllerFailoverTest.scala:141)
```

The underlying issue is that setting `activeControllerId.set(-1)` in 
`triggerControllerMove`
causes `Reelect` not to invoke `onControllerResignation`. Among other 
things, this
causes an IllegalStateException to be thrown when `KafkaScheduler.startup` 
is invoked
for the second time without the corresponding `shutdown`.

I also updated the test so that we can trigger this issue deterministically 
instead of
transiently.

Finally, I included a few clean-ups:
1. No longer update the broker state in `onControllerFailover`. This is no 
longer needed
since we removed the `RunningAsController` state (KAFKA-3761).
2. Trivial clean-ups in KafkaController
3. Removed unused parameter in `ZkUtils.getPartitionLeaderAndIsrForTopics`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
on-controller-resignation-if-trigger-controller-move

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2935.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2935


commit c01d29b3a95e7ddffb91550397a5505d9711d5c8
Author: Ismael Juma 
Date:   2017-04-28T12:30:03Z

MINOR: onControllerResignation should be invoked if triggerControllerMove 
is called

commit f28de697f7d109893537652e8b8216c4d06677a7
Author: Ismael Juma 
Date:   2017-04-28T12:30:47Z

Remove remnant broker state update in `onControllerFailover`

commit 898b88b59cffbfdb7df864d0b070ed7a4960601e
Author: Ismael Juma 
Date:   2017-04-28T12:31:15Z

A few trivial clean-ups in KafkaController

commit 241b9890ab47b4670e61f3f9d3b51c6aa92a8a94
Author: Ismael Juma 
Date:   2017-04-28T12:31:38Z

Remove unused parameter in `getPartitionLeaderAndIsrForTopics`




> Controller has RunningAsBroker instead of RunningAsController state
> ---
>
> Key: KAFKA-3761
> URL: https://issues.apache.org/jira/browse/KAFKA-3761
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Roger Hoover
> Fix For: 0.10.1.0
>
>
> In `KafkaServer.start`, we start `KafkaController`:
> {code}
> /* start kafka controller */
> kafkaController = new KafkaController(config, zkUtils, brokerState, 
> kafkaMetricsTime, metrics, threadNamePrefix)
> kafkaController.startup()
> {code}
> Which sets the state to `RunningAsController` in 
> `KafkaController.onControllerFailover`:
> `brokerState.newState(RunningAsController)`
> And this later gets set to `RunningAsBroker`.
> This doesn't match the diagram in `BrokerStates`. [~junrao] suggested that we 
> should start the controller after we register the broker in ZK, but this 
> seems tricky as we need the controller in `KafkaApis`.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5096) Only log invalid user configs and overwrite with correct one

2017-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989348#comment-15989348
 ] 

ASF GitHub Bot commented on KAFKA-5096:
---

GitHub user johnma14 opened a pull request:

https://github.com/apache/kafka/pull/2938

KAFKA-5096: Log invalid user configs and use defaults

Kafka Streams does not allow users to modify some consumer configurations.
If the user modifies such a property, currently an exception is thrown.
The following changes were made in this patch:
- Defined a new array 'NON_CONFIGURABLE_CONSUMER_CONFIGS' to hold the names
  of the configuration parameters that are not allowed to be modified. 
Currently, this
  contains just 1 parameter - enable_auto_commit. When the 'exactly once'
  feature is implemented ( KAFKA-4923), more parameters can be added to this
  array.
- Defined a new method 'checkIfUnexpectedUserSpecifiedConsumerConfig' to
  check if user overwrote the values of any of the non configurable 
configuration
  parameters. If so, then log a warning message and reset the default values
- Updated the javadoc to include the configuration parameters that cannot be
  modified by users.
- Updated the corresponding tests in StreamsConfigTest.java to reflect the 
changes
  made in StreamsConfig.java

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/johnma14/kafka bug/kafka-5096

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2938.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2938


commit 1194fab4c3529cb3745548711a39cf9ec8753f04
Author: Mariam John 
Date:   2017-04-28T18:32:54Z

KAFKA-5096: Log invalid user configs and use defaults

Kafka Streams does not allow users to modify some consumer configurations.
Currently, it does not allow modifying the value of 'enable_auto_commit'.
If the user modifies this property, currently an exception is thrown.
The following changes were made in this patch:
- Defined a new array 'NON_CONFIGURABLE_CONSUMER_CONFIGS' to hold the names
  of the configuration parameters that are not allowed to be modified
- Defined a new method 'checkIfUnexpectedUserSpecifiedConsumerConfig' to
  check if user overwrote the values of any of the non configurable 
configuration
  parameters. If so, then log a warning message and reset the default values
- Updated the javadoc to include the configuration parameters that cannot be
  modified by users.
- Updated the corresponding tests in StreamsConfigTest.java to reflect the 
changes
  made in StreamsConfig.java




> Only log invalid user configs and overwrite with correct one
> 
>
> Key: KAFKA-5096
> URL: https://issues.apache.org/jira/browse/KAFKA-5096
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Mariam John
>Priority: Minor
>  Labels: beginner, newbie
>
> Streams does not allow overwriting some config parameters (e.g., 
> {{enable.auto.commit}}). Currently, we throw an exception, but this is 
> actually not required, as Streams can just ignore/overwrite the user-provided 
> value.
> Thus, instead of throwing, we should just log a WARN message and overwrite 
> the config with the values that suit Streams. (At the moment it's only one 
> parameter, {{enable.auto.commit}}, but with exactly-once it's going to be 
> more; cf. KAFKA-4923.) Thus, the scope of this ticket depends on when it will 
> be implemented (i.e., before or after KAFKA-4923).
> This ticket should also include JavaDoc updates that explain what parameters 
> cannot be specified by the user.
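The proposed warn-and-overwrite behavior can be sketched as follows. The class and method names (`ConfigSanitizer`, `sanitize`) are hypothetical, not the actual StreamsConfig code; the shape matches the ticket: detect a user-supplied value for a non-configurable parameter, log a warning, and force the value Streams requires.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: instead of throwing when the user sets a config Streams must
// control, log a warning and overwrite it with the required value.
class ConfigSanitizer {
    static final Map<String, String> NON_CONFIGURABLE = new HashMap<>();
    static {
        // Streams manages offset commits itself, so auto-commit must stay off.
        NON_CONFIGURABLE.put("enable.auto.commit", "false");
    }

    static Map<String, String> sanitize(Map<String, String> userConfig) {
        Map<String, String> result = new HashMap<>(userConfig);
        for (Map.Entry<String, String> e : NON_CONFIGURABLE.entrySet()) {
            String userValue = result.get(e.getKey());
            if (userValue != null && !userValue.equals(e.getValue())) {
                // WARN instead of throwing, then force the required value.
                System.out.println("WARN: unexpected user-specified config "
                        + e.getKey() + "=" + userValue
                        + "; overwriting with " + e.getValue());
            }
            result.put(e.getKey(), e.getValue());
        }
        return result;
    }
}
```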



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-3266) Implement KIP-4 RPCs and APIs for creating, altering, and listing ACLs

2017-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989602#comment-15989602
 ] 

ASF GitHub Bot commented on KAFKA-3266:
---

GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/2941

KAFKA-3266: Implement KIP-140 RPCs and APIs for creating, altering, and 
listing ACLs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-3266

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2941.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2941


commit e5d21aa4b0d779e07d11c8106fc1458c237cbab4
Author: Colin P. Mccabe 
Date:   2017-04-10T20:44:59Z

KAFKA-3265

commit 582e5c149b37d687c34bebd9b48a2d1f93e5
Author: Colin P. Mccabe 
Date:   2017-04-20T17:43:47Z

Split KafkaFuture, change whitespace, use Rule for test timeouts

commit 9aa07bf241538b70224942f0eb98547b9d2a4295
Author: Colin P. Mccabe 
Date:   2017-04-20T17:56:10Z

LegacyAdminClientTest: rename scala class name

commit 2075e09ca095b44f98d1a28a303343b95165fbf3
Author: Colin P. Mccabe 
Date:   2017-04-20T17:56:25Z

Use Utils.closeQuietly, some renames

* Use Utils.closeQuietly
* getDeadlineMs -> calcDeadlineMs
* getTimeoutMsRemainingAsInt -> calcTimeoutMsRemainingAsInt

commit e805a34642f460a0cbdb269460202100727f163e
Author: Colin P. Mccabe 
Date:   2017-04-20T17:59:20Z

metricGrpPrefix adminclient -> admin-client

commit 9f28c4b98f737ecad84e3cbe29806b5484711620
Author: Colin P. Mccabe 
Date:   2017-04-20T18:12:38Z

Add usable metrics and conf, do some renaming

commit b16319c56f87dd953cc5121a8a8aaf9bbd30f0ce
Author: Colin P. Mccabe 
Date:   2017-04-20T18:30:10Z

Errors: avoid reflection

commit 9edf1ed821e28ce59113d73547d89abf4127456d
Author: Colin P. Mccabe 
Date:   2017-04-20T20:44:07Z

Fix checkstyle issues

commit 9b52ed2c2846433cd0507617644150076075b917
Author: Colin P. Mccabe 
Date:   2017-04-21T17:09:24Z

KafkaAdminClientTest#testPrettyPrintException: fix test failure

commit 03c37d6589fb1ceedbd28d3aed8a64798f678f45
Author: Colin P. Mccabe 
Date:   2017-04-21T17:09:42Z

KafkaAdminClient#fail: improve error logging

commit 22b4ebed1217a6455ead8dd045afdb2f3936f8b6
Author: Colin P. Mccabe 
Date:   2017-04-21T17:28:47Z

Fix timeouts and improve logging of request timeouts a bit

commit b6f5030119bd7fb3b1f1cd4728d1e3cb01c64108
Author: Colin P. Mccabe 
Date:   2017-04-25T23:43:09Z

Add ACL requests and responses

commit d1be1f7df170457b649f7358c23dc7a477c321c0
Author: Colin P. Mccabe 
Date:   2017-04-26T20:56:39Z

Add AdminClient API for ACL operations

commit d5f953276e7475d81cae08b4d721e16083669178
Author: Colin P. Mccabe 
Date:   2017-04-26T21:31:03Z

Quiet down checkstyle

commit 3c369b0ccb4017ab31bff420fc8d41f5e418bf81
Author: Colin P. Mccabe 
Date:   2017-04-28T20:27:53Z

Add Broker implementation of CreateAcls, ListAcls, DeleteAcls




> Implement KIP-4 RPCs and APIs for creating, altering, and listing ACLs
> --
>
> Key: KAFKA-3266
> URL: https://issues.apache.org/jira/browse/KAFKA-3266
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4208) Add Record Headers

2017-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989702#comment-15989702
 ] 

ASF GitHub Bot commented on KAFKA-4208:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2772


> Add Record Headers
> --
>
> Key: KAFKA-4208
> URL: https://issues.apache.org/jira/browse/KAFKA-4208
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients, core
>Reporter: Michael Andre Pearce (IG)
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> Currently, headers are not natively supported, unlike in many transport and 
> messaging platforms and standards; this adds support for headers to Kafka.
> This JIRA is related to KIP found here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-82+-+Add+Record+Headers



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

