[jira] [Commented] (KAFKA-3121) KStream DSL API Improvement

2016-01-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124460#comment-15124460
 ] 

ASF GitHub Bot commented on KAFKA-3121:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/839

KAFKA-3121: Refactor KStream Aggregate to be Lambda-able.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K3121s2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/839.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #839


commit bf4c4cb3dbb5b4066d9c3e0ada5b7ffd98eb129a
Author: Guozhang Wang 
Date:   2016-01-14T20:27:58Z

add internal source topic for tracking

commit 1485dff08a76c6ff685b0fe72226ce3b629b1d3c
Author: Guozhang Wang 
Date:   2016-01-14T22:32:08Z

minor fix for this.interSourceTopics

commit 60cafd0885c41f93e408f8d89880187ddec789a1
Author: Guozhang Wang 
Date:   2016-01-15T01:09:00Z

add KStream windowed aggregation

commit 983a626008d987828deabe45d75e26e909032843
Author: Guozhang Wang 
Date:   2016-01-15T01:34:56Z

merge from apache trunk

commit 57051720de4238feb4dc3c505053096042a87d9c
Author: Guozhang Wang 
Date:   2016-01-15T21:38:53Z

v1

commit 4a49205fcab3a05ed1fd05a34c7a9a92794b992d
Author: Guozhang Wang 
Date:   2016-01-15T22:07:17Z

minor fix on HoppingWindows

commit 9b4127e91c3a551fb655155d9b8e0df50132d0b7
Author: Guozhang Wang 
Date:   2016-01-15T22:43:14Z

fix HoppingWindows

commit 9649fe5c8a9b2e900e7746ae7b8745bb65694583
Author: Guozhang Wang 
Date:   2016-01-16T19:00:54Z

add retainDuplicate option in RocksDBWindowStore

commit 8a9ea02ac3f9962416defa79d16069431063eac0
Author: Guozhang Wang 
Date:   2016-01-16T19:06:12Z

minor fixes

commit 4123528cf4695b05235789ebfca3a63e8a832ffa
Author: Guozhang Wang 
Date:   2016-01-18T17:55:02Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K3104

commit 46e8c8d285c0afae6da9ec7437d082060599f3f1
Author: Guozhang Wang 
Date:   2016-01-18T19:15:47Z

add wordcount and pipe jobs

commit 582d3ac24bfe08edb1c567461971cd35c1f75a00
Author: Guozhang Wang 
Date:   2016-01-18T21:53:21Z

merge from trunk

commit 5a002fadfcf760627274ddaa016deeaed5a3199f
Author: Guozhang Wang 
Date:   2016-01-19T00:06:34Z

1. WallClockTimestampExtractor as default; 2. remove windowMS config; 3. 
override state dir with jobId prefix;

commit 7425673e523c42806b29a364564a747443712a53
Author: Guozhang Wang 
Date:   2016-01-19T01:26:11Z

Add PageViewJob

commit ca04ba8d18674c521ad67872562a7671cb0e2c0d
Author: Guozhang Wang 
Date:   2016-01-19T06:23:05Z

minor changes on topic names

commit 563cc546b3a0dd16d586d2df33c37d2c5a5bfb18
Author: Guozhang Wang 
Date:   2016-01-19T21:30:11Z

change config importance levels

commit 4218904505363e61bb4c6b60dc5b13badfd39697
Author: Guozhang Wang 
Date:   2016-01-21T00:11:34Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K3066

commit 26fb5f3f5a8c9b304c5b1e61778c6bc1d9d5fccb
Author: Guozhang Wang 
Date:   2016-01-21T06:43:04Z

demo examples v1

commit 6d92a55d770e058183daabb7aaef7675335fbbad
Author: Guozhang Wang 
Date:   2016-01-22T00:41:12Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K3066

commit 929e405058eb61d38510120f3f3ed50cd0cfaf47
Author: Guozhang Wang 
Date:   2016-01-22T01:02:04Z

add RollingRocksDBStore

commit 324eb584b97ed3c228347d108d697d2f5133ea99
Author: Guozhang Wang 
Date:   2016-01-22T01:23:32Z

modify MeteredWindowStore

commit 7ba2d90fe1de1ca776cea23ff1c2e8f8b3a6c3f2
Author: Guozhang Wang 
Date:   2016-01-22T01:35:10Z

remove getter

commit a4d78bac9d84dfd1c7dab4ae465b9115ddc451b3
Author: Guozhang Wang 
Date:   2016-01-22T01:36:51Z

remove RollingRocksDB

commit d0e8198ac6a25315d7ab8d21894acf0077f88fde
Author: Guozhang Wang 
Date:   2016-01-22T17:24:32Z

adding cache layer on RocksDB

commit 257b53d3b6df967f8a015a06c9e178d4219d0f8c
Author: Guozhang Wang 
Date:   2016-01-22T23:15:08Z

dummy

commit 25fd73107c577ac2e4b32300d4fe132ad7ff7312
Author: Guozhang Wang 
Date:   2016-01-22T23:21:29Z

merge from trunk


[jira] [Commented] (KAFKA-2607) Review `Time` interface and its usage

2016-01-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124297#comment-15124297
 ] 

ASF GitHub Bot commented on KAFKA-2607:
---

GitHub user afine opened a pull request:

https://github.com/apache/kafka/pull/837

KAFKA-2607: Review `Time` interface and its usage



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/afine/kafka KAFKA-2607

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/837.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #837


commit 311d4e06dbc3640d53c896edb23df5a8230df380
Author: Abraham Fine 
Date:   2016-01-26T21:21:06Z

init

commit 871e2f0df2371546ec2e426c3e6d217ce03ac422
Author: Abraham Fine 
Date:   2016-01-29T22:00:24Z

ashish's review




> Review `Time` interface and its usage
> -
>
> Key: KAFKA-2607
> URL: https://issues.apache.org/jira/browse/KAFKA-2607
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.2
>Reporter: Ismael Juma
>  Labels: newbie
>
> Two of `Time` interface's methods are `milliseconds` and `nanoseconds` which 
> are implemented in `SystemTime` as follows:
> {code}
> @Override
> public long milliseconds() {
> return System.currentTimeMillis();
> }
> @Override
> public long nanoseconds() {
> return System.nanoTime();
> }
> {code}
> The issue with this interface is that it makes it seem that the difference is 
> about the unit (`ms` versus `ns`) whereas it's much more than that:
> https://blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks
> We should probably change the names of the methods and review our usage to 
> see if we're using the right one in the various places.
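The distinction matters because System.currentTimeMillis() reads the wall clock (which can jump under NTP adjustments), while System.nanoTime() is a monotonic counter that is only meaningful as a difference. A minimal sketch of what such a renaming could look like, with hypothetical method names (not necessarily what Kafka chose):

```java
// Hypothetical renaming sketch; the names below are illustrative only.
interface Clock {
    long wallClockMillis();   // wall-clock time; may jump when the system clock is adjusted
    long monotonicNanos();    // monotonic elapsed time; only meaningful as a difference
}

class SystemClock implements Clock {
    @Override public long wallClockMillis() { return System.currentTimeMillis(); }
    @Override public long monotonicNanos() { return System.nanoTime(); }
}
```

Call sites that measure durations would use monotonicNanos(), while timestamps attached to records would use wallClockMillis().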



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3179) Kafka consumer delivers message whose offset is earlier than sought offset.

2016-01-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125612#comment-15125612
 ] 

ASF GitHub Bot commented on KAFKA-3179:
---

GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/842

KAFKA-3179 Fix seek on compressed messages

The fix itself is simple. 

Some explanation on the unit tests. Currently the vast majority of unit 
tests run with uncompressed messages. I initially considered running all the 
tests with compressed messages, but uncompressed messages seem necessary in 
many test cases because we need the bytes sent and appended to the log to be 
predictable. In most other cases it does not matter whether the messages are 
compressed, and compression would slow down the unit tests. So I just added 
one method in BaseConsumerTest to send compressed messages whenever we need 
it. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3179

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/842.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #842


commit 0f5cd3184f63ba9ec93c4450f81e0222ae96b422
Author: Jiangjie Qin 
Date:   2016-02-01T01:38:37Z

KAFKA-3179 Fix seek on compressed messages




> Kafka consumer delivers message whose offset is earlier than sought offset.
> ---
>
> Key: KAFKA-3179
> URL: https://issues.apache.org/jira/browse/KAFKA-3179
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> This problem is reproducible by seeking to the middle of a compressed 
> message set. Because KafkaConsumer does not filter out the messages earlier 
> than the sought offset within the compressed message set, the message 
> returned to the user will always be the first message in the compressed 
> message set instead of the message the user sought to.
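The idea behind the fix can be sketched as follows. This is an illustrative standalone class, not the actual KafkaConsumer code: after a compressed message set is decompressed, records whose offsets precede the sought offset are dropped before being handed to the user.

```java
// Illustrative sketch: filter decompressed records against the seek position.
import java.util.ArrayList;
import java.util.List;

class SeekFilter {
    static List<Long> fromOffset(List<Long> decompressedOffsets, long soughtOffset) {
        List<Long> visible = new ArrayList<>();
        for (long offset : decompressedOffsets) {
            if (offset >= soughtOffset) {   // skip records earlier than the seek position
                visible.add(offset);
            }
        }
        return visible;
    }
}
```

Without this filtering, a seek landing inside a compressed set returns the whole set starting from its first record.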



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2875) Class path contains multiple SLF4J bindings warnings when using scripts under bin

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15130010#comment-15130010
 ] 

ASF GitHub Bot commented on KAFKA-2875:
---

Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/693


> Class path contains multiple SLF4J bindings warnings when using scripts under 
> bin
> -
>
> Key: KAFKA-2875
> URL: https://issues.apache.org/jira/browse/KAFKA-2875
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: jin xing
>Priority: Minor
>  Labels: patch
> Fix For: 0.9.1.0
>
>
> This adds a lot of noise when running the scripts, see example when running 
> kafka-console-producer.sh:
> {code}
> ~/D/s/kafka-0.9.0.0-src ❯❯❯ ./bin/kafka-console-producer.sh --topic topic 
> --broker-list localhost:9092 ⏎
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/tools/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/api/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/file/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/json/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}
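The deduplication approach described in the later commits (a central dependant-libs directory built from symlinks, used to construct the classpath) can be sketched in shell. The paths below are illustrative examples, not Kafka's actual build layout:

```shell
# Illustrative sketch: collapse duplicate dependant jars from several modules
# into one directory of symlinks, and put only that directory on the classpath.
mkdir -p /tmp/dedup-libs-demo/core /tmp/dedup-libs-demo/tools /tmp/dedup-libs-demo/dedup
touch /tmp/dedup-libs-demo/core/slf4j-log4j12-1.7.6.jar
touch /tmp/dedup-libs-demo/tools/slf4j-log4j12-1.7.6.jar
for jar in /tmp/dedup-libs-demo/core/*.jar /tmp/dedup-libs-demo/tools/*.jar; do
  name=$(basename "$jar")
  # first occurrence wins; later duplicates collapse onto the same symlink
  [ -e "/tmp/dedup-libs-demo/dedup/$name" ] || ln -s "$jar" "/tmp/dedup-libs-demo/dedup/$name"
done
ls /tmp/dedup-libs-demo/dedup
```

With a single slf4j-log4j12 jar on the classpath, SLF4J no longer finds multiple StaticLoggerBinder classes and the warning disappears.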



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2875) Class path contains multiple SLF4J bindings warnings when using scripts under bin

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15130011#comment-15130011
 ] 

ASF GitHub Bot commented on KAFKA-2875:
---

GitHub user ZoneMayor reopened a pull request:

https://github.com/apache/kafka/pull/693

KAFKA-2875: remove slf4j multi-binding warnings when running from source 
distribution

hi @ijuma I reopened this PR again (sorry for my inexperience using GitHub);
I think I did much deduplication in the script;
Please have a look when you have time  : - )

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2875

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/693.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #693


commit 34240b52e1b70aa172b65155f6042243d838b420
Author: ZoneMayor 
Date:   2015-12-18T07:22:20Z

Merge pull request #12 from apache/trunk

2015-12-18

commit ffedf6fd04280e89978531fd73e7fe37a4d9bbed
Author: jinxing 
Date:   2015-12-18T07:24:14Z

KAFKA-2875 Class path contains multiple SLF4J bindings warnings when using 
scripts under bin

commit 52d02f333e86d06cfa8fff5facd18999b3db6d83
Author: ZoneMayor 
Date:   2015-12-30T03:08:08Z

Merge pull request #13 from apache/trunk

2015-12-30

commit d56be0b9e0849660c07d656c6019f9cc2f17ae55
Author: ZoneMayor 
Date:   2016-01-10T09:24:06Z

Merge pull request #14 from apache/trunk

2016-1-10

commit a937ad38ac90b90a57a1969bdd8ce06d6faaaeb1
Author: jinxing 
Date:   2016-01-10T10:28:18Z

Merge branch 'trunk-KAFKA-2875' of https://github.com/ZoneMayor/kafka into 
trunk-KAFKA-2875

commit 83b2bcca237ba9445360bbfcb05a0de82c36274f
Author: jinxing 
Date:   2016-01-10T12:39:20Z

KAFKA-2875: wip

commit 6e6f2c20c5730253d8e818c2dc1e5e741a05ac08
Author: jinxing 
Date:   2016-01-10T14:53:28Z

KAFKA-2875: Classpath contains multiple SLF4J bindings warnings when using 
scripts under bin

commit fbd380659727d991dff242be33cc6a3bb78f4861
Author: ZoneMayor 
Date:   2016-01-28T06:28:25Z

Merge pull request #15 from apache/trunk

2016-01-28

commit f21aa55ed68907376d5b0924e228875530cc1046
Author: jinxing 
Date:   2016-01-28T07:10:30Z

KAFKA-2875: remove slf4j multi binding warnings when running form source 
distribution (merge to trunk and resolve conflict)

commit 51fcc408302ebb0c4adaf2a4d0e6647cc469c6a0
Author: jinxing 
Date:   2016-01-28T07:43:52Z

added a new line

commit 8a6cbad74ca4f07a4c70c1d522b604d58e4917c6
Author: jinxing 
Date:   2016-02-01T08:49:06Z

KAFKA-2875: create deduplicated dependant-libs and use symlink to construct 
classpath

commit 153a1177c943e76c9c8457c47244ec59ea91d6fc
Author: jinxing 
Date:   2016-02-01T09:42:37Z

small fix

commit 1d283120bd7c3b90928090c4d22376d4ac05c4d5
Author: jinxing 
Date:   2016-02-01T10:09:46Z

KAFKA-2875: modify classpath in windows bat

commit 29c1797ae4f3ba47445e45049c8fc0fc2e1609f4
Author: jinxing 
Date:   2016-02-01T10:13:20Z

mod server.properties for test

commit a1993e5ca2908862340113ce965bd7fdc5020bab
Author: jinxing 
Date:   2016-02-01T12:44:22Z

KAFKA-2875: small fix

commit e523bd2ce91e03e38c20413aef3c48998fc3c263
Author: jinxing 
Date:   2016-02-01T16:12:27Z

KAFKA-2875: small fix

commit fb27bdeba925e6833ca9bc9feb1d6d3cf55c5aaf
Author: jinxing 
Date:   2016-02-02T09:44:32Z

KAFKA-2875: replace PROJECT_NAMES with PROJECT_NAME, use the central 
deduplicated libs if PROJECT_NAME not specified

commit f8ba5a920a0db8654c0776ad8449b167689c0eb4
Author: jinxing 
Date:   2016-02-02T09:48:00Z

small fix




> Class path contains multiple SLF4J bindings warnings when using scripts under 
> bin
> -
>
> Key: KAFKA-2875
> URL: https://issues.apache.org/jira/browse/KAFKA-2875
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: jin xing
>Priority: Minor
>  Labels: patch
> Fix For: 0.9.1.0
>
>
> This adds a lot of noise when running the scripts, see example when running 
> kafka-console-producer.sh:
> {code}
> ~/D/s/kafka-0.9.0.0-src ❯❯❯ ./bin/kafka-console-producer.sh --topic topic 
> --broker-list localhost:9092
> {code}

[jira] [Commented] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15130084#comment-15130084
 ] 

ASF GitHub Bot commented on KAFKA-3197:
---

GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/857

KAFKA-3197 Fix producer sending records out of order

This patch adds a new configuration to the producer to enforce a maximum 
number of in-flight batches for each partition. The patch itself is small, but 
this is an interface change. Given that this is a pretty important fix, maybe 
we can run a quick KIP on it. 

This patch does not remove the max.in.flight.requests.per.connection 
configuration because it might still have some value in throttling the number 
of requests sent to a broker. This is primarily in the broker's interest.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3197

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/857.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #857


commit c12c1e2044fe92954e0c8a27f63263f2020ddd3c
Author: Jiangjie Qin 
Date:   2016-02-03T06:51:41Z

KAFKA-3197 Fix producer sending records out of order




> Producer can send message out of order even when in flight request is set to 
> 1.
> ---
>
> Key: KAFKA-3197
> URL: https://issues.apache.org/jira/browse/KAFKA-3197
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The issue we saw is the following:
> 1. The producer sends message 0 to topic-partition-0 on broker A. The 
> in-flight request count to broker A is 1.
> 2. The request is somehow lost.
> 3. The producer refreshes its topic metadata and finds that the leader of 
> topic-partition-0 has migrated from broker A to broker B.
> 4. Because there is no in-flight request to broker B, all the subsequent 
> messages to topic-partition-0 in the record accumulator are sent to broker B.
> 5. Later, when the request in step (1) times out, message 0 is retried and 
> sent to broker B. At this point, all the later messages have already been 
> sent, so we have a reorder.
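The scenario above can be sketched as a per-partition in-flight gate: with a limit of 1, the retried batch in step (5) could not be leapfrogged by later batches after the leader migration. The class below is an illustrative sketch of the mechanism, not the actual patch:

```java
// Illustrative sketch of limiting in-flight batches per partition.
import java.util.HashMap;
import java.util.Map;

class PartitionGate {
    private final int maxInFlightPerPartition;
    private final Map<String, Integer> inFlight = new HashMap<>();

    PartitionGate(int maxInFlightPerPartition) {
        this.maxInFlightPerPartition = maxInFlightPerPartition;
    }

    // A partition is drainable only while it has capacity; with a limit of 1,
    // no later batch can be sent while an earlier batch is still outstanding.
    synchronized boolean tryAcquire(String topicPartition) {
        int current = inFlight.getOrDefault(topicPartition, 0);
        if (current >= maxInFlightPerPartition) return false;
        inFlight.put(topicPartition, current + 1);
        return true;
    }

    synchronized void release(String topicPartition) {
        inFlight.merge(topicPartition, -1, Integer::sum);
    }
}
```

The sender would call tryAcquire before draining a partition's batches and release when the response (or timeout) arrives, regardless of which broker currently leads the partition.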



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3194) Validate security.inter.broker.protocol against the advertised.listeners protocols

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15131393#comment-15131393
 ] 

ASF GitHub Bot commented on KAFKA-3194:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/851


> Validate security.inter.broker.protocol against the advertised.listeners 
> protocols
> --
>
> Key: KAFKA-3194
> URL: https://issues.apache.org/jira/browse/KAFKA-3194
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.0.1
>
>
> When testing Kafka I found that Kafka can run in a very unhealthy state due 
> to a misconfigured security.inter.broker.protocol. There are errors in the 
> logs (such as the one shown below), but it would be better to prevent 
> startup with a clear error message in this scenario.
> Sample error in the server logs:
> {code}
> ERROR kafka.controller.ReplicaStateMachine$BrokerChangeListener: 
> [BrokerChangeListener on Controller 71]: Error while handling broker changes
> kafka.common.BrokerEndPointNotAvailableException: End point PLAINTEXT not 
> found for broker 69
>   at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:141)
>   at 
> kafka.controller.ControllerChannelManager.kafka$controller$ControllerChannelManager$$addNewBroker(ControllerChannelManager.scala:88)
>   at 
> kafka.controller.ControllerChannelManager.addBroker(ControllerChannelManager.scala:73)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$4.apply(ReplicaStateMachine.scala:372)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$4.apply(ReplicaStateMachine.scala:372)
>   at scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:372)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:358)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:259)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:356)
>   at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:842)
>   at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {code}
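The startup check the issue asks for can be sketched as follows; the names are illustrative, and a real implementation would live in the broker's config validation rather than a standalone class:

```java
// Illustrative sketch: fail fast at startup if the inter-broker protocol is
// not among the protocols exposed by advertised.listeners.
import java.util.Set;

class ConfigCheck {
    static void validateInterBrokerProtocol(String interBrokerProtocol,
                                            Set<String> advertisedProtocols) {
        if (!advertisedProtocols.contains(interBrokerProtocol)) {
            throw new IllegalArgumentException(
                "security.inter.broker.protocol " + interBrokerProtocol
                + " is not in advertised.listeners protocols " + advertisedProtocols);
        }
    }
}
```

Failing here turns the confusing BrokerEndPointNotAvailableException seen at runtime into a clear configuration error at startup.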



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3198) Ticket Renewal Thread exits prematurely due to inverted comparison

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15131378#comment-15131378
 ] 

ASF GitHub Bot commented on KAFKA-3198:
---

GitHub user kunickiaj opened a pull request:

https://github.com/apache/kafka/pull/858

KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted c…

KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted 
comparison

The >= should be < since we are actually able to renew if the renewTill 
time is later than the current ticket expiration.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kunickiaj/kafka KAFKA-3198

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/858.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #858


commit 65b10cd2e0bae97833be5459c29953695d8d396a
Author: Adam Kunicki 
Date:   2016-02-03T18:35:29Z

KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted 
comparison




> Ticket Renewal Thread exits prematurely due to inverted comparison
> --
>
> Key: KAFKA-3198
> URL: https://issues.apache.org/jira/browse/KAFKA-3198
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Adam Kunicki
>Assignee: Adam Kunicki
>Priority: Critical
>
> Line 152 of Login.java:
> {code}
> if (isUsingTicketCache && tgt.getRenewTill() != null && 
> tgt.getRenewTill().getTime() >= expiry) {
> {code}
> This line is used to determine whether to exit the thread and issue an error 
> to the user.
> The >= should be < since we are actually able to renew if the renewTill time 
> is later than the current ticket expiration.
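A hedged sketch of the corrected condition, standalone, with the TGT fields reduced to plain parameters (the real code reads them from a KerberosTicket):

```java
// Illustrative sketch of the fix: the renewal thread should only give up when
// renewTill falls BEFORE the ticket expiry (the buggy line used >=).
class RenewCheck {
    // Returns true when renewal is impossible and the thread should exit.
    static boolean cannotRenew(boolean usingTicketCache, Long renewTillMs, long expiryMs) {
        return usingTicketCache && renewTillMs != null && renewTillMs < expiryMs;
    }
}
```

With the original `>=`, the common healthy case (renewTill later than expiry) was treated as an error and the thread exited prematurely.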



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3186) Kafka authorizer should be aware of principal types it supports.

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15131490#comment-15131490
 ] 

ASF GitHub Bot commented on KAFKA-3186:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/861

KAFKA-3186: Make Kafka authorizer aware of principal types it supports.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3186

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/861.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #861


commit 0617c75ee808d58009afdaf976fc1ae455cfb2e8
Author: Ashish Singh 
Date:   2016-02-04T01:06:29Z

KAFKA-3186: Make Kafka authorizer aware of principal types it supports.




> Kafka authorizer should be aware of principal types it supports.
> 
>
> Key: KAFKA-3186
> URL: https://issues.apache.org/jira/browse/KAFKA-3186
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Currently, the Kafka authorizer is agnostic of the principal types it 
> supports, and so are the ACL CRUD methods in 
> {{kafka.security.auth.Authorizer}}. The intent behind this is to keep Kafka 
> authorization pluggable, which is really great. However, this leads to the 
> following issues.
> 1. {{kafka-acls.sh}} supports pluggable authorizers and custom principals, 
> but is somewhat tied to {{SimpleAclAuthorizer}}. Its help messages have 
> details which might not be true for a custom authorizer, for instance 
> assuming User is a supported PrincipalType.
> 2. The ACL CRUD methods perform no validity check on ACLs, as they are not 
> aware of what principal types they support. This opens up space for lots of 
> user errors; KAFKA-3097 is an instance.
> I suggest we add a {{getSupportedPrincipalTypes}} method to the authorizer, 
> use it for ACL verification during CRUD operations, and make the 
> {{kafka-acls.sh}} help messages more generic.
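The proposal can be sketched as follows. Only the getSupportedPrincipalTypes name comes from the comment itself; everything else (the simplified interface, the validation helper) is an illustrative assumption:

```java
// Illustrative sketch of the proposed authorizer extension.
import java.util.Set;

interface PrincipalAwareAuthorizer {
    Set<String> getSupportedPrincipalTypes();

    // ACL CRUD methods could reject unsupported principal types up front,
    // instead of silently accepting ACLs that will never match.
    default void validatePrincipalType(String principalType) {
        if (!getSupportedPrincipalTypes().contains(principalType)) {
            throw new IllegalArgumentException(
                "Unsupported principal type: " + principalType);
        }
    }
}
```

kafka-acls.sh could also query the same method to print only the principal types the configured authorizer actually understands.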



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3199) LoginManager should allow using an existing Subject

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15131609#comment-15131609
 ] 

ASF GitHub Bot commented on KAFKA-3199:
---

GitHub user kunickiaj opened a pull request:

https://github.com/apache/kafka/pull/862

KAFKA-3199: LoginManager should allow using an existing Subject

One possible solution, which doesn't require a new configuration parameter:
it assumes that if there is already a Subject you want to use its existing 
credentials, rather than logging in from another keytab specified by 
kafka_client_jaas.conf.

Because this makes the jaas.conf no longer required, a missing KafkaClient 
context is no longer an error, but merely a warning.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kunickiaj/kafka KAFKA-3199

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/862.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #862


commit 83fcc6c6150e9b22d82573b9935ee43a0692ffa4
Author: Adam Kunicki 
Date:   2016-02-04T02:35:11Z

KAFKA-3199: LoginManager should allow using an existing Subject




> LoginManager should allow using an existing Subject
> ---
>
> Key: KAFKA-3199
> URL: https://issues.apache.org/jira/browse/KAFKA-3199
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Adam Kunicki
>Assignee: Adam Kunicki
>Priority: Critical
>
> LoginManager currently creates a new Login in the constructor, which then 
> performs a login and starts a ticket renewal thread. The problem is that 
> because Kafka performs its own login, it doesn't offer the ability to reuse 
> an existing Subject that's already managed by the client application.
> The goal of LoginManager appears to be to return a valid Subject. It would 
> be a simple fix to have LoginManager.acquireLoginManager() check for a new 
> config, e.g. kerberos.use.existing.subject. Instead of creating a new Login 
> in the constructor, this would simply call 
> Subject.getSubject(AccessController.getContext()); to use the 
> already-logged-in Subject.
> This is also doable without introducing a new configuration, by simply 
> checking whether a valid Subject is already available, but I think it may 
> be preferable to require that users explicitly request this behavior.
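A minimal sketch of the reuse-or-login decision, with Kafka's actual login path reduced to a placeholder (in the real flow the existing Subject would come from Subject.getSubject(AccessController.getContext()) inside a Subject.doAs block):

```java
// Illustrative sketch: reuse an application-managed Subject when available.
import javax.security.auth.Subject;

class LoginManagerSketch {
    // Reuse an existing Subject when one is supplied; otherwise fall back to
    // performing our own JAAS login (from kafka_client_jaas.conf).
    static Subject acquire(Subject existing) {
        if (existing != null) {
            return existing;      // credentials already managed by the application
        }
        return performLogin();    // placeholder for Kafka's own login path
    }

    private static Subject performLogin() {
        return new Subject();     // stands in for a real LoginContext login
    }
}
```

This keeps the ticket renewal responsibility with whoever created the Subject, which is the point of the issue: the client application may already be renewing it.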



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3198) Ticket Renewal Thread exits prematurely due to inverted comparison

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15131813#comment-15131813
 ] 

ASF GitHub Bot commented on KAFKA-3198:
---

Github user kunickiaj closed the pull request at:

https://github.com/apache/kafka/pull/858


> Ticket Renewal Thread exits prematurely due to inverted comparison
> --
>
> Key: KAFKA-3198
> URL: https://issues.apache.org/jira/browse/KAFKA-3198
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Adam Kunicki
>Assignee: Adam Kunicki
>Priority: Critical
> Fix For: 0.9.0.1
>
>
> Line 152 of Login.java:
> {code}
> if (isUsingTicketCache && tgt.getRenewTill() != null && 
> tgt.getRenewTill().getTime() >= expiry) {
> {code}
> This line is used to determine whether to exit the thread and issue an error 
> to the user.
> The >= should be < since we are actually able to renew if the renewTill time 
> is later than the current ticket expiration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3198) Ticket Renewal Thread exits prematurely due to inverted comparison

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15131814#comment-15131814
 ] 

ASF GitHub Bot commented on KAFKA-3198:
---

GitHub user kunickiaj reopened a pull request:

https://github.com/apache/kafka/pull/858

KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted c…

KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted 
comparison

The >= should be < since we are actually able to renew if the renewTill 
time is later than the current ticket expiration.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kunickiaj/kafka KAFKA-3198

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/858.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #858


commit 65b10cd2e0bae97833be5459c29953695d8d396a
Author: Adam Kunicki 
Date:   2016-02-03T18:35:29Z

KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted 
comparison




> Ticket Renewal Thread exits prematurely due to inverted comparison
> --
>
> Key: KAFKA-3198
> URL: https://issues.apache.org/jira/browse/KAFKA-3198
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Adam Kunicki
>Assignee: Adam Kunicki
>Priority: Critical
> Fix For: 0.9.0.1
>
>
> Line 152 of Login.java:
> {code}
> if (isUsingTicketCache && tgt.getRenewTill() != null && 
> tgt.getRenewTill().getTime() >= expiry) {
> {code}
> This line is used to determine whether to exit the thread and issue an error 
> to the user.
> The >= should be < since we are actually able to renew if the renewTill time 
> is later than the current ticket expiration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3198) Ticket Renewal Thread exits prematurely due to inverted comparison

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15131885#comment-15131885
 ] 

ASF GitHub Bot commented on KAFKA-3198:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/858


> Ticket Renewal Thread exits prematurely due to inverted comparison
> --
>
> Key: KAFKA-3198
> URL: https://issues.apache.org/jira/browse/KAFKA-3198
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Adam Kunicki
>Assignee: Adam Kunicki
>Priority: Critical
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> Line 152 of Login.java:
> {code}
> if (isUsingTicketCache && tgt.getRenewTill() != null && 
> tgt.getRenewTill().getTime() >= expiry) {
> {code}
> This line is used to determine whether to exit the thread and issue an error 
> to the user.
> The >= should be <, since we can still renew as long as the renewTill time 
> is later than the current ticket expiration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2522) ConsumerGroupCommand sends all output to STDOUT

2016-01-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124749#comment-15124749
 ] 

ASF GitHub Bot commented on KAFKA-2522:
---

GitHub user melan opened a pull request:

https://github.com/apache/kafka/pull/840

KAFKA-2522: ConsumerGroupCommand writes error messages to STDERR instead of 
STDOUT

With this change, ConsumerGroupCommand sends errors and regular output to 
separate streams, which simplifies parsing of the results.

@ijuma this is a reincarnation of PR #197 - I rebased those changes on top 
of the current trunk

The contribution is my original work and I license the work to the project 
under the project's open source license.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/melan/kafka kafka_2522_print_err_to_stderr

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/840.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #840


commit b49d0034ef04c9e1fe9df9c042285ea3d37b6d2a
Author: Dmitry Melanchenko 
Date:   2015-09-06T06:31:18Z

[8433829] KAFKA-2522: ConsumerGroupCommand writes error messages to STDERR 
instead of STDOUT




> ConsumerGroupCommand sends all output to STDOUT
> ---
>
> Key: KAFKA-2522
> URL: https://issues.apache.org/jira/browse/KAFKA-2522
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.8.1.1, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.10.0.0, 0.8.2.2
>Reporter: Dmitry Melanchenko
>Priority: Trivial
> Fix For: 0.10.0.0
>
> Attachments: kafka_2522_print_err_to_stderr.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> kafka.admin.ConsumerGroupCommand sends all messages to STDOUT. To be 
> consistent it should send normal output to STDOUT and error messages to 
> STDERR.
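The stdout/stderr separation the ticket asks for can be sketched as follows (the method names are illustrative, not the actual Scala tool's code):

```java
public class StreamSeparation {
    // Normal, machine-parseable output goes to stdout, so
    // `kafka-consumer-groups.sh ... > out.txt` captures only real results.
    static void printResult(String line) {
        System.out.println(line);
    }

    // Diagnostics go to stderr and stay out of redirected output.
    static void printError(String line) {
        System.err.println(line);
    }

    public static void main(String[] args) {
        printResult("group1, topic1, 0, 42, 42, 0");
        printError("Error: no active members in group group2");
    }
}
```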



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3174) Re-evaluate the CRC32 class performance.

2016-01-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124778#comment-15124778
 ] 

ASF GitHub Bot commented on KAFKA-3174:
---

GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/841

KAFKA-3174: Change Crc32 to use java.util.zip.CRC32



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3174

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/841.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #841


commit 4419cb58d84466e36c8b1119704eac93566c9974
Author: Jiangjie Qin 
Date:   2016-01-30T06:44:20Z

Change Crc32 to use java.util.zip.CRC32




> Re-evaluate the CRC32 class performance.
> 
>
> Key: KAFKA-3174
> URL: https://issues.apache.org/jira/browse/KAFKA-3174
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> We used org.apache.kafka.common.utils.CRC32 in clients because it has better 
> performance than java.util.zip.CRC32 in Java 1.6.
> In a recent test I ran, it looks like in Java 1.8 the CRC32 class is 2x as fast 
> as the Crc32 class we are using now. We may want to re-evaluate the performance 
> of the Crc32 class and see if it makes sense to simply use the Java CRC32 instead.
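A minimal usage of the JDK implementation the ticket proposes switching to; a real comparison would time both implementations over large buffers, and results vary by JDK version:

```java
import java.util.zip.CRC32;

public class CrcCheck {
    // Compute a CRC32 checksum with java.util.zip.CRC32.
    static long crc(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] payload = "kafka".getBytes();
        System.out.println(Long.toHexString(crc(payload)));
    }
}
```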



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2522) ConsumerGroupCommand sends all output to STDOUT

2016-01-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124746#comment-15124746
 ] 

ASF GitHub Bot commented on KAFKA-2522:
---

Github user melan closed the pull request at:

https://github.com/apache/kafka/pull/197


> ConsumerGroupCommand sends all output to STDOUT
> ---
>
> Key: KAFKA-2522
> URL: https://issues.apache.org/jira/browse/KAFKA-2522
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.8.1.1, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.10.0.0, 0.8.2.2
>Reporter: Dmitry Melanchenko
>Priority: Trivial
> Fix For: 0.10.0.0
>
> Attachments: kafka_2522_print_err_to_stderr.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> kafka.admin.ConsumerGroupCommand sends all messages to STDOUT. To be 
> consistent it should send normal output to STDOUT and error messages to 
> STDERR.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3169) Kafka broker throws OutOfMemory error with invalid SASL packet

2016-01-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124854#comment-15124854
 ] 

ASF GitHub Bot commented on KAFKA-3169:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/831


> Kafka broker throws OutOfMemory error with invalid SASL packet
> --
>
> Key: KAFKA-3169
> URL: https://issues.apache.org/jira/browse/KAFKA-3169
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.1
>
>
> The receive buffer used in Kafka servers to process SASL packets is unbounded. 
> This can result in brokers crashing with an OutOfMemory error when an invalid 
> SASL packet is received. 
> There is a standard SASL property in Java _javax.security.sasl.maxbuffer_ 
> that can be used to specify buffer size. When properties are added to the 
> Sasl implementation in KAFKA-3149, we can use the standard property to limit 
> receive buffer size. 
> But since this is a potential DoS issue, we should set a reasonable limit in 
> 0.9.0.1. 
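A bounded-buffer guard of the kind the ticket suggests can be sketched as below; the class, method, and limit are hypothetical, not the actual broker code:

```java
public class SaslSizeCheck {
    // Assumed cap on a SASL packet's declared size (illustrative value).
    static final int MAX_RECEIVE_SIZE = 512 * 1024;

    // Validate the declared size before allocating a receive buffer, so a
    // malicious or corrupt size field cannot trigger a huge allocation.
    static int checkedSize(int declaredSize) {
        if (declaredSize < 0 || declaredSize > MAX_RECEIVE_SIZE) {
            throw new IllegalArgumentException(
                "Invalid SASL packet size " + declaredSize
                + " (limit " + MAX_RECEIVE_SIZE + ")");
        }
        return declaredSize; // safe to allocate a buffer of this size
    }

    public static void main(String[] args) {
        System.out.println(checkedSize(1024));
    }
}
```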



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3164) Document client and mirrormaker upgrade procedure/requirements

2016-01-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125159#comment-15125159
 ] 

ASF GitHub Bot commented on KAFKA-3164:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/838


> Document client and mirrormaker upgrade procedure/requirements
> --
>
> Key: KAFKA-3164
> URL: https://issues.apache.org/jira/browse/KAFKA-3164
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Minor
>
> Many users on the mailing list have asked questions about new clients working 
> on old brokers, and mirrormaker breaking when upgrading to 0.9.0. Adding a 
> section to the upgrade docs telling users to upgrade brokers before clients, 
> and downstream mirrormaker first, should help keep other users from making 
> the same mistake. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3068) NetworkClient may connect to a different Kafka cluster than originally configured

2016-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15128638#comment-15128638
 ] 

ASF GitHub Bot commented on KAFKA-3068:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/823


> NetworkClient may connect to a different Kafka cluster than originally 
> configured
> -
>
> Key: KAFKA-3068
> URL: https://issues.apache.org/jira/browse/KAFKA-3068
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Eno Thereska
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> In https://github.com/apache/kafka/pull/290, we added the logic to cache all 
> brokers (id and ip) that the client has ever seen. If we can't find an 
> available broker from the current Metadata, we will pick a broker that we 
> have ever seen (in NetworkClient.leastLoadedNode()).
> One potential problem this logic can introduce is the following. Suppose that 
> we have a broker with id 1 in a Kafka cluster. A producer client remembers 
> this broker in nodesEverSeen. At some point, we bring down this broker and 
> use the host in a different Kafka cluster. Then, the producer client uses 
> this broker from nodesEverSeen to refresh metadata. It will find the metadata 
> in a different Kafka cluster and start producing data there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3170) Default value of fetch_min_bytes in new consumer is 1024 while doc says it is 1

2016-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15128641#comment-15128641
 ] 

ASF GitHub Bot commented on KAFKA-3170:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/832


> Default value of fetch_min_bytes in new consumer is 1024 while doc says it is 
> 1
> ---
>
> Key: KAFKA-3170
> URL: https://issues.apache.org/jira/browse/KAFKA-3170
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> FETCH_MIN_BYTES_DOC says:
> {quote}
> The minimum amount of data the server should return for a fetch request. If 
> insufficient data is available the request will wait for that much data to 
> accumulate before answering the request. The default setting of 1 byte means 
> that fetch requests are answered as soon as a single byte of data is 
> available or the fetch request times out waiting for data to arrive. Setting 
> this to something greater than 1 will cause the server to wait for larger 
> amounts of data to accumulate which can improve server throughput a bit at 
> the cost of some additional latency.
> {quote}
> But the default value is actually set to 1024. Either the doc or the value 
> needs to be changed. Perhaps 1 is a better default?
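Setting the value to match the quoted documentation can be sketched with a plain `Properties` object (a sketch only; it does not touch the actual consumer defaults):

```java
import java.util.Properties;

public class FetchMinBytesConfig {
    static Properties defaults() {
        Properties props = new Properties();
        // 1 byte, as the doc describes: the broker answers a fetch as soon
        // as any data is available. A larger value (e.g. "1024") trades
        // latency for throughput.
        props.put("fetch.min.bytes", "1");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(defaults().getProperty("fetch.min.bytes"));
    }
}
```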



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3092) Rename SinkTask.onPartitionsAssigned/onPartitionsRevoked and Clarify Contract

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15130955#comment-15130955
 ] 

ASF GitHub Bot commented on KAFKA-3092:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/815


> Rename SinkTask.onPartitionsAssigned/onPartitionsRevoked and Clarify Contract
> -
>
> Key: KAFKA-3092
> URL: https://issues.apache.org/jira/browse/KAFKA-3092
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.1.0
>
>
> The purpose of the onPartitionsRevoked() and onPartitionsAssigned() methods 
> exposed in Kafka Connect's SinkTask interface seems a little unclear and too 
> closely tied to consumer semantics. From the javadoc, these APIs are used to 
> open/close per-partition resources, but that would suggest that we should 
> always get one call to onPartitionsAssigned() before writing any records for 
> the corresponding partitions and one call to onPartitionsRevoked() when we 
> have finished with them. However, the same methods on the consumer are used 
> to indicate phases of the rebalance operation: onPartitionsRevoked() is 
> called before the rebalance begins and onPartitionsAssigned() is called after 
> it completes. In particular, the consumer does not guarantee a final call to 
> onPartitionsRevoked(). 
> This mismatch makes the contract of these methods unclear. In fact, the 
> WorkerSinkTask currently does not guarantee the initial call to 
> onPartitionsAssigned(), nor the final call to onPartitionsRevoked(). Instead, 
> the task implementation must pull the initial assignment from the 
> SinkTaskContext. To make it more confusing, the call to commit offsets 
> following onPartitionsRevoked() causes a flush() on a partition which had 
> already been revoked. All of this makes it difficult to use this API as 
> suggested in the javadocs.
> To fix this, we should clarify the behavior of these methods and consider 
> renaming them to avoid confusion with the same methods in the consumer API. 
> If onPartitionsAssigned() is meant for opening resources, maybe we can rename 
> it to open(). Similarly, onPartitionsRevoked() can be renamed to close(). We 
> can then fix the code to ensure that a typical open/close contract is 
> enforced. This would also mean removing the need to pass the initial 
> assignment in the SinkTaskContext. This would give the following API:
> {code}
> void open(Collection<TopicPartition> partitions);
> void close(Collection<TopicPartition> partitions);
> {code}
> We could also consider going a little further. Instead of depending on 
> onPartitionsAssigned() to open resources, tasks could open partition 
> resources on demand as records are received. In general, connectors will need 
> some way to close partition-specific resources, but there might not be any 
> need to pass the full list of partitions to close since the only open 
> resources should be those that have received writes since the last rebalance. 
> In this case, we just have a single method:
> {code}
> void close();
> {code}
> The downside to this is that the difference between close() and stop() then 
> becomes a little unclear.
> Obviously these are not compatible changes and connectors would have to be 
> updated.
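The proposed open/close contract can be sketched with a toy task; the class is hypothetical and uses plain `String` partition keys in place of the real `TopicPartition` type:

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

public class PartitionedWriter {
    private final Map<String, StringBuilder> perPartition = new HashMap<>();

    // open(): allocate per-partition resources before any records arrive.
    void open(Collection<String> partitions) {
        for (String tp : partitions) {
            perPartition.put(tp, new StringBuilder());
        }
    }

    void put(String partition, String record) {
        perPartition.get(partition).append(record);
    }

    // close(): release resources for partitions taken away by a rebalance.
    void close(Collection<String> partitions) {
        for (String tp : partitions) {
            perPartition.remove(tp);
        }
    }

    int openCount() {
        return perPartition.size();
    }
}
```

Under this contract, `open()` always precedes writes for a partition and `close()` always follows them, which is exactly the guarantee the ticket says the consumer-style callbacks fail to provide.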



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2875) Class path contains multiple SLF4J bindings warnings when using scripts under bin

2016-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15129723#comment-15129723
 ] 

ASF GitHub Bot commented on KAFKA-2875:
---

GitHub user ZoneMayor reopened a pull request:

https://github.com/apache/kafka/pull/693

KAFKA-2875: remove slf4j multi binding warnings when running from source 
distribution

hi @ijuma I reopened this pr again (sorry for my inexperience using github);
I think I did much deduplication for the script;
Please have a look when you have time  : - )

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2875

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/693.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #693


commit 34240b52e1b70aa172b65155f6042243d838b420
Author: ZoneMayor 
Date:   2015-12-18T07:22:20Z

Merge pull request #12 from apache/trunk

2015-12-18

commit ffedf6fd04280e89978531fd73e7fe37a4d9bbed
Author: jinxing 
Date:   2015-12-18T07:24:14Z

KAFKA-2875 Class path contains multiple SLF4J bindings warnings when using 
scripts under bin

commit 52d02f333e86d06cfa8fff5facd18999b3db6d83
Author: ZoneMayor 
Date:   2015-12-30T03:08:08Z

Merge pull request #13 from apache/trunk

2015-12-30

commit d56be0b9e0849660c07d656c6019f9cc2f17ae55
Author: ZoneMayor 
Date:   2016-01-10T09:24:06Z

Merge pull request #14 from apache/trunk

2016-1-10

commit a937ad38ac90b90a57a1969bdd8ce06d6faaaeb1
Author: jinxing 
Date:   2016-01-10T10:28:18Z

Merge branch 'trunk-KAFKA-2875' of https://github.com/ZoneMayor/kafka into 
trunk-KAFKA-2875

commit 83b2bcca237ba9445360bbfcb05a0de82c36274f
Author: jinxing 
Date:   2016-01-10T12:39:20Z

KAFKA-2875: wip

commit 6e6f2c20c5730253d8e818c2dc1e5e741a05ac08
Author: jinxing 
Date:   2016-01-10T14:53:28Z

KAFKA-2875: Classpath contains multiple SLF4J bindings warnings when using 
scripts under bin

commit fbd380659727d991dff242be33cc6a3bb78f4861
Author: ZoneMayor 
Date:   2016-01-28T06:28:25Z

Merge pull request #15 from apache/trunk

2016-01-28

commit f21aa55ed68907376d5b0924e228875530cc1046
Author: jinxing 
Date:   2016-01-28T07:10:30Z

KAFKA-2875: remove slf4j multi binding warnings when running form source 
distribution (merge to trunk and resolve conflict)

commit 51fcc408302ebb0c4adaf2a4d0e6647cc469c6a0
Author: jinxing 
Date:   2016-01-28T07:43:52Z

added a new line

commit 8a6cbad74ca4f07a4c70c1d522b604d58e4917c6
Author: jinxing 
Date:   2016-02-01T08:49:06Z

KAFKA-2875: create deduplicated dependant-libs and use symlink to construct 
classpath

commit 153a1177c943e76c9c8457c47244ec59ea91d6fc
Author: jinxing 
Date:   2016-02-01T09:42:37Z

small fix

commit 1d283120bd7c3b90928090c4d22376d4ac05c4d5
Author: jinxing 
Date:   2016-02-01T10:09:46Z

KAFKA-2875: modify classpath in windows bat

commit 29c1797ae4f3ba47445e45049c8fc0fc2e1609f4
Author: jinxing 
Date:   2016-02-01T10:13:20Z

mod server.properties for test

commit a1993e5ca2908862340113ce965bd7fdc5020bab
Author: jinxing 
Date:   2016-02-01T12:44:22Z

KAFKA-2875: small fix

commit e523bd2ce91e03e38c20413aef3c48998fc3c263
Author: jinxing 
Date:   2016-02-01T16:12:27Z

KAFKA-2875: small fix

commit fb27bdeba925e6833ca9bc9feb1d6d3cf55c5aaf
Author: jinxing 
Date:   2016-02-02T09:44:32Z

KAFKA-2875: replace PROJECT_NAMES with PROJECT_NAME, use the central 
deduplicated libs if PROJECT_NAME not specified

commit f8ba5a920a0db8654c0776ad8449b167689c0eb4
Author: jinxing 
Date:   2016-02-02T09:48:00Z

small fix




> Class path contains multiple SLF4J bindings warnings when using scripts under 
> bin
> -
>
> Key: KAFKA-2875
> URL: https://issues.apache.org/jira/browse/KAFKA-2875
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: jin xing
>Priority: Minor
>  Labels: patch
> Fix For: 0.9.1.0
>
>
> This adds a lot of noise when running the scripts, see example when running 
> kafka-console-producer.sh:
> {code}
> ~/D/s/kafka-0.9.0.0-src ❯❯❯ ./bin/kafka-console-producer.sh --topic topic 
> --broker-list localhost:9092  

[jira] [Commented] (KAFKA-2875) Class path contains multiple SLF4J bindings warnings when using scripts under bin

2016-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15129722#comment-15129722
 ] 

ASF GitHub Bot commented on KAFKA-2875:
---

Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/693


> Class path contains multiple SLF4J bindings warnings when using scripts under 
> bin
> -
>
> Key: KAFKA-2875
> URL: https://issues.apache.org/jira/browse/KAFKA-2875
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: jin xing
>Priority: Minor
>  Labels: patch
> Fix For: 0.9.1.0
>
>
> This adds a lot of noise when running the scripts, see example when running 
> kafka-console-producer.sh:
> {code}
> ~/D/s/kafka-0.9.0.0-src ❯❯❯ ./bin/kafka-console-producer.sh --topic topic 
> --broker-list localhost:9092 ⏎
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/tools/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/api/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/file/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/json/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3195) Transient test failure in OffsetCheckpointTest.testReadWrite

2016-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15129646#comment-15129646
 ] 

ASF GitHub Bot commented on KAFKA-3195:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/855


> Transient test failure in OffsetCheckpointTest.testReadWrite
> 
>
> Key: KAFKA-3195
> URL: https://issues.apache.org/jira/browse/KAFKA-3195
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Reporter: Ewen Cheslack-Postava
>Assignee: Ismael Juma
> Fix For: 0.9.1.0
>
>
> It looks like it's probably an issue with parallel tests trying to access the 
> same fixed path, where one test deletes the file. Saw this on 
> 86a9036a7b03c8ae07d014c25a5eedc315544139.
> {quote}
> org.apache.kafka.streams.state.internals.OffsetCheckpointTest > testReadWrite 
> FAILED
> java.io.FileNotFoundException: 
> /tmp/kafka-streams/offset_checkpoint.test.tmp (No such file or directory)
> at java.io.FileOutputStream.open(Native Method)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:68)
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpointTest.testReadWrite(OffsetCheckpointTest.java:48)
> {quote}
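The usual fix for parallel tests racing on a fixed /tmp path is to give each test its own file via `File.createTempFile`; a sketch (class and method names are illustrative, not the actual test code):

```java
import java.io.File;
import java.io.IOException;

public class UniqueCheckpointFile {
    // Each call yields a distinct path under the system temp directory,
    // so concurrent tests cannot delete each other's files.
    static File newCheckpointFile() {
        try {
            File f = File.createTempFile("offset_checkpoint", ".tmp");
            f.deleteOnExit();
            return f;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        File a = newCheckpointFile();
        File b = newCheckpointFile();
        System.out.println(a.equals(b)); // distinct paths, no cross-test clash
    }
}
```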



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3189) Kafka server returns UnknownServerException for inherited exceptions

2016-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15129701#comment-15129701
 ] 

ASF GitHub Bot commented on KAFKA-3189:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/856

KAFKA-3189: Kafka server returns UnknownServerException for inherited…

… exceptions

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka inherited-errors

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/856.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #856


commit 4de1cce990d3df283f9688d4b91d7a6582b55853
Author: Grant Henke 
Date:   2016-02-03T03:10:30Z

KAFKA-3189: Kafka server returns UnknownServerException for inherited 
exceptions




> Kafka server returns UnknownServerException for inherited exceptions
> 
>
> Key: KAFKA-3189
> URL: https://issues.apache.org/jira/browse/KAFKA-3189
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jiangjie Qin
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> This issue was introduced in KAFKA-2929. The problem is that we are using 
> o.a.k.common.protocol.Errors.forException() while some exceptions thrown by 
> the broker are still using the old Scala exceptions. This causes 
> Errors.forException() to always return UnknownServerException.
> InvalidMessageException is inherited from CorruptRecordException. But it 
> seems Errors.forException() needs the exception class to be the exact class, 
> so it does not map the subclass InvalidMessageException to the correct error 
> code. Instead it returns -1, which is UnknownServerException.
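The exact-class lookup problem can be reproduced in miniature; the classes, map, and error codes below are stand-ins, not the real Kafka `Errors` implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class ErrorMapping {
    static class CorruptRecordException extends RuntimeException {}
    static class InvalidMessageException extends CorruptRecordException {}

    static final Map<Class<?>, Integer> CODES = new HashMap<>();
    static { CODES.put(CorruptRecordException.class, 2); }

    // Exact-class lookup, as described in the ticket: a subclass like
    // InvalidMessageException misses the map and falls through to -1.
    static int codeExact(Throwable t) {
        return CODES.getOrDefault(t.getClass(), -1);
    }

    // Walking the superclass chain maps subclasses to the nearest known code.
    static int codeWithInheritance(Throwable t) {
        for (Class<?> c = t.getClass(); c != null; c = c.getSuperclass()) {
            Integer code = CODES.get(c);
            if (code != null) return code;
        }
        return -1;
    }

    public static void main(String[] args) {
        Throwable t = new InvalidMessageException();
        System.out.println(codeExact(t));           // -1 ("unknown")
        System.out.println(codeWithInheritance(t)); // 2
    }
}
```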



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2143) Replicas get ahead of leader and fail

2016-01-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122919#comment-15122919
 ] 

ASF GitHub Bot commented on KAFKA-2143:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/129


> Replicas get ahead of leader and fail
> -
>
> Key: KAFKA-2143
> URL: https://issues.apache.org/jira/browse/KAFKA-2143
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.8.2.1
>Reporter: Evan Huus
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> On a cluster of 6 nodes, we recently saw a case where a single 
> under-replicated partition suddenly appeared, replication lag spiked, and 
> network IO spiked. The cluster appeared to recover eventually on its own.
> Looking at the logs, the thing which failed was partition 7 of the topic 
> {{background_queue}}. It had an ISR of 1,4,3 and its leader at the time was 
> 3. Here are the interesting log lines:
> On node 3 (the leader):
> {noformat}
> [2015-04-23 16:50:05,879] ERROR [Replica Manager on Broker 3]: Error when 
> processing fetch request for partition [background_queue,7] offset 3722949957 
> from follower with correlation id 148185816. Possible cause: Request for 
> offset 3722949957 but we only have log segments in the range 3648049863 to 
> 3722949955. (kafka.server.ReplicaManager)
> [2015-04-23 16:50:05,879] ERROR [Replica Manager on Broker 3]: Error when 
> processing fetch request for partition [background_queue,7] offset 3722949957 
> from follower with correlation id 156007054. Possible cause: Request for 
> offset 3722949957 but we only have log segments in the range 3648049863 to 
> 3722949955. (kafka.server.ReplicaManager)
> [2015-04-23 16:50:13,960] INFO Partition [background_queue,7] on broker 3: 
> Shrinking ISR for partition [background_queue,7] from 1,4,3 to 3 
> (kafka.cluster.Partition)
> {noformat}
> Note that both replicas suddenly asked for an offset *ahead* of the available 
> offsets.
> And on nodes 1 and 4 (the replicas) many occurrences of the following:
> {noformat}
> [2015-04-23 16:50:05,935] INFO Scheduling log segment 3648049863 for log 
> background_queue-7 for deletion. (kafka.log.Log) (edited)
> {noformat}
> Based on my reading, this looks like the replicas somehow got *ahead* of the 
> leader, asked for an invalid offset, got confused, and re-replicated the 
> entire topic from scratch to recover (this matches our network graphs, which 
> show 3 sending a bunch of data to 1 and 4).
> Taking a stab in the dark at the cause, there appears to be a race condition 
> where replicas can receive a new offset before the leader has committed it 
> and is ready to replicate?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2221) Log the entire cause which caused a reconnect in the SimpleConsumer

2016-01-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122923#comment-15122923
 ] 

ASF GitHub Bot commented on KAFKA-2221:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/138


> Log the entire cause which caused a reconnect in the SimpleConsumer
> ---
>
> Key: KAFKA-2221
> URL: https://issues.apache.org/jira/browse/KAFKA-2221
> Project: Kafka
>  Issue Type: Improvement
>Reporter: jaikiran pai
>Assignee: jaikiran pai
>Priority: Minor
> Fix For: 0.9.0.1
>
> Attachments: KAFKA-2221.patch
>
>
> Currently if the SimpleConsumer goes for a reconnect, it logs the exception's 
> message which caused the reconnect. However, on some occasions the message in 
> the exception can be null, thus making it difficult to narrow down the cause 
> for the reconnect. An example of this can be seen in this user mailing list 
> thread 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201505.mbox/%3CCABME_6T%2Bt90%2B-eQUtnu6R99NqRdMpVj3tqa95Pygg8KOQSNppw%40mail.gmail.com%3E
> {quote}
> kafka.consumer.SimpleConsumer: Reconnect due to socket error: null.
> {quote}
> It would help narrowing down the problem if the entire exception stacktrace 
> was logged.
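Rendering the full stack trace instead of only `getMessage()` can be sketched like this (the helper is illustrative; SimpleConsumer itself would pass the exception to its logger):

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.net.SocketException;

public class FullCauseLogging {
    // Capture the full stack trace as a string, which never ends up as the
    // bare "null" that a null getMessage() produces.
    static String describe(Throwable t) {
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw));
        return sw.toString();
    }

    public static void main(String[] args) {
        Exception e = new SocketException(); // message is null here
        System.out.println("message only: " + e.getMessage());
        System.out.println("full trace:\n" + describe(e));
    }
}
```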



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2653) Stateful operations in the KStream DSL layer

2016-01-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122951#comment-15122951
 ] 

ASF GitHub Bot commented on KAFKA-2653:
---

Github user guozhangwang closed the pull request at:

https://github.com/apache/kafka/pull/691


> Stateful operations in the KStream DSL layer
> 
>
> Key: KAFKA-2653
> URL: https://issues.apache.org/jira/browse/KAFKA-2653
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.1.0
>
>
> This includes the interface design and the implementation of stateful 
> operations, including:
> 0. table representation in KStream.
> 1. stream-stream join.
> 2. stream-table join.
> 3. table-table join.
> 4. stream / table aggregations.
> With 0 and 3 being tackled in KAFKA-2856 and KAFKA-2962 separately, this 
> ticket is going to only focus on windowing definition and 1 / 2 / 4 above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-948) ISR list in LeaderAndISR path not updated for partitions when Broker (which is not leader) is down

2016-01-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122731#comment-15122731
 ] 

ASF GitHub Bot commented on KAFKA-948:
--

Github user dibbhatt closed the pull request at:

https://github.com/apache/kafka/pull/5


> ISR list in LeaderAndISR path not updated for partitions when Broker (which 
> is not leader) is down
> --
>
> Key: KAFKA-948
> URL: https://issues.apache.org/jira/browse/KAFKA-948
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.0
>Reporter: Dibyendu Bhattacharya
>Assignee: Neha Narkhede
>
> When the broker that is the leader for a partition goes down, the ISR list in 
> the LeaderAndISR path is updated. But if a broker that is not the leader of 
> the partition goes down, the ISR list is not updated. This is an issue 
> because the ISR list then contains a stale entry.
> This issue was found in kafka-0.8.0-beta1-candidate1
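The expected controller behavior can be sketched with a toy data model (this is an illustration of the reported expectation, not Kafka's actual controller code): a failed broker should be dropped from the ISR of every partition that replicates on it, not only the partitions it leads.

```python
def on_broker_failure(failed_broker, partitions):
    """Drop a failed broker from the ISR of every partition that
    replicates on it -- not only the partitions it leads.
    (Toy data model for illustration, not Kafka's controller code.)"""
    for p in partitions:
        if failed_broker in p["isr"]:
            p["isr"].remove(failed_broker)
        # Leader election for partitions led by the failed broker is a
        # separate code path and omitted here.

partitions = [
    {"topic": "t", "partition": 0, "leader": 1, "isr": [1, 2, 3]},
    {"topic": "t", "partition": 1, "leader": 2, "isr": [2, 3]},
]
on_broker_failure(3, partitions)  # broker 3 is a follower for both partitions
```

After the call, broker 3 no longer appears in either ISR even though it led neither partition, which is the behavior the report says was missing.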





[jira] [Commented] (KAFKA-2802) Add integration tests for Kafka Streams

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159573#comment-15159573
 ] 

ASF GitHub Bot commented on KAFKA-2802:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/930


> Add integration tests for Kafka Streams
> ---
>
> Key: KAFKA-2802
> URL: https://issues.apache.org/jira/browse/KAFKA-2802
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.1.0
>
>
> We want to test the following criterion:
> 1. Tasks are created / migrated on the right stream threads.
> 2. State stores are created with change-log topics in the right numbers and 
> assigned properly to tasks.
> 3. Co-partitioned topic partitions are assigned in the right way to tasks.
> 4. At-least-once processing guarantees (this includes correct state store 
> flushing / offset committing / producer flushing behavior).
> Under the following scenarios:
> 1. Stream process killed (both -15 and -9)
> 2. Broker service killed (both -15 and -9)
> 3. Stream process got long GC.
> 4. New topic added to subscribed lists.
> 5. New partitions added to subscribed topics.
> 6. New stream processes started.





[jira] [Commented] (KAFKA-3214) Add consumer system tests for compressed topics

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159778#comment-15159778
 ] 

ASF GitHub Bot commented on KAFKA-3214:
---

GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/958

KAFKA-3214: Added system tests for compressed topics

Added the following tests:
1. Extended TestVerifiableProducer (sanity check test) to test Trunk with 
snappy compression (one producer/one topic).
2. Added CompressionTest that tests 3 producers: 2a) each uses a different 
compression; 2b) each either uses snappy compression or no compression.

Enabled VerifiableProducer to run producers with different compression 
types (passed in the constructor).



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka-3214

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/958.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #958


commit 588d4caf3f8830dfcc185da30dfdb40de04cd7cd
Author: Anna Povzner 
Date:   2016-02-23T22:22:34Z

KAFKA-3214: Added system tests for compressed topics




> Add consumer system tests for compressed topics
> ---
>
> Key: KAFKA-3214
> URL: https://issues.apache.org/jira/browse/KAFKA-3214
> Project: Kafka
>  Issue Type: Test
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Anna Povzner
>
> As far as I can tell, we don't have any ducktape tests which verify 
> correctness when compression is enabled. If we did, we might have caught 
> KAFKA-3179 earlier.





[jira] [Commented] (KAFKA-3245) need a way to specify the number of replicas for change log topics

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159815#comment-15159815
 ] 

ASF GitHub Bot commented on KAFKA-3245:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/948


> need a way to specify the number of replicas for change log topics
> --
>
> Key: KAFKA-3245
> URL: https://issues.apache.org/jira/browse/KAFKA-3245
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
> Fix For: 0.10.0.0
>
>
> Currently the number of replicas of auto-created change log topics is one. 
> This makes stream processing not fault-tolerant. A way to specify the number 
> of replicas in the config is desired.
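As a minimal sketch of the requested behavior (the config key name `replication.factor` is an assumption for illustration, not necessarily the exact key the patch introduced): the app's config map supplies the changelog replication factor, falling back to the old default of one.

```python
# Illustrative config lookup: the key name "replication.factor" is an
# assumption; the point is the fallback to the old default of 1.
DEFAULT_REPLICATION_FACTOR = 1

def changelog_replication_factor(config):
    return int(config.get("replication.factor", DEFAULT_REPLICATION_FACTOR))

old_behavior = changelog_replication_factor({})
fault_tolerant = changelog_replication_factor({"replication.factor": 3})
```

With no setting, auto-created changelog topics keep a single replica; setting a factor of 3 makes the state stores survive broker failures.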





[jira] [Commented] (KAFKA-2832) support exclude.internal.topics in new consumer

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159809#comment-15159809
 ] 

ASF GitHub Bot commented on KAFKA-2832:
---

GitHub user vahidhashemian reopened a pull request:

https://github.com/apache/kafka/pull/932

KAFKA-2832: Add a consumer config option to exclude internal topics

A new consumer config option 'exclude.internal.topics' was added to allow 
excluding internal topics when wildcards are used to subscribe to topics.
The new option takes a boolean value, with a default of 'false' (i.e. no 
exclusion).

This patch is co-authored with @rajinisivaram.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-2832

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/932.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #932


commit 1b03ad9d1e8dd0a40adb3272ff857c48289b1e05
Author: Vahid Hashemian 
Date:   2016-02-18T15:53:39Z

KAFKA-2832: Add a consumer config option to exclude internal topics

A new consumer config option 'exclude.internal.topics' was added to allow 
excluding internal topics when wildcards are used to specify consumers.
The new option takes a boolean value, with a default of 'true' (i.e. 
exclude internal topics).

This patch is co-authored with @rajinisivaram.




> support exclude.internal.topics in new consumer
> ---
>
> Key: KAFKA-2832
> URL: https://issues.apache.org/jira/browse/KAFKA-2832
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Reporter: Jun Rao
>Assignee: Vahid Hashemian
> Fix For: 0.9.1.0
>
>
> The old consumer supports an exclude.internal.topics option that prevents 
> internal topics from being consumed by default. It would be useful to add it 
> to the new consumer, especially when wildcards are used.
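The filtering the option implies can be sketched as follows (illustrative logic only, not the consumer's actual implementation; the double-underscore prefix is how Kafka names internal topics such as `__consumer_offsets`): a wildcard subscription matches topics by regex, and the flag drops internal topics from the match set.

```python
import re

INTERNAL_TOPIC_PREFIX = "__"  # e.g. __consumer_offsets

def matching_topics(pattern, topics, exclude_internal):
    """Return topics matching a wildcard subscription, optionally
    filtering out internal topics (illustrative sketch)."""
    regex = re.compile(pattern)
    return sorted(
        t for t in topics
        if regex.fullmatch(t)
        and not (exclude_internal and t.startswith(INTERNAL_TOPIC_PREFIX))
    )

topics = ["orders", "payments", "__consumer_offsets"]
included = matching_topics(".*", topics, exclude_internal=False)
excluded = matching_topics(".*", topics, exclude_internal=True)
```

With exclusion enabled, a catch-all wildcard no longer pulls in the offsets topic.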





[jira] [Commented] (KAFKA-2832) support exclude.internal.topics in new consumer

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159808#comment-15159808
 ] 

ASF GitHub Bot commented on KAFKA-2832:
---

Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/932


> support exclude.internal.topics in new consumer
> ---
>
> Key: KAFKA-2832
> URL: https://issues.apache.org/jira/browse/KAFKA-2832
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Reporter: Jun Rao
>Assignee: Vahid Hashemian
> Fix For: 0.9.1.0
>
>
> The old consumer supports an exclude.internal.topics option that prevents 
> internal topics from being consumed by default. It would be useful to add it 
> to the new consumer, especially when wildcards are used.





[jira] [Commented] (KAFKA-3046) add ByteBuffer Serializer

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159831#comment-15159831
 ] 

ASF GitHub Bot commented on KAFKA-3046:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/718


> add ByteBuffer Serializer
> --
>
> Key: KAFKA-3046
> URL: https://issues.apache.org/jira/browse/KAFKA-3046
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Reporter: Xin Wang
> Fix For: 0.10.0.0
>
>
> ByteBuffer is widely used in many scenarios (e.g., storm-sql can specify 
> Kafka as the external data source and use ByteBuffer for the value 
> serializer). Adding an official ByteBuffer serializer will be convenient 
> for users.
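What such a serializer has to do can be sketched language-neutrally (Python stand-in for the Java ByteBuffer semantics; not the actual class the PR adds): emit exactly the bytes between the buffer's position and limit, and pass null through.

```python
def serialize_buffer(data, position=0, limit=None):
    """Sketch of a ByteBuffer-style value serializer: emit only the
    bytes between the buffer's position and limit (illustrative
    stand-in for Java ByteBuffer semantics)."""
    if data is None:
        return None
    if limit is None:
        limit = len(data)
    return bytes(data[position:limit])

buf = bytearray(b"\x00\x00payload\x00")
wire_bytes = serialize_buffer(buf, position=2, limit=9)
```

Only the seven "remaining" bytes go on the wire; the padding before the position and after the limit is ignored.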





[jira] [Commented] (KAFKA-3093) Keep track of connector and task status info, expose it via the REST API

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157534#comment-15157534
 ] 

ASF GitHub Bot commented on KAFKA-3093:
---

Github user hachikuji closed the pull request at:

https://github.com/apache/kafka/pull/920


> Keep track of connector and task status info, expose it via the REST API
> 
>
> Key: KAFKA-3093
> URL: https://issues.apache.org/jira/browse/KAFKA-3093
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Reporter: jin xing
>Assignee: Jason Gustafson
>
> Related to KAFKA-3054.
> We should keep track of the status of connectors and tasks during their 
> startup and execution, and handle exceptions thrown by them.
> Users should be able to fetch this information via the REST API, and send 
> necessary commands (reconfigure, restart, pause, unpause) to connectors and 
> tasks via the REST API.





[jira] [Commented] (KAFKA-3093) Keep track of connector and task status info, expose it via the REST API

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157535#comment-15157535
 ] 

ASF GitHub Bot commented on KAFKA-3093:
---

GitHub user hachikuji reopened a pull request:

https://github.com/apache/kafka/pull/920

KAFKA-3093: Add Connect status tracking API



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3093

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/920.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #920


commit 9cf01b36020816d034af539b6e91109a073c3c4c
Author: Jason Gustafson 
Date:   2016-02-10T23:25:25Z

KAFKA-3093 [WIP]: Add status tracking API

commit ccf93d8ca32d63ded27866d50a8dd5f5576e8d01
Author: Jason Gustafson 
Date:   2016-02-16T21:06:27Z

additional cleanup and testing

commit 3e8a938fb907dc7683d827db23dce1cd8f44c310
Author: Jason Gustafson 
Date:   2016-02-16T21:08:16Z

remove unneeded test

commit 3ba67c37c42063f14957eccf38b90d3fc8d167c1
Author: Jason Gustafson 
Date:   2016-02-16T22:10:00Z

improve docs and cancel method to WorkerTask

commit 05d8dc81e1d61eaef590fb81cad26106f4e8a85e
Author: Jason Gustafson 
Date:   2016-02-17T03:36:36Z

testing/fixes

commit f7a81fe5e96f2a1ba420c060595152738eb6054f
Author: Jason Gustafson 
Date:   2016-02-17T18:05:54Z

add more testing

commit 8e9047422ae63c8aece23343f7920055ff10a056
Author: Jason Gustafson 
Date:   2016-02-17T18:49:47Z

fix checkstyle error

commit e3cdc47a070271dbe49424943852680174a48e91
Author: Jason Gustafson 
Date:   2016-02-19T23:51:22Z

make Herder get connector/task status API synchronous

commit 781a4f9378dd24c1ab963ed3c9179e4a261aa255
Author: Jason Gustafson 
Date:   2016-02-19T23:53:21Z

remove unused lifecycle listener in WorkerTask

commit d9849c7f65f2b7a4b2d45c9cb3454d2e5db0776d
Author: Jason Gustafson 
Date:   2016-02-20T00:07:59Z

batch stopping/awaiting tasks in herders

commit 62dda0f2d5a6a049fc8b33de505e4e20af58560c
Author: Jason Gustafson 
Date:   2016-02-20T00:10:30Z

workerId should be worker_id in status response

commit 624e355cb28d76a7cb0efc272f6d3b236329cd73
Author: Jason Gustafson 
Date:   2016-02-22T18:43:20Z

add retry and max in-flight requests config for KafkaBasedLog




> Keep track of connector and task status info, expose it via the REST API
> 
>
> Key: KAFKA-3093
> URL: https://issues.apache.org/jira/browse/KAFKA-3093
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Reporter: jin xing
>Assignee: Jason Gustafson
>
> Related to KAFKA-3054.
> We should keep track of the status of connectors and tasks during their 
> startup and execution, and handle exceptions thrown by them.
> Users should be able to fetch this information via the REST API, and send 
> necessary commands (reconfigure, restart, pause, unpause) to connectors and 
> tasks via the REST API.





[jira] [Commented] (KAFKA-3245) need a way to specify the number of replicas for change log topics

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157720#comment-15157720
 ] 

ASF GitHub Bot commented on KAFKA-3245:
---

GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/948

KAFKA-3245: config for changelog replication factor

@guozhangwang 


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka changelog_topic_replication

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/948.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #948


commit 9f3ec06214d6bdbad5833ffb3b68512ae9c58bbc
Author: Yasuhiro Matsuda 
Date:   2016-02-18T21:37:27Z

change log replication

commit ce5ebe42cdbc79a73fedccc5dbaf3d9c8d03597f
Author: Yasuhiro Matsuda 
Date:   2016-02-22T21:26:28Z

Merge branch 'trunk' of github.com:apache/kafka into 
changelog_topic_replication




> need a way to specify the number of replicas for change log topics
> --
>
> Key: KAFKA-3245
> URL: https://issues.apache.org/jira/browse/KAFKA-3245
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
>
> Currently the number of replicas of auto-created change log topics is one. 
> This makes stream processing not fault-tolerant. A way to specify the number 
> of replicas in the config is desired.





[jira] [Commented] (KAFKA-3248) AdminClient Blocks Forever in send Method

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157515#comment-15157515
 ] 

ASF GitHub Bot commented on KAFKA-3248:
---

GitHub user WarrenGreen opened a pull request:

https://github.com/apache/kafka/pull/946

KAFKA-3248: AdminClient Blocks Forever in send Method

 Block while in bounds of timeout

 Author: Warren Green 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/WarrenGreen/kafka KAFKA-3248

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/946.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #946


commit aa5e556413e32b4d8205cf9f5c5379321e659526
Author: Warren Green 
Date:   2016-02-22T19:09:51Z

KAFKA-3248: AdminClient Blocks Forever in send Method

 Block while in bounds of timeout

 Author: Warren Green 




> AdminClient Blocks Forever in send Method
> -
>
> Key: KAFKA-3248
> URL: https://issues.apache.org/jira/browse/KAFKA-3248
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.9.0.0
>Reporter: John Tylwalk
>Priority: Minor
>
> AdminClient will block forever when performing operations involving the 
> {{send()}} method, due to usage of 
> {{ConsumerNetworkClient.poll(RequestFuture)}} - which blocks indefinitely.
> Suggested fix is to use {{ConsumerNetworkClient.poll(RequestFuture, long 
> timeout)}} in {{AdminClient.send()}}
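The shape of the suggested fix can be sketched as a deadline-bounded retry loop (a hypothetical helper for illustration, not AdminClient's actual code): instead of blocking until a future completes, poll in small slices and give up once the timeout elapses.

```python
import time

def poll_with_deadline(try_complete, timeout_ms, tick_ms=10):
    """Sketch of the suggested fix: poll in bounded slices and give up
    once the deadline passes, instead of blocking forever
    (hypothetical helper, not AdminClient's actual code)."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        result = try_complete()
        if result is not None:
            return result
        time.sleep(tick_ms / 1000.0)
    raise TimeoutError("request did not complete within %d ms" % timeout_ms)

attempts = []
def eventually_ready():
    attempts.append(1)
    return "response" if len(attempts) >= 3 else None

reply = poll_with_deadline(eventually_ready, timeout_ms=2000)
```

A request that never completes now surfaces a TimeoutError to the caller rather than hanging the thread.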





[jira] [Commented] (KAFKA-3279) SASL implementation checks for unused System property java.security.auth.login.config

2016-02-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163226#comment-15163226
 ] 

ASF GitHub Bot commented on KAFKA-3279:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/967

KAFKA-3279: Remove checks for JAAS system property

JAAS configuration may be set using other methods, so the check for the 
System property doesn't always match where the actual configuration used by 
Kafka is loaded from.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-3279

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/967.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #967


commit 16f17a90412a8f66a942e0c6736578b95ef7942b
Author: Rajini Sivaram 
Date:   2016-02-24T15:39:09Z

KAFKA-3279: Remove checks for system property
java.security.auth.login.config




> SASL implementation checks for unused System property 
> java.security.auth.login.config
> -
>
> Key: KAFKA-3279
> URL: https://issues.apache.org/jira/browse/KAFKA-3279
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> In many environments (eg. JEE containers), JAAS configuration may be set 
> using methods different from the System property 
> {{java.security.auth.login.config}}. While Kafka obtains JAAS configuration 
> correctly using {{Configuration.getConfiguration()}},  an exception is thrown 
> if the System property {{java.security.auth.login.config}} is not set even 
> when the property is never used. There are also misleading error messages 
> that refer to the value of this property, which may or may not be the 
> configuration for which the error is being reported. 





[jira] [Commented] (KAFKA-3278) clientId is not unique in producer/consumer registration leads to mbean warning

2016-02-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15162775#comment-15162775
 ] 

ASF GitHub Bot commented on KAFKA-3278:
---

GitHub user tomdearman opened a pull request:

https://github.com/apache/kafka/pull/965

KAFKA-3278 add thread number to clientId passed into StreamThread



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tomdearman/kafka KAFKA-3278

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/965.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #965


commit 811b258116d34b3116ba65f1fed301731ccbe233
Author: tomdearman 
Date:   2016-02-24T10:00:19Z

KAFKA-3278 add thread number to clientId passed into StreamThread




> clientId is not unique in producer/consumer registration leads to mbean 
> warning
> ---
>
> Key: KAFKA-3278
> URL: https://issues.apache.org/jira/browse/KAFKA-3278
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Affects Versions: 0.9.0.1
> Environment: Mac OS
>Reporter: Tom Dearman
>Assignee: Tom Dearman
>Priority: Minor
> Fix For: 0.10.0.0
>
>
> The clientId passed through to StreamThread is not unique. It is used to 
> create consumers and producers, which in turn try to register MBeans; this 
> leads to a warning that the MBean is already registered.
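The fix the PR title describes can be sketched as appending a per-thread counter to the base clientId (the suffix format below is illustrative, not necessarily the one the patch uses), so each thread's clients register under distinct MBean names.

```python
from itertools import count

class ClientIdSuffixer:
    """Sketch of the fix: derive a per-thread clientId by appending a
    thread number, so each thread's consumers/producers register
    distinct MBeans (suffix format is illustrative)."""
    def __init__(self, base_client_id):
        self.base = base_client_id
        self._counter = count(1)

    def next_client_id(self):
        return "%s-StreamThread-%d" % (self.base, next(self._counter))

ids = ClientIdSuffixer("my-streams-app")
first = ids.next_client_id()
second = ids.next_client_id()
```

Two StreamThreads created from the same config now get distinct ids instead of colliding on the same MBean name.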





[jira] [Commented] (KAFKA-3277) Update trunk version to be 0.10.0.0-SNAPSHOT

2016-02-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163348#comment-15163348
 ] 

ASF GitHub Bot commented on KAFKA-3277:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/963


> Update trunk version to be 0.10.0.0-SNAPSHOT
> 
>
> Key: KAFKA-3277
> URL: https://issues.apache.org/jira/browse/KAFKA-3277
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>






[jira] [Commented] (KAFKA-3093) Keep track of connector and task status info, expose it via the REST API

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15160279#comment-15160279
 ] 

ASF GitHub Bot commented on KAFKA-3093:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/920


> Keep track of connector and task status info, expose it via the REST API
> 
>
> Key: KAFKA-3093
> URL: https://issues.apache.org/jira/browse/KAFKA-3093
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Reporter: jin xing
>Assignee: Jason Gustafson
> Fix For: 0.9.1.0
>
>
> Related to KAFKA-3054.
> We should keep track of the status of connectors and tasks during their 
> startup and execution, and handle exceptions thrown by them.
> Users should be able to fetch this information via the REST API, and send 
> necessary commands (reconfigure, restart, pause, unpause) to connectors and 
> tasks via the REST API.





[jira] [Commented] (KAFKA-3259) KIP-31/KIP-32 clean-ups

2016-02-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163566#comment-15163566
 ] 

ASF GitHub Bot commented on KAFKA-3259:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/943


> KIP-31/KIP-32 clean-ups
> ---
>
> Key: KAFKA-3259
> URL: https://issues.apache.org/jira/browse/KAFKA-3259
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> During review, I found a few things that could potentially be improved but 
> were not important enough to block the PR from being merged.





[jira] [Commented] (KAFKA-3018) Kafka producer hangs on producer.close() call if the producer topic contains single quotes in the topic name

2016-02-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163922#comment-15163922
 ] 

ASF GitHub Bot commented on KAFKA-3018:
---

Github user choang closed the pull request at:

https://github.com/apache/kafka/pull/961


> Kafka producer hangs on producer.close() call if the producer topic contains 
> single quotes in the topic name
> 
>
> Key: KAFKA-3018
> URL: https://issues.apache.org/jira/browse/KAFKA-3018
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2.0
>Reporter: kanav anand
>Assignee: Jun Rao
>
> Creating a topic with quotes in the name throws an exception, but if you try 
> to close a producer configured with a topic name containing quotes, the 
> producer hangs.
> This can be easily reproduced by setting topic.name for a producer to a 
> string containing single quotes.
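Client-side validation of the kind the PR title describes can be sketched as follows (the character set and length limit here are assumptions for illustration, not the exact rules the patch adds): reject names with characters outside the legal set before the record ever reaches the producer.

```python
import re

MAX_NAME_LENGTH = 249          # assumption; the broker's limit has varied by version
LEGAL_CHARS = re.compile(r"^[a-zA-Z0-9._-]+$")

def validate_topic(topic):
    """Sketch of client-side topic name validation (illustrative rules,
    not the exact validator the PR adds)."""
    if not topic:
        raise ValueError("topic name must be non-empty")
    if len(topic) > MAX_NAME_LENGTH:
        raise ValueError("topic name too long")
    if not LEGAL_CHARS.match(topic):
        raise ValueError("illegal characters in topic name: %r" % topic)
    return topic

ok = validate_topic("my.topic-1")
try:
    validate_topic("bad'topic'")
    rejected = False
except ValueError:
    rejected = True
```

Failing fast at record-construction time turns the silent hang into an immediate, actionable error.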





[jira] [Commented] (KAFKA-3018) Kafka producer hangs on producer.close() call if the producer topic contains single quotes in the topic name

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15160018#comment-15160018
 ] 

ASF GitHub Bot commented on KAFKA-3018:
---

GitHub user choang opened a pull request:

https://github.com/apache/kafka/pull/961

KAFKA-3018: added topic name validator to ProducerRecord

Added validation for topic name when creating a `ProducerRecord`, and added 
corresponding tests.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/choang/kafka kafka-3018

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/961.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #961


commit de2e599268189921bf768de66bf003e56b9e879f
Author: Chi Hoang 
Date:   2016-02-23T22:06:58Z

KAFKA-3018: added topic name validator to ProducerRecord




> Kafka producer hangs on producer.close() call if the producer topic contains 
> single quotes in the topic name
> 
>
> Key: KAFKA-3018
> URL: https://issues.apache.org/jira/browse/KAFKA-3018
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2.0
>Reporter: kanav anand
>Assignee: Jun Rao
>
> Creating a topic with quotes in the name throws an exception, but if you try 
> to close a producer configured with a topic name containing quotes, the 
> producer hangs.
> This can be easily reproduced by setting topic.name for a producer to a 
> string containing single quotes.





[jira] [Commented] (KAFKA-3007) Implement max.poll.records for new consumer (KIP-41)

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159888#comment-15159888
 ] 

ASF GitHub Bot commented on KAFKA-3007:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/931


> Implement max.poll.records for new consumer (KIP-41)
> 
>
> Key: KAFKA-3007
> URL: https://issues.apache.org/jira/browse/KAFKA-3007
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: aarti gupta
>Assignee: Jason Gustafson
> Fix For: 0.9.1.0
>
>
> Currently, consumer.poll(timeout) returns all messages that have not been 
> acked since the last fetch.
> The only way to process a single message is to throw away all but the first 
> message in the list.
> This means we are required to fetch all messages into memory. Coupled with 
> the client not being thread-safe (i.e. we cannot use a different thread to 
> ack messages), this makes it hard to consume messages when the order of 
> message arrival is important and a large number of messages are pending to 
> be consumed.
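The effect of max.poll.records can be sketched with a simplified model (not the consumer's actual code): fetched records are buffered, and each poll hands back at most N of them, so the application controls its batch size.

```python
class BoundedFetcher:
    """Sketch of max.poll.records: buffer fetched records and hand back
    at most N per poll (simplified model, not the consumer's code)."""
    def __init__(self, max_poll_records):
        self.max_poll_records = max_poll_records
        self._buffer = []

    def add_fetched(self, records):
        self._buffer.extend(records)

    def poll(self):
        batch = self._buffer[:self.max_poll_records]
        self._buffer = self._buffer[self.max_poll_records:]
        return batch

f = BoundedFetcher(max_poll_records=2)
f.add_fetched(["m1", "m2", "m3", "m4", "m5"])
batches = [f.poll(), f.poll(), f.poll()]
```

Setting the bound to 1 gives exactly the single-message processing the reporter asks for, without discarding the rest of the fetch.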





[jira] [Commented] (KAFKA-3272) Add debugging options to kafka-run-class.sh so we can easily run remote debugging

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159905#comment-15159905
 ] 

ASF GitHub Bot commented on KAFKA-3272:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/955


> Add debugging options to kafka-run-class.sh so we can easily run remote 
> debugging
> -
>
> Key: KAFKA-3272
> URL: https://issues.apache.org/jira/browse/KAFKA-3272
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.9.0.1
>Reporter: Christian Posta
>Priority: Minor
> Fix For: 0.9.1.0
>
>
> Add a KAFKA_DEBUG environment variable to easily enable remote debugging





[jira] [Commented] (KAFKA-2698) add paused API

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15160048#comment-15160048
 ] 

ASF GitHub Bot commented on KAFKA-2698:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/962

KAFKA-2698: Add paused() method to o.a.k.c.c.Consumer



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2698

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/962.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #962


commit 21c3cafc714c4b25673b87a9b62c81a87d720f65
Author: Tom Lee 
Date:   2015-11-02T00:58:38Z

KAFKA-2698: Add paused() method to o.a.k.c.c.Consumer




> add paused API
> --
>
> Key: KAFKA-2698
> URL: https://issues.apache.org/jira/browse/KAFKA-2698
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Priority: Critical
> Fix For: 0.9.1.0
>
>
> org.apache.kafka.clients.consumer.Consumer tends to follow a pattern of 
> having an action API paired with a query API:
> subscribe() has subscription()
> assign() has assignment()
> There's no analogous API for pause.
> Should there be a paused() API returning Set<TopicPartition>?
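The action/query pairing the ticket asks for can be sketched like this (a hypothetical stand-in for the Consumer interface, with partitions modeled as (topic, partition) tuples): pause() and resume() mutate state, paused() queries it, mirroring subscribe()/subscription() and assign()/assignment().

```python
class PauseState:
    """Sketch of the requested action/query pairing: pause()/resume()
    mutate state, paused() queries it (hypothetical stand-in for the
    Consumer interface)."""
    def __init__(self, assignment):
        self._assignment = set(assignment)
        self._paused = set()

    def pause(self, *partitions):
        # Only assigned partitions can be paused.
        self._paused.update(p for p in partitions if p in self._assignment)

    def resume(self, *partitions):
        self._paused.difference_update(partitions)

    def paused(self):
        return set(self._paused)

c = PauseState({("t", 0), ("t", 1)})
c.pause(("t", 0))
after_pause = c.paused()
c.resume(("t", 0))
after_resume = c.paused()
```

The query method returns a defensive copy, so callers can inspect the paused set without mutating internal state.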





[jira] [Commented] (KAFKA-3256) Large number of system test failures

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157780#comment-15157780
 ] 

ASF GitHub Bot commented on KAFKA-3256:
---

GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/949

KAFKA-3256: Add print.timestamp option to console consumer.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3256

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/949.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #949


commit f4a2ebd5feb75cde8b44b3cb1512152805259383
Author: Jiangjie Qin 
Date:   2016-02-21T06:03:27Z

KAFKA-3256: Add print.timestamp option to console consumer. It is disabled 
by default




> Large number of system test failures
> 
>
> Key: KAFKA-3256
> URL: https://issues.apache.org/jira/browse/KAFKA-3256
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Jiangjie Qin
>
> Confluent's nightly run of the kafka system tests reported a large number of 
> failures beginning 2/20/2016
> Test run: 2016-02-19--001.1455897182--apache--trunk--eee9522/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-19--001.1455897182--apache--trunk--eee9522/report.html
> Pass: 136
> Fail: 0
> Test run: 2016-02-20--001.1455979842--apache--trunk--5caa800/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-20--001.1455979842--apache--trunk--5caa800/report.html
> Pass: 72
> Fail: 64
> I.e. trunk@eee9522 was the last passing run, and trunk@5caa800 had a large 
> number of failures.
> Given its complexity, the most likely culprit is 45c8195fa, and I confirmed 
> this is the first commit with failures on a small number of tests.
> [~becket_qin] do you mind investigating?
> {code}
> commit 5caa800e217c6b83f62ee3e6b5f02f56e331b309
> Author: Jun Rao 
> Date:   Fri Feb 19 09:40:59 2016 -0800
> trivial fix to authorization CLI table
> commit 45c8195fa14c766b200c720f316836dbb84e9d8b
> Author: Jiangjie Qin 
> Date:   Fri Feb 19 07:56:40 2016 -0800
> KAFKA-3025; Added timetamp to Message and use relative offset.
> commit eee95228fabe1643baa016a2d49fb0a9fe2c66bd
> Author: Yasuhiro Matsuda 
> Date:   Thu Feb 18 09:39:30 2016 +0800
> MINOR: remove streams config params from producer/consumer configs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3277) Update trunk version to be 0.10.0.0-SNAPSHOT

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15160241#comment-15160241
 ] 

ASF GitHub Bot commented on KAFKA-3277:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/963

KAFKA-3277; Update trunk version to be 0.10.0.0-SNAPSHOT

Also update `kafka-merge-pr.py` and `tests/kafkatest/__init__.py`.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka update-trunk-0.10.0.0-SNAPSHOT

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/963.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #963


commit 679fc72f34d38e032cab56cce6cf70e0fced0e4f
Author: Ismael Juma 
Date:   2016-02-24T06:19:14Z

Update trunk version to be 0.10.0.0-SNAPSHOT

Also update `kafka-merge-pr.py` and
`tests/kafkatest/__init__.py`.




> Update trunk version to be 0.10.0.0-SNAPSHOT
> 
>
> Key: KAFKA-3277
> URL: https://issues.apache.org/jira/browse/KAFKA-3277
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>






[jira] [Commented] (KAFKA-3272) Add debugging options to kafka-run-class.sh so we can easily run remote debugging

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159515#comment-15159515
 ] 

ASF GitHub Bot commented on KAFKA-3272:
---

GitHub user christian-posta opened a pull request:

https://github.com/apache/kafka/pull/955

Fix for https://issues.apache.org/jira/browse/KAFKA-3272 to easily enable 
remote debugging to Kafka tools scripts

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/christian-posta/kafka 
ceposta-enable-jvm-debugging-opts

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/955.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #955


commit 89be18d945274ab8a51f51492bfe2943e75638bc
Author: Christian Posta 
Date:   2016-02-23T19:39:56Z

Fix for https://issues.apache.org/jira/browse/KAFKA-3272 to easily enable 
remote debugging to Kafka tools scripts




> Add debugging options to kafka-run-class.sh so we can easily run remote 
> debugging
> -
>
> Key: KAFKA-3272
> URL: https://issues.apache.org/jira/browse/KAFKA-3272
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.9.0.1
>Reporter: Christian Posta
>Priority: Minor
>
> Add a KAFKA_DEBUG environment variable to easily enable remote debugging





[jira] [Commented] (KAFKA-3242) "Add Partition" log message doesn't actually indicate adding a partition

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159629#comment-15159629
 ] 

ASF GitHub Bot commented on KAFKA-3242:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/924


> "Add Partition" log message doesn't actually indicate adding a partition
> 
>
> Key: KAFKA-3242
> URL: https://issues.apache.org/jira/browse/KAFKA-3242
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
> Fix For: 0.9.1.0
>
>
> We log:
> "Add Partition triggered " ... " for path "...
> on every trigger of addPartitionListener
> The listener is triggered not just when a partition is added but on any 
> modification of the partition assignment in ZK, so the message is a bit 
> misleading. Renaming the listener to updatePartitionListener and logging 
> something more meaningful would be nice.





[jira] [Commented] (KAFKA-3214) Add consumer system tests for compressed topics

2016-02-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169889#comment-15169889
 ] 

ASF GitHub Bot commented on KAFKA-3214:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/958


> Add consumer system tests for compressed topics
> ---
>
> Key: KAFKA-3214
> URL: https://issues.apache.org/jira/browse/KAFKA-3214
> Project: Kafka
>  Issue Type: Test
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Anna Povzner
> Fix For: 0.10.0.0
>
>
> As far as I can tell, we don't have any ducktape tests which verify 
> correctness when compression is enabled. If we did, we might have caught 
> KAFKA-3179 earlier.





[jira] [Commented] (KAFKA-3300) Calculate the initial/max size of offset index files and reduce the memory footprint for memory mapped index files.

2016-02-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15171293#comment-15171293
 ] 

ASF GitHub Bot commented on KAFKA-3300:
---

GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/983

KAFKA-3300: Avoid over allocating disk space and memory for index files.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3300

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/983.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #983


commit b49a9af4c19513e458ced92ef49504f7a1c237df
Author: Jiangjie Qin 
Date:   2016-02-29T01:39:18Z

KAFKA-3300: Avoid over allocating disk space and memory for index files.




> Calculate the initial/max size of offset index files and reduce the memory 
> footprint for memory mapped index files.
> ---
>
> Key: KAFKA-3300
> URL: https://issues.apache.org/jira/browse/KAFKA-3300
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.1
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.0.0
>
>
> Currently the initial/max size of offset index file is configured by 
> {{log.index.max.bytes}}. This will be the offset index file size for active 
> log segment until it rolls out. 
> Theoretically, we can calculate the upper bound of offset index size using 
> the following formula:
> {noformat}
> log.segment.bytes / index.interval.bytes * 8
> {noformat}
> With default settings, the bytes needed for an offset index are 1GB / 4K * 8 
> = 2MB, while the default log.index.max.bytes is 10MB.
> This means we are over-allocating at least 8MB on disk and mapping it to 
> memory.
> We can probably do the following:
> 1. When creating a new offset index, calculate the size using the above 
> formula,
> 2. If the result in (1) is greater than log.index.max.bytes, we allocate 
> log.index.max.bytes instead.
> This should be able to significantly save memory if a broker has a lot of 
> partitions on it.
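The sizing rule proposed above can be sketched as a small calculation (an illustrative sketch; `offsetIndexSizeBytes` and its parameter names are made up here, not Kafka's actual code):

```java
public class IndexSizing {
    static final int ENTRY_BYTES = 8; // each offset index entry takes 8 bytes

    // Upper bound on the index size: one entry per index.interval.bytes of
    // log data, capped at log.index.max.bytes.
    public static long offsetIndexSizeBytes(long segmentBytes,
                                            long indexIntervalBytes,
                                            long indexMaxBytes) {
        long upperBound = segmentBytes / indexIntervalBytes * ENTRY_BYTES;
        return Math.min(upperBound, indexMaxBytes);
    }

    public static void main(String[] args) {
        long oneGiB = 1024L * 1024 * 1024;
        long fourKiB = 4 * 1024;
        long tenMiB = 10L * 1024 * 1024;
        // 1 GiB / 4 KiB * 8 = 2 MiB, well under the 10 MiB default cap
        System.out.println(offsetIndexSizeBytes(oneGiB, fourKiB, tenMiB)); // 2097152
    }
}
```

With defaults this allocates 2MB instead of 10MB per active segment, so the ~8MB saving per partition compounds quickly on brokers hosting many partitions.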





[jira] [Commented] (KAFKA-3286) Add plugin to quickly check for outdated dependencies

2016-02-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167486#comment-15167486
 ] 

ASF GitHub Bot commented on KAFKA-3286:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/973

KAFKA-3286: Add plugin to quickly check for outdated dependencies

Adds a gradle task to generate a report of outdated release dependencies:
`gradle dependencyUpdates`

Updates a few minor versions.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka outdated-deps

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/973.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #973


commit 892f0148bebc724b73d84545466c8bfe29fe32cf
Author: Grant Henke 
Date:   2016-02-25T17:17:18Z

KAFKA-3286: Add plugin to quickly check for outdated dependencies

Adds a gradle task to generate a report of outdated release dependencies:
`gradle dependencyUpdates`

Updates a few minor versions.




> Add plugin to quickly check for outdated dependencies
> -
>
> Key: KAFKA-3286
> URL: https://issues.apache.org/jira/browse/KAFKA-3286
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.0.0
>
>






[jira] [Commented] (KAFKA-3280) KafkaConsumer Javadoc contains misleading description of heartbeat behavior and correct use

2016-02-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166588#comment-15166588
 ] 

ASF GitHub Bot commented on KAFKA-3280:
---

GitHub user rwhaling opened a pull request:

https://github.com/apache/kafka/pull/968

KAFKA-3280: KafkaConsumer Javadoc contains misleading description of 
heartbeat behavior and correct use

This is my original work and I license the work to the project under the 
project's open source license.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rwhaling/kafka 
docs/kafkaconsumer-heartbeat-doc-improvement

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/968.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #968


commit fd789dc9b0d9d7c7aa724ab5f4aa0d49a95fcff2
Author: Richard Whaling 
Date:   2016-02-25T02:13:32Z

clarified documentation of KafkaConsumer heartbeat behavior




> KafkaConsumer Javadoc contains misleading description of heartbeat behavior 
> and correct use
> ---
>
> Key: KAFKA-3280
> URL: https://issues.apache.org/jira/browse/KAFKA-3280
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Richard Whaling
>Assignee: Neha Narkhede
>  Labels: doc
> Fix For: 0.10.0.0
>
>   Original Estimate: 0.25h
>  Remaining Estimate: 0.25h
>
> The KafkaConsumer Javadoc says that: "The consumer will automatically ping 
> the cluster periodically, which lets the cluster know that it is alive. As 
> long as the consumer is able to do this it is considered alive and retains 
> the right to consume from the partitions assigned to it."  This is false.  
> The heartbeat process is neither automatic nor periodic.  The consumer 
> heartbeats exactly once when poll() is called.  The consumer's run thread is 
> responsible for calling poll() before session.timeout.ms elapses.
> Based on this misinformation, it is easy for a naive implementer to build a 
> batch-based kafka consumer that takes longer than session.timeout.ms between 
> poll() calls and encounter very ugly rebalance loops that can be very hard to 
> diagnose.  Clarification in the docs would help a lot--I'll submit a patch 
> shortly.
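The failure mode described above — batch processing that overruns session.timeout.ms between poll() calls — can be illustrated with plain timing logic (a sketch with the consumer itself mocked out; `processBatchWithinTimeout` is an illustrative name, not a Kafka API):

```java
import java.util.Arrays;
import java.util.List;

public class PollDeadline {
    // Returns true only if the whole batch is processed before the session
    // timeout would expire, i.e. before the consumer must call poll() again
    // so that the coordinator sees a heartbeat.
    public static boolean processBatchWithinTimeout(List<String> batch,
                                                    long sessionTimeoutMs,
                                                    long perRecordMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + sessionTimeoutMs;
        for (String rec : batch) {
            Thread.sleep(perRecordMs); // stand-in for real per-record work
            if (System.currentTimeMillis() >= deadline) {
                return false; // too slow: the coordinator would trigger a rebalance
            }
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> batch = Arrays.asList("a", "b", "c");
        // 3 records at ~1 ms each easily fit a 1000 ms "session timeout"
        System.out.println(processBatchWithinTimeout(batch, 1000, 1));
        // at ~50 ms each, a 20 ms timeout is blown on the first record
        System.out.println(processBatchWithinTimeout(batch, 20, 50));
    }
}
```

A batch consumer whose per-record cost pushes it over the deadline will keep getting evicted and re-joining, which is exactly the rebalance loop the issue warns about.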





[jira] [Commented] (KAFKA-3257) bootstrap-test-env.sh version check fails when grep has --colour option enabled.

2016-02-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166721#comment-15166721
 ] 

ASF GitHub Bot commented on KAFKA-3257:
---

GitHub user zhuchen1018 opened a pull request:

https://github.com/apache/kafka/pull/969

KAFKA-3257: disable bootstrap-test-env.sh --colour option

@becketqin, when you get a chance, could you take a look at the patch? 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhuchen1018/kafka KAFKA-3257

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/969.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #969


commit 00d99e751e938c82dd9791de74a00634e2f35418
Author: zhuchen1018 
Date:   2016-02-25T04:35:06Z

KAFKA-3257: disable bootstrap-test-env.sh --colour option




> bootstrap-test-env.sh version check fails when grep has --colour option 
> enabled.
> 
>
> Key: KAFKA-3257
> URL: https://issues.apache.org/jira/browse/KAFKA-3257
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 0.9.0.1
>Reporter: Jiangjie Qin
>Assignee: chen zhu
>  Labels: newbie++
> Fix For: 0.10.0.0
>
>
> When checking the versions, we use the following command:
> {code}
> vagrant --version | egrep -o "[0-9]+\.[0-9]+\.[0-9]+"
> {code}
> This does not work if the user's box has the --colour option enabled. In my 
> case it complains:
> Found Vagrant version 1.8.1. Please upgrade to 1.6.4 or higher (see 
> http://www.vagrantup.com for details)
> We should change this line to:
> {code}
> vagrant --version | egrep --colour=never -o "[0-9]+\.[0-9]+\.[0-9]+"
> {code}





[jira] [Commented] (KAFKA-2698) add paused API

2016-02-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166680#comment-15166680
 ] 

ASF GitHub Bot commented on KAFKA-2698:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/962


> add paused API
> --
>
> Key: KAFKA-2698
> URL: https://issues.apache.org/jira/browse/KAFKA-2698
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> org.apache.kafka.clients.consumer.Consumer tends to follow a pattern of 
> having an action API paired with a query API:
> subscribe() has subscription()
> assign() has assignment()
> There's no analogous API for pause.
> Should there be a paused() API returning Set<TopicPartition>?
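The action/query pairing the issue asks for could look like the following toy sketch (an in-memory stand-in, not the real Consumer interface; partitions are plain strings here instead of TopicPartition objects):

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class PausedApiSketch {
    // Toy consumer showing the pairing: pause()/resume() mutate state,
    // paused() queries it -- mirroring subscribe()/subscription() and
    // assign()/assignment().
    public static class ToyConsumer {
        private final Set<String> assigned = new HashSet<>();
        private final Set<String> pausedSet = new HashSet<>();

        public void assign(Collection<String> tps) { assigned.addAll(tps); }
        public Set<String> assignment() { return new HashSet<>(assigned); }

        public void pause(Collection<String> tps) { pausedSet.addAll(tps); }
        public void resume(Collection<String> tps) { pausedSet.removeAll(tps); }
        // The missing query half: which partitions are currently paused?
        public Set<String> paused() { return new HashSet<>(pausedSet); }
    }

    public static void main(String[] args) {
        ToyConsumer c = new ToyConsumer();
        c.assign(Arrays.asList("events-0", "events-1"));
        c.pause(Collections.singletonList("events-0"));
        System.out.println(c.paused()); // [events-0]
        c.resume(Collections.singletonList("events-0"));
        System.out.println(c.paused().isEmpty()); // true
    }
}
```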





[jira] [Commented] (KAFKA-3281) Improve message of stop scripts when no processes are running

2016-02-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166952#comment-15166952
 ] 

ASF GitHub Bot commented on KAFKA-3281:
---

GitHub user sasakitoa opened a pull request:

https://github.com/apache/kafka/pull/971

KAFKA-3281: Improve stop script's message when no processes are running

Stop scripts such as kafka-server-stop.sh log the kill command's error 
message when no processes are running.
This PR changes that message to "No kafka server to stop".

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sasakitoa/kafka 
stop_scripts_says_not_good_message

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/971.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #971


commit d459d199f01b0ddf75363b423f859882b366005a
Author: Sasaki Toru 
Date:   2016-02-25T08:43:13Z

Improve stop script's message when no processes are running.




> Improve message of stop scripts when no processes are running
> -
>
> Key: KAFKA-3281
> URL: https://issues.apache.org/jira/browse/KAFKA-3281
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.1
>Reporter: Sasaki Toru
>Priority: Minor
> Fix For: 0.10.0.0
>
>
> Stop scripts such as kafka-server-stop.sh log the kill command's error 
> message when no processes are running.
> Example(Brokers are not running):
> {code}
> $ bin/kafka-server-stop.sh 
> kill: invalid argument S
> Usage:
>  kill [options] <pid> [...]
> Options:
>  <pid> [...]            send signal to every <pid> listed
>  -<signal>, -s, --signal <signal>
>                         specify the <signal> to be sent
>  -l, --list=[<signal>]  list all signal names, or convert one to a name
>  -L, --table            list all signal names in a nice table
>  -h, --help             display this help and exit
>  -V, --version          output version information and exit
> For more details see kill(1).
> {code}





[jira] [Commented] (KAFKA-3273) MessageFormatter and MessageReader interfaces should be resilient to changes

2016-02-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167214#comment-15167214
 ] 

ASF GitHub Bot commented on KAFKA-3273:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/972

KAFKA-3273; MessageFormatter and MessageReader interfaces should be 
resilient to changes

* Change `MessageFormat.writeTo` to take a `ConsumerRecord`
* Change `MessageReader.readMessage()` to use `ProducerRecord`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-3273-message-formatter-and-reader-resilient

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/972.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #972


commit 45d3fbd88401bafe09b5097c1c039b236fda3be7
Author: Ismael Juma 
Date:   2016-02-25T13:25:48Z

Change `MessageFormat.writeTo` to take a `ConsumerRecord`

commit 4219bfea0f46b7c5498cf507a9421fc80b021709
Author: Ismael Juma 
Date:   2016-02-25T13:50:45Z

Change `MessageReader.readMessage()` to use `ProducerRecord`




> MessageFormatter and MessageReader interfaces should be resilient to changes
> 
>
> Key: KAFKA-3273
> URL: https://issues.apache.org/jira/browse/KAFKA-3273
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> They should use `ConsumerRecord` and `ProducerRecord` as parameters and 
> return types respectively in order to avoid breaking users each time a new 
> parameter is added.
> An additional question is whether we need to maintain compatibility with 
> previous releases. [~junrao] suggested that we do not, but [~ewencp] thought 
> we should.
> Note that the KIP-31/32 change has broken compatibility for 
> `MessageFormatter` so we need to do _something_ for the next release.
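The resilience argument above — pass a record object rather than a growing list of positional parameters — can be sketched as follows (illustrative types, not the actual Kafka interfaces; `ConsumerRecordish` is a made-up stand-in):

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class FormatterSketch {
    // Parameter-object style: adding a field later (e.g. a timestamp) does
    // not break existing formatter implementations, unlike adding another
    // positional argument to writeTo(key, value, out).
    public static class ConsumerRecordish {
        public final String topic;
        public final long offset;
        public final byte[] key;
        public final byte[] value;
        public ConsumerRecordish(String topic, long offset, byte[] key, byte[] value) {
            this.topic = topic; this.offset = offset; this.key = key; this.value = value;
        }
    }

    public interface MessageFormatter {
        void writeTo(ConsumerRecordish record, PrintStream out);
    }

    public static class KeyValueFormatter implements MessageFormatter {
        public void writeTo(ConsumerRecordish r, PrintStream out) {
            out.println(new String(r.key) + "\t" + new String(r.value));
        }
    }

    public static void main(String[] args) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new KeyValueFormatter().writeTo(
            new ConsumerRecordish("t", 0L, "k".getBytes(), "v".getBytes()),
            new PrintStream(buf));
        System.out.print(buf.toString()); // key and value, tab-separated
    }
}
```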





[jira] [Commented] (KAFKA-3291) DumpLogSegment tool should also provide an option to only verify index sanity.

2016-02-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172169#comment-15172169
 ] 

ASF GitHub Bot commented on KAFKA-3291:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/975


> DumpLogSegment tool should also provide an option to only verify index sanity.
> --
>
> Key: KAFKA-3291
> URL: https://issues.apache.org/jira/browse/KAFKA-3291
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
> Fix For: 0.10.0.0
>
>
> DumpLogSegment tool should call index.sanityCheck function as part of index 
> sanity check as that function determines if an index will be rebuilt on 
> restart or not. This is a cheap check, as it only checks the file size, and 
> can help in scenarios where a customer is trying to figure out which index 
> files will be rebuilt on startup, which directly affects broker bootstrap time. 





[jira] [Commented] (KAFKA-3133) Add putIfAbsent function to KeyValueStore

2016-02-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172571#comment-15172571
 ] 

ASF GitHub Bot commented on KAFKA-3133:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/912


> Add putIfAbsent function to KeyValueStore
> -
>
> Key: KAFKA-3133
> URL: https://issues.apache.org/jira/browse/KAFKA-3133
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> Since a local store will only be accessed by a single stream thread, there 
> are no atomicity concerns, and hence this API should be easy to add.
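With single-threaded access, putIfAbsent reduces to an unlocked get-then-put (a sketch with a HashMap standing in for the store; not the actual KeyValueStore code):

```java
import java.util.HashMap;
import java.util.Map;

public class KeyValueStoreSketch {
    private final Map<String, String> map = new HashMap<>();

    public String get(String key) { return map.get(key); }
    public void put(String key, String value) { map.put(key, value); }

    // Single-threaded access means this get-then-put needs no locking.
    // Returns the previous value, or null if the key was absent.
    public String putIfAbsent(String key, String value) {
        String old = get(key);
        if (old == null) {
            put(key, value);
        }
        return old;
    }

    public static void main(String[] args) {
        KeyValueStoreSketch store = new KeyValueStoreSketch();
        System.out.println(store.putIfAbsent("a", "1")); // null
        System.out.println(store.putIfAbsent("a", "2")); // 1
        System.out.println(store.get("a"));              // 1
    }
}
```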





[jira] [Commented] (KAFKA-3236) Honor Producer Configuration "block.on.buffer.full"

2016-02-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172729#comment-15172729
 ] 

ASF GitHub Bot commented on KAFKA-3236:
---

Github user knusbaum closed the pull request at:

https://github.com/apache/kafka/pull/934


> Honor Producer Configuration "block.on.buffer.full"
> ---
>
> Key: KAFKA-3236
> URL: https://issues.apache.org/jira/browse/KAFKA-3236
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>
> In Kafka-0.9, "max.block.ms" is used to control how long the following 
> methods will block.
> KafkaProducer.send() when
>* Buffer is full
>* Metadata is unavailable
> KafkaProducer.partitionsFor() when
>* Metadata is unavailable
> However when "block.on.buffer.full" is set to false, "max.block.ms" is in 
> effect whenever a buffer is requested/allocated from the Producer BufferPool. 
> Instead it should throw a BufferExhaustedException without waiting for 
> "max.block.ms"
> This is particularly useful if a producer application does not wish to block 
> at all on KafkaProducer.send(). We avoid waiting on KafkaProducer.send() 
> when metadata is unavailable by invoking send() only if the producer instance 
> has fetched the metadata for the topic in a different thread using the same 
> producer instance. However "max.block.ms" is still required to specify a 
> timeout for bootstrapping the metadata fetch.
> We should resolve this limitation by decoupling "max.block.ms" and 
> "block.on.buffer.full".
>* "max.block.ms" will be used exclusively for fetching metadata when
> "block.on.buffer.full" = false (in pure non-blocking mode )
>* "max.block.ms" will be applicable to both fetching metadata as well as 
> buffer allocation when "block.on.buffer.full" = true
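The proposed decoupling — fail fast on a full buffer when block.on.buffer.full=false, and bound the wait by max.block.ms otherwise — might look like this (an illustrative sketch, not the actual BufferPool implementation):

```java
public class BufferAllocSketch {
    public static class BufferExhaustedException extends RuntimeException {}

    private final long capacity;
    private long used;
    private final boolean blockOnBufferFull;

    public BufferAllocSketch(long capacity, boolean blockOnBufferFull) {
        this.capacity = capacity;
        this.blockOnBufferFull = blockOnBufferFull;
    }

    // Returns true if the allocation succeeded within maxBlockMs.
    public boolean allocate(long size, long maxBlockMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + maxBlockMs;
        while (used + size > capacity) {
            if (!blockOnBufferFull) {
                // non-blocking mode: fail immediately, never wait max.block.ms
                throw new BufferExhaustedException();
            }
            if (System.currentTimeMillis() >= deadline) return false;
            Thread.sleep(1); // stand-in for waiting on freed buffers
        }
        used += size;
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        BufferAllocSketch pool = new BufferAllocSketch(10, false);
        System.out.println(pool.allocate(8, 100)); // true: it fits
        try {
            pool.allocate(8, 100); // pool is full: throws instead of blocking
        } catch (BufferExhaustedException e) {
            System.out.println("exhausted");
        }
    }
}
```

Under this split, max.block.ms still bounds the metadata fetch in both modes; only the buffer-allocation wait is skipped in non-blocking mode.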





[jira] [Commented] (KAFKA-3236) Honor Producer Configuration "block.on.buffer.full"

2016-02-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172730#comment-15172730
 ] 

ASF GitHub Bot commented on KAFKA-3236:
---

GitHub user knusbaum reopened a pull request:

https://github.com/apache/kafka/pull/934

KAFKA-3236: Honor Producer Configuration "block.on.buffer.full"



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/knusbaum/kafka KAFKA-3236-master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/934.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #934


commit e8b63f045ff5d854d3d3d4bc0751cbba0d37d69f
Author: Sanjiv Raj 
Date:   2016-02-11T22:20:26Z

[ADDHR-1240] Honor block.on.buffer.full producer configuration

commit 50e4f01bc5f2b004533ac60bad5d8c396508e762
Author: Sanjiv Raj 
Date:   2016-02-18T18:43:20Z

Fix failing producer integration test

commit 836afe6159ee3b902a4c809cefd0345f61e6b026
Author: Kyle Nusbaum 
Date:   2016-02-17T17:05:38Z

Updating config documentation.

commit 6009eccb3a65c0a8cc8f441c89d902708475271e
Author: Kyle Nusbaum 
Date:   2016-02-18T21:45:39Z

Fixing TestUtils

commit 5cf40a2065674d72298bdcce2a64ada1c6ca0163
Author: Kyle Nusbaum 
Date:   2016-02-19T16:50:46Z

Merge branch 'trunk' of github.com:apache/kafka into KAFKA-3236-master

commit 6e2d64ee8eedc90efff54fdd952c8d5f98a8b0d5
Author: Kyle Nusbaum 
Date:   2016-02-24T21:31:03Z

Fixing config descriptions.




> Honor Producer Configuration "block.on.buffer.full"
> ---
>
> Key: KAFKA-3236
> URL: https://issues.apache.org/jira/browse/KAFKA-3236
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>
> In Kafka-0.9, "max.block.ms" is used to control how long the following 
> methods will block.
> KafkaProducer.send() when
>* Buffer is full
>* Metadata is unavailable
> KafkaProducer.partitionsFor() when
>* Metadata is unavailable
> However when "block.on.buffer.full" is set to false, "max.block.ms" is in 
> effect whenever a buffer is requested/allocated from the Producer BufferPool. 
> Instead it should throw a BufferExhaustedException without waiting for 
> "max.block.ms"
> This is particularly useful if a producer application does not wish to block 
> at all on KafkaProducer.send(). We avoid waiting on KafkaProducer.send() 
> when metadata is unavailable by invoking send() only if the producer instance 
> has fetched the metadata for the topic in a different thread using the same 
> producer instance. However "max.block.ms" is still required to specify a 
> timeout for bootstrapping the metadata fetch.
> We should resolve this limitation by decoupling "max.block.ms" and 
> "block.on.buffer.full".
>* "max.block.ms" will be used exclusively for fetching metadata when
> "block.on.buffer.full" = false (in pure non-blocking mode )
>* "max.block.ms" will be applicable to both fetching metadata as well as 
> buffer allocation when "block.on.buffer.full" = true





[jira] [Commented] (KAFKA-3192) Add implicit unlimited windowed aggregation for KStream

2016-02-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172703#comment-15172703
 ] 

ASF GitHub Bot commented on KAFKA-3192:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/870


> Add implicit unlimited windowed aggregation for KStream
> ---
>
> Key: KAFKA-3192
> URL: https://issues.apache.org/jira/browse/KAFKA-3192
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> Some users would want to have a convenient way to specify "unlimited windowed 
> aggregation" for KStreams. We can add that as syntax sugar like the 
> following:
> {code}
> KTable aggregateByKey(aggregator)
> {code}
> Where it computes the aggregate WITHOUT windowing, the underlying 
> implementation just uses a RocksDBStore instead of a RocksDBWindowStore, and 
> the returned type will be KTable<K, V>, not KTable<Windowed<K>, V>. 
> With this we can also remove UnlimitedWindows specs.
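The unwindowed aggregate described above amounts to keeping one running value per key in a plain store (a sketch only; a HashMap stands in for the RocksDBStore, and `aggregateByKey` here is a free function, not the Streams DSL method):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BinaryOperator;

public class UnwindowedAggregate {
    // One running aggregate per key with no window dimension: the shape of
    // KTable<K, V>, as opposed to the windowed KTable<Windowed<K>, V>.
    public static <K, V> Map<K, V> aggregateByKey(
            Iterable<? extends Map.Entry<K, V>> stream,
            BinaryOperator<V> aggregator) {
        Map<K, V> table = new HashMap<>();
        for (Map.Entry<K, V> rec : stream) {
            // merge the new value into the running aggregate for this key
            table.merge(rec.getKey(), rec.getValue(), aggregator);
        }
        return table;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> stream = Arrays.asList(
            new SimpleEntry<>("a", 1),
            new SimpleEntry<>("b", 2),
            new SimpleEntry<>("a", 3));
        Map<String, Integer> table = aggregateByKey(stream, Integer::sum);
        System.out.println(table.get("a") + " " + table.get("b")); // 4 2
    }
}
```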





[jira] [Commented] (KAFKA-3307) Add ProtocolVersion request/response and server side handling.

2016-02-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172880#comment-15172880
 ] 

ASF GitHub Bot commented on KAFKA-3307:
---

GitHub user SinghAsDev reopened a pull request:

https://github.com/apache/kafka/pull/986

KAFKA-3307: Add ProtocolVersion request/response and server side handling.

The patch does the following:
1. For unknown requests or protocol versions, the broker sends an empty 
response instead of simply closing the connection.
2. Adds ProtocolVersion request and response, and server-side 
implementation.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3307

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/986.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #986


commit 3765aa58300f5ffc6997cad22e86ad90f383129e
Author: Ashish Singh 
Date:   2016-02-29T00:42:36Z

Init patch. Add req/resp. Add req handling. Add basic ut.

commit e4465910e94cf3300589f94aeeafe55c0ff7ed3e
Author: Ashish Singh 
Date:   2016-02-29T04:23:52Z

Respond with empty response body for invalid requests

commit 2985d6dad7241092c8fe9170d56e85f621d8fa1b
Author: Ashish Singh 
Date:   2016-02-29T04:28:40Z

Remove commented code

commit 3f103945afee264a9a5d663c6a8048e1388896ad
Author: Ashish Singh 
Date:   2016-02-29T22:14:28Z

Add mechanism to deprecate a protocol version. Populate ProtocolVersion's 
apiDeprecatedVersions using this mechanism.




> Add ProtocolVersion request/response and server side handling.
> --
>
> Key: KAFKA-3307
> URL: https://issues.apache.org/jira/browse/KAFKA-3307
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>






[jira] [Commented] (KAFKA-3307) Add ProtocolVersion request/response and server side handling.

2016-02-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172879#comment-15172879
 ] 

ASF GitHub Bot commented on KAFKA-3307:
---

Github user SinghAsDev closed the pull request at:

https://github.com/apache/kafka/pull/986


> Add ProtocolVersion request/response and server side handling.
> --
>
> Key: KAFKA-3307
> URL: https://issues.apache.org/jira/browse/KAFKA-3307
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>






[jira] [Commented] (KAFKA-3278) clientId is not unique in producer/consumer registration leads to mbean warning

2016-02-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168803#comment-15168803
 ] 

ASF GitHub Bot commented on KAFKA-3278:
---

GitHub user tomdearman opened a pull request:

https://github.com/apache/kafka/pull/978

KAFKA-3278 concatenate thread name to clientId when producer and consumers 
config is created

@guozhangwang made the changes as requested, I reverted my original commit 
and that seems to have closed the other pull request - sorry if that mucks up 
the process a bit

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tomdearman/kafka KAFKA-3278

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/978.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #978


commit 172230a76033d22ed9712fa55f454d32fa37286b
Author: tomdearman 
Date:   2016-02-26T10:32:05Z

KAFKA-3278 concatenate thread name to clientId when producer and consumers 
config is created




> clientId is not unique in producer/consumer registration leads to mbean 
> warning
> ---
>
> Key: KAFKA-3278
> URL: https://issues.apache.org/jira/browse/KAFKA-3278
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Affects Versions: 0.9.0.1
> Environment: Mac OS
>Reporter: Tom Dearman
>Assignee: Tom Dearman
>Priority: Minor
> Fix For: 0.10.0.0
>
>
> The clientId passed through to StreamThread is not unique, and it is used to 
> create consumers and producers, which in turn try to register MBeans; this 
> leads to a warning that the MBean is already registered.
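The warning comes from JMX's requirement that ObjectNames be unique per MBeanServer. A minimal, self-contained sketch of the collision and of the thread-name-suffix fix; the domain, type, and client-id values here are demo placeholders, not the actual Kafka metrics names:

```java
import java.lang.management.ManagementFactory;
import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

public class MBeanNameCollision {
    public interface DemoMBean { int getValue(); }
    public static class Demo implements DemoMBean {
        @Override public int getValue() { return 42; }
    }

    /** Try to register a demo MBean under the given name; false if the name is taken. */
    public static boolean tryRegister(String name) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        try {
            server.registerMBean(new StandardMBean(new Demo(), DemoMBean.class),
                                 new ObjectName(name));
            return true;
        } catch (InstanceAlreadyExistsException e) {
            // This is what surfaces as the "mbean already registered" warning.
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        // Two stream threads sharing one clientId produce the same ObjectName:
        System.out.println(tryRegister("kafka.demo:type=metrics,client-id=my-app")); // true
        System.out.println(tryRegister("kafka.demo:type=metrics,client-id=my-app")); // false
        // Suffixing the thread name, as this PR does, keeps the names unique:
        System.out.println(tryRegister("kafka.demo:type=metrics,client-id=my-app-StreamThread-1")); // true
    }
}
```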





[jira] [Commented] (KAFKA-3278) clientId is not unique in producer/consumer registration leads to mbean warning

2016-02-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168748#comment-15168748
 ] 

ASF GitHub Bot commented on KAFKA-3278:
---

Github user tomdearman closed the pull request at:

https://github.com/apache/kafka/pull/965


> clientId is not unique in producer/consumer registration leads to mbean 
> warning
> ---
>
> Key: KAFKA-3278
> URL: https://issues.apache.org/jira/browse/KAFKA-3278
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Affects Versions: 0.9.0.1
> Environment: Mac OS
>Reporter: Tom Dearman
>Assignee: Tom Dearman
>Priority: Minor
> Fix For: 0.10.0.0
>
>
> The clientId passed through to StreamThread is not unique, and it is used to 
> create consumers and producers, which in turn try to register MBeans; this 
> leads to a warning that the MBean is already registered.





[jira] [Commented] (KAFKA-3299) KafkaConnect: DistributedHerder shouldn't wait forever to read configs after rebalance

2016-02-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170172#comment-15170172
 ] 

ASF GitHub Bot commented on KAFKA-3299:
---

GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/981

KAFKA-3299: Ensure that reading config log on rebalance doesn't hang the 
herder



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-3299

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/981.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #981


commit 319672d563f31fb7df85645fd802acc4e1151f75
Author: Gwen Shapira 
Date:   2016-02-27T00:07:38Z

Ensure that reading config log on rebalance doesn't hang the herder




> KafkaConnect: DistributedHerder shouldn't wait forever to read configs after 
> rebalance
> --
>
> Key: KAFKA-3299
> URL: https://issues.apache.org/jira/browse/KAFKA-3299
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Right now, the handleRebalance code calls readConfigToEnd with timeout of 
> MAX_INT if it isn't the leader.
> The normal workerSyncTimeoutMs is probably sufficient.
> At least this allows a worker to time-out, get back to the tick() loop and 
> check the "stopping" flag to see if it should shut down, to prevent it from 
> hanging forever.
> It doesn't resolve the question of what we should do with a worker that 
> repeatedly fails to read configuration.
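The fix described above is essentially "replace an unbounded wait with a bounded one, so the loop regains control and can observe the stopping flag". A minimal sketch of that pattern; the class and method names are illustrative, not the actual DistributedHerder code:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class BoundedConfigRead {
    private final AtomicBoolean stopping = new AtomicBoolean(false);
    private final long syncTimeoutMs;

    public BoundedConfigRead(long syncTimeoutMs) { this.syncTimeoutMs = syncTimeoutMs; }

    public void stop() { stopping.set(true); }

    /**
     * Wait for the config read to finish, but only up to syncTimeoutMs per attempt,
     * so the caller's tick() loop can observe the stopping flag between attempts.
     * Returns true if the read completed, false if we gave up because of shutdown.
     */
    public boolean readConfigToEnd(CountDownLatch readDone) throws InterruptedException {
        while (!stopping.get()) {
            if (readDone.await(syncTimeoutMs, TimeUnit.MILLISECONDS))
                return true;
            // Timed out: fall through, re-check the stopping flag, then retry.
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        BoundedConfigRead herder = new BoundedConfigRead(50);
        CountDownLatch neverDone = new CountDownLatch(1); // simulates a read that hangs
        new Thread(() -> {
            try { Thread.sleep(120); } catch (InterruptedException ignored) { }
            herder.stop(); // shutdown request arrives while the read is stuck
        }).start();
        System.out.println(herder.readConfigToEnd(neverDone)); // false: we escaped the hang
    }
}
```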





[jira] [Commented] (KAFKA-3278) clientId is not unique in producer/consumer registration leads to mbean warning

2016-02-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170045#comment-15170045
 ] 

ASF GitHub Bot commented on KAFKA-3278:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/978


> clientId is not unique in producer/consumer registration leads to mbean 
> warning
> ---
>
> Key: KAFKA-3278
> URL: https://issues.apache.org/jira/browse/KAFKA-3278
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Affects Versions: 0.9.0.1
> Environment: Mac OS
>Reporter: Tom Dearman
>Assignee: Tom Dearman
>Priority: Minor
> Fix For: 0.10.0.0
>
>
> The clientId passed through to StreamThread is not unique and this is used to 
> create consumers and producers, which in turn try to register mbeans, this 
> leads to a warn that mbean already registered.





[jira] [Commented] (KAFKA-3201) Add system test for KIP-31 and KIP-32 - Upgrade Test

2016-02-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170091#comment-15170091
 ] 

ASF GitHub Bot commented on KAFKA-3201:
---

GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/980

KAFKA-3201: Added rolling upgrade system tests from 0.8 and 0.9 to 0.10

Three main tests:
1. Setup: Producer (0.8) → Kafka Cluster → Consumer (0.8)
First rolling bounce: Set inter.broker.protocol.version = 0.8 and 
message.format.version = 0.8
Second rolling bounce: use the latest (default) inter.broker.protocol.version 
and message.format.version
2. Setup: Producer (0.9) → Kafka Cluster → Consumer (0.9)
First rolling bounce: Set inter.broker.protocol.version = 0.9 and 
message.format.version = 0.9
Second rolling bounce: use the latest (default) inter.broker.protocol.version 
and message.format.version
3. Setup: Producer (0.9) → Kafka Cluster → Consumer (0.9)
First rolling bounce: Set inter.broker.protocol.version = 0.9 and 
message.format.version = 0.9
Second rolling bounce: use inter.broker.protocol.version = 0.10 and 
message.format.version = 0.9

Plus a couple of variations of these tests using the old or new consumer, and 
no compression or snappy compression. 

Also added optional extra verification to ProduceConsumeValidate test to 
verify that all acks received by producer are successful. 
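The two-phase bounce in each scenario maps onto broker configuration roughly as follows. This is a sketch: the broker-side property for the message format is `log.message.format.version` in the shipped configuration docs (the test description abbreviates it), and the version strings are illustrative.

```properties
# Phase 1: bounce each broker with both versions pinned to the old release
inter.broker.protocol.version=0.9.0
log.message.format.version=0.9.0

# Phase 2: bounce again, letting the inter-broker protocol move forward while
# the message format stays old until all clients are upgraded
inter.broker.protocol.version=0.10.0
# log.message.format.version left at 0.9.0 (scenario 3) or removed for the default
```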

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka-3201-02

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/980.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #980


commit 35e7362e316419675cf6614787fcc2d12fae6e74
Author: Anna Povzner 
Date:   2016-02-26T22:09:14Z

KAFKA-3201: Added rolling upgrade system tests from 0.8 and 0.9 to 0.10

commit 208a50458ecff8ef1bf9b601c1162e796ad7de28
Author: Anna Povzner 
Date:   2016-02-26T22:59:22Z

Upgrade system tests ensure all producer acks are successful

commit dce6ff016c575aae30587c92f71159886158972c
Author: Anna Povzner 
Date:   2016-02-26T23:18:37Z

Using one producer in upgrade test, because --prefixValue is only supported 
in verifiable producer in trunk




> Add system test for KIP-31 and KIP-32 - Upgrade Test
> 
>
> Key: KAFKA-3201
> URL: https://issues.apache.org/jira/browse/KAFKA-3201
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Jiangjie Qin
>Assignee: Anna Povzner
> Fix For: 0.10.0.0
>
>
> This system test should test the procedure to upgrade a Kafka broker from 
> 0.8.x and 0.9.0 to 0.10.0
> The procedure is documented in KIP-32:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-32+-+Add+timestamps+to+Kafka+message





[jira] [Commented] (KAFKA-3260) Increase the granularity of commit for SourceTask

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157935#comment-15157935
 ] 

ASF GitHub Bot commented on KAFKA-3260:
---

GitHub user jcustenborder opened a pull request:

https://github.com/apache/kafka/pull/950

KAFKA-3260 - Added SourceTask.commitRecord

Added commitRecord(SourceRecord record) to SourceTask. This method is 
called during the callback from producer.send() when the message has been sent 
successfully. Added commitTaskRecord(SourceRecord record) to WorkerSourceTask 
to handle calling commitRecord on the SourceTask. Updated tests for calls to 
commitRecord.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jcustenborder/kafka KAFKA-3260

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/950.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #950


commit f4e1826f659af99e39189d45af214fd9f030b77b
Author: Jeremy Custenborder 
Date:   2016-02-22T23:22:02Z

KAFKA-3260 - Added commitRecord(SourceRecord record) to SourceTask. This 
method is called during the callback from producer.send() when the message has been sent 
successfully. Added commitTaskRecord(SourceRecord record) to WorkerSourceTask 
to handle calling commitRecord on the SourceTask. Updated tests for calls to 
commitRecord.




> Increase the granularity of commit for SourceTask
> -
>
> Key: KAFKA-3260
> URL: https://issues.apache.org/jira/browse/KAFKA-3260
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Affects Versions: 0.9.0.1
>Reporter: Jeremy Custenborder
>Assignee: Ewen Cheslack-Postava
>
> As of right now when commit is called the developer does not know which 
> messages have been accepted since the last poll. I'm proposing that we extend 
> the SourceTask class to allow records to be committed individually.
> {code}
> public void commitRecord(SourceRecord record) throws InterruptedException 
> {
> // This space intentionally left blank.
> }
> {code}
> This method could be overridden to receive a SourceRecord during the callback 
> of producer.send. This will give us messages that have been successfully 
> written to Kafka. The developer then has the capability to commit messages to 
> the source individually or in batch.   
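A sketch of how the proposed hook could be used, acknowledging each record from the send callback. The types here are hypothetical minimal stand-ins (this `SourceRecord` is not the real Connect class), just to show the shape of the per-record commit flow:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class PerRecordCommitSketch {
    /** Stand-in for Connect's SourceRecord: just a source offset we can ack. */
    public static class SourceRecord {
        public final long sourceOffset;
        public SourceRecord(long sourceOffset) { this.sourceOffset = sourceOffset; }
    }

    /** Stand-in for a SourceTask that opts in to per-record commits. */
    public static class MySourceTask {
        private final Set<Long> acked = ConcurrentHashMap.newKeySet();

        // A no-op by default in the proposal; tasks override it to ack records,
        // e.g. delete from an upstream queue or advance a cursor.
        public void commitRecord(SourceRecord record) {
            acked.add(record.sourceOffset);
        }

        public Set<Long> ackedOffsets() { return acked; }
    }

    /** Stand-in for WorkerSourceTask: invoke commitRecord from the send callback. */
    public static void sendAll(MySourceTask task, List<SourceRecord> batch) {
        for (SourceRecord r : batch) {
            // producer.send(r, callback) would call this once the ack arrives:
            task.commitRecord(r);
        }
    }

    public static void main(String[] args) {
        MySourceTask task = new MySourceTask();
        List<SourceRecord> batch = new ArrayList<>();
        for (long i = 0; i < 3; i++) batch.add(new SourceRecord(i));
        sendAll(task, batch);
        System.out.println(task.ackedOffsets().size()); // 3
    }
}
```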





[jira] [Commented] (KAFKA-3256) Large number of system test failures

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157972#comment-15157972
 ] 

ASF GitHub Bot commented on KAFKA-3256:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/949


> Large number of system test failures
> 
>
> Key: KAFKA-3256
> URL: https://issues.apache.org/jira/browse/KAFKA-3256
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Jiangjie Qin
> Fix For: 0.9.1.0
>
>
> Confluent's nightly run of the kafka system tests reported a large number of 
> failures beginning 2/20/2016
> Test run: 2016-02-19--001.1455897182--apache--trunk--eee9522/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-19--001.1455897182--apache--trunk--eee9522/report.html
> Pass: 136
> Fail: 0
> Test run: 2016-02-20--001.1455979842--apache--trunk--5caa800/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-20--001.1455979842--apache--trunk--5caa800/report.html
> Pass: 72
> Fail: 64
> I.e. trunk@eee9522 was the last passing run, and trunk@5caa800 had a large 
> number of failures.
> Given its complexity, the most likely culprit is 45c8195fa, and I confirmed 
> this is the first commit with failures on a small number of tests.
> [~becket_qin] do you mind investigating?
> {code}
> commit 5caa800e217c6b83f62ee3e6b5f02f56e331b309
> Author: Jun Rao 
> Date:   Fri Feb 19 09:40:59 2016 -0800
> trivial fix to authorization CLI table
> commit 45c8195fa14c766b200c720f316836dbb84e9d8b
> Author: Jiangjie Qin 
> Date:   Fri Feb 19 07:56:40 2016 -0800
> KAFKA-3025; Added timestamp to Message and use relative offset.
> commit eee95228fabe1643baa016a2d49fb0a9fe2c66bd
> Author: Yasuhiro Matsuda 
> Date:   Thu Feb 18 09:39:30 2016 +0800
> MINOR: remove streams config params from producer/consumer configs
> {code}





[jira] [Commented] (KAFKA-3196) KIP-42 (part 2): add record size and CRC to RecordMetadata and ConsumerRecords

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157965#comment-15157965
 ] 

ASF GitHub Bot commented on KAFKA-3196:
---

GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/951

KAFKA-3196: Added checksum and size to RecordMetadata and ConsumerRecord

This is the second (remaining) part of KIP-42. See 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-42%3A+Add+Producer+and+Consumer+Interceptors

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka-3196

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/951.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #951


commit ce10691e621a74070243c16dc8c0aa5ada531c72
Author: Anna Povzner 
Date:   2016-02-22T23:49:31Z

KAFKA-3196: KIP-42 (part 2) Added checksum and record size to 
RecordMetadata and ConsumerRecord




> KIP-42 (part 2): add record size and CRC to RecordMetadata and ConsumerRecords
> --
>
> Key: KAFKA-3196
> URL: https://issues.apache.org/jira/browse/KAFKA-3196
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>
> This is the second (smaller) part of KIP-42, which includes: Add record size 
> and CRC to RecordMetadata and ConsumerRecord.
> See details in KIP-42 wiki: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-42%3A+Add+Producer+and+Consumer+Interceptors
>  





[jira] [Commented] (KAFKA-1476) Get a list of consumer groups

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157349#comment-15157349
 ] 

ASF GitHub Bot commented on KAFKA-1476:
---

GitHub user christian-posta opened a pull request:

https://github.com/apache/kafka/pull/945

tidy up spacing for ConsumerGroupCommand related to KAFKA-1476 …

https://issues.apache.org/jira/browse/KAFKA-1476

Let me know if these kinds of contributions should have their own 
JIRA opened in advance.

Cheers..

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/christian-posta/kafka ceposta-tidy-up-consumer-groups-describe

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/945.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #945


commit dd9ab774dbe4105666de012212d565c0a0ec2ffa
Author: Christian Posta 
Date:   2016-02-22T17:29:46Z

tidy up spacing for ConsumerGroupCommand related to 
https://issues.apache.org/jira/browse/KAFKA-1476




> Get a list of consumer groups
> -
>
> Key: KAFKA-1476
> URL: https://issues.apache.org/jira/browse/KAFKA-1476
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 0.8.1.1
>Reporter: Ryan Williams
>Assignee: Onur Karaman
>  Labels: newbie
> Fix For: 0.9.0.0
>
> Attachments: ConsumerCommand.scala, KAFKA-1476-LIST-GROUPS.patch, 
> KAFKA-1476-RENAME.patch, KAFKA-1476-REVIEW-COMMENTS.patch, KAFKA-1476.patch, 
> KAFKA-1476.patch, KAFKA-1476.patch, KAFKA-1476.patch, 
> KAFKA-1476_2014-11-10_11:58:26.patch, KAFKA-1476_2014-11-10_12:04:01.patch, 
> KAFKA-1476_2014-11-10_12:06:35.patch, KAFKA-1476_2014-12-05_12:00:12.patch, 
> KAFKA-1476_2015-01-12_16:22:26.patch, KAFKA-1476_2015-01-12_16:31:20.patch, 
> KAFKA-1476_2015-01-13_10:36:18.patch, KAFKA-1476_2015-01-15_14:30:04.patch, 
> KAFKA-1476_2015-01-22_02:32:52.patch, KAFKA-1476_2015-01-30_11:09:59.patch, 
> KAFKA-1476_2015-02-04_15:41:50.patch, KAFKA-1476_2015-02-04_18:03:15.patch, 
> KAFKA-1476_2015-02-05_03:01:09.patch, KAFKA-1476_2015-02-09_14:37:30.patch, 
> sample-kafka-consumer-groups-sh-output-1-23-2015.txt, 
> sample-kafka-consumer-groups-sh-output-2-5-2015.txt, 
> sample-kafka-consumer-groups-sh-output-2-9-2015.txt, 
> sample-kafka-consumer-groups-sh-output.txt
>
>
> It would be useful to have a way to get a list of consumer groups currently 
> active via some tool/script that ships with kafka. This would be helpful so 
> that the system tools can be explored more easily.
> For example, when running the ConsumerOffsetChecker, it requires a group 
> option
> bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --topic test --group 
> ?
> But when just getting started with Kafka, using the console producer and 
> consumer, it is not clear what value to use for the group option. If the 
> existing consumer groups could be listed, it would be clear what value to use.
> Background:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201405.mbox/%3cCAOq_b1w=slze5jrnakxvak0gu9ctdkpazak1g4dygvqzbsg...@mail.gmail.com%3e





[jira] [Commented] (KAFKA-3297) More optimally balanced partition assignment strategy (new consumer)

2016-02-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169074#comment-15169074
 ] 

ASF GitHub Bot commented on KAFKA-3297:
---

GitHub user noslowerdna opened a pull request:

https://github.com/apache/kafka/pull/979

KAFKA-3297: Fair consumer partition assignment strategy (new consumer)

Pull request for https://issues.apache.org/jira/browse/KAFKA-3297

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/noslowerdna/kafka KAFKA-3297

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/979.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #979


commit 34f113453f03bf2e791181fa0db5d41ea985022a
Author: Andrew Olson 
Date:   2016-02-26T14:15:05Z

KAFKA-3297: Fair consumer partition assignment strategy (new consumer)




> More optimally balanced partition assignment strategy (new consumer)
> 
>
> Key: KAFKA-3297
> URL: https://issues.apache.org/jira/browse/KAFKA-3297
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Andrew Olson
>Assignee: Andrew Olson
>
> While the roundrobin partition assignment strategy is an improvement over the 
> range strategy, when the consumer topic subscriptions are not identical 
> (previously disallowed but will be possible as of KAFKA-2172) it can produce 
> heavily skewed assignments. As suggested 
> [here|https://issues.apache.org/jira/browse/KAFKA-2172?focusedCommentId=14530767&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14530767]
>  it would be nice to have a strategy that attempts to assign an equal number 
> of partitions to each consumer in a group, regardless of how similar their 
> individual topic subscriptions are. We can accomplish this by tracking the 
> number of partitions assigned to each consumer, and having the partition 
> assignment loop assign each partition to a consumer interested in that topic 
> with the least number of partitions assigned. 
> Additionally, we can optimize the distribution fairness by adjusting the 
> partition assignment order:
> * Topics with fewer consumers are assigned first.
> * In the event of a tie for least consumers, the topic with more partitions 
> is assigned first.
> The general idea behind these two rules is to keep the most flexible 
> assignment choices available as long as possible by starting with the most 
> constrained partitions/consumers.
> This JIRA addresses the new consumer. For the original high-level consumer, 
> see KAFKA-2435.
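The greedy strategy described above can be sketched as follows. All names and types are illustrative (plain strings and maps rather than the actual PartitionAssignor interfaces): topics are sorted most-constrained first, then each partition goes to the subscribed consumer currently holding the fewest partitions.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class FairAssignmentSketch {
    /**
     * topicPartitions: topic -> number of partitions.
     * subscriptions:   consumer -> topics it subscribes to.
     * Returns consumer -> list of "topic-partition" strings.
     */
    public static Map<String, List<String>> assign(Map<String, Integer> topicPartitions,
                                                   Map<String, Set<String>> subscriptions) {
        Map<String, List<String>> assignment = new HashMap<>();
        subscriptions.keySet().forEach(c -> assignment.put(c, new ArrayList<>()));

        // Which consumers are interested in each topic.
        Map<String, List<String>> interested = new HashMap<>();
        for (Map.Entry<String, Set<String>> e : subscriptions.entrySet())
            for (String t : e.getValue())
                interested.computeIfAbsent(t, k -> new ArrayList<>()).add(e.getKey());

        // Most-constrained topics first: fewer consumers, then more partitions.
        List<String> topics = new ArrayList<>(topicPartitions.keySet());
        topics.sort(Comparator
                .comparingInt((String t) -> interested.getOrDefault(t, Collections.emptyList()).size())
                .thenComparingInt(t -> -topicPartitions.get(t)));

        for (String topic : topics) {
            List<String> candidates = interested.getOrDefault(topic, Collections.emptyList());
            if (candidates.isEmpty()) continue;
            for (int p = 0; p < topicPartitions.get(topic); p++) {
                // Give the partition to the interested consumer with the fewest partitions.
                String least = Collections.min(candidates,
                        Comparator.comparingInt(c -> assignment.get(c).size()));
                assignment.get(least).add(topic + "-" + p);
            }
        }
        return assignment;
    }

    public static void main(String[] args) {
        Map<String, Integer> parts = Map.of("a", 3, "b", 1);
        Map<String, Set<String>> subs = Map.of(
                "c1", Set.of("a", "b"),
                "c2", Set.of("a"));
        // Round-robin could give c1 three partitions and c2 one; this yields a 2/2 split.
        System.out.println(assign(parts, subs));
    }
}
```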





[jira] [Commented] (KAFKA-3280) KafkaConsumer Javadoc contains misleading description of heartbeat behavior and correct use

2016-02-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168304#comment-15168304
 ] 

ASF GitHub Bot commented on KAFKA-3280:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/968


> KafkaConsumer Javadoc contains misleading description of heartbeat behavior 
> and correct use
> ---
>
> Key: KAFKA-3280
> URL: https://issues.apache.org/jira/browse/KAFKA-3280
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Richard Whaling
>Assignee: Neha Narkhede
>  Labels: doc
> Fix For: 0.10.0.0
>
>   Original Estimate: 0.25h
>  Remaining Estimate: 0.25h
>
> The KafkaConsumer Javadoc says that: "The consumer will automatically ping 
> the cluster periodically, which lets the cluster know that it is alive. As 
> long as the consumer is able to do this it is considered alive and retains 
> the right to consume from the partitions assigned to it."  This is false.  
> The heartbeat process is neither automatic nor periodic.  The consumer 
> heartbeats exactly once when poll() is called.  The consumer's run thread is 
> responsible for calling poll() before session.timeout.ms elapses.
> Based on this misinformation, it is easy for a naive implementer to build a 
> batch-based Kafka consumer that takes longer than session.timeout.ms between 
> poll() calls and encounters very ugly rebalance loops that can be very hard to 
> diagnose. Clarification in the docs would help a lot--I'll submit a patch 
> shortly.
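The practical consequence is that per-poll processing time must stay well under session.timeout.ms. A minimal sketch of one defensive pattern, processing only a bounded slice of the batch before returning to poll(); the names and the time budget are illustrative, not real client API:

```java
import java.util.Collections;
import java.util.List;

public class PollDeadlineSketch {
    /**
     * Process records only until a per-poll time budget is spent, then return
     * so the caller can invoke consumer.poll() again (and thus heartbeat).
     * maxWorkMsPerPoll is a hypothetical budget kept well under session.timeout.ms.
     */
    public static int processWithBudget(List<Runnable> work, long maxWorkMsPerPoll) {
        long deadline = System.currentTimeMillis() + maxWorkMsPerPoll;
        int done = 0;
        for (Runnable r : work) {
            if (System.currentTimeMillis() >= deadline)
                break; // stop early; remaining records wait for the next poll()
            r.run();
            done++;
        }
        return done; // caller loops back to poll() here
    }

    public static void main(String[] args) {
        // A batch of slow records: 100 x ~5ms far exceeds a 50ms budget.
        Runnable slowRecord = () -> {
            try { Thread.sleep(5); } catch (InterruptedException ignored) { }
        };
        int done = processWithBudget(Collections.nCopies(100, slowRecord), 50);
        System.out.println(done < 100); // true: we bailed out before the whole batch
    }
}
```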





[jira] [Commented] (KAFKA-3292) ClientQuotaManager.getOrCreateQuotaSensors() may return a null ClientSensors.throttleTimeSensor

2016-02-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168270#comment-15168270
 ] 

ASF GitHub Bot commented on KAFKA-3292:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/977


> ClientQuotaManager.getOrCreateQuotaSensors() may return a null 
> ClientSensors.throttleTimeSensor
> ---
>
> Key: KAFKA-3292
> URL: https://issues.apache.org/jira/browse/KAFKA-3292
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> It seems that the following situation is possible. Two threads try to call 
> getOrCreateQuotaSensors() at the same time. Initially, quotaSensor is not 
> registered, then both threads try to get the write lock to register 
> quotaSensor. Thread 1 grabs the write lock and registers both quotaSensor and 
> throttleTimeSensor, and then releases the lock. Thread 2 grabs the write lock 
> again and reads a non-null quotaSensor. It then skips the logic to register 
> throttleTimeSensor and returns. However, the returned throttleTimeSensor will 
> be null.
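A minimal sketch of the corrected registration pattern, with stand-in types rather than the actual ClientQuotaManager code: inside the write lock, each sensor is checked and created independently, so a thread that loses the race still returns non-null references for both sensors. The bug was guarding both registrations behind a single "quotaSensor == null" check.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SensorPairRegistry {
    /** Stand-in for a metrics Sensor. */
    public static class Sensor { }
    public static class ClientSensors {
        public final Sensor quotaSensor, throttleTimeSensor;
        ClientSensors(Sensor q, Sensor t) { quotaSensor = q; throttleTimeSensor = t; }
    }

    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private Sensor quotaSensor;
    private Sensor throttleTimeSensor;

    /** Check and create each sensor on its own, under one write lock. */
    public ClientSensors getOrCreate() {
        lock.writeLock().lock();
        try {
            if (quotaSensor == null) quotaSensor = new Sensor();
            if (throttleTimeSensor == null) throttleTimeSensor = new Sensor();
            return new ClientSensors(quotaSensor, throttleTimeSensor);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SensorPairRegistry registry = new SensorPairRegistry();
        Runnable task = () -> {
            ClientSensors s = registry.getOrCreate();
            if (s.throttleTimeSensor == null) throw new AssertionError("null sensor");
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start(); t1.join(); t2.join();
        System.out.println("both threads got non-null sensors");
    }
}
```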





[jira] [Commented] (KAFKA-3243) Fix Kafka basic ops documentation for Mirror maker, blacklist is not supported for new consumers

2016-02-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169497#comment-15169497
 ] 

ASF GitHub Bot commented on KAFKA-3243:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/923


> Fix Kafka basic ops documentation for Mirror maker, blacklist is not 
> supported for new consumers
> 
>
> Key: KAFKA-3243
> URL: https://issues.apache.org/jira/browse/KAFKA-3243
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.10.0.0
>
>
> Fix Kafka basic ops documentation for Mirror maker, blacklist is not 
> supported for new consumers





[jira] [Commented] (KAFKA-3292) ClientQuotaManager.getOrCreateQuotaSensors() may return a null ClientSensors.throttleTimeSensor

2016-02-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168202#comment-15168202
 ] 

ASF GitHub Bot commented on KAFKA-3292:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/977

KAFKA-3292; ClientQuotaManager.getOrCreateQuotaSensors() may return a null 
ClientSensors.throttleTimeSensor



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-3292-null-throttle-time-sensor

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/977.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #977


commit 406330598bd0fc9b49b81ef4058db6ae97914163
Author: Ismael Juma 
Date:   2016-02-26T00:54:45Z

Ensure that `ClientSensors` has non-null parameters




> ClientQuotaManager.getOrCreateQuotaSensors() may return a null 
> ClientSensors.throttleTimeSensor
> ---
>
> Key: KAFKA-3292
> URL: https://issues.apache.org/jira/browse/KAFKA-3292
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> It seems that the following situation is possible. Two threads try to call 
> getOrCreateQuotaSensors() at the same time. Initially, quotaSensor is not 
> registered, then both threads try to get the write lock to register 
> quotaSensor. Thread 1 grabs the write lock and registers both quotaSensor and 
> throttleTimeSensor, and then releases the lock. Thread 2 grabs the write lock 
> again and reads a non-null quotaSensor. It then skips the logic to register 
> throttleTimeSensor and returns. However, the returned throttleTimeSensor will 
> be null.





[jira] [Commented] (KAFKA-3291) DumpLogSegment tool should also provide an option to only verify index sanity.

2016-02-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168160#comment-15168160
 ] 

ASF GitHub Bot commented on KAFKA-3291:
---

GitHub user Parth-Brahmbhatt opened a pull request:

https://github.com/apache/kafka/pull/975

KAFKA-3291: DumpLogSegment tool should also provide an option to only…

… verify index sanity.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Parth-Brahmbhatt/kafka KAFKA-3291

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/975.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #975


commit fdc52b7edf1d9a9f73cf7b42ee4c30382e8471eb
Author: Parth Brahmbhatt 
Date:   2016-02-26T00:14:34Z

KAFKA-3291: DumpLogSegment tool should also provide an option to only 
verify index sanity.




> DumpLogSegment tool should also provide an option to only verify index sanity.
> --
>
> Key: KAFKA-3291
> URL: https://issues.apache.org/jira/browse/KAFKA-3291
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
>
> The DumpLogSegment tool should call the index.sanityCheck function as part of its 
> index sanity check, as that function determines whether an index will be rebuilt on 
> restart. This is a cheap check, since it only inspects the file size, and can 
> help in scenarios where a customer is trying to figure out which index files 
> will be rebuilt on startup, which directly affects broker bootstrap time. 
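In that spirit, a size-only sanity check can be sketched as follows. This is not the actual OffsetIndex.sanityCheck code, and the 8-byte entry size is an assumption for illustration: the idea is just that the file must hold a whole number of fixed-size entries, which can be verified without reading any entry contents.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class IndexSanitySketch {
    static final int ENTRY_SIZE = 8; // assumed fixed entry size, for illustration

    /**
     * Cheap, size-only sanity check: the index file must contain a whole
     * number of fixed-size entries. No contents are read, so it is safe
     * to run across an entire log directory.
     */
    public static boolean isSane(File indexFile) {
        return indexFile.length() % ENTRY_SIZE == 0;
    }

    public static void main(String[] args) throws IOException {
        File ok = File.createTempFile("segment", ".index");
        try (RandomAccessFile raf = new RandomAccessFile(ok, "rw")) {
            raf.setLength(3 * ENTRY_SIZE); // three complete entries
        }
        File truncated = File.createTempFile("segment", ".index");
        try (RandomAccessFile raf = new RandomAccessFile(truncated, "rw")) {
            raf.setLength(3 * ENTRY_SIZE + 5); // torn write: partial trailing entry
        }
        System.out.println(isSane(ok));        // true
        System.out.println(isSane(truncated)); // false
        ok.deleteOnExit();
        truncated.deleteOnExit();
    }
}
```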





[jira] [Commented] (KAFKA-3255) Extra unit tests for NetworkClient.connectionDelay(Node node, long now)

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157408#comment-15157408
 ] 

ASF GitHub Bot commented on KAFKA-3255:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/941


> Extra unit tests for NetworkClient.connectionDelay(Node node, long now)
> ---
>
> Key: KAFKA-3255
> URL: https://issues.apache.org/jira/browse/KAFKA-3255
> Project: Kafka
>  Issue Type: Test
>  Components: core
>Affects Versions: 0.9.0.1
>Reporter: Frank Scholten
>Priority: Trivial
>  Labels: test
> Fix For: 0.9.1.0
>
> Attachments: KAFKA-3255.patch
>
>
> I am exploring the Kafka codebase and noticed that this method was not 
> covered so I added some tests. Also saw that the method isConnecting is not 
> used anywhere in the code. 





[jira] [Commented] (KAFKA-627) Make UnknownTopicOrPartitionException a WARN in broker

2016-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15146725#comment-15146725
 ] 

ASF GitHub Bot commented on KAFKA-627:
--

GitHub user kichristensen opened a pull request:

https://github.com/apache/kafka/pull/913

KAFKA-627: Make UnknownTopicOrPartitionException a WARN in broker



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kichristensen/kafka KAFKA-627

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/913.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #913


commit 9dc94fc95ceed94bd14cffa321023c902b11e19d
Author: Kim Christensen 
Date:   2016-02-14T20:45:13Z

UnknownTopicOrPartition should be logged as warn




> Make UnknownTopicOrPartitionException a WARN in broker
> --
>
> Key: KAFKA-627
> URL: https://issues.apache.org/jira/browse/KAFKA-627
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.0
> Environment: Kafka 0.8, RHEL6, Java 1.6
>Reporter: Chris Riccomini
>
> Currently, when sending messages to a topic that doesn't yet exist, the 
> broker spews out these "errors" as it tries to auto-create new topics. I 
> spoke with Neha, and she said that this should be a warning, not an error.
> Could you please change it to something less scary, if, in fact, it's not 
> scary.
> 2012/11/14 22:38:53.238 INFO [LogManager] [kafka-request-handler-6] [kafka] 
> []  [Log Manager on Broker 464] Created log for 'firehoseReads'-5
> 2012/11/14 22:38:53.241 WARN [HighwaterMarkCheckpoint] 
> [kafka-request-handler-6] [kafka] []  No previously checkpointed 
> highwatermark value found for topic firehoseReads partition 5. Returning 0 as 
> the highwatermark
> 2012/11/14 22:38:53.242 INFO [Log] [kafka-request-handler-6] [kafka] []  
> [Kafka Log on Broker 464], Truncated log segment 
> /export/content/kafka/i001_caches/firehoseReads-5/.log to 
> target offset 0
> 2012/11/14 22:38:53.242 INFO [ReplicaFetcherManager] 
> [kafka-request-handler-6] [kafka] []  [ReplicaFetcherManager on broker 464] 
> adding fetcher on topic firehoseReads, partion 5, initOffset 0 to broker 466 
> with fetcherId 0
> 2012/11/14 22:38:53.248 ERROR [ReplicaFetcherThread] 
> [ReplicaFetcherThread-466-0-on-broker-464] [kafka] []  
> [ReplicaFetcherThread-466-0-on-broker-464], error for firehoseReads 5 to 
> broker 466
> kafka.common.UnknownTopicOrPartitionException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at java.lang.Class.newInstance0(Class.java:355)
> at java.lang.Class.newInstance(Class.java:308)
> at kafka.common.ErrorMapping$.exceptionFor(ErrorMapping.scala:68)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$doWork$5$$anonfun$apply$3.apply(AbstractFetcherThread.scala:124)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$doWork$5$$anonfun$apply$3.apply(AbstractFetcherThread.scala:124)
> at kafka.utils.Logging$class.error(Logging.scala:102)
> at kafka.utils.ShutdownableThread.error(ShutdownableThread.scala:23)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$doWork$5.apply(AbstractFetcherThread.scala:123)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$doWork$5.apply(AbstractFetcherThread.scala:99)
> at 
> scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:125)
> at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:344)
> at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:344)
> at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:99)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:50)
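The requested change boils down to routing this one expected failure to a lower severity. A minimal sketch of that idea, with java.util.logging levels standing in for the broker's actual logging framework (class and method names here are illustrative, not the actual patch):

```java
import java.util.logging.Level;

public class FetchErrorLogging {
    // An unknown topic/partition during auto-creation is an expected,
    // transient condition, so it maps to WARN; anything else stays at ERROR.
    static Level levelFor(String exceptionClassName) {
        if (exceptionClassName.equals("UnknownTopicOrPartitionException"))
            return Level.WARNING;
        return Level.SEVERE;
    }

    public static void main(String[] args) {
        System.out.println(levelFor("UnknownTopicOrPartitionException")); // WARNING
        System.out.println(levelFor("IOException"));                      // SEVERE
    }
}
```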



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3235) Unclosed stream in AppInfoParser static block

2016-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15146726#comment-15146726
 ] 

ASF GitHub Bot commented on KAFKA-3235:
---

GitHub user kichristensen opened a pull request:

https://github.com/apache/kafka/pull/914

KAFKA-3235: Unclosed stream in AppInfoParser static block

Always close the stream

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kichristensen/kafka KAFKA-3235

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/914.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #914


commit 190f62aa659b13a74695118dbb9528679f58b578
Author: Kim Christensen 
Date:   2016-02-14T20:55:13Z

Close stream




> Unclosed stream in AppInfoParser static block
> -
>
> Key: KAFKA-3235
> URL: https://issues.apache.org/jira/browse/KAFKA-3235
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
> Fix For: 0.9.1.0
>
>
> {code}
> static {
> try {
> Properties props = new Properties();
> 
> props.load(AppInfoParser.class.getResourceAsStream("/kafka/kafka-version.properties"));
> version = props.getProperty("version", version).trim();
> commitId = props.getProperty("commitId", commitId).trim();
> {code}
> The stream returned by getResourceAsStream() should be closed.
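A common way to address this, assuming Java 7+, is try-with-resources, which closes the stream even when props.load() throws. A sketch with illustrative names, not the actual AppInfoParser patch:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class VersionLoader {
    // try-with-resources guarantees close(); the null check guards against
    // the resource being absent, since getResourceAsStream() returns null then.
    static Properties loadProps(String resource) {
        Properties props = new Properties();
        try (InputStream in = VersionLoader.class.getResourceAsStream(resource)) {
            if (in != null)
                props.load(in);
        } catch (IOException e) {
            // fall through: callers keep their default version/commitId
        }
        return props;
    }

    public static void main(String[] args) {
        Properties props = loadProps("/kafka/kafka-version.properties");
        System.out.println(props.getProperty("version", "unknown"));
    }
}
```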





[jira] [Commented] (KAFKA-3263) Add Markdown support for ConfigDef

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15158412#comment-15158412
 ] 

ASF GitHub Bot commented on KAFKA-3263:
---

GitHub user jcustenborder opened a pull request:

https://github.com/apache/kafka/pull/952

KAFKA-3263 - Support for markdown generation.

Added support to generate markdown from ConfigDef entries. Added test 
toMarkdown() to ConfigDefTest. Added toMarkdown() to ConfigDef.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jcustenborder/kafka KAFKA-3263

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/952.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #952


commit f2f91fadfe5a4fb3647b0d521cae967129651967
Author: Jeremy Custenborder 
Date:   2016-02-23T06:51:54Z

KAFKA-3263 - Added support to generate markdown from ConfigDef entries. 
Added test toMarkdown() to ConfigDefTest. Added toMarkdown() to ConfigDef.




> Add Markdown support for ConfigDef
> --
>
> Key: KAFKA-3263
> URL: https://issues.apache.org/jira/browse/KAFKA-3263
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Jeremy Custenborder
>Priority: Minor
>
> The ability to output Markdown for ConfigDef would be nice, given that many 
> projects use README.md files in their repositories.
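A minimal sketch of what such Markdown output could look like. The real ConfigDef entries carry more fields (defaults, importance, validators); this hypothetical toMarkdown() only illustrates the table format:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConfigMarkdown {
    // Render config entries as a Markdown table: name, type, description.
    // The value array is a stand-in for a richer ConfigKey structure.
    static String toMarkdown(Map<String, String[]> configs) {
        StringBuilder sb = new StringBuilder();
        sb.append("| Name | Type | Description |\n");
        sb.append("|------|------|-------------|\n");
        for (Map.Entry<String, String[]> e : configs.entrySet())
            sb.append("| ").append(e.getKey())
              .append(" | ").append(e.getValue()[0])
              .append(" | ").append(e.getValue()[1])
              .append(" |\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String[]> configs = new LinkedHashMap<>();
        configs.put("bootstrap.servers", new String[]{"list", "Initial broker list"});
        System.out.print(toMarkdown(configs));
    }
}
```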





[jira] [Commented] (KAFKA-3066) Add Demo Examples for Kafka Streams

2016-01-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15110168#comment-15110168
 ] 

ASF GitHub Bot commented on KAFKA-3066:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/797

KAFKA-3066: Demo Examples for Kafka Streams



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K3066

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/797.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #797


commit bf4c4cb3dbb5b4066d9c3e0ada5b7ffd98eb129a
Author: Guozhang Wang 
Date:   2016-01-14T20:27:58Z

add internal source topic for tracking

commit 1485dff08a76c6ff685b0fe72226ce3b629b1d3c
Author: Guozhang Wang 
Date:   2016-01-14T22:32:08Z

minor fix for this.interSourceTopics

commit 60cafd0885c41f93e408f8d89880187ddec789a1
Author: Guozhang Wang 
Date:   2016-01-15T01:09:00Z

add KStream windowed aggregation

commit 983a626008d987828deabe45d75e26e909032843
Author: Guozhang Wang 
Date:   2016-01-15T01:34:56Z

merge from apache trunk

commit 57051720de4238feb4dc3c505053096042a87d9c
Author: Guozhang Wang 
Date:   2016-01-15T21:38:53Z

v1

commit 4a49205fcab3a05ed1fd05a34c7a9a92794b992d
Author: Guozhang Wang 
Date:   2016-01-15T22:07:17Z

minor fix on HoppingWindows

commit 9b4127e91c3a551fb655155d9b8e0df50132d0b7
Author: Guozhang Wang 
Date:   2016-01-15T22:43:14Z

fix HoppingWindows

commit 9649fe5c8a9b2e900e7746ae7b8745bb65694583
Author: Guozhang Wang 
Date:   2016-01-16T19:00:54Z

add retainDuplicate option in RocksDBWindowStore

commit 8a9ea02ac3f9962416defa79d16069431063eac0
Author: Guozhang Wang 
Date:   2016-01-16T19:06:12Z

minor fixes

commit 4123528cf4695b05235789ebfca3a63e8a832ffa
Author: Guozhang Wang 
Date:   2016-01-18T17:55:02Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K3104

commit 46e8c8d285c0afae6da9ec7437d082060599f3f1
Author: Guozhang Wang 
Date:   2016-01-18T19:15:47Z

add wordcount and pipe jobs

commit 582d3ac24bfe08edb1c567461971cd35c1f75a00
Author: Guozhang Wang 
Date:   2016-01-18T21:53:21Z

merge from trunk

commit 5a002fadfcf760627274ddaa016deeaed5a3199f
Author: Guozhang Wang 
Date:   2016-01-19T00:06:34Z

1. WallClockTimestampExtractor as default; 2. remove windowMS config; 3. 
override state dir with jobId prefix;

commit 7425673e523c42806b29a364564a747443712a53
Author: Guozhang Wang 
Date:   2016-01-19T01:26:11Z

Add PageViewJob

commit ca04ba8d18674c521ad67872562a7671cb0e2c0d
Author: Guozhang Wang 
Date:   2016-01-19T06:23:05Z

minor changes on topic names

commit 563cc546b3a0dd16d586d2df33c37d2c5a5bfb18
Author: Guozhang Wang 
Date:   2016-01-19T21:30:11Z

change config importance levels

commit 4218904505363e61bb4c6b60dc5b13badfd39697
Author: Guozhang Wang 
Date:   2016-01-21T00:11:34Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K3066

commit 26fb5f3f5a8c9b304c5b1e61778c6bc1d9d5fccb
Author: Guozhang Wang 
Date:   2016-01-21T06:43:04Z

demo examples v1




> Add Demo Examples for Kafka Streams
> ---
>
> Key: KAFKA-3066
> URL: https://issues.apache.org/jira/browse/KAFKA-3066
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>
> We need a couple of demo examples for Kafka Streams to illustrate the 
> programmability and functionality of the framework.
> Also, extract the examples into a separate package.





[jira] [Commented] (KAFKA-3080) ConsoleConsumerTest.test_version system test fails consistently

2016-01-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1531#comment-1531
 ] 

ASF GitHub Bot commented on KAFKA-3080:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/770


> ConsoleConsumerTest.test_version system test fails consistently
> ---
>
> Key: KAFKA-3080
> URL: https://issues.apache.org/jira/browse/KAFKA-3080
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Reporter: Ewen Cheslack-Postava
> Fix For: 0.9.1.0
>
>
> This test on trunk is failing consistently:
> {quote}
> test_id:
> 2016-01-07--001.kafkatest.sanity_checks.test_console_consumer.ConsoleConsumerTest.test_version
> status: FAIL
> run time:   38.451 seconds
> num_produced: 1000, num_consumed: 0
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/tests/runner.py",
>  line 101, in run_all_tests
> result.data = self.run_single_test()
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/tests/runner.py",
>  line 151, in run_single_test
> return self.current_test_context.function(self.current_test)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/sanity_checks/test_console_consumer.py",
>  line 93, in test_version
> assert num_produced == num_consumed, "num_produced: %d, num_consumed: %d" 
> % (num_produced, num_consumed)
> AssertionError: num_produced: 1000, num_consumed: 0
> {quote}
> Example run where it fails: 
> http://jenkins.confluent.io/job/kafka_system_tests/79/console





[jira] [Commented] (KAFKA-2146) adding partition did not find the correct startIndex

2016-01-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15111670#comment-15111670
 ] 

ASF GitHub Bot commented on KAFKA-2146:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/329


> adding partition did not find the correct startIndex 
> -
>
> Key: KAFKA-2146
> URL: https://issues.apache.org/jira/browse/KAFKA-2146
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.8.2.0
>Reporter: chenshangan
>Assignee: chenshangan
>Priority: Minor
> Fix For: 0.9.1.0
>
> Attachments: KAFKA-2146.2.patch, KAFKA-2146.patch
>
>
> TopicCommand provides a tool to add partitions to existing topics. It tries 
> to find the startIndex from the existing partitions. There is a minor flaw 
> in this process: it uses the first partition fetched from ZooKeeper as the 
> start partition, and the first replica id in that partition as the 
> startIndex.
> First, the partition fetched from ZooKeeper is not necessarily the start 
> partition. Since partition ids begin at zero, we should use the partition 
> with id zero as the start partition.
> Second, broker ids do not necessarily begin at 0, so the startIndex is not 
> necessarily the first replica id in the start partition. 
>   
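The two corrections described above can be sketched as follows; the names and data shapes are illustrative, not TopicCommand's actual code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class StartIndexSketch {
    // Always start from partition 0 (partition ids begin at zero), and
    // translate its first replica's broker id into a position in the sorted
    // broker list, since broker ids need not start at 0.
    static int startIndex(Map<Integer, List<Integer>> assignment,
                          List<Integer> sortedBrokerIds) {
        int firstReplica = assignment.get(0).get(0); // partition 0, first replica
        return sortedBrokerIds.indexOf(firstReplica);
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> assignment = new TreeMap<>();
        assignment.put(0, Arrays.asList(102, 103));
        assignment.put(1, Arrays.asList(103, 101));
        List<Integer> brokers = Arrays.asList(101, 102, 103); // ids don't start at 0
        System.out.println(startIndex(assignment, brokers));  // position of broker 102
    }
}
```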





[jira] [Commented] (KAFKA-3136) Rename KafkaStreaming to KafkaStreams

2016-01-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15113056#comment-15113056
 ] 

ASF GitHub Bot commented on KAFKA-3136:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/800


> Rename KafkaStreaming to KafkaStreams
> -
>
> Key: KAFKA-3136
> URL: https://issues.apache.org/jira/browse/KAFKA-3136
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>
> To be aligned with the module name. Also change the config / metrics class 
> names accordingly. So that:
> 1. The entry point of Kafka Streams is KafkaStreams, with its corresponding 
> StreamConfig and StreamMetrics.
> 2. The high-level DSL is called KStream, and low-level programming API is 
> called Processor with ProcessorTopology.
> Also merge KeyValue and Entry into a top-level KeyValue class.





[jira] [Commented] (KAFKA-3068) NetworkClient may connect to a different Kafka cluster than originally configured

2016-01-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15113009#comment-15113009
 ] 

ASF GitHub Bot commented on KAFKA-3068:
---

GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/804

KAFKA-3068: Keep track of bootstrap nodes instead of all nodes ever seen



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka kafka-3068

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/804.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #804


commit 32f3bffb2281a03fa6449627c144478a0ce666ad
Author: Eno Thereska 
Date:   2016-01-22T20:36:27Z

Keep track of bootstrap nodes instead of all nodes ever seen




> NetworkClient may connect to a different Kafka cluster than originally 
> configured
> -
>
> Key: KAFKA-3068
> URL: https://issues.apache.org/jira/browse/KAFKA-3068
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Eno Thereska
>
> In https://github.com/apache/kafka/pull/290, we added the logic to cache all 
> brokers (id and ip) that the client has ever seen. If we can't find an 
> available broker from the current Metadata, we will pick a broker that we 
> have ever seen (in NetworkClient.leastLoadedNode()).
> One potential problem this logic can introduce is the following. Suppose that 
> we have a broker with id 1 in a Kafka cluster. A producer client remembers 
> this broker in nodesEverSeen. At some point, we bring down this broker and 
> use the host in a different Kafka cluster. Then, the producer client uses 
> this broker from nodesEverSeen to refresh metadata. It will find the metadata 
> in a different Kafka cluster and start producing data there.
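The direction suggested by the PR title, keeping the fixed bootstrap list as the only fallback, can be sketched like this (a simplified stand-in, not NetworkClient's actual code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class BootstrapFallback {
    private final List<String> bootstrapNodes;              // fixed at construction
    private final List<String> metadataNodes = new ArrayList<>();

    BootstrapFallback(List<String> bootstrapNodes) {
        this.bootstrapNodes = new ArrayList<>(bootstrapNodes);
    }

    void onMetadataUpdate(List<String> nodes) {
        metadataNodes.clear();
        metadataNodes.addAll(nodes);                        // replace, don't accumulate
    }

    // When current metadata offers no node, fall back to the configured
    // bootstrap list rather than to an ever-growing "nodes ever seen" set,
    // so the client cannot drift to a host reassigned to another cluster.
    String candidateNode() {
        return metadataNodes.isEmpty() ? bootstrapNodes.get(0) : metadataNodes.get(0);
    }

    public static void main(String[] args) {
        BootstrapFallback client = new BootstrapFallback(Arrays.asList("boot-1:9092"));
        client.onMetadataUpdate(Arrays.asList("broker-1:9092"));
        client.onMetadataUpdate(Collections.<String>emptyList()); // cluster unreachable
        System.out.println(client.candidateNode());               // boot-1:9092
    }
}
```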





[jira] [Commented] (KAFKA-3134) Missing required configuration "value.deserializer" when initializing a KafkaConsumer with a valid "valueDeserializer"

2016-01-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15112919#comment-15112919
 ] 

ASF GitHub Bot commented on KAFKA-3134:
---

GitHub user happymap opened a pull request:

https://github.com/apache/kafka/pull/803

KAFKA-3134: Fix missing value.deserializer error during KafkaConsumer…

… initialization

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/happymap/kafka KAFKA-3134

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/803.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #803


commit 1afc67d4d944486dc8fb6361922a7e7a8b573ad5
Author: Yifan Ying 
Date:   2016-01-22T19:24:02Z

KAFKA-3134: Fix missing value.deserializer error during KafkaConsumer 
initialization




> Missing required configuration "value.deserializer" when initializing a 
> KafkaConsumer with a valid "valueDeserializer"
> --
>
> Key: KAFKA-3134
> URL: https://issues.apache.org/jira/browse/KAFKA-3134
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Yifan Ying
>
> I tried to initialize a KafkaConsumer object with a null keyDeserializer and 
> a non-null valueDeserializer:
> {code}
> public KafkaConsumer(Properties properties, Deserializer<K> keyDeserializer,
>  Deserializer<V> valueDeserializer)
> {code}
> Then I got an exception as follows:
> {code}
> Caused by: org.apache.kafka.common.config.ConfigException: Missing required 
> configuration "value.deserializer" which has no default value.
>   at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:148)
>   at 
> org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:49)
>   at 
> org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:56)
>   at 
> org.apache.kafka.clients.consumer.ConsumerConfig.<init>(ConsumerConfig.java:336)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:518)
>   .
> {code}
> Then I went to the ConsumerConfig.java file and found the block of code 
> causing the problem:
> {code}
> public static Map<String, Object> addDeserializerToConfig(Map<String, Object> configs,
>                                                           Deserializer<?> keyDeserializer,
>                                                           Deserializer<?> valueDeserializer) {
>     Map<String, Object> newConfigs = new HashMap<String, Object>();
>     newConfigs.putAll(configs);
>     if (keyDeserializer != null)
>         newConfigs.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass());
>     if (keyDeserializer != null)
>         newConfigs.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass());
>     return newConfigs;
> }
> public static Properties addDeserializerToConfig(Properties properties,
>                                                  Deserializer<?> keyDeserializer,
>                                                  Deserializer<?> valueDeserializer) {
>     Properties newProperties = new Properties();
>     newProperties.putAll(properties);
>     if (keyDeserializer != null)
>         newProperties.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass().getName());
>     if (keyDeserializer != null)
>         newProperties.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass().getName());
>     return newProperties;
> }
> {code}
> Instead of checking valueDeserializer, the code checks keyDeserializer every 
> time. So when keyDeserializer is null but valueDeserializer is not, the 
> valueDeserializer property will never get set. 
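The fix is a one-word change: the second guard must test valueDeserializer. A self-contained sketch of the corrected logic, with Object standing in for the Deserializer interface:

```java
import java.util.Properties;

public class DeserializerConfigFix {
    static final String KEY_DESERIALIZER_CLASS_CONFIG = "key.deserializer";
    static final String VALUE_DESERIALIZER_CLASS_CONFIG = "value.deserializer";

    // Corrected version: the second guard checks valueDeserializer, not
    // keyDeserializer, so each property is set independently of the other.
    static Properties addDeserializerToConfig(Properties properties,
                                              Object keyDeserializer,
                                              Object valueDeserializer) {
        Properties newProperties = new Properties();
        newProperties.putAll(properties);
        if (keyDeserializer != null)
            newProperties.put(KEY_DESERIALIZER_CLASS_CONFIG,
                              keyDeserializer.getClass().getName());
        if (valueDeserializer != null)   // was keyDeserializer in the buggy code
            newProperties.put(VALUE_DESERIALIZER_CLASS_CONFIG,
                              valueDeserializer.getClass().getName());
        return newProperties;
    }

    public static void main(String[] args) {
        // null key deserializer, non-null value deserializer: the value
        // property is now set correctly
        Properties p = addDeserializerToConfig(new Properties(), null, "stand-in");
        System.out.println(p.getProperty(VALUE_DESERIALIZER_CLASS_CONFIG));
    }
}
```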





[jira] [Commented] (KAFKA-3140) PatternSyntaxException thrown in MM, causes MM to hang

2016-01-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15113231#comment-15113231
 ] 

ASF GitHub Bot commented on KAFKA-3140:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/805

KAFKA-3140: Fix PatternSyntaxException and hang caused by it in Mirro…

Fix PatternSyntaxException, and the hang caused by it, in MirrorMaker when an 
invalid Java regex string is passed as the whitelist

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3140

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/805.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #805


commit 0f120f976ce66e37cb5180df2b024def1cfcd6bb
Author: Ashish Singh 
Date:   2016-01-22T22:22:05Z

KAFKA-3140: Fix PatternSyntaxException and hang caused by it in MirrorMaker 
on passing invalid java regex string as whitelist.




> PatternSyntaxException thrown in MM, causes MM to hang
> --
>
> Key: KAFKA-3140
> URL: https://issues.apache.org/jira/browse/KAFKA-3140
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> When an invalid Java regex string is passed as the whitelist to MirrorMaker, 
> a PatternSyntaxException is thrown and MirrorMaker hangs. Below is the 
> relevant stack trace.
> {code}
> java.util.regex.PatternSyntaxException: Dangling meta character '*' near 
> index 0
> *
> ^
>   at java.util.regex.Pattern.error(Pattern.java:1955)
>   at java.util.regex.Pattern.sequence(Pattern.java:2123)
>   at java.util.regex.Pattern.expr(Pattern.java:1996)
>   at java.util.regex.Pattern.compile(Pattern.java:1696)
>   at java.util.regex.Pattern.<init>(Pattern.java:1351)
>   at java.util.regex.Pattern.compile(Pattern.java:1028)
>   at 
> kafka.tools.MirrorMaker$MirrorMakerNewConsumer.init(MirrorMaker.scala:521)
>   at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:389)
> {code}
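One way to fail fast instead of hanging is to validate the whitelist regex eagerly at startup. A hypothetical helper, not MirrorMaker's actual fix:

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class WhitelistCheck {
    // Compile the whitelist once, up front, so an invalid regex produces a
    // clear failure at startup instead of a PatternSyntaxException inside a
    // worker thread.
    static boolean isValidRegex(String whitelist) {
        try {
            Pattern.compile(whitelist);
            return true;
        } catch (PatternSyntaxException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidRegex("my-topic.*")); // valid Java regex
        System.out.println(isValidRegex("*"));          // dangling '*': invalid
    }
}
```

Note that "*" is a valid shell glob but not a valid Java regex, which is exactly the confusion behind this report; "my-topic.*" is the regex equivalent.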





[jira] [Commented] (KAFKA-3076) BrokerChangeListener should log the brokers in order

2016-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15116228#comment-15116228
 ] 

ASF GitHub Bot commented on KAFKA-3076:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/749


> BrokerChangeListener should log the brokers in order
> 
>
> Key: KAFKA-3076
> URL: https://issues.apache.org/jira/browse/KAFKA-3076
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Konrad Kalita
>  Labels: newbie
> Fix For: 0.9.1.0
>
>
> Currently, in BrokerChangeListener, we log the full, new and deleted broker 
> set in random order. It would be better if we log them in sorted order.
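Sorting before formatting is a one-line change; for example (illustrative, not the listener's actual code):

```java
import java.util.Arrays;
import java.util.List;
import java.util.TreeSet;

public class SortedBrokerLog {
    // Sorting the broker ids before formatting makes successive log lines
    // directly comparable, e.g. when diffing the full set against the new set.
    static String format(List<Integer> brokerIds) {
        return new TreeSet<>(brokerIds).toString();
    }

    public static void main(String[] args) {
        System.out.println(format(Arrays.asList(464, 3, 466))); // [3, 464, 466]
    }
}
```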





[jira] [Commented] (KAFKA-3100) Broker.createBroker should work if json is version > 2, but still compatible

2016-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15116257#comment-15116257
 ] 

ASF GitHub Bot commented on KAFKA-3100:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/773


> Broker.createBroker should work if json is version > 2, but still compatible
> 
>
> Key: KAFKA-3100
> URL: https://issues.apache.org/jira/browse/KAFKA-3100
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1
>
>
> Description from Jun:
> In 0.9.0.0, the old consumer reads broker info directly from ZK and the code 
> throws an exception if the version in json is not 1 or 2. This old consumer 
> will break when we upgrade the broker json to version 3 in ZK in 0.9.1, which 
> will be an issue. We overlooked this issue in 0.9.0.0. The easiest fix is 
> probably not to check the version in ZkUtils.getBrokerInfo().
> This way, as long as we are only adding new fields in broker json, we can 
> preserve the compatibility.





[jira] [Commented] (KAFKA-3125) Exception Hierarchy for Streams

2016-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15116358#comment-15116358
 ] 

ASF GitHub Bot commented on KAFKA-3125:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/809

KAFKA-3125: Add Kafka Streams Exceptions



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K3125

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/809.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #809


commit a23f4225609a765b12c45e982dda2a7c888f74e7
Author: Guozhang Wang 
Date:   2016-01-25T22:32:19Z

re-order shutdown modules

commit 52efdbf97324eac12bfb369545a2fd29c31fd9ba
Author: Guozhang Wang 
Date:   2016-01-26T00:21:27Z

add errors package and fixed a few thrown exceptions




> Exception Hierarchy for Streams
> ---
>
> Key: KAFKA-3125
> URL: https://issues.apache.org/jira/browse/KAFKA-3125
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>
> Currently Kafka Streams do not have its own exception category: we only have 
> one TopologyException that extends from KafkaException.
> It's better to start thinking about categorizing exceptions in Streams with a 
> common parent of "StreamsException". For example:
> 1. What type of exceptions should be exposed to users at job runtime; what 
> type of exceptions should be exposed at "topology build time".
> 2. Should KafkaStreaming.start / stop ever need to throw any exceptions?
> 3. etc.
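One possible shape for such a hierarchy, with a common StreamsException parent and subclasses per failure category (class names here are illustrative, not the final API):

```java
public class ExceptionHierarchySketch {
    // Common unchecked parent: callers can catch this one type.
    static class StreamsException extends RuntimeException {
        StreamsException(String message) { super(message); }
    }

    // Topology-build-time failures, surfaced before the job starts.
    static class TopologyBuilderException extends StreamsException {
        TopologyBuilderException(String message) { super("Invalid topology: " + message); }
    }

    // Job-runtime failures, e.g. from state store access.
    static class ProcessorStateException extends StreamsException {
        ProcessorStateException(String message) { super(message); }
    }

    public static void main(String[] args) {
        try {
            throw new TopologyBuilderException("source node missing");
        } catch (StreamsException e) { // catching the common parent suffices
            System.out.println(e.getMessage());
        }
    }
}
```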





[jira] [Commented] (KAFKA-3122) Memory leak in `Sender.completeBatch` on TOPIC_AUTHORIZATION_FAILED

2016-01-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15107701#comment-15107701
 ] 

ASF GitHub Bot commented on KAFKA-3122:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/791

KAFKA-3122; Fix memory leak in `Sender` on TOPIC_AUTHORIZATION_FAILED

Also fix missing call to `sensors.record` on this error.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
fix-producer-memory-leak-on-authorization-exception

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/791.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #791


commit 9d6c451995ae5b50bec7f3841778840977e4d3b5
Author: Ismael Juma 
Date:   2016-01-19T23:54:46Z

Fix memory leak in `Sender` on TOPIC_AUTHORIZATION_FAILED

Also fix missing call to `sensors.record` on this error.




> Memory leak in `Sender.completeBatch` on TOPIC_AUTHORIZATION_FAILED
> ---
>
> Key: KAFKA-3122
> URL: https://issues.apache.org/jira/browse/KAFKA-3122
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>






[jira] [Commented] (KAFKA-3122) Memory leak in `Sender.completeBatch` on TOPIC_AUTHORIZATION_FAILED

2016-01-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15108122#comment-15108122
 ] 

ASF GitHub Bot commented on KAFKA-3122:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/791


> Memory leak in `Sender.completeBatch` on TOPIC_AUTHORIZATION_FAILED
> ---
>
> Key: KAFKA-3122
> URL: https://issues.apache.org/jira/browse/KAFKA-3122
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1
>
>






[jira] [Commented] (KAFKA-3121) KStream DSL API Improvement

2016-01-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15109381#comment-15109381
 ] 

ASF GitHub Bot commented on KAFKA-3121:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/795

KAFKA-3121: Remove aggregatorSupplier and add Reduce functions



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K3121s1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/795.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #795


commit 10837b24c7a8af0e2843a8ae2102610dc28aff3c
Author: Guozhang Wang 
Date:   2016-01-20T02:48:07Z

refactor aggregate and add reduce

commit c8fe67f78809276b2c9aa3ca4f36d64b79483c1a
Author: Guozhang Wang 
Date:   2016-01-20T06:48:19Z

add new files

commit ad90555eee8ce7de799a258c175522a95cafb372
Author: Guozhang Wang 
Date:   2016-01-20T20:52:24Z

KAFKA-3121: Remove AggregateSupplier for aggregate and add reduce functions




> KStream DSL API Improvement
> ---
>
> Key: KAFKA-3121
> URL: https://issues.apache.org/jira/browse/KAFKA-3121
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>
> From some collected feedbacks, here is a list of potential improvements that 
> we want to make:
> 1. Remove AggregateSupplier for aggregate, and provide more built-in 
> aggregations.
> 2. Join to return KeyValue<>.
> 3. "Windows" class syntax sugars.
> 4. Add print() to KTable / KStream.
> 5. flatMap / flatMapValues to return arrays in addition to Iterable.
> 6. make the API function names aligned with Java 8+, e.g. filterOut -> 
> filterNot
> 7. collapse process() and transform() in KStream.
> 8. validate Streaming configs and allow passing properties to KafkaStreaming.
> 9. Rename KafkaStreaming to Streams.
> Also move some of the state package into internals, and create a new 
> top-level common folder with KeyValue / etc in it.




