[jira] [Created] (KAFKA-5977) Upgrade RocksDB dependency to legally acceptable one

2017-09-26 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-5977:
---

 Summary: Upgrade RocksDB dependency to legally acceptable one
 Key: KAFKA-5977
 URL: https://issues.apache.org/jira/browse/KAFKA-5977
 Project: Kafka
  Issue Type: Task
  Components: streams
Affects Versions: 0.10.0.0
Reporter: Stevo Slavic
Priority: Critical
 Fix For: 1.0.0


RocksDB 5.5.5+ seems to be legally acceptable. For more info see
- https://issues.apache.org/jira/browse/LEGAL-303 and
- https://www.apache.org/legal/resolved.html#category-x

Even the latest trunk of Apache Kafka depends on an older RocksDB:
https://github.com/apache/kafka/blob/trunk/gradle/dependencies.gradle#L67

If I'm not mistaken, this makes all current Apache Kafka 0.10+ releases legally 
unacceptable as Apache products.

Please consider upgrading the dependency. If possible, please include the change 
in the Apache Kafka 1.0.0 release, and ideally also in patch releases of the 
older, still supported 0.x Apache Kafka branches.
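
For reference, the change itself should amount to a one-line version bump in 
gradle/dependencies.gradle, roughly along these lines (the key name and the old 
value shown here are illustrative, not read from the file; 5.7.3 is one of the 
5.5.5+ RocksDB releases):
{noformat}
// gradle/dependencies.gradle (illustrative sketch of the bump)
-  rocksDB: "5.0.2",
+  rocksDB: "5.7.3",
{noformat}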



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5835) CommitFailedException message is misleading and cause is swallowed

2017-09-05 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-5835:
---

 Summary: CommitFailedException message is misleading and cause is 
swallowed
 Key: KAFKA-5835
 URL: https://issues.apache.org/jira/browse/KAFKA-5835
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.11.0.0
Reporter: Stevo Slavic
Priority: Trivial


{{CommitFailedException}}'s message suggests that it can only be thrown as a 
consequence of rebalancing. The JavaDoc of {{CommitFailedException}} suggests 
differently, namely that in general it can be thrown for any kind of 
unrecoverable failure of a {{KafkaConsumer#commitSync()}} call (e.g. if the 
offset being committed is invalid / out of range).

{{CommitFailedException}}'s message is misleading in that one can just see the 
message in the logs and, without consulting the JavaDoc or source code, assume 
that the message is correct and that rebalancing is the only potential cause, 
and so waste time taking the debugging in the wrong direction.

Additionally, since {{CommitFailedException}} can be thrown for different 
reasons, the cause should not be swallowed. Swallowing it makes it impossible to 
handle each potential cause in a specific way. If the cause is another exception, 
please pass it as the cause, or construct an appropriate exception hierarchy with 
a specific exception for every failure cause and make {{CommitFailedException}} 
abstract.
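
To illustrate where this bites, here is a minimal consumer sketch (broker 
address, group and topic names are placeholders); with the current behaviour the 
catch block below cannot recover the actual failure reason:
{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CommitFailureHandling {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "example-group");           // placeholder group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic")); // placeholder topic
            consumer.poll(1000);
            try {
                consumer.commitSync();
            } catch (CommitFailedException e) {
                // The message blames rebalancing regardless of the real reason,
                // and no underlying cause is attached, so there is no way here
                // to handle different failure causes differently.
                System.err.println("commit failed: " + e.getMessage() + ", cause: " + e.getCause());
            }
        }
    }
}
{code}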



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4867) zookeeper-security-migration.sh does not clear ACLs from all nodes

2017-03-09 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic resolved KAFKA-4867.
-
Resolution: Duplicate

> zookeeper-security-migration.sh does not clear ACLs from all nodes
> --
>
> Key: KAFKA-4867
> URL: https://issues.apache.org/jira/browse/KAFKA-4867
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.1
>Reporter: Stevo Slavic
>Priority: Minor
>
> The zookeeper-security-migration.sh help for the --zookeeper.acl switch, with 
> 'secure'/'unsecure' as possible values, suggests that the command should apply 
> the change to all Kafka znodes. That doesn't seem to be the case, at least for 
> 'unsecure', i.e. the use case of clearing ACLs.
> With ACLs set on Kafka znodes, I ran
> {noformat}
> bin/zookeeper-security-migration.sh --zookeeper.acl 'unsecure' 
> --zookeeper.connect x.y.z.w:2181
> {noformat}
> and checked the ACLs set on a few nodes with zookeeper-shell.sh getAcl. Node 
> _/brokers/topics_ had its ACL cleared (only the default one, where world can do 
> anything, remained). On the other hand, node _/brokers_ still had secure ACLs 
> set, where world can read and the owner can do everything. Nodes and the 
> respective subtrees of _/cluster_ and _/controller_ also still had secure ACLs 
> set.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4867) zookeeper-security-migration.sh does not clear ACLs from all nodes

2017-03-09 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902683#comment-15902683
 ] 

Stevo Slavic commented on KAFKA-4867:
-

Duplicate of KAFKA-4864. Sorry.

> zookeeper-security-migration.sh does not clear ACLs from all nodes
> --
>
> Key: KAFKA-4867
> URL: https://issues.apache.org/jira/browse/KAFKA-4867
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.1
>Reporter: Stevo Slavic
>Priority: Minor
>
> The zookeeper-security-migration.sh help for the --zookeeper.acl switch, with 
> 'secure'/'unsecure' as possible values, suggests that the command should apply 
> the change to all Kafka znodes. That doesn't seem to be the case, at least for 
> 'unsecure', i.e. the use case of clearing ACLs.
> With ACLs set on Kafka znodes, I ran
> {noformat}
> bin/zookeeper-security-migration.sh --zookeeper.acl 'unsecure' 
> --zookeeper.connect x.y.z.w:2181
> {noformat}
> and checked the ACLs set on a few nodes with zookeeper-shell.sh getAcl. Node 
> _/brokers/topics_ had its ACL cleared (only the default one, where world can do 
> anything, remained). On the other hand, node _/brokers_ still had secure ACLs 
> set, where world can read and the owner can do everything. Nodes and the 
> respective subtrees of _/cluster_ and _/controller_ also still had secure ACLs 
> set.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4867) zookeeper-security-migration.sh does not clear ACLs from all nodes

2017-03-09 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902670#comment-15902670
 ] 

Stevo Slavic commented on KAFKA-4867:
-

The same problem affects setting ACLs with
{noformat}
bin/zookeeper-security-migration.sh --zookeeper.acl 'secure' 
--zookeeper.connect x.y.z.w:2181
{noformat}

> zookeeper-security-migration.sh does not clear ACLs from all nodes
> --
>
> Key: KAFKA-4867
> URL: https://issues.apache.org/jira/browse/KAFKA-4867
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.1
>Reporter: Stevo Slavic
>Priority: Minor
>
> The zookeeper-security-migration.sh help for the --zookeeper.acl switch, with 
> 'secure'/'unsecure' as possible values, suggests that the command should apply 
> the change to all Kafka znodes. That doesn't seem to be the case, at least for 
> 'unsecure', i.e. the use case of clearing ACLs.
> With ACLs set on Kafka znodes, I ran
> {noformat}
> bin/zookeeper-security-migration.sh --zookeeper.acl 'unsecure' 
> --zookeeper.connect x.y.z.w:2181
> {noformat}
> and checked the ACLs set on a few nodes with zookeeper-shell.sh getAcl. Node 
> _/brokers/topics_ had its ACL cleared (only the default one, where world can do 
> anything, remained). On the other hand, node _/brokers_ still had secure ACLs 
> set, where world can read and the owner can do everything. Nodes and the 
> respective subtrees of _/cluster_ and _/controller_ also still had secure ACLs 
> set.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-4867) zookeeper-security-migration.sh does not clear ACLs from all nodes

2017-03-08 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-4867:
---

 Summary: zookeeper-security-migration.sh does not clear ACLs from 
all nodes
 Key: KAFKA-4867
 URL: https://issues.apache.org/jira/browse/KAFKA-4867
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.1.1
Reporter: Stevo Slavic
Priority: Minor


The zookeeper-security-migration.sh help for the --zookeeper.acl switch, with 
'secure'/'unsecure' as possible values, suggests that the command should apply 
the change to all Kafka znodes. That doesn't seem to be the case, at least for 
'unsecure', i.e. the use case of clearing ACLs.

With ACLs set on Kafka znodes, I ran

{noformat}
bin/zookeeper-security-migration.sh --zookeeper.acl 'unsecure' 
--zookeeper.connect x.y.z.w:2181
{noformat}

and checked the ACLs set on a few nodes with zookeeper-shell.sh getAcl. Node 
_/brokers/topics_ had its ACL cleared (only the default one, where world can do 
anything, remained). On the other hand, node _/brokers_ still had secure ACLs 
set, where world can read and the owner can do everything. Nodes and the 
respective subtrees of _/cluster_ and _/controller_ also still had secure ACLs 
set.
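
For anyone re-checking, the ACLs on the affected znodes can be inspected along 
these lines (connect string as in the report above):

{noformat}
bin/zookeeper-shell.sh x.y.z.w:2181
getAcl /brokers/topics
getAcl /brokers
getAcl /cluster
getAcl /controller
{noformat}

After an 'unsecure' run, all of these would be expected to show only the open 
world:anyone ACL, but only /brokers/topics did.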



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-4814) ZookeeperLeaderElector not respecting zookeeper.set.acl

2017-02-28 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-4814:
---

 Summary: ZookeeperLeaderElector not respecting zookeeper.set.acl
 Key: KAFKA-4814
 URL: https://issues.apache.org/jira/browse/KAFKA-4814
 Project: Kafka
  Issue Type: Bug
  Components: security
Affects Versions: 0.10.1.1
Reporter: Stevo Slavic
Priority: Minor


According to the [migration 
guide|https://kafka.apache.org/documentation/#zk_authz_migration] for enabling 
ZooKeeper security on an existing Apache Kafka cluster, and the [broker 
configuration 
documentation|https://kafka.apache.org/documentation/#brokerconfigs] for the 
{{zookeeper.set.acl}} configuration property, when this property is set to 
false Kafka brokers should not set any ACLs on ZooKeeper nodes, even when a 
JAAS config file is provisioned to the broker.

The problem is that there is broker-side logic, like that in 
{{ZookeeperLeaderElector}} making use of {{JaasUtils#isZkSecurityEnabled}}, 
which does not respect this configuration property. This results in ACLs being 
set as soon as a JAAS config file is provisioned to a Kafka broker, even while 
{{zookeeper.set.acl}} is set to {{false}}.

Notice that {{JaasUtils}} is in the {{org.apache.kafka.common.security}} package 
of the {{kafka-clients}} module, while {{zookeeper.set.acl}} is a broker-side 
only configuration property.

To make it possible to enable ZooKeeper authentication on an existing cluster 
without downtime, it should be possible to have all Kafka brokers in the cluster 
first authenticate to the ZooKeeper cluster without any ACLs being set. Only once 
all ZooKeeper clients (Kafka brokers and others) are authenticating to the 
ZooKeeper cluster can ACLs start being set.
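
A minimal illustration of the migration step where this shows up (file paths are 
placeholders): the broker is given a JAAS config so it can authenticate to 
ZooKeeper, while ACL setting is explicitly left off:

{noformat}
# server.properties
zookeeper.set.acl=false

# JAAS config provisioned to the broker JVM, e.g. via
KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf"
{noformat}

With this combination the expectation is that no ACLs are set, yet znodes created 
through {{ZookeeperLeaderElector}} end up with ACLs anyway.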



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4385) producer is sending too many unnecessary meta data request if the meta data for a topic is not available and "auto.create.topics.enable" =false

2017-02-17 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872645#comment-15872645
 ] 

Stevo Slavic commented on KAFKA-4385:
-

[~Jun Yao] [~ewencp] it would open up more options if the producer was made 
aware of whether topic auto-creation is enabled at all (e.g. through producer 
configuration, or by the producer requesting the configuration from a broker on 
startup). Nevertheless, I think it would be enough to make the model of the 
metadata cache in the producer richer; for every topic it should keep:
- the last known metadata, but also
- the last metadata request status error code

Then the {{KafkaProducer}} thread waiting for metadata could, at least on 
timeout, check whether there is no metadata and the last error code is 
{{UnknownTopicOrPartition}}, and in that case throw 
{{UnknownTopicOrPartitionException}}, otherwise throw {{TimeoutException}} (see 
the sketch below). No protocol change is needed for that. The metadata cache 
maybe doesn't already keep the last metadata request status error code per topic 
- adding that, plus the mentioned timeout handling logic change, should be 
enough to distinguish between a non-existing topic and a timeout.

If {{KafkaProducer}} was aware of the auto topic creation configuration 
(enabled/disabled), then it could give up sooner, but that's an optimization of 
arguable value, and (at least the option where the configuration is obtained 
from the brokers) it might require a protocol change. The somewhat related 
KAFKA-2948 already makes one aspect more efficient - it makes sure that topic 
metadata for non-existing or no-longer-existing topics will eventually stop 
being retrieved.
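
A rough sketch of the proposed timeout handling (the method and its parameters 
are hypothetical; in the real code the topic's last error code would have to come 
from the per-topic tracking described above):
{code}
import org.apache.kafka.common.errors.TimeoutException;
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
import org.apache.kafka.common.protocol.Errors;

final class MetadataWaitTimeoutSketch {
    // Hypothetical hook invoked when the wait for metadata gives up.
    static void onMetadataWaitTimeout(String topic, boolean metadataKnown,
                                      Errors lastErrorCode, long maxWaitMs) {
        if (!metadataKnown && lastErrorCode == Errors.UNKNOWN_TOPIC_OR_PARTITION) {
            // No metadata and the broker reported the topic as unknown:
            // surface that instead of a generic timeout.
            throw new UnknownTopicOrPartitionException("Topic " + topic + " does not exist");
        }
        throw new TimeoutException("Failed to update metadata after " + maxWaitMs + " ms.");
    }
}
{code}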

> producer is sending too many unnecessary meta data request if the meta data 
> for a topic is not available and "auto.create.topics.enable" =false
> ---
>
> Key: KAFKA-4385
> URL: https://issues.apache.org/jira/browse/KAFKA-4385
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Jun Yao
>
> All current kafka-client producer implementations (<= 0.10.1.0), when sending 
> a msg to a topic, will first check whether metadata for this topic is 
> available or not. When it is not available, they will call 
> "metadata.requestUpdate()" and wait for metadata from the brokers.
> The thing is, inside "org.apache.kafka.clients.Metadata.awaitUpdate()" there 
> is already a "while (this.version <= lastVersion)" loop waiting for a new 
> version response, so the loop inside 
> "org.apache.kafka.clients.producer.KafkaProducer.waitOnMetadata()" is not 
> needed.
> When "auto.create.topics.enable" is false, sending msgs to a non-existent 
> topic will trigger too many metadata requests: every time a metadata response 
> is returned, because it does not contain the metadata for the topic, the 
> producer is going to try again until TimeoutException is thrown.
> This is a waste and sometimes causes too much overhead when unexpected msgs 
> arrive.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4385) producer is sending too many unnecessary meta data request if the meta data for a topic is not available and "auto.create.topics.enable" =false

2017-01-27 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-4385:

Affects Version/s: 0.9.0.1

> producer is sending too many unnecessary meta data request if the meta data 
> for a topic is not available and "auto.create.topics.enable" =false
> ---
>
> Key: KAFKA-4385
> URL: https://issues.apache.org/jira/browse/KAFKA-4385
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Jun Yao
>
> All current kafka-client producer implementations (<= 0.10.1.0), when sending 
> a msg to a topic, will first check whether metadata for this topic is 
> available or not. When it is not available, they will call 
> "metadata.requestUpdate()" and wait for metadata from the brokers.
> The thing is, inside "org.apache.kafka.clients.Metadata.awaitUpdate()" there 
> is already a "while (this.version <= lastVersion)" loop waiting for a new 
> version response, so the loop inside 
> "org.apache.kafka.clients.producer.KafkaProducer.waitOnMetadata()" is not 
> needed.
> When "auto.create.topics.enable" is false, sending msgs to a non-existent 
> topic will trigger too many metadata requests: every time a metadata response 
> is returned, because it does not contain the metadata for the topic, the 
> producer is going to try again until TimeoutException is thrown.
> This is a waste and sometimes causes too much overhead when unexpected msgs 
> arrive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4385) producer is sending too many unnecessary meta data request if the meta data for a topic is not available and "auto.create.topics.enable" =false

2017-01-27 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842831#comment-15842831
 ] 

Stevo Slavic commented on KAFKA-4385:
-

{{UnknownTopicOrPartitionException}} extends {{InvalidMetadataException}}, which 
extends {{RetriableException}}.

This shows the root cause of the problem - by current Kafka design, 
{{UnknownTopicOrPartitionException}} is considered a retriable exception at all 
times. IMO, when auto topic creation is disabled, 
{{UnknownTopicOrPartitionException}} should not be considered retriable.

Besides the unnecessary metadata retrieval retries, I've found that with 
{{KafkaProducer}} in 0.9.0.1 and 0.10.1.1, with auto topic creation disabled, 
when one tries to send to a non-existing topic the registered callback will not 
be completed with {{UnknownTopicOrPartitionException}}. Instead 
{noformat}
"org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
after X ms."
{noformat}
will be thrown.
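
A minimal sketch reproducing that observation (broker address and topic name are 
placeholders, and {{max.block.ms}} is only lowered so the failure shows up 
quickly):
{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SendToNonExistingTopic {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("max.block.ms", "5000"); // fail fast while waiting for metadata

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // With auto.create.topics.enable=false on the brokers, the failure
            // surfaces as "TimeoutException: Failed to update metadata after
            // 5000 ms." rather than as UnknownTopicOrPartitionException.
            producer.send(new ProducerRecord<>("does-not-exist", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            System.err.println("callback got: " + exception);
                        }
                    });
        }
    }
}
{code}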

> producer is sending too many unnecessary meta data request if the meta data 
> for a topic is not available and "auto.create.topics.enable" =false
> ---
>
> Key: KAFKA-4385
> URL: https://issues.apache.org/jira/browse/KAFKA-4385
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Yao
>
> All current kafka-client producer implementation (<= 0.10.1.0),
> When sending a msg to a topic, it will first check if meta data for this 
> topic is available or not, 
> when not available, it will set "metadata.requestUpdate()" and wait for meta 
> data from brokers, 
> The thing is inside "org.apache.kafka.clients.Metadata.awaitUpdate()", it's 
> already doing a "while (this.version <= lastVersion)" loop waiting for new 
> version response, 
> So the loop inside 
> "org.apache.kafka.clients.producer.KafkaProducer.waitOnMetadata() is not 
> needed, 
> When "auto.create.topics.enable" is false, sending msgs to a non-exist topic 
> will trigger too many meta requests, everytime a metadata response is 
> returned, because it does not contain the metadata for the topic, it's going 
> to try again until TimeoutException is thrown; 
> This is a waste and sometimes causes too much overhead when unexpected msgs 
> are arrived. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3502) Build is killed during kafka streams tests due to `pure virtual method called` error

2017-01-06 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806437#comment-15806437
 ] 

Stevo Slavic commented on KAFKA-3502:
-

This happens every time the [PR 
#96|https://github.com/apache/kafka/pull/96] changes are built, but only in the 
kafka-pr-jdk8-scala2.12 build job. I couldn't reproduce it locally.

> Build is killed during kafka streams tests due to `pure virtual method 
> called` error
> 
>
> Key: KAFKA-3502
> URL: https://issues.apache.org/jira/browse/KAFKA-3502
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ashish K Singh
>  Labels: transient-unit-test-failure
>
> Build failed due to failure in streams' test. Not clear which test led to 
> this.
> Jenkins console: 
> https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/3210/console
> {code}
> org.apache.kafka.streams.kstream.internals.KTableFilterTest > testValueGetter 
> PASSED
> org.apache.kafka.streams.kstream.internals.KStreamFlatMapTest > testFlatMap 
> PASSED
> org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testAggBasic 
> PASSED
> org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
> testFlatMapValues PASSED
> org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge PASSED
> org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom PASSED
> org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName PASSED
> pure virtual method called
> terminate called without an active exception
> :streams:test FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':streams:test'.
> > Process 'Gradle Test Executor 4' finished with non-zero exit value 134
> {code}
> Tried reproducing the issue locally, but could not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4338) Release Kafka 0.10.1.0 on Maven Central

2016-10-24 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603250#comment-15603250
 ] 

Stevo Slavic commented on KAFKA-4338:
-

It's already there. See 
http://repo1.maven.org/maven2/org/apache/kafka/kafka_2.10/0.10.1.0/ and 
http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.kafka%22

> Release Kafka 0.10.1.0 on Maven Central
> ---
>
> Key: KAFKA-4338
> URL: https://issues.apache.org/jira/browse/KAFKA-4338
> Project: Kafka
>  Issue Type: Task
>Reporter: Emanuele Cesena
>
> Unless I'm missing something, Kafka 0.10.1.0 doesn't seem to be on maven 
> central yet:
> https://mvnrepository.com/artifact/org.apache.kafka/kafka_2.10



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4130) [docs] Link to Varnish architect notes is broken

2016-09-06 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-4130:

Description: 
Paragraph in Kafka documentation
{quote}
This style of pagecache-centric design is described in an article on the design 
of Varnish here (along with a healthy dose of arrogance). 
{quote}
contains a broken link.

Should probably link to http://varnish-cache.org/wiki/ArchitectNotes

  was:
Paraagraph in Kafka documentation
{quote}
This style of pagecache-centric design is described in an article on the design 
of Varnish here (along with a healthy dose of arrogance). 
{quote}
contains a broken link.

Should probably link to http://varnish-cache.org/wiki/ArchitectNotes


> [docs] Link to Varnish architect notes is broken
> 
>
> Key: KAFKA-4130
> URL: https://issues.apache.org/jira/browse/KAFKA-4130
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.9.0.1, 0.10.0.1
>Reporter: Stevo Slavic
>Priority: Trivial
>
> Paragraph in Kafka documentation
> {quote}
> This style of pagecache-centric design is described in an article on the 
> design of Varnish here (along with a healthy dose of arrogance). 
> {quote}
> contains a broken link.
> Should probably link to http://varnish-cache.org/wiki/ArchitectNotes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4130) [docs] Link to Varnish architect notes is broken

2016-09-06 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-4130:
---

 Summary: [docs] Link to Varnish architect notes is broken
 Key: KAFKA-4130
 URL: https://issues.apache.org/jira/browse/KAFKA-4130
 Project: Kafka
  Issue Type: Bug
  Components: website
Affects Versions: 0.10.0.1, 0.9.0.1
Reporter: Stevo Slavic
Priority: Trivial


Paraagraph in Kafka documentation
{quote}
This style of pagecache-centric design is described in an article on the design 
of Varnish here (along with a healthy dose of arrogance). 
{quote}
contains a broken link.

Should probably link to http://varnish-cache.org/wiki/ArchitectNotes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1993) Enable topic deletion as default

2016-08-28 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15443669#comment-15443669
 ] 

Stevo Slavic commented on KAFKA-1993:
-

There is a patch available for KAFKA-2000
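
In the meantime, topic deletion can already be enabled explicitly on each broker 
via the existing property:
{noformat}
# server.properties
delete.topic.enable=true
{noformat}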

> Enable topic deletion as default
> 
>
> Key: KAFKA-1993
> URL: https://issues.apache.org/jira/browse/KAFKA-1993
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Attachments: KAFKA-1993.patch
>
>
> Since topic deletion is now throughly tested and works as well as most Kafka 
> features, we should enable it by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2016-07-04 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15361433#comment-15361433
 ] 

Stevo Slavic commented on KAFKA-873:


Even with the latest 0.10.0.0 Kafka clients, explicit topic management requires 
using AdminTools and working with ZkUtils/ZkClient.

I guess a better course of action (than switching the dependency now) is to work 
on a topic management broker API, to be able to remove the ZK dependency from 
clients and tools completely and make only the brokers talk to ZooKeeper. So, 
work on https://issues.apache.org/jira/browse/KAFKA-2945 and the likes in 
https://issues.apache.org/jira/browse/KAFKA-1694

After that, it will also be easier to abstract away metadata storage and 
coordination in the broker. 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-30+-+Allow+for+brokers+to+have+plug-able+consensus+and+meta+data+storage+sub+systems

> Consider replacing zkclient with curator (with zkclient-bridge)
> ---
>
> Key: KAFKA-873
> URL: https://issues.apache.org/jira/browse/KAFKA-873
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.0
>Reporter: Scott Clasen
>Assignee: Grant Henke
>
> If zkclient was replaced with curator and curator-x-zkclient-bridge it would 
> be initially a drop-in replacement
> https://github.com/Netflix/curator/wiki/ZKClient-Bridge
> With the addition of a few more props to ZkConfig, and a bit of code this 
> would open up the possibility of using ACLs in zookeeper (which aren't 
> supported directly by zkclient), as well as integrating with netflix 
> exhibitor for those of us using that.
> Looks like KafkaZookeeperClient needs some love anyhow...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2016-07-04 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15361334#comment-15361334
 ] 

Stevo Slavic commented on KAFKA-873:


Compare 
https://github.com/sgroschupf/zkclient/blob/master/src/main/java/org/I0Itec/zkclient/ZkClient.java#L899
(configurable from a file only - one could maybe provide a fake file path and 
call Configuration.setConfiguration before instantiating ZkClient, but that's 
dirty) to 
https://github.com/apache/curator/blob/master/curator-framework/src/main/java/org/apache/curator/framework/CuratorFrameworkFactory.java#L185
(buildable from any auth credentials/secrets source, e.g. env vars or dynamic 
configuration properties).
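
A small illustration of the Curator side (connect string and the credentials 
source are placeholders; for Kafka's SASL use case the auth scheme would differ - 
the point is only that credentials can come from anywhere rather than from a 
JAAS file on disk):
{code}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorAuthSketch {
    public static void main(String[] args) {
        // Credentials taken from an environment variable purely as an example
        // of a non-file source (e.g. "user:password").
        byte[] auth = System.getenv("ZK_DIGEST_AUTH").getBytes();

        CuratorFramework client = CuratorFrameworkFactory.builder()
                .connectString("x.y.z.w:2181")
                .retryPolicy(new ExponentialBackoffRetry(1000, 3))
                .authorization("digest", auth)
                .build();
        client.start();
    }
}
{code}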

> Consider replacing zkclient with curator (with zkclient-bridge)
> ---
>
> Key: KAFKA-873
> URL: https://issues.apache.org/jira/browse/KAFKA-873
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.0
>Reporter: Scott Clasen
>Assignee: Grant Henke
>
> If zkclient was replaced with curator and curator-x-zkclient-bridge it would 
> be initially a drop-in replacement
> https://github.com/Netflix/curator/wiki/ZKClient-Bridge
> With the addition of a few more props to ZkConfig, and a bit of code this 
> would open up the possibility of using ACLs in zookeeper (which aren't 
> supported directly by zkclient), as well as integrating with netflix 
> exhibitor for those of us using that.
> Looks like KafkaZookeeperClient needs some love anyhow...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2016-07-04 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15361175#comment-15361175
 ] 

Stevo Slavic commented on KAFKA-873:


One pro-Curator benefit to note - it seems easier to provision ZooKeeper client 
authentication with Curator than with ZkClient; out of the box it is not limited 
to a JAAS config file only.

> Consider replacing zkclient with curator (with zkclient-bridge)
> ---
>
> Key: KAFKA-873
> URL: https://issues.apache.org/jira/browse/KAFKA-873
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.0
>Reporter: Scott Clasen
>Assignee: Grant Henke
>
> If zkclient was replaced with curator and curator-x-zkclient-bridge it would 
> be initially a drop-in replacement
> https://github.com/Netflix/curator/wiki/ZKClient-Bridge
> With the addition of a few more props to ZkConfig, and a bit of code this 
> would open up the possibility of using ACLs in zookeeper (which aren't 
> supported directly by zkclient), as well as integrating with netflix 
> exhibitor for those of us using that.
> Looks like KafkaZookeeperClient needs some love anyhow...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1452) Killing last replica for partition doesn't change ISR/Leadership if replica is running controller

2016-06-12 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15326383#comment-15326383
 ] 

Stevo Slavic edited comment on KAFKA-1452 at 6/12/16 11:32 AM:
---

This bug is still present in 0.10.0.0. I reproduced it by starting a clean 
cluster with 1 ZooKeeper node and 3 brokers, creating a single topic with 2 
partitions and a replication factor of 2, then stopping a non-controller broker 
and finally stopping the controller broker. The only remaining broker would 
become controller, but the partition that lost all its replicas would still be 
labeled as having one remaining, even in-sync, replica - the dead initial 
controller broker.

Things are fine if the non-controller broker that is stopped is one that is in 
the partition's replica assignment together with another non-controller broker. 
So the problem affects only partitions for which the controller is part of the 
replica set, and only when the controller is the last replica to be stopped. 
Brokers don't even need to be killed to reproduce the issue.

I wish one could choose which brokers in a cluster are controller-only and which 
are data-only (see related KAFKA-2310).


was (Author: sslavic):
This bug is still present in 0.10.0.0. I reproduced it by starting a clean 
cluster with 1 ZooKeeper node and 3 brokers, creating a single topic with 2 
partitions and a replication factor of 2, then stopping a non-controller broker 
and finally stopping the controller broker. The only remaining broker would 
become controller, but the partition that lost all its replicas would still be 
labeled as having one remaining replica - the dead initial controller broker.

Things are fine if the non-controller broker that is stopped is one that is in 
the partition's replica assignment together with another non-controller broker. 
So the problem affects only partitions for which the controller is part of the 
replica set, and only when the controller is the last replica to be stopped. 
Brokers don't even need to be killed to reproduce the issue.

I wish one could choose which brokers in a cluster are controller-only and which 
are data-only (see related KAFKA-2310).

> Killing last replica for partition doesn't change ISR/Leadership if replica 
> is running controller
> -
>
> Key: KAFKA-1452
> URL: https://issues.apache.org/jira/browse/KAFKA-1452
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.1.1
>Reporter: Alexander Demidko
>Assignee: Neha Narkhede
>
> Kafka version is 0.8.1.1. We have three machines: A, B, C. Let's say there is 
> a topic with replication 2 and one of its partitions - partition 1 - is placed 
> on brokers A and B. If broker A is already down, then for partition 1 
> we have: Leader: B, ISR: [B]. If the current controller is node C, then 
> killing broker B will turn partition 1 into state: Leader: -1, ISR: []. But 
> if the current controller is node B, then killing it won't update 
> leadership/ISR for partition 1 even when the controller is restarted on node 
> C, so partition 1 will forever think its leader is node B, which is dead.
> It looks like KafkaController.onBrokerFailure handles the situation when the 
> broker that went down is the partition leader - it sets the new leader value 
> to -1. In contrast, KafkaController.onControllerFailover never removes the 
> leader from a partition with all replicas offline - allegedly because the 
> partition gets into the ReplicaDeletionIneligible state. Is this intended 
> behavior?
> This behavior affects DefaultEventHandler.getPartition in the null-key case - 
> it can't determine that partition 1 has no leader, and this results in event 
> send failures.
> What we are trying to achieve is to be able to write data even if some 
> partitions lost all replicas, which is a rare yet still possible scenario. 
> Using a null key looked suitable with minor DefaultEventHandler modifications 
> (like getting rid of DefaultEventHandler.sendPartitionPerTopicCache to 
> avoid caching and uneven event distribution) as we neither use log 
> compaction nor rely on partitioning of the data. We had such behavior with 
> kafka 0.7 - if a node is down, simply produce to a different one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1452) Killing last replica for partition doesn't change ISR/Leadership if replica is running controller

2016-06-12 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15326383#comment-15326383
 ] 

Stevo Slavic commented on KAFKA-1452:
-

This bug is still present in 0.10.0.0. I reproduced it by starting a clean 
cluster with 1 ZooKeeper node and 3 brokers, creating a single topic with 2 
partitions and a replication factor of 2, then stopping a non-controller broker 
and finally stopping the controller broker. The only remaining broker would 
become controller, but the partition that lost all its replicas would still be 
labeled as having one remaining replica - the dead initial controller broker.

Things are fine if the non-controller broker that is stopped is one that is in 
the partition's replica assignment together with another non-controller broker. 
So the problem affects only partitions for which the controller is part of the 
replica set, and only when the controller is the last replica to be stopped. 
Brokers don't even need to be killed to reproduce the issue.

I wish one could choose which brokers in a cluster are controller-only and which 
are data-only (see related KAFKA-2310).
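
The reproduction boils down to something like this (host and topic names are 
placeholders), observing the {{--describe}} output before and after the two 
brokers are stopped:

{noformat}
bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic repro \
  --partitions 2 --replication-factor 2
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic repro
# stop the non-controller broker from one partition's replica set,
# then stop the controller broker; describe again:
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic repro
# the partition that lost both replicas still lists the dead controller
# broker as leader and sole in-sync replica
{noformat}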

> Killing last replica for partition doesn't change ISR/Leadership if replica 
> is running controller
> -
>
> Key: KAFKA-1452
> URL: https://issues.apache.org/jira/browse/KAFKA-1452
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.1.1
>Reporter: Alexander Demidko
>Assignee: Neha Narkhede
>
> Kafka version is 0.8.1.1. We have three machines: A,B,C. Let’s say there is a 
> topic with replication 2 and one of it’s partitions - partition 1 is placed 
> on brokers A and B. If the broker A is already down than for the partition 1 
> we have: Leader: B, ISR: [B]. If the current controller is node C, than 
> killing broker B will turn partition 1 into state: Leader:  -1, ISR: []. But 
> if the current controller is node B, than killing it won’t update 
> leadership/isr for partition 1 even when controller will be restarted on node 
> C, so partition 1 will forever think it’s leader is node B which is dead.
> It looks that KafkaController.onBrokerFailure handles situation when the 
> broker down is the partition leader - it sets the new leader value to -1. To 
> the contrary, KafkaController.onControllerFailover never removes leader from 
> the partition with all replicas offline - allegedly because partition gets 
> into ReplicaDeletionIneligible state. Is it intended behavior?
> This behavior affects DefaultEventHandler.getPartition in the null key case - 
> it can’t determine partition 1 as having no leader, and this results into 
> events send failure.
> What we are trying to achieve - is to be able to write data even if some 
> partitions lost all replicas, which is rare yet still possible scenario. 
> Using null key looked suitable with minor DefaultEventHandler modifications 
> (like getting rid from DefaultEventHandler.sendPartitionPerTopicCache to 
> avoid caching and uneven events distribution) as we neither use logs 
> compaction nor rely on partitioning of the data. We had such behavior with 
> kafka 0.7 - if the node is down, simply produce to a different one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3390) ReplicaManager may infinitely try-fail to shrink ISR set of deleted partition

2016-03-15 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195612#comment-15195612
 ] 

Stevo Slavic commented on KAFKA-3390:
-

Yes. The topic ZK node and its subnodes would not have been deleted if the topic 
and partitions had not actually been deleted.
The replica manager cache didn't see these changes, so every time it got 
triggered to shrink the ISR it would get 
{{org.apache.zookeeper.KeeperException$NoNodeException}}.
For whatever reason it also never got unscheduled from doing this check and ISR 
shrink.

The only workaround was to restart the node. That flushed the cache, changed the 
controller, and things were peaceful again.

> ReplicaManager may infinitely try-fail to shrink ISR set of deleted partition
> -
>
> Key: KAFKA-3390
> URL: https://issues.apache.org/jira/browse/KAFKA-3390
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Stevo Slavic
>Assignee: Mayuresh Gharat
>
> For a topic whose deletion has been requested, Kafka replica manager may end 
> up infinitely trying and failing to shrink ISR.
> Here is fragment from server.log where this recurring and never ending 
> condition has been noticed:
> {noformat}
> [2016-03-04 09:42:13,894] INFO Partition [foo,0] on broker 1: Shrinking ISR 
> for partition [foo,0] from 1,3,2 to 1 (kafka.cluster.Partition)
> [2016-03-04 09:42:13,897] WARN Conditional update of path 
> /brokers/topics/foo/partitions/0/state with data 
> {"controller_epoch":53,"leader":1,"version":1,"leader_epoch":34,"isr":[1]} 
> and expected version 68 failed due to 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /brokers/topics/foo/partitions/0/state (kafka.utils.ZkUtils)
> [2016-03-04 09:42:13,898] INFO Partition [foo,0] on broker 1: Cached 
> zkVersion [68] not equal to that in zookeeper, skip updating ISR 
> (kafka.cluster.Partition)
> [2016-03-04 09:42:23,894] INFO Partition [foo,0] on broker 1: Shrinking ISR 
> for partition [foo,0] from 1,3,2 to 1 (kafka.cluster.Partition)
> [2016-03-04 09:42:23,897] WARN Conditional update of path 
> /brokers/topics/foo/partitions/0/state with data 
> {"controller_epoch":53,"leader":1,"version":1,"leader_epoch":34,"isr":[1]} 
> and expected version 68 failed due to 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /brokers/topics/foo/partitions/0/state (kafka.utils.ZkUtils)
> [2016-03-04 09:42:23,897] INFO Partition [foo,0] on broker 1: Cached 
> zkVersion [68] not equal to that in zookeeper, skip updating ISR 
> (kafka.cluster.Partition)
> [2016-03-04 09:42:33,894] INFO Partition [foo,0] on broker 1: Shrinking ISR 
> for partition [foo,0] from 1,3,2 to 1 (kafka.cluster.Partition)
> [2016-03-04 09:42:33,897] WARN Conditional update of path 
> /brokers/topics/foo/partitions/0/state with data 
> {"controller_epoch":53,"leader":1,"version":1,"leader_epoch":34,"isr":[1]} 
> and expected version 68 failed due to 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /brokers/topics/foo/partitions/0/state (kafka.utils.ZkUtils)
> [2016-03-04 09:42:33,897] INFO Partition [foo,0] on broker 1: Cached 
> zkVersion [68] not equal to that in zookeeper, skip updating ISR 
> (kafka.cluster.Partition)
> ...
> {noformat}
> Before topic deletion was requested, this was state in ZK of its sole 
> partition:
> {noformat}
> Zxid: 0x181045
> Cxid: 0xc92
> Client id:0x3532dd88fd2
> Time: Mon Feb 29 16:46:23 CET 2016
> Operation:setData
> Path: /brokers/topics/foo/partitions/0/state
> Data: 
> {"controller_epoch":53,"leader":1,"version":1,"leader_epoch":34,"isr":[1,3,2]}
> Version:  68
> {noformat}
> Topic (sole partition) had no data ever published to it. I guess at some 
> point after topic deletion has been requested, partition state first got 
> updated and this was updated state:
> {noformat}
> Zxid: 0x18b0be
> Cxid: 0x141e4
> Client id:0x3532dd88fd2
> Time: Fri Mar 04 9:41:52 CET 2016
> Operation:setData
> Path: /brokers/topics/foo/partitions/0/state
> Data: 
> {"controller_epoch":54,"leader":1,"version":1,"leader_epoch":35,"isr":[1,3]}
> Version:  69
> {noformat}
> For whatever reason replica manager (some cache it uses, I guess 
> ReplicaManager.allPartitions) never sees this update, nor does it see that 
> the partition state, partition, partitions node and finally topic node got 
> deleted:
> {noformat}
> Zxid: 0x18b0bf
> Cxid: 0x40fb
> Client id:0x3532dd88fd2000a
> Time: Fri Mar 04 9:41:52 CET 2016
> Operation:delete
> Path: /brokers/topics/foo/partitions/0/state
> ---
> Zxid: 0x18b0c0
> Cxid: 0x40fe
> 

[jira] [Comment Edited] (KAFKA-3390) ReplicaManager may infinitely try-fail to shrink ISR set of deleted partition

2016-03-15 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195612#comment-15195612
 ] 

Stevo Slavic edited comment on KAFKA-3390 at 3/15/16 4:44 PM:
--

Yes. The topic ZK node and its subnodes would not have been deleted if the topic 
and partitions had not actually been deleted.
The replica manager cache didn't see these changes, so every time it got 
triggered to shrink the ISR it would get 
{{org.apache.zookeeper.KeeperException$NoNodeException}}.
For whatever reason it also never got unscheduled from doing this check and ISR 
shrink.

The only workaround was to restart the node. That "flushed" the cache, changed 
the controller, and things were peaceful again.


was (Author: sslavic):
Yes. The topic ZK node and its subnodes would not have been deleted if the topic 
and partitions had not actually been deleted.
The replica manager cache didn't see these changes, so every time it got 
triggered to shrink the ISR it would get 
{{org.apache.zookeeper.KeeperException$NoNodeException}}.
For whatever reason it also never got unscheduled from doing this check and ISR 
shrink.

The only workaround was to restart the node. That flushed the cache, changed the 
controller, and things were peaceful again.

> ReplicaManager may infinitely try-fail to shrink ISR set of deleted partition
> -
>
> Key: KAFKA-3390
> URL: https://issues.apache.org/jira/browse/KAFKA-3390
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Stevo Slavic
>Assignee: Mayuresh Gharat
>
> For a topic whose deletion has been requested, Kafka replica manager may end 
> up infinitely trying and failing to shrink ISR.
> Here is fragment from server.log where this recurring and never ending 
> condition has been noticed:
> {noformat}
> [2016-03-04 09:42:13,894] INFO Partition [foo,0] on broker 1: Shrinking ISR 
> for partition [foo,0] from 1,3,2 to 1 (kafka.cluster.Partition)
> [2016-03-04 09:42:13,897] WARN Conditional update of path 
> /brokers/topics/foo/partitions/0/state with data 
> {"controller_epoch":53,"leader":1,"version":1,"leader_epoch":34,"isr":[1]} 
> and expected version 68 failed due to 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /brokers/topics/foo/partitions/0/state (kafka.utils.ZkUtils)
> [2016-03-04 09:42:13,898] INFO Partition [foo,0] on broker 1: Cached 
> zkVersion [68] not equal to that in zookeeper, skip updating ISR 
> (kafka.cluster.Partition)
> [2016-03-04 09:42:23,894] INFO Partition [foo,0] on broker 1: Shrinking ISR 
> for partition [foo,0] from 1,3,2 to 1 (kafka.cluster.Partition)
> [2016-03-04 09:42:23,897] WARN Conditional update of path 
> /brokers/topics/foo/partitions/0/state with data 
> {"controller_epoch":53,"leader":1,"version":1,"leader_epoch":34,"isr":[1]} 
> and expected version 68 failed due to 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /brokers/topics/foo/partitions/0/state (kafka.utils.ZkUtils)
> [2016-03-04 09:42:23,897] INFO Partition [foo,0] on broker 1: Cached 
> zkVersion [68] not equal to that in zookeeper, skip updating ISR 
> (kafka.cluster.Partition)
> [2016-03-04 09:42:33,894] INFO Partition [foo,0] on broker 1: Shrinking ISR 
> for partition [foo,0] from 1,3,2 to 1 (kafka.cluster.Partition)
> [2016-03-04 09:42:33,897] WARN Conditional update of path 
> /brokers/topics/foo/partitions/0/state with data 
> {"controller_epoch":53,"leader":1,"version":1,"leader_epoch":34,"isr":[1]} 
> and expected version 68 failed due to 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /brokers/topics/foo/partitions/0/state (kafka.utils.ZkUtils)
> [2016-03-04 09:42:33,897] INFO Partition [foo,0] on broker 1: Cached 
> zkVersion [68] not equal to that in zookeeper, skip updating ISR 
> (kafka.cluster.Partition)
> ...
> {noformat}
> Before topic deletion was requested, this was state in ZK of its sole 
> partition:
> {noformat}
> Zxid: 0x181045
> Cxid: 0xc92
> Client id:0x3532dd88fd2
> Time: Mon Feb 29 16:46:23 CET 2016
> Operation:setData
> Path: /brokers/topics/foo/partitions/0/state
> Data: 
> {"controller_epoch":53,"leader":1,"version":1,"leader_epoch":34,"isr":[1,3,2]}
> Version:  68
> {noformat}
> Topic (sole partition) had no data ever published to it. I guess at some 
> point after topic deletion has been requested, partition state first got 
> updated and this was updated state:
> {noformat}
> Zxid: 0x18b0be
> Cxid: 0x141e4
> Client id:0x3532dd88fd2
> Time: Fri Mar 04 9:41:52 CET 2016
> Operation:setData
> Path: /brokers/topics/foo/partitions/0/state
> Data: 
> 

[jira] [Updated] (KAFKA-3390) ReplicaManager may infinitely try-fail to shrink ISR set of deleted partition

2016-03-13 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-3390:

Description: 
For a topic whose deletion has been requested, Kafka replica manager may end up 
infinitely trying and failing to shrink ISR.

Here is fragment from server.log where this recurring and never ending 
condition has been noticed:

{noformat}
[2016-03-04 09:42:13,894] INFO Partition [foo,0] on broker 1: Shrinking ISR for 
partition [foo,0] from 1,3,2 to 1 (kafka.cluster.Partition)
[2016-03-04 09:42:13,897] WARN Conditional update of path 
/brokers/topics/foo/partitions/0/state with data 
{"controller_epoch":53,"leader":1,"version":1,"leader_epoch":34,"isr":[1]} and 
expected version 68 failed due to 
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /brokers/topics/foo/partitions/0/state (kafka.utils.ZkUtils)
[2016-03-04 09:42:13,898] INFO Partition [foo,0] on broker 1: Cached zkVersion 
[68] not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
[2016-03-04 09:42:23,894] INFO Partition [foo,0] on broker 1: Shrinking ISR for 
partition [foo,0] from 1,3,2 to 1 (kafka.cluster.Partition)
[2016-03-04 09:42:23,897] WARN Conditional update of path 
/brokers/topics/foo/partitions/0/state with data 
{"controller_epoch":53,"leader":1,"version":1,"leader_epoch":34,"isr":[1]} and 
expected version 68 failed due to 
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /brokers/topics/foo/partitions/0/state (kafka.utils.ZkUtils)
[2016-03-04 09:42:23,897] INFO Partition [foo,0] on broker 1: Cached zkVersion 
[68] not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
[2016-03-04 09:42:33,894] INFO Partition [foo,0] on broker 1: Shrinking ISR for 
partition [foo,0] from 1,3,2 to 1 (kafka.cluster.Partition)
[2016-03-04 09:42:33,897] WARN Conditional update of path 
/brokers/topics/foo/partitions/0/state with data 
{"controller_epoch":53,"leader":1,"version":1,"leader_epoch":34,"isr":[1]} and 
expected version 68 failed due to 
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /brokers/topics/foo/partitions/0/state (kafka.utils.ZkUtils)
[2016-03-04 09:42:33,897] INFO Partition [foo,0] on broker 1: Cached zkVersion 
[68] not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
...
{noformat}

Before topic deletion was requested, this was state in ZK of its sole partition:
{noformat}
Zxid:   0x181045
Cxid:   0xc92
Client id:  0x3532dd88fd2
Time:   Mon Feb 29 16:46:23 CET 2016
Operation:  setData
Path:   /brokers/topics/foo/partitions/0/state
Data:   
{"controller_epoch":53,"leader":1,"version":1,"leader_epoch":34,"isr":[1,3,2]}
Version:68
{noformat}

Topic (sole partition) had no data ever published to it. I guess at some point 
after topic deletion has been requested, partition state first got updated and 
this was updated state:
{noformat}
Zxid:   0x18b0be
Cxid:   0x141e4
Client id:  0x3532dd88fd2
Time:   Fri Mar 04 9:41:52 CET 2016
Operation:  setData
Path:   /brokers/topics/foo/partitions/0/state
Data:   
{"controller_epoch":54,"leader":1,"version":1,"leader_epoch":35,"isr":[1,3]}
Version:69
{noformat}

For whatever reason replica manager (some cache it uses, I guess 
ReplicaManager.allPartitions) never sees this update, nor does it see that the 
partition state, partition, partitions node and finally topic node got deleted:
{noformat}
Zxid:   0x18b0bf
Cxid:   0x40fb
Client id:  0x3532dd88fd2000a
Time:   Fri Mar 04 9:41:52 CET 2016
Operation:  delete
Path:   /brokers/topics/foo/partitions/0/state
---
Zxid:   0x18b0c0
Cxid:   0x40fe
Client id:  0x3532dd88fd2000a
Time:   Fri Mar 04 9:41:52 CET 2016
Operation:  delete
Path:   /brokers/topics/foo/partitions/0
---
Zxid:   0x18b0c1
Cxid:   0x4100
Client id:  0x3532dd88fd2000a
Time:   Fri Mar 04 9:41:52 CET 2016
Operation:  delete
Path:   /brokers/topics/foo/partitions
---
Zxid:   0x18b0c2
Cxid:   0x4102
Client id:  0x3532dd88fd2000a
Time:   Fri Mar 04 9:41:52 CET 2016
Operation:  delete
Path:   /brokers/topics/foo
{noformat}

it just keeps on trying, every {{replica.lag.time.max.ms}}, to shrink ISR even 
for partition/topic that has been deleted.

Broker 1 was controller in the cluster; notice that the same broker was lead 
for the partition before it was deleted.

  was:
For a topic whose deletion has been requested, Kafka replica manager may end up 
infinitely trying and failing to shrink ISR.

Here is fragment from server.log where this recurring and never ending 
condition has been noticed:

{noformat}

[jira] [Created] (KAFKA-3390) ReplicaManager may infinitely try-fail to shrink ISR set of deleted partition

2016-03-13 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-3390:
---

 Summary: ReplicaManager may infinitely try-fail to shrink ISR set 
of deleted partition
 Key: KAFKA-3390
 URL: https://issues.apache.org/jira/browse/KAFKA-3390
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.1
Reporter: Stevo Slavic


For a topic whose deletion has been requested, Kafka replica manager may end up 
infinitely trying and failing to shrink ISR.

Here is fragment from server.log where this recurring and never ending 
condition has been noticed:

{noformat}
[2016-03-04 09:42:13,894] INFO Partition [foo,0] on broker 1: Shrinking ISR for 
partition [foo,0] from 1,3,2 to 1 (kafka.cluster.Partition)
[2016-03-04 09:42:13,897] WARN Conditional update of path 
/brokers/topics/foo/partitions/0/state with data 
{"controller_epoch":53,"leader":1,"version":1,"leader_epoch":34,"isr":[1]} and 
expected version 68 failed due to 
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /brokers/topics/foo/partitions/0/state (kafka.utils.ZkUtils)
[2016-03-04 09:42:13,898] INFO Partition [foo,0] on broker 1: Cached zkVersion 
[68] not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
[2016-03-04 09:42:23,894] INFO Partition [foo,0] on broker 1: Shrinking ISR for 
partition [foo,0] from 1,3,2 to 1 (kafka.cluster.Partition)
[2016-03-04 09:42:23,897] WARN Conditional update of path 
/brokers/topics/foo/partitions/0/state with data 
{"controller_epoch":53,"leader":1,"version":1,"leader_epoch":34,"isr":[1]} and 
expected version 68 failed due to 
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /brokers/topics/foo/partitions/0/state (kafka.utils.ZkUtils)
[2016-03-04 09:42:23,897] INFO Partition [foo,0] on broker 1: Cached zkVersion 
[68] not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
[2016-03-04 09:42:33,894] INFO Partition [foo,0] on broker 1: Shrinking ISR for 
partition [foo,0] from 1,3,2 to 1 (kafka.cluster.Partition)
[2016-03-04 09:42:33,897] WARN Conditional update of path 
/brokers/topics/foo/partitions/0/state with data 
{"controller_epoch":53,"leader":1,"version":1,"leader_epoch":34,"isr":[1]} and 
expected version 68 failed due to 
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /brokers/topics/foo/partitions/0/state (kafka.utils.ZkUtils)
[2016-03-04 09:42:33,897] INFO Partition [foo,0] on broker 1: Cached zkVersion 
[68] not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
...
{noformat}

Before topic deletion was requested, this was state in ZK of its sole partition:
{noformat}
Zxid:   0x181045
Cxid:   0xc92
Client id:  0x3532dd88fd2
Time:   Mon Feb 29 16:46:23 CET 2016
Operation:  setData
Path:   /brokers/topics/foo/partitions/0/state
Data:   
{"controller_epoch":53,"leader":1,"version":1,"leader_epoch":34,"isr":[1,3,2]}
Version:68
{noformat}

Topic (sole partition) had no data ever published to it. I guess at some point 
after topic deletion has been requested, partition state first got updated and 
this was updated state:
{noformat}
Zxid:   0x18b0be
Cxid:   0x141e4
Client id:  0x3532dd88fd2
Time:   Fri Mar 04 9:41:52 CET 2016
Operation:  setData
Path:   /brokers/topics/foo/partitions/0/state
Data:   
{"controller_epoch":54,"leader":1,"version":1,"leader_epoch":35,"isr":[1,3]}
Version:69
{noformat}

For whatever reason replica manager (some cache it uses, I guess 
ReplicaManager.allPartitions) never sees this update, nor does it see that the 
partition state, partition, partitions node and finally topic node got deleted:
{noformat}
Zxid:   0x18b0bf
Cxid:   0x40fb
Client id:  0x3532dd88fd2000a
Time:   Fri Mar 04 9:41:52 CET 2016
Operation:  delete
Path:   /brokers/topics/foo/partitions/0/state
---
Zxid:   0x18b0c0
Cxid:   0x40fe
Client id:  0x3532dd88fd2000a
Time:   Fri Mar 04 9:41:52 CET 2016
Operation:  delete
Path:   /brokers/topics/foo/partitions/0
---
Zxid:   0x18b0c1
Cxid:   0x4100
Client id:  0x3532dd88fd2000a
Time:   Fri Mar 04 9:41:52 CET 2016
Operation:  delete
Path:   /brokers/topics/foo/partitions
---
Zxid:   0x18b0c2
Cxid:   0x4102
Client id:  0x3532dd88fd2000a
Time:   Fri Mar 04 9:41:52 CET 2016
Operation:  delete
Path:   /brokers/topics/foo
{noformat}

it just keeps on trying, every {{replica.lag.time.max.ms}}, to shrink ISR even 
for partition/topic that has been deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3389) ReplicaStateMachine areAllReplicasForTopicDeleted check not handling well case when there are no replicas for topic

2016-03-13 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-3389:
---

 Summary: ReplicaStateMachine areAllReplicasForTopicDeleted check 
not handling well case when there are no replicas for topic
 Key: KAFKA-3389
 URL: https://issues.apache.org/jira/browse/KAFKA-3389
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 0.9.0.1
Reporter: Stevo Slavic
Assignee: Neha Narkhede
Priority: Minor


Line ReplicaStateMachine.scala#L285
{noformat}
replicaStatesForTopic.forall(_._2 == ReplicaDeletionSuccessful)
{noformat}

which is the return value of the {{areAllReplicasForTopicDeleted}} 
function/check, should probably instead check for
{noformat}
replicaStatesForTopic.isEmpty || replicaStatesForTopic.forall(_._2 == 
ReplicaDeletionSuccessful)
{noformat}
I noticed it because in controller logs I found entries like:
{noformat}
[2016-03-04 13:27:29,115] DEBUG [Replica state machine on controller 1]: Are 
all replicas for topic foo deleted Map() (kafka.controller.ReplicaStateMachine)
{noformat}
even though normally they look like:
{noformat}
[2016-03-04 09:33:41,036] DEBUG [Replica state machine on controller 1]: Are 
all replicas for topic foo deleted Map([Topic=foo,Partition=0,Replica=0] -> 
ReplicaDeletionStarted, [Topic=foo,Partition=0,Replica=3] -> 
ReplicaDeletionStarted, [Topic=foo,Partition=0,Replica=1] -> 
ReplicaDeletionSuccessful) (kafka.controller.ReplicaStateMachine)
{noformat}

This may cause the topic deletion request never to be cleared from ZK, even when 
the topic has been deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1448) Filter-plugins for messages

2016-01-22 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15112349#comment-15112349
 ] 

Stevo Slavic commented on KAFKA-1448:
-

It would be nice if a consumer could pass a lambda or something similar so that the 
filtering occurs on the server side.
One potential use case would be to model many different logical topics, carried in 
messages stored in Kafka, on top of a smaller number of physical Kafka topics, with 
the filtering done server side. Kafka+ZooKeeper is limited in the number of physical 
topics or partitions per cluster (there are reports of ~1M being the limit), so this 
could be one workaround.
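
For now the filtering has to happen on the client; a rough sketch of that workaround 
with the new Java consumer (the topic, group and key names below are made up, and the 
record key is just one possible place to carry the logical-topic discriminator):
{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LogicalTopicFilter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "filter-demo");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        // "events" is the physical topic; the record key carries the logical topic.
        String wantedLogicalTopic = "product-a.order-created";

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    // Filtering happens here, on the client; the broker still ships every record.
                    if (wantedLogicalTopic.equals(record.key())) {
                        System.out.println(record.value());
                    }
                }
            }
        }
    }
}
{code}
The obvious downside, and the motivation for this wish, is that every record still 
travels from the broker to the consumer before being dropped.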

> Filter-plugins for messages
> ---
>
> Key: KAFKA-1448
> URL: https://issues.apache.org/jira/browse/KAFKA-1448
> Project: Kafka
>  Issue Type: Wish
>  Components: consumer
>Reporter: Moritz Möller
>Assignee: Neha Narkhede
>
> Hi,
> we use Kafka to transmit different events that occur on different products, 
> and would like to be able to subscribe only to a certain set of events for 
> certain products.
> Using one topic for each event * product combination would yield around 2000 
> topics, which seems to be not what kafka is designed for.
> What we would need is a way to add a consumer filter plugin to kafka (a 
> simple class that can accept or reject a message) and to pass a parameter 
> from the consumer to that filter class.
> Is there a better way to do this already, or if not, would you accept a patch 
> upstream that adds such a mechanism?
> Thanks,
> Mo



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3059) ConsumerGroupCommand should allow resetting offsets for consumer groups

2016-01-04 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081756#comment-15081756
 ] 

Stevo Slavic commented on KAFKA-3059:
-

KAFKA-3057 was earlier created to fix related documentation.

> ConsumerGroupCommand should allow resetting offsets for consumer groups
> ---
>
> Key: KAFKA-3059
> URL: https://issues.apache.org/jira/browse/KAFKA-3059
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Jason Gustafson
>
> As discussed here:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201601.mbox/%3CCA%2BndhHpf3ib%3Ddsh9zvtfVjRiUjSz%2B%3D8umXm4myW%2BpBsbTYATAQ%40mail.gmail.com%3E
> * Given a consumer group, remove all stored offsets
> * Given a group and a topic, remove offset for group  and topic
> * Given a group, topic, partition and offset - set the offset for the 
> specified partition and group with the given value
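
Until the tool supports it, the last operation above can be approximated from client 
code. A rough sketch with the new Java consumer (the group, topic, partition and 
offset values are made up; the group must have no live members for the commit to be 
accepted):
{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class SetGroupOffset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");          // group whose offset is being set
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        TopicPartition tp = new TopicPartition("my-topic", 0);
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // Commit offset 42 for my-group on my-topic-0 without consuming anything.
            consumer.assign(Collections.singletonList(tp));
            consumer.commitSync(Collections.singletonMap(tp, new OffsetAndMetadata(42L)));
        }
    }
}
{code}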



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3057) "Checking consumer position" docs are referencing (only) deprecated ConsumerOffsetChecker

2016-01-04 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-3057:
---

 Summary: "Checking consumer position" docs are referencing (only) 
deprecated ConsumerOffsetChecker
 Key: KAFKA-3057
 URL: https://issues.apache.org/jira/browse/KAFKA-3057
 Project: Kafka
  Issue Type: Bug
  Components: admin, website
Affects Versions: 0.9.0.0
Reporter: Stevo Slavic
Priority: Trivial


["Checking consumer position" operations 
instructions|http://kafka.apache.org/090/documentation.html#basic_ops_consumer_lag]
 are referencing only ConsumerOffsetChecker which is mentioned as deprecated in 
[Potential breaking changes in 
0.9.0.0|http://kafka.apache.org/documentation.html#upgrade_9_breaking]

Please consider updating the docs with the new ways of checking consumer position, 
covering the differences between the old and the new way, and a recommendation on 
which one is preferred and why.

It would also be nice to document (and support, if not already available) not only how 
to read/fetch/check a consumer (group) offset, but also how to set the offset for a 
consumer group using Kafka's operations tools.
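
In the meantime, a rough sketch of checking a group's committed offset and lag with 
the new Java consumer (the group, topic and partition are made up; note that 
{{seekToEnd}} below uses the varargs signature of the 0.9-era client):
{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CheckConsumerPosition {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");   // group whose position is being checked
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        TopicPartition tp = new TopicPartition("my-topic", 0);
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(tp));

            OffsetAndMetadata committed = consumer.committed(tp); // null if never committed
            consumer.seekToEnd(tp);          // 0.9-era varargs; newer clients take a Collection
            long logEndOffset = consumer.position(tp);

            long lag = committed == null ? logEndOffset : logEndOffset - committed.offset();
            System.out.println("committed=" + committed + " end=" + logEndOffset + " lag=" + lag);
        }
    }
}
{code}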



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3057) "Checking consumer position" docs are referencing (only) deprecated ConsumerOffsetChecker

2016-01-04 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081128#comment-15081128
 ] 

Stevo Slavic commented on KAFKA-3057:
-

[~ztyx] that is covered already on wiki page: 
https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka
Not sure if it needs updating with any 0.9.0.x changes.

> "Checking consumer position" docs are referencing (only) deprecated 
> ConsumerOffsetChecker
> -
>
> Key: KAFKA-3057
> URL: https://issues.apache.org/jira/browse/KAFKA-3057
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, website
>Affects Versions: 0.9.0.0
>Reporter: Stevo Slavic
>Priority: Trivial
>
> ["Checking consumer position" operations 
> instructions|http://kafka.apache.org/090/documentation.html#basic_ops_consumer_lag]
>  are referencing only ConsumerOffsetChecker which is mentioned as deprecated 
> in [Potential breaking changes in 
> 0.9.0.0|http://kafka.apache.org/documentation.html#upgrade_9_breaking]
> Please consider updating docs with new ways for checking consumer position, 
> covering differences between old and new way, and recommendation which one is 
> preferred and why.
> Would be nice to document (and support if not already available), not only 
> how to read/fetch/check consumer (group) offset, but also how to set offset 
> for consumer group using Kafka's operations tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3037) Number of alive brokers not known after single node cluster startup

2015-12-23 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-3037:
---

 Summary: Number of alive brokers not known after single node 
cluster startup
 Key: KAFKA-3037
 URL: https://issues.apache.org/jira/browse/KAFKA-3037
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Stevo Slavic
Priority: Minor


A single-broker cluster is not aware of itself being alive. This can cause failures in 
logic which relies on the number of alive brokers being known, e.g. successful 
creation of the consumer offsets topic depends on the number of alive brokers being 
known.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2000) Delete consumer offsets from kafka once the topic is deleted

2015-12-22 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067936#comment-15067936
 ] 

Stevo Slavic commented on KAFKA-2000:
-

I'm still on 0.8.2.x and this is hurting my system smoke tests, which reuse the same 
topics over and over again while the (consumer) state is preserved: deleting a topic, 
creating it, publishing a message, then not being able to read the just-published 
message. I now have to introduce a dummy read after the topic is recreated, just so 
that the existing offset falls outside of the valid range and gets reset.

Curious, are there any plans to backport this fix to 0.9.0.x or even 0.8.2.x?

> Delete consumer offsets from kafka once the topic is deleted
> 
>
> Key: KAFKA-2000
> URL: https://issues.apache.org/jira/browse/KAFKA-2000
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Fix For: 0.9.1.0
>
> Attachments: KAFKA-2000.patch, KAFKA-2000_2015-05-03_10:39:11.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2310) Add config to prevent broker becoming controller

2015-12-18 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064136#comment-15064136
 ] 

Stevo Slavic commented on KAFKA-2310:
-

Preventing a broker from becoming controller, and preventing controller nodes from 
becoming leaders for any partition, would help minimize downtime when the controller 
node goes down. When that happens, if the controller node was the leader for any 
partitions, no other replica can become leader for some time even when ISRs are 
available: not until a new controller is elected, and then new leaders for those 
partitions are elected as well. During that time those partitions cannot be written 
to or read from.

KAFKA-1778 (KIP-39) is somewhat related and would further help one move 
controller around.

With all this, maybe it would make sense to split controller process into 
separate app and candidates cluster into separate one from data nodes.

> Add config to prevent broker becoming controller
> 
>
> Key: KAFKA-2310
> URL: https://issues.apache.org/jira/browse/KAFKA-2310
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrii Biletskyi
>Assignee: Andrii Biletskyi
> Attachments: KAFKA-2310.patch, KAFKA-2310_0.8.1.patch, 
> KAFKA-2310_0.8.2.patch
>
>
> The goal is to be able to specify which cluster brokers can serve as a 
> controller and which cannot. This way it will be possible to "reserve" 
> particular, not overloaded with partitions and other operations, broker as 
> controller.
> Proposed to add config _controller.eligibility_ defaulted to true (for 
> backward compatibility, since now any broker can become a controller)
> Patch will be available for trunk, 0.8.2 and 0.8.1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2310) Add config to prevent broker becoming controller

2015-12-18 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064136#comment-15064136
 ] 

Stevo Slavic edited comment on KAFKA-2310 at 12/18/15 4:04 PM:
---

Enabling only specific brokers to become controllers, and disabling controller nodes 
from becoming leaders for any partition (i.e. from being data nodes), would help 
minimize downtime when the controller node goes down. When that happens, if the 
controller node was the leader for any partitions, no other replica can become leader 
for some time even when ISRs are available: not until a new controller is elected, and 
then new leaders for those partitions are elected as well. During that time those 
partitions cannot be written to or read from.

KAFKA-1778 (KIP-39) is somewhat related and would further help one move 
controller around.

With all this, maybe it would make sense to split controller process into 
separate app and candidates cluster into separate one from data nodes.


was (Author: sslavic):
Preventing broker from becoming controller, and controller nodes from becoming 
leads for any partition, would help minimize downtime when controller node goes 
down - when that happens, if controller node was a lead for any partitions no 
other replica can become lead for some time even when there are ISRs available, 
at least not until new controller is elected, and then new lead for partitions 
is elected as well and during that time those partitions cannot be written to 
or read from.

KAFKA-1778 (KIP-39) is somewhat related and would further help one move 
controller around.

With all this, maybe it would make sense to split controller process into 
separate app and candidates cluster into separate one from data nodes.

> Add config to prevent broker becoming controller
> 
>
> Key: KAFKA-2310
> URL: https://issues.apache.org/jira/browse/KAFKA-2310
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrii Biletskyi
>Assignee: Andrii Biletskyi
> Attachments: KAFKA-2310.patch, KAFKA-2310_0.8.1.patch, 
> KAFKA-2310_0.8.2.patch
>
>
> The goal is to be able to specify which cluster brokers can serve as a 
> controller and which cannot. This way it will be possible to "reserve" 
> particular, not overloaded with partitions and other operations, broker as 
> controller.
> Proposed to add config _controller.eligibility_ defaulted to true (for 
> backward compatibility, since now any broker can become a controller)
> Patch will be available for trunk, 0.8.2 and 0.8.1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2800) Update outdated dependencies

2015-11-26 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15028365#comment-15028365
 ] 

Stevo Slavic commented on KAFKA-2800:
-

zkclient 0.7 is available at http://repo1.maven.org/maven2/com/101tec/zkclient/0.7/

> Update outdated dependencies
> 
>
> Key: KAFKA-2800
> URL: https://issues.apache.org/jira/browse/KAFKA-2800
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.2
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> See the relevant discussion here: 
> http://search-hadoop.com/m/uyzND1LAyyi2IB1wW1/Dependency+Updates=Dependency+Updates



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2687) Add support for ListGroups and DescribeGroup APIs

2015-11-20 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15018125#comment-15018125
 ] 

Stevo Slavic commented on KAFKA-2687:
-

It seems ConsumerMetadataRequest/ConsumerMetadataResponse got renamed in some 
commit, but the documentation wasn't (completely) updated; there are still 
references in 
https://github.com/apache/kafka/blob/0.9.0/docs/implementation.html

> Add support for ListGroups and DescribeGroup APIs
> -
>
> Key: KAFKA-2687
> URL: https://issues.apache.org/jira/browse/KAFKA-2687
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Since the new consumer currently has no persistence in Zookeeper (pending 
> outcome of KAFKA-2017), there is no way for administrators to investigate 
> group status including getting the list of members in the group and their 
> partition assignments. We therefore propose to modify GroupMetadataRequest 
> (previously known as ConsumerMetadataRequest) to return group metadata when 
> received by the respective group's coordinator. When received by another 
> broker, the request will be handled as before: by only returning coordinator 
> host and port information.
> {code}
> GroupMetadataRequest => GroupId IncludeMetadata
>   GroupId => String
>   IncludeMetadata => Boolean
> GroupMetadataResponse => ErrorCode Coordinator GroupMetadata
>   ErrorCode => int16
>   Coordinator => Id Host Port
> Id => int32
> Host => string
> Port => int32
>   GroupMetadata => State ProtocolType Generation Protocol Leader  Members
> State => String
> ProtocolType => String
> Generation => int32
> Protocol => String
> Leader => String
> Members => [Member MemberMetadata MemberAssignment]
>   Member => MemberIp ClientId
> MemberIp => String
> ClientId => String
>   MemberMetadata => Bytes
>   MemberAssignment => Bytes
> {code}
> The request schema includes a flag to indicate whether metadata is needed, 
> which saves clients from having to read all group metadata when they are just 
> trying to find the coordinator. This is important to reduce group overhead 
> for use cases which involve a large number of topic subscriptions (e.g. 
> mirror maker).
> Tools will use the protocol type to determine how to parse metadata. For 
> example, when the protocolType is "consumer", the tool can use 
> ConsumerProtocol to parse the member metadata as topic subscriptions and 
> partition assignments. 
> The detailed proposal can be found below.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-40%3A+ListGroups+and+DescribeGroup



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2687) Add support for ListGroups and DescribeGroup APIs

2015-11-20 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15018157#comment-15018157
 ] 

Stevo Slavic commented on KAFKA-2687:
-

So much has changed that I'm still trying to grasp it; it would delay the PR too much 
IMO, so please go ahead.

I noticed that ZKStringSerializer is now a private object, and none of the ZkUtils 
constructors expose the same features available in ZkClient, like configuring the 
operation retry timeout, so one cannot construct a ZkUtils with ZKStringSerializer 
and a ZkClient configured with an operation retry timeout.

> Add support for ListGroups and DescribeGroup APIs
> -
>
> Key: KAFKA-2687
> URL: https://issues.apache.org/jira/browse/KAFKA-2687
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Since the new consumer currently has no persistence in Zookeeper (pending 
> outcome of KAFKA-2017), there is no way for administrators to investigate 
> group status including getting the list of members in the group and their 
> partition assignments. We therefore propose to modify GroupMetadataRequest 
> (previously known as ConsumerMetadataRequest) to return group metadata when 
> received by the respective group's coordinator. When received by another 
> broker, the request will be handled as before: by only returning coordinator 
> host and port information.
> {code}
> GroupMetadataRequest => GroupId IncludeMetadata
>   GroupId => String
>   IncludeMetadata => Boolean
> GroupMetadataResponse => ErrorCode Coordinator GroupMetadata
>   ErrorCode => int16
>   Coordinator => Id Host Port
> Id => int32
> Host => string
> Port => int32
>   GroupMetadata => State ProtocolType Generation Protocol Leader  Members
> State => String
> ProtocolType => String
> Generation => int32
> Protocol => String
> Leader => String
> Members => [Member MemberMetadata MemberAssignment]
>   Member => MemberIp ClientId
> MemberIp => String
> ClientId => String
>   MemberMetadata => Bytes
>   MemberAssignment => Bytes
> {code}
> The request schema includes a flag to indicate whether metadata is needed, 
> which saves clients from having to read all group metadata when they are just 
> trying to find the coordinator. This is important to reduce group overhead 
> for use cases which involve a large number of topic subscriptions (e.g. 
> mirror maker).
> Tools will use the protocol type to determine how to parse metadata. For 
> example, when the protocolType is "consumer", the tool can use 
> ConsumerProtocol to parse the member metadata as topic subscriptions and 
> partition assignments. 
> The detailed proposal can be found below.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-40%3A+ListGroups+and+DescribeGroup



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2870) Support configuring operationRetryTimeout of underlying ZkClient through ZkUtils constructor

2015-11-20 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-2870:
---

 Summary: Support configuring operationRetryTimeout of underlying 
ZkClient through ZkUtils constructor
 Key: KAFKA-2870
 URL: https://issues.apache.org/jira/browse/KAFKA-2870
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.9.0.0
Reporter: Stevo Slavic
Priority: Minor


Currently (Kafka 0.9.0.0 RC3) it's not possible to both have the underlying 
{{ZkClient}} {{operationRetryTimeout}} configured and use Kafka's 
{{ZKStringSerializer}} in a {{ZkUtils}} instance.

Please support configuring {{operationRetryTimeout}} via another 
{{ZkUtils.apply}} factory method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-598) decouple fetch size from max message size

2015-11-05 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14991703#comment-14991703
 ] 

Stevo Slavic commented on KAFKA-598:


Curious, does the new consumer coming in 0.9.0.0 allow easier detection of the "there's 
a message larger than the fetch size" situation, and using a different fetch size for 
different requests programmatically?

> decouple fetch size from max message size
> -
>
> Key: KAFKA-598
> URL: https://issues.apache.org/jira/browse/KAFKA-598
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.0
>Reporter: Jun Rao
>Assignee: Joel Koshy
>Priority: Blocker
>  Labels: p4
> Attachments: KAFKA-598-v1.patch, KAFKA-598-v2.patch, 
> KAFKA-598-v3.patch
>
>
> Currently, a consumer has to set fetch size larger than the max message size. 
> This increases the memory footprint on the consumer, especially when a large 
> number of topic/partition is subscribed. By decoupling the fetch size from 
> max message size, we can use a smaller fetch size for normal consumption and 
> when hitting a large message (hopefully rare), we automatically increase 
> fetch size to max message size temporarily.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2255) Missing documentation for max.in.flight.requests.per.connection

2015-11-03 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-2255:

Fix Version/s: 0.9.0.0

> Missing documentation for max.in.flight.requests.per.connection
> ---
>
> Key: KAFKA-2255
> URL: https://issues.apache.org/jira/browse/KAFKA-2255
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Navina Ramesh
>Assignee: Aditya Auradkar
> Fix For: 0.9.0.0
>
>
> Hi Kafka team,
> Samza team noticed that the documentation for 
> max.in.flight.requests.per.connection property for the java based producer is 
> missing in the 0.8.2 documentation. I checked the code and looks like this 
> config is still enforced. Can you please update the website to reflect the 
> same?
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2255) Missing documentation for max.in.flight.requests.per.connection

2015-11-02 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985117#comment-14985117
 ] 

Stevo Slavic commented on KAFKA-2255:
-

Docs in code for this config property state:
{quote}
The maximum number of unacknowledged requests the client will send on a single 
connection before blocking.
{quote}
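
For reference, a minimal sketch of where this property is set on the new Java producer 
(all values below are arbitrary):
{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class InFlightConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all");
        // At most one unacknowledged request per connection preserves ordering on retries.
        props.put("max.in.flight.requests.per.connection", "1");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}
{code}
Setting it to 1 trades some pipelining/throughput for keeping messages in order when 
retries happen.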

> Missing documentation for max.in.flight.requests.per.connection
> ---
>
> Key: KAFKA-2255
> URL: https://issues.apache.org/jira/browse/KAFKA-2255
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Navina Ramesh
>Assignee: Aditya Auradkar
>
> Hi Kafka team,
> Samza team noticed that the documentation for 
> max.in.flight.requests.per.connection property for the java based producer is 
> missing in the 0.8.2 documentation. I checked the code and looks like this 
> config is still enforced. Can you please update the website to reflect the 
> same?
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2255) Missing documentation for max.in.flight.requests.per.connection

2015-10-31 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-2255:

Affects Version/s: 0.8.2.0

> Missing documentation for max.in.flight.requests.per.connection
> ---
>
> Key: KAFKA-2255
> URL: https://issues.apache.org/jira/browse/KAFKA-2255
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Navina Ramesh
>Assignee: Aditya Auradkar
>
> Hi Kafka team,
> Samza team noticed that the documentation for 
> max.in.flight.requests.per.connection property for the java based producer is 
> missing in the 0.8.2 documentation. I checked the code and looks like this 
> config is still enforced. Can you please update the website to reflect the 
> same?
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2106) Partition balance tool between borkers

2015-09-15 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14744999#comment-14744999
 ] 

Stevo Slavic commented on KAFKA-2106:
-

This one is nice to have for 0.9.0.0 release.

It's somewhat related to KAFKA-1792 and 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-6+-+New+reassignment+partition+logic+for+rebalancing

> Partition balance tool between borkers
> --
>
> Key: KAFKA-2106
> URL: https://issues.apache.org/jira/browse/KAFKA-2106
> Project: Kafka
>  Issue Type: New Feature
>  Components: admin
>Reporter: chenshangan
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2106.3, KAFKA-2106.patch, KAFKA-2106.patch.2
>
>
> The default partition assignment algorithm can work well in a static kafka 
> cluster(number of brokers seldom change). Actually, in production env, number 
> of brokers is always increasing according to the business data. When new 
> brokers added to the cluster, it's better to provide a tool that can help to 
> move existing data to new brokers. Currently, users need to choose topic or 
> partitions manually and use the Reassign Partitions Tool 
> (kafka-reassign-partitions.sh) to achieve the goal. It's a time-consuming 
> task when there's a lot of topics in the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2106) Partition balance tool between borkers

2015-09-15 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-2106:

Affects Version/s: (was: 0.9.0.0)

> Partition balance tool between borkers
> --
>
> Key: KAFKA-2106
> URL: https://issues.apache.org/jira/browse/KAFKA-2106
> Project: Kafka
>  Issue Type: New Feature
>  Components: admin
>Reporter: chenshangan
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2106.3, KAFKA-2106.patch, KAFKA-2106.patch.2
>
>
> The default partition assignment algorithm can work well in a static kafka 
> cluster(number of brokers seldom change). Actually, in production env, number 
> of brokers is always increasing according to the business data. When new 
> brokers added to the cluster, it's better to provide a tool that can help to 
> move existing data to new brokers. Currently, users need to choose topic or 
> partitions manually and use the Reassign Partitions Tool 
> (kafka-reassign-partitions.sh) to achieve the goal. It's a time-consuming 
> task when there's a lot of topics in the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2106) Partition balance tool between borkers

2015-09-15 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-2106:

Fix Version/s: 0.9.0.0

> Partition balance tool between borkers
> --
>
> Key: KAFKA-2106
> URL: https://issues.apache.org/jira/browse/KAFKA-2106
> Project: Kafka
>  Issue Type: New Feature
>  Components: admin
>Reporter: chenshangan
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2106.3, KAFKA-2106.patch, KAFKA-2106.patch.2
>
>
> The default partition assignment algorithm can work well in a static kafka 
> cluster(number of brokers seldom change). Actually, in production env, number 
> of brokers is always increasing according to the business data. When new 
> brokers added to the cluster, it's better to provide a tool that can help to 
> move existing data to new brokers. Currently, users need to choose topic or 
> partitions manually and use the Reassign Partitions Tool 
> (kafka-reassign-partitions.sh) to achieve the goal. It's a time-consuming 
> task when there's a lot of topics in the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2551) Unclean leader election docs outdated

2015-09-15 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-2551:
---

 Summary: Unclean leader election docs outdated
 Key: KAFKA-2551
 URL: https://issues.apache.org/jira/browse/KAFKA-2551
 Project: Kafka
  Issue Type: Bug
  Components: website
Affects Versions: 0.8.2.2
Reporter: Stevo Slavic
Priority: Trivial


Current unclean leader election docs state:
{quote}
In the future, we would like to make this configurable to better support use 
cases where downtime is preferable to inconsistency.
{quote}

Since 0.8.2.0, unclean leader election strategy (whether to allow it or not) is 
already configurable via {{unclean.leader.election.enable}} broker config 
property.

That sentence is in both 
https://svn.apache.org/repos/asf/kafka/site/083/design.html and 
https://svn.apache.org/repos/asf/kafka/site/082/design.html near the end of 
"Unclean leader election: What if they all die?" section. Next section, 
"Availability and Durability Guarantees", mentions ability to disable unclean 
leader election, so likely just this one reference needs to be updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2544) Replication tools wiki page needs to be updated

2015-09-14 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-2544:
---

 Summary: Replication tools wiki page needs to be updated
 Key: KAFKA-2544
 URL: https://issues.apache.org/jira/browse/KAFKA-2544
 Project: Kafka
  Issue Type: Improvement
  Components: website
Affects Versions: 0.8.2.1
Reporter: Stevo Slavic
Priority: Minor


https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools is outdated; it 
mentions tools which have been heavily refactored or replaced by other tools, e.g. the 
add partition tool, the list/create topics tools, etc.

Please have the replication tools wiki page updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2114) Unable to change min.insync.replicas default

2015-09-03 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728627#comment-14728627
 ] 

Stevo Slavic commented on KAFKA-2114:
-

I guess affected version should be 0.8.2, correct?

> Unable to change min.insync.replicas default
> 
>
> Key: KAFKA-2114
> URL: https://issues.apache.org/jira/browse/KAFKA-2114
> Project: Kafka
>  Issue Type: Bug
>Reporter: Bryan Baugher
>Assignee: Gwen Shapira
> Fix For: 0.8.3
>
> Attachments: KAFKA-2114.patch
>
>
> Following the comment here[1] I was unable to change the min.insync.replicas 
> default value. I tested this by setting up a 3 node cluster, wrote to a topic 
> with a replication factor of 3, using request.required.acks=-1 and setting 
> min.insync.replicas=2 on the broker's server.properties. I then shutdown 2 
> brokers but I was still able to write successfully. Only after running the 
> alter topic command setting min.insync.replicas=2 on the topic did I see 
> write failures.
> [1] - 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201504.mbox/%3CCANZ-JHF71yqKE6%2BKKhWe2EGUJv6R3bTpoJnYck3u1-M35sobgg%40mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2492) Upgrade zkclient dependency to 0.6

2015-09-01 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-2492:
---

 Summary: Upgrade zkclient dependency to 0.6
 Key: KAFKA-2492
 URL: https://issues.apache.org/jira/browse/KAFKA-2492
 Project: Kafka
  Issue Type: Task
Affects Versions: 0.8.2.1
Reporter: Stevo Slavic
Priority: Trivial


If zkclient does not get replaced with curator (via KAFKA-873) sooner, please consider 
upgrading the zkclient dependency to the recently released 0.6.

zkclient 0.6 includes a few important changes, like:
- a 
[fix|https://github.com/sgroschupf/zkclient/commit/0630c9c6e67ab49a51e80bfd939e4a0d01a69dfe]
 to fail retryUntilConnected actions with a clear exception in case the client gets 
closed
- an [upgraded zookeeper dependency from 3.4.3 to 
3.4.6|https://github.com/sgroschupf/zkclient/commit/8975c1790f7f36cc5d4feea077df337fb1ddabdb]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2351) Brokers are having a problem shutting down correctly

2015-08-05 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14654927#comment-14654927
 ] 

Stevo Slavic commented on KAFKA-2351:
-

Running a single instance of a Kafka broker from latest trunk, I experienced this 
not-so-successful controlled shutdown:
{noformat}
[2015-08-05 00:19:09,998] INFO [Offset Manager on Broker 0]: Removed 0 expired 
offsets in 0 milliseconds. (kafka.server.OffsetManager)
^C[2015-08-05 00:23:09,144] INFO [Kafka Server 0], shutting down 
(kafka.server.KafkaServer)
[2015-08-05 00:23:09,146] INFO [Kafka Server 0], Starting controlled shutdown 
(kafka.server.KafkaServer)
[2015-08-05 00:23:09,155] ERROR [KafkaApi-0] error when handling request Name: 
ControlledShutdownRequest; Version: 0; CorrelationId: 0; BrokerId: 0 
(kafka.server.KafkaApis)
kafka.common.ControllerMovedException: Controller moved to another broker. 
Aborting controlled shutdown
at 
kafka.controller.KafkaController.shutdownBroker(KafkaController.scala:231)
at 
kafka.server.KafkaApis.handleControlledShutdownRequest(KafkaApis.scala:146)
at kafka.server.KafkaApis.handle(KafkaApis.scala:63)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:745)
[2015-08-05 00:23:09,156] INFO [Kafka Server 0], Remaining partitions to move:  
(kafka.server.KafkaServer)
[2015-08-05 00:23:09,156] INFO [Kafka Server 0], Error code from controller: -1 
(kafka.server.KafkaServer)
[2015-08-05 00:23:14,160] WARN [Kafka Server 0], Retrying controlled shutdown 
after the previous attempt failed... (kafka.server.KafkaServer)
[2015-08-05 00:23:14,166] ERROR [KafkaApi-0] error when handling request Name: 
ControlledShutdownRequest; Version: 0; CorrelationId: 1; BrokerId: 0 
(kafka.server.KafkaApis)
kafka.common.ControllerMovedException: Controller moved to another broker. 
Aborting controlled shutdown
at 
kafka.controller.KafkaController.shutdownBroker(KafkaController.scala:231)
at 
kafka.server.KafkaApis.handleControlledShutdownRequest(KafkaApis.scala:146)
at kafka.server.KafkaApis.handle(KafkaApis.scala:63)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:745)
[2015-08-05 00:23:14,167] INFO [Kafka Server 0], Remaining partitions to move:  
(kafka.server.KafkaServer)
[2015-08-05 00:23:14,167] INFO [Kafka Server 0], Error code from controller: -1 
(kafka.server.KafkaServer)
[2015-08-05 00:23:19,169] WARN [Kafka Server 0], Retrying controlled shutdown 
after the previous attempt failed... (kafka.server.KafkaServer)
[2015-08-05 00:23:19,172] ERROR [KafkaApi-0] error when handling request Name: 
ControlledShutdownRequest; Version: 0; CorrelationId: 2; BrokerId: 0 
(kafka.server.KafkaApis)
kafka.common.ControllerMovedException: Controller moved to another broker. 
Aborting controlled shutdown
at 
kafka.controller.KafkaController.shutdownBroker(KafkaController.scala:231)
at 
kafka.server.KafkaApis.handleControlledShutdownRequest(KafkaApis.scala:146)
at kafka.server.KafkaApis.handle(KafkaApis.scala:63)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:745)
[2015-08-05 00:23:19,173] INFO [Kafka Server 0], Remaining partitions to move:  
(kafka.server.KafkaServer)
[2015-08-05 00:23:19,173] INFO [Kafka Server 0], Error code from controller: -1 
(kafka.server.KafkaServer)
[2015-08-05 00:23:24,176] WARN [Kafka Server 0], Retrying controlled shutdown 
after the previous attempt failed... (kafka.server.KafkaServer)
[2015-08-05 00:23:24,177] WARN [Kafka Server 0], Proceeding to do an unclean 
shutdown as all the controlled shutdown attempts failed 
(kafka.server.KafkaServer)
[2015-08-05 00:23:24,180] INFO [Socket Server on Broker 0], Shutting down 
(kafka.network.SocketServer)
[2015-08-05 00:23:24,189] INFO [Socket Server on Broker 0], Shutdown completed 
(kafka.network.SocketServer)
[2015-08-05 00:23:24,190] INFO [Kafka Request Handler on Broker 0], shutting 
down (kafka.server.KafkaRequestHandlerPool)
[2015-08-05 00:23:24,193] INFO [Kafka Request Handler on Broker 0], shut down 
completely (kafka.server.KafkaRequestHandlerPool)
[2015-08-05 00:23:24,196] INFO [Replica Manager on Broker 0]: Shutting down 
(kafka.server.ReplicaManager)
[2015-08-05 00:23:24,196] INFO [ReplicaFetcherManager on broker 0] shutting 
down (kafka.server.ReplicaFetcherManager)
[2015-08-05 00:23:24,197] INFO [ReplicaFetcherManager on broker 0] shutdown 
completed (kafka.server.ReplicaFetcherManager)
[2015-08-05 00:23:24,197] INFO [ExpirationReaper-0], Shutting down 
(kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2015-08-05 00:23:24,310] INFO [ExpirationReaper-0], Stopped  
(kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2015-08-05 00:23:24,310] INFO 
{noformat}

[jira] [Issue Comment Deleted] (KAFKA-2120) Add a request timeout to NetworkClient

2015-08-03 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-2120:

Comment: was deleted

(was: If this is considered as must have for Kafka 0.8.3, please consider 
setting Fix Version to 0.8.3. Related issue, KAFKA-1788 is fixed only for 
0.8.3.)

 Add a request timeout to NetworkClient
 --

 Key: KAFKA-2120
 URL: https://issues.apache.org/jira/browse/KAFKA-2120
 Project: Kafka
  Issue Type: New Feature
Reporter: Jiangjie Qin
Assignee: Mayuresh Gharat
 Fix For: 0.8.3

 Attachments: KAFKA-2120.patch, KAFKA-2120_2015-07-27_15:31:19.patch, 
 KAFKA-2120_2015-07-29_15:57:02.patch


 Currently NetworkClient does not have a timeout setting for requests. So if 
 no response is received for a request due to reasons such as broker is down, 
 the request will never be completed.
 Request timeout will also be used as implicit timeout for some methods such 
 as KafkaProducer.flush() and kafkaProducer.close().
 KIP-19 is created for this public interface change.
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-19+-+Add+a+request+timeout+to+NetworkClient



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2120) Add a request timeout to NetworkClient

2015-07-30 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14647423#comment-14647423
 ] 

Stevo Slavic commented on KAFKA-2120:
-

If this is considered as must have for Kafka 0.8.3, please consider setting 
Fix Version to 0.8.3. Related issue, KAFKA-1788 is fixed only for 0.8.3.

 Add a request timeout to NetworkClient
 --

 Key: KAFKA-2120
 URL: https://issues.apache.org/jira/browse/KAFKA-2120
 Project: Kafka
  Issue Type: New Feature
Reporter: Jiangjie Qin
Assignee: Mayuresh Gharat
 Attachments: KAFKA-2120.patch, KAFKA-2120_2015-07-27_15:31:19.patch, 
 KAFKA-2120_2015-07-29_15:57:02.patch


 Currently NetworkClient does not have a timeout setting for requests. So if 
 no response is received for a request due to reasons such as broker is down, 
 the request will never be completed.
 Request timeout will also be used as implicit timeout for some methods such 
 as KafkaProducer.flush() and kafkaProducer.close().
 KIP-19 is created for this public interface change.
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-19+-+Add+a+request+timeout+to+NetworkClient



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1370) Gradle startup script for Windows

2015-07-29 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962801#comment-13962801
 ] 

Stevo Slavic edited comment on KAFKA-1370 at 7/29/15 11:27 PM:
---

Created pull request with this change (see 
[here|https://github.com/apache/kafka/pull/22])


was (Author: sslavic):
Created pull request with this change (see 
[here|https://github.com/apache/kafka/pull/21])

 Gradle startup script for Windows
 -

 Key: KAFKA-1370
 URL: https://issues.apache.org/jira/browse/KAFKA-1370
 Project: Kafka
  Issue Type: Wish
  Components: tools
Affects Versions: 0.8.1
Reporter: Stevo Slavic
Assignee: Stevo Slavic
Priority: Trivial
  Labels: gradle
 Fix For: 0.8.2.0

 Attachments: 
 0001-KAFKA-1370-Added-Gradle-startup-script-for-Windows.patch


 Please provide Gradle startup script for Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2380) Publish Kafka snapshot Maven artifacts

2015-07-27 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-2380:
---

 Summary: Publish Kafka snapshot Maven artifacts
 Key: KAFKA-2380
 URL: https://issues.apache.org/jira/browse/KAFKA-2380
 Project: Kafka
  Issue Type: Task
Affects Versions: 0.8.2.1
Reporter: Stevo Slavic
Priority: Minor


Please have Kafka snapshot Maven artifacts published regularly (e.g. either 
after every successful CI job run, or after a successful nightly CI job run) to the 
[Apache snapshots 
repository|http://repository.apache.org/content/groups/snapshots/org/apache/kafka/].

It would be very helpful for, and would promote, early integration efforts, whether of 
patches/fixes for issues or of brand new Kafka client/server versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2358) KafkaConsumer.partitionsFor should never return null

2015-07-24 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-2358:
---

 Summary: KafkaConsumer.partitionsFor should never return null
 Key: KAFKA-2358
 URL: https://issues.apache.org/jira/browse/KAFKA-2358
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.3
Reporter: Stevo Slavic
Priority: Minor


The {{KafkaConsumer.partitionsFor}} method by its signature returns a 
{{List<PartitionInfo>}}. The problem is that in case the (metadata for the) topic does 
not exist, the current implementation returns null, which is considered a bad 
practice; instead of null it should return an empty list.
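
Until that is fixed, callers have to guard against the null themselves; a minimal 
sketch (the helper is hypothetical, but it works against any {{Consumer}} 
implementation):
{code}
import java.util.Collections;
import java.util.List;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.PartitionInfo;

public class SafePartitionsFor {
    // Returns an empty list instead of null when topic metadata is not (yet) available.
    static List<PartitionInfo> partitionsOrEmpty(Consumer<?, ?> consumer, String topic) {
        List<PartitionInfo> partitions = consumer.partitionsFor(topic);
        return partitions != null ? partitions : Collections.<PartitionInfo>emptyList();
    }
}
{code}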



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2358) Cluster collection returning methods should never return null

2015-07-24 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-2358:

Description: 
The {{KafkaConsumer.partitionsFor}} method by its signature returns a 
{{List<PartitionInfo>}}. The problem is that in case the (metadata for the) topic does 
not exist, the current implementation returns null, which is considered a bad 
practice; instead of null it should return an empty list.

Root cause is that the Cluster collection returning methods are returning null.

  was:The {{KafkaConsumer.partitionsFor}} method by its signature returns a 
{{List<PartitionInfo>}}. The problem is that in case the (metadata for the) topic does 
not exist, the current implementation returns null, which is considered a bad 
practice; instead of null it should return an empty list.


 Cluster collection returning methods should never return null
 -

 Key: KAFKA-2358
 URL: https://issues.apache.org/jira/browse/KAFKA-2358
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.3
Reporter: Stevo Slavic
Priority: Minor

 The {{KafkaConsumer.partitionsFor}} method by its signature returns a 
 {{List<PartitionInfo>}}. The problem is that in case the (metadata for the) topic does 
 not exist, the current implementation returns null, which is considered a bad 
 practice; instead of null it should return an empty list.
 Root cause is that the Cluster collection returning methods are returning 
 null.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2358) Cluster collection returning methods should never return null

2015-07-24 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-2358:

Summary: Cluster collection returning methods should never return null  
(was: KafkaConsumer.partitionsFor should never return null)

 Cluster collection returning methods should never return null
 -

 Key: KAFKA-2358
 URL: https://issues.apache.org/jira/browse/KAFKA-2358
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.3
Reporter: Stevo Slavic
Priority: Minor

 The {{KafkaConsumer.partitionsFor}} method by its signature returns a 
 {{List<PartitionInfo>}}. The problem is that in case the (metadata for the) topic does 
 not exist, the current implementation returns null, which is considered a bad 
 practice; instead of null it should return an empty list.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2359) New consumer - partitions auto assigned only on poll

2015-07-24 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-2359:
---

 Summary: New consumer - partitions auto assigned only on poll
 Key: KAFKA-2359
 URL: https://issues.apache.org/jira/browse/KAFKA-2359
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.3
Reporter: Stevo Slavic
Priority: Minor


In the new consumer I encountered unexpected behavior. After constructing a 
{{KafkaConsumer}} instance with a configured consumer rebalance callback handler, 
and subscribing to a topic with consumer.subscribe(topic), retrieving the 
subscriptions would return an empty set and the callback handler would not get called 
(no partitions were ever assigned or revoked), no matter how long the instance was up.

Then, by inspecting the {{KafkaConsumer}} code, I found that partition assignment is 
only triggered on the first {{poll}}, since {{pollOnce}} has:

{noformat}
// ensure we have partitions assigned if we expect to
if (subscriptions.partitionsAutoAssigned())
coordinator.ensurePartitionAssignment();
{noformat}

I'm proposing to fix this by including the same {{ensurePartitionAssignment}} 
fragment in the {{KafkaConsumer.subscriptions}} accessor as well.
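
A short sketch that reproduces the observed behavior (broker address, topic and group 
are made up; the accessor names follow the released consumer API, which may differ 
slightly from trunk at the time):
{code}
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class AssignmentOnlyOnPoll {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "rebalance-demo");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"),
                new ConsumerRebalanceListener() {
                    public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                        System.out.println("assigned: " + parts);
                    }
                    public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                        System.out.println("revoked: " + parts);
                    }
                });

            // Still empty here: the group join / partition assignment has not happened yet.
            System.out.println("before poll: " + consumer.assignment());

            consumer.poll(1000); // the first poll triggers the coordinator round trip

            // Only now has the callback fired and assignment() been populated.
            System.out.println("after poll: " + consumer.assignment());
        }
    }
}
{code}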



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2342) KafkaConsumer rebalance with in-flight fetch can cause invalid position

2015-07-22 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14637533#comment-14637533
 ] 

Stevo Slavic commented on KAFKA-2342:
-

[~guozhang], fix version 0.9.0 or 0.8.3?

 KafkaConsumer rebalance with in-flight fetch can cause invalid position
 ---

 Key: KAFKA-2342
 URL: https://issues.apache.org/jira/browse/KAFKA-2342
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Affects Versions: 0.8.3
Reporter: Jun Rao
Assignee: Jason Gustafson
 Fix For: 0.9.0


 If a rebalance occurs with an in-flight fetch, the new KafkaConsumer can end 
 up updating the fetch position of a partition to an offset which is no longer 
 valid. The consequence is that we may end up either returning to the user 
 messages with an unexpected position or we may fail to give back the right 
 offset in position(). 
 Additionally, this bug causes transient test failures in 
 ConsumerBounceTest.testConsumptionWithBrokerFailures with the following 
 exception:
 kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures FAILED
 java.lang.NullPointerException
 at 
 org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:949)
 at 
 kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:86)
 at 
 kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:61)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2356) Support retrieving partitions of ConsumerRecords

2015-07-22 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-2356:
---

 Summary: Support retrieving partitions of ConsumerRecords
 Key: KAFKA-2356
 URL: https://issues.apache.org/jira/browse/KAFKA-2356
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.3
Reporter: Stevo Slavic
Priority: Trivial


In the new consumer on trunk, ConsumerRecords has a method to retrieve the records for 
a given TopicPartition, but there is no method to retrieve the TopicPartitions 
included/available in the ConsumerRecords. Please have it supported.

The method could be something like:
{noformat}
/**
 * Get the partitions of the records returned by a {@link Consumer#poll(long)} operation
 */
public Set<TopicPartition> partitions() {
    return Collections.unmodifiableSet(this.records.keySet());
}
{noformat}
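
If added, usage could look roughly like this (sketch only; assumes an already-subscribed 
consumer is passed in):
{code}
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.TopicPartition;

public class PerPartitionProcessing {
    // Drains one poll() worth of records, grouped by the partitions actually returned.
    static void drainOnce(Consumer<String, String> consumer) {
        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (TopicPartition tp : records.partitions()) {
            for (ConsumerRecord<String, String> record : records.records(tp)) {
                System.out.println(tp + " @ " + record.offset() + ": " + record.value());
            }
        }
    }
}
{code}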



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2345) Attempt to delete a topic already marked for deletion throws ZkNodeExistsException

2015-07-22 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-2345:

Affects Version/s: 0.8.2.0

 Attempt to delete a topic already marked for deletion throws 
 ZkNodeExistsException
 --

 Key: KAFKA-2345
 URL: https://issues.apache.org/jira/browse/KAFKA-2345
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.0
Reporter: Ashish K Singh
Assignee: Ashish K Singh
 Fix For: 0.8.3

 Attachments: KAFKA-2345.patch, KAFKA-2345_2015-07-17_10:20:55.patch


 Throwing a TopicAlreadyMarkedForDeletionException will make much more sense. 
 A user does not necessarily have to know about involvement of zk in the 
 process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2203) Get gradle build to work with Java 8

2015-07-17 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14631105#comment-14631105
 ] 

Stevo Slavic commented on KAFKA-2203:
-

On a clean clone of trunk, with gradle 2.5 and JDK 8u45, when I run {{gradle 
clean jarAll}} build fails. Here is relevant build output fragment:
{noformat}
...
:kafka:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
support was removed in 8.0
error: error while loading CharSequence, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/jre/lib/rt.jar(java/lang/CharSequence.class)'
 is broken
(bad constant pool tag 18 at byte 10)
error: error while loading AnnotatedElement, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)'
 is broken
(bad constant pool tag 18 at byte 76)
error: error while loading Arrays, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/jre/lib/rt.jar(java/util/Arrays.class)'
 is broken
(bad constant pool tag 18 at byte 765)
error: error while loading Comparator, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/jre/lib/rt.jar(java/util/Comparator.class)'
 is broken
(bad constant pool tag 18 at byte 20)
/var/folders/lf/rbfblwvx6rx3xhm68yksmqjwdv1dsf/T/sbt_d6110328/xsbt/ExtractAPI.scala:479:
 error: java.util.Comparator does not take type parameters
  private[this] val sortClasses = new Comparator[Symbol] {
  ^
5 errors found
:kafka:core:compileScala FAILED
:jar_core_2_9_1 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:compileScala'.
 org.gradle.messaging.remote.internal.PlaceholderException (no error message)

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED
{noformat}

Related discussion/issue from past: KAFKA-1624

 Get gradle build to work with Java 8
 

 Key: KAFKA-2203
 URL: https://issues.apache.org/jira/browse/KAFKA-2203
 Project: Kafka
  Issue Type: Bug
  Components: build
Affects Versions: 0.8.1.1
Reporter: Gaju Bhat
Priority: Minor
 Fix For: 0.8.1.2

 Attachments: 0001-Special-case-java-8-and-javadoc-handling.patch


 The gradle build halts because javadoc in java 8 is a lot stricter about 
 valid html.
 It might be worthwhile to special case java 8 as described 
 [here|http://blog.joda.org/2014/02/turning-off-doclint-in-jdk-8-javadoc.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2203) Get gradle build to work with Java 8

2015-07-17 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14631105#comment-14631105
 ] 

Stevo Slavic edited comment on KAFKA-2203 at 7/17/15 9:50 AM:
--

On a clean clone of trunk, with gradle 2.5 and JDK 8u45, when I run {{gradle 
clean jarAll}} build fails. Here is relevant build output fragment:
{noformat}
...
:kafka:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
support was removed in 8.0
error: error while loading CharSequence, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/jre/lib/rt.jar(java/lang/CharSequence.class)'
 is broken
(bad constant pool tag 18 at byte 10)
error: error while loading AnnotatedElement, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)'
 is broken
(bad constant pool tag 18 at byte 76)
error: error while loading Arrays, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/jre/lib/rt.jar(java/util/Arrays.class)'
 is broken
(bad constant pool tag 18 at byte 765)
error: error while loading Comparator, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/jre/lib/rt.jar(java/util/Comparator.class)'
 is broken
(bad constant pool tag 18 at byte 20)
/var/folders/lf/rbfblwvx6rx3xhm68yksmqjwdv1dsf/T/sbt_d6110328/xsbt/ExtractAPI.scala:479:
 error: java.util.Comparator does not take type parameters
  private[this] val sortClasses = new Comparator[Symbol] {
  ^
5 errors found
:kafka:core:compileScala FAILED
:jar_core_2_9_1 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:compileScala'.
 org.gradle.messaging.remote.internal.PlaceholderException (no error message)

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED
{noformat}

Related issue from past: KAFKA-1624

And related discussion from dev mailing list: 
http://mail-archives.apache.org/mod_mbox/kafka-dev/201507.mbox/%3CCAHwHRrU75Of4ErxSr9-%3D4EEB_jCmcA4PL4S4hP2P-6peaUOfZA%40mail.gmail.com%3E

Is there a ticket to drop scala 2.9.x support? I couldn't find one.


was (Author: sslavic):
On a clean clone of trunk, with gradle 2.5 and JDK 8u45, when I run {{gradle 
clean jarAll}} build fails. Here is relevant build output fragment:
{noformat}
...
:kafka:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
support was removed in 8.0
error: error while loading CharSequence, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/jre/lib/rt.jar(java/lang/CharSequence.class)'
 is broken
(bad constant pool tag 18 at byte 10)
error: error while loading AnnotatedElement, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)'
 is broken
(bad constant pool tag 18 at byte 76)
error: error while loading Arrays, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/jre/lib/rt.jar(java/util/Arrays.class)'
 is broken
(bad constant pool tag 18 at byte 765)
error: error while loading Comparator, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/jre/lib/rt.jar(java/util/Comparator.class)'
 is broken
(bad constant pool tag 18 at byte 20)
/var/folders/lf/rbfblwvx6rx3xhm68yksmqjwdv1dsf/T/sbt_d6110328/xsbt/ExtractAPI.scala:479:
 error: java.util.Comparator does not take type parameters
  private[this] val sortClasses = new Comparator[Symbol] {
  ^
5 errors found
:kafka:core:compileScala FAILED
:jar_core_2_9_1 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:compileScala'.
 org.gradle.messaging.remote.internal.PlaceholderException (no error message)

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED
{noformat}

Related discussion/issue from past: KAFKA-1624

 Get gradle build to work with Java 8
 

 Key: KAFKA-2203
 URL: https://issues.apache.org/jira/browse/KAFKA-2203
 Project: Kafka
  Issue Type: Bug
  Components: build
Affects Versions: 0.8.1.1
Reporter: Gaju Bhat
Priority: Minor
 Fix For: 0.8.1.2

 Attachments: 0001-Special-case-java-8-and-javadoc-handling.patch


 The gradle build halts because javadoc in java 8 is a lot stricter about 
 valid html.
 It might be worthwhile to special case java 8 as described 
 [here|http://blog.joda.org/2014/02/turning-off-doclint-in-jdk-8-javadoc.html].



--
This message was sent by Atlassian JIRA

[jira] [Commented] (KAFKA-2203) Get gradle build to work with Java 8

2015-07-16 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14629869#comment-14629869
 ] 

Stevo Slavic commented on KAFKA-2203:
-

Kafka trunk still doesn't build with JDK 8.

 Get gradle build to work with Java 8
 

 Key: KAFKA-2203
 URL: https://issues.apache.org/jira/browse/KAFKA-2203
 Project: Kafka
  Issue Type: Bug
  Components: build
Affects Versions: 0.8.1.1
Reporter: Gaju Bhat
Priority: Minor
 Fix For: 0.8.1.2

 Attachments: 0001-Special-case-java-8-and-javadoc-handling.patch


 The gradle build halts because javadoc in java 8 is a lot stricter about 
 valid html.
 It might be worthwhile to special case java 8 as described 
 [here|http://blog.joda.org/2014/02/turning-off-doclint-in-jdk-8-javadoc.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2275) Add a ListTopic() API to the new consumer

2015-07-13 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14624399#comment-14624399
 ] 

Stevo Slavic commented on KAFKA-2275:
-

[~guozhang] so it's {{listTopics()}} not {{listTopic()}}?

 Add a ListTopic() API to the new consumer
 -

 Key: KAFKA-2275
 URL: https://issues.apache.org/jira/browse/KAFKA-2275
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Guozhang Wang
Priority: Critical
 Fix For: 0.8.3


 One usecase for this API is for consumers that want specific partition 
 assignment with regex subscription. For implementation, it involves sending a 
 TopicMetadataRequest to a random broker and parse the response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2275) Add a ListTopic() API to the new consumer

2015-07-12 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14624057#comment-14624057
 ] 

Stevo Slavic commented on KAFKA-2275:
-

Isn't this covered by [public List<PartitionInfo> partitionsFor(String 
topic)|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L967]?
 It was added in 
[this|https://github.com/apache/kafka/commit/0699ff2ce60abb466cab5315977a224f1a70a4da#diff-267b7c1e68156c1301c56be63ae41dd0]
 commit.
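
For context, a rough sketch of how the two calls relate (assuming a client version in which both methods exist - {{listTopics()}} was only being proposed at the time; the bootstrap address and topic name are placeholders):
{code}
import java.util.Properties
import scala.collection.JavaConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.serialization.ByteArrayDeserializer

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")          // placeholder broker address
props.put("key.deserializer", classOf[ByteArrayDeserializer].getName)
props.put("value.deserializer", classOf[ByteArrayDeserializer].getName)

val consumer = new KafkaConsumer[Array[Byte], Array[Byte]](props)
try {
  // partitionsFor() returns partition metadata for one named topic...
  consumer.partitionsFor("sandbox").asScala.foreach(println)
  // ...while the proposed listTopics() returns metadata for all topics,
  // which is what regex subscription with specific assignment needs.
  consumer.listTopics().asScala.foreach { case (topic, partitions) =>
    println(s"$topic -> ${partitions.size} partition(s)")
  }
} finally {
  consumer.close()
}
{code}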

 Add a ListTopic() API to the new consumer
 -

 Key: KAFKA-2275
 URL: https://issues.apache.org/jira/browse/KAFKA-2275
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Guozhang Wang
Priority: Critical
 Fix For: 0.8.3


 One usecase for this API is for consumers that want specific partition 
 assignment with regex subscription. For implementation, it involves sending a 
 TopicMetadataRequest to a random broker and parse the response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1372) Upgrade to Gradle 1.10

2015-07-08 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-1372:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Resolving as duplicate of KAFKA-1559

 Upgrade to Gradle 1.10
 --

 Key: KAFKA-1372
 URL: https://issues.apache.org/jira/browse/KAFKA-1372
 Project: Kafka
  Issue Type: Task
  Components: tools
Affects Versions: 0.8.1
Reporter: Stevo Slavic
Priority: Minor
  Labels: gradle
 Attachments: 0001-KAFKA-1372-Upgrade-to-Gradle-1.10.patch, 
 0001-KAFKA-1372-Upgrade-to-Gradle-1.11.patch


 Currently used version of Gradle wrapper is 1.6 while 1.11 is available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2304) Support enabling JMX in Kafka Vagrantfile

2015-06-30 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-2304:

Attachment: KAFKA-2304-JMX.patch

Makes sense, changed.

Attached updated [^KAFKA-2304-JMX.patch] with changes after review from 
[~ewencp]

 Support enabling JMX in Kafka Vagrantfile
 -

 Key: KAFKA-2304
 URL: https://issues.apache.org/jira/browse/KAFKA-2304
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.3
Reporter: Stevo Slavic
Priority: Minor
 Attachments: KAFKA-2304-JMX.patch, KAFKA-2304-JMX.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2304) Support enabling JMX in Kafka Vagrantfile

2015-06-29 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-2304:

Status: Patch Available  (was: Open)

 Support enabling JMX in Kafka Vagrantfile
 -

 Key: KAFKA-2304
 URL: https://issues.apache.org/jira/browse/KAFKA-2304
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.3
Reporter: Stevo Slavic
Priority: Minor
 Attachments: KAFKA-2304-JMX.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2304) Support enabling JMX in Kafka Vagrantfile

2015-06-29 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-2304:
---

 Summary: Support enabling JMX in Kafka Vagrantfile
 Key: KAFKA-2304
 URL: https://issues.apache.org/jira/browse/KAFKA-2304
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.3
Reporter: Stevo Slavic
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2304) Support enabling JMX in Kafka Vagrantfile

2015-06-29 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-2304:

Attachment: KAFKA-2304-JMX.patch

Attached [^KAFKA-2304-JMX.patch]

 Support enabling JMX in Kafka Vagrantfile
 -

 Key: KAFKA-2304
 URL: https://issues.apache.org/jira/browse/KAFKA-2304
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.3
Reporter: Stevo Slavic
Priority: Minor
 Attachments: KAFKA-2304-JMX.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2015-06-28 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-1173:

Attachment: KAFKA-1173-JMX.patch

Please consider improving the Vagrantfile further - currently it does not seem to 
support enabling JMX out of the box. Attached [^KAFKA-1173-JMX.patch], which 
worked for me to enable JMX.
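
To verify that JMX actually ended up enabled on a broker VM, a minimal check (plain JDK API; the IP matches the Vagrant setup quoted below and the port is whatever JMX port the patch configures - both are assumptions):
{code}
import javax.management.remote.{JMXConnectorFactory, JMXServiceURL}

// Assumed broker VM address and JMX port; adjust to the actual Vagrant configuration.
val url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://192.168.50.10:9999/jmxrmi")
val connector = JMXConnectorFactory.connect(url)
try {
  val connection = connector.getMBeanServerConnection
  println(s"Connected, MBean count: ${connection.getMBeanCount}")
} finally {
  connector.close()
}
{code}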

 Using Vagrant to get up and running with Apache Kafka
 -

 Key: KAFKA-1173
 URL: https://issues.apache.org/jira/browse/KAFKA-1173
 Project: Kafka
  Issue Type: Improvement
Reporter: Joe Stein
Assignee: Ewen Cheslack-Postava
 Fix For: 0.8.3

 Attachments: KAFKA-1173-JMX.patch, KAFKA-1173.patch, 
 KAFKA-1173_2013-12-07_12:07:55.patch, KAFKA-1173_2014-11-11_13:50:55.patch, 
 KAFKA-1173_2014-11-12_11:32:09.patch, KAFKA-1173_2014-11-18_16:01:33.patch


 Vagrant has been getting a lot of pickup in the tech communities.  I have 
 found it very useful for development and testing and working with a few 
 clients now using it to help virtualize their environments in repeatable ways.
 Using Vagrant to get up and running.
 For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
 2) Install Virtual Box 
 [https://www.virtualbox.org/](https://www.virtualbox.org/)
 In the main kafka folder
 1) ./sbt update
 2) ./sbt package
 3) ./sbt assembly-package-dependency
 4) vagrant up
 once this is done 
 * Zookeeper will be running 192.168.50.5
 * Broker 1 on 192.168.50.10
 * Broker 2 on 192.168.50.20
 * Broker 3 on 192.168.50.30
 When you are all up and running you will be back at a command prompt.  
 If you want you can log in to the machines using vagrant ssh machineName but 
 you don't need to.
 You can access the brokers and zookeeper by their IP
 e.g.
 bin/kafka-console-producer.sh --broker-list 
 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
 bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
 --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1866) LogStartOffset gauge throws exceptions after log.delete()

2015-06-14 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-1866:

Affects Version/s: 0.8.2.1

 LogStartOffset gauge throws exceptions after log.delete()
 -

 Key: KAFKA-1866
 URL: https://issues.apache.org/jira/browse/KAFKA-1866
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Gian Merlino
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-1866.patch, KAFKA-1866_2015-02-10_22:50:09.patch, 
 KAFKA-1866_2015-02-11_09:25:33.patch


 The LogStartOffset gauge does logSegments.head.baseOffset, which throws 
 NoSuchElementException on an empty list, which can occur after a delete() of 
 the log. This makes life harder for custom MetricsReporters, since they have 
 to deal with .value() possibly throwing an exception.
 Locally we're dealing with this by having Log.delete() also call removeMetric 
 on all the gauges. That also has the benefit of not having a bunch of metrics 
 floating around for logs that the broker is not actually handling.
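
A minimal sketch of the kind of guard being suggested (illustration only, not the actual {{Log}} code - a stand-in segment map replaces {{LogSegment}}):
{code}
import java.util.concurrent.ConcurrentSkipListMap
import scala.collection.JavaConverters._
import com.yammer.metrics.core.Gauge

// Stand-in for Log's segment map; baseOffset plays the role of LogSegment.baseOffset.
final case class Segment(baseOffset: Long)
val segments = new ConcurrentSkipListMap[java.lang.Long, Segment]()

// Guarded gauge: reports -1 instead of throwing once the segment list is empty.
val logStartOffset = new Gauge[Long] {
  override def value(): Long =
    segments.values.asScala.headOption.map(_.baseOffset).getOrElse(-1L)
}
{code}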



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2263) Update Is it possible to delete a topic wiki FAQ answer

2015-06-11 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-2263:
---

 Summary: Update Is it possible to delete a topic wiki FAQ answer
 Key: KAFKA-2263
 URL: https://issues.apache.org/jira/browse/KAFKA-2263
 Project: Kafka
  Issue Type: Task
  Components: website
Affects Versions: 0.8.2.1
Reporter: Stevo Slavic
Priority: Trivial


Answer to the mentioned 
[FAQ|https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Isitpossibletodeleteatopic?]
 hasn't been updated since delete feature became available in 0.8.2.x



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2244) Document Kafka metrics configuration properties

2015-06-03 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-2244:

Labels: newbie  (was: )

 Document Kafka metrics configuration properties
 ---

 Key: KAFKA-2244
 URL: https://issues.apache.org/jira/browse/KAFKA-2244
 Project: Kafka
  Issue Type: Task
  Components: config, website
Affects Versions: 0.8.2.1
Reporter: Stevo Slavic
  Labels: newbie

 Please have two configuration properties used in 
 kafka.metrics.KafkaMetricsConfig, namely kafka.metrics.reporters and 
 kafka.metrics.polling.interval.secs, documented on 
 http://kafka.apache.org/documentation.html#configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2233) Log deletion is not removing log metrics

2015-05-31 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-2233:
---

 Summary: Log deletion is not removing log metrics
 Key: KAFKA-2233
 URL: https://issues.apache.org/jira/browse/KAFKA-2233
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.8.2.1
Reporter: Stevo Slavic
Assignee: Jay Kreps
Priority: Minor


Topic deletion does not remove the associated metrics. Any configured Kafka 
metrics reporter that gets triggered after a topic is deleted will, when polling 
log metrics for such deleted logs, throw something like:

{noformat}
java.util.NoSuchElementException
at 
java.util.concurrent.ConcurrentSkipListMap$Iter.advance(ConcurrentSkipListMap.java:2299)
at 
java.util.concurrent.ConcurrentSkipListMap$ValueIterator.next(ConcurrentSkipListMap.java:2326)
at 
scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
at scala.collection.IterableLike$class.head(IterableLike.scala:107)
at scala.collection.AbstractIterable.head(Iterable.scala:54)
at kafka.log.Log.logStartOffset(Log.scala:502)
at kafka.log.Log$$anon$2.value(Log.scala:86)
at kafka.log.Log$$anon$2.value(Log.scala:85)
{noformat}

since on log deletion the {{Log}} segments collection gets cleared, so the 
logSegments {{Iterable}} has no (next) elements.

A known workaround is to restart the broker - as the metric registry is in 
memory, not persisted, on restart it will be recreated with metrics for 
existing/non-deleted topics only.
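
Until this is fixed, a defensive sketch for a custom reporter's poll loop (assuming the Yammer 2.x metrics API the broker uses; the registry walk and printing are illustrative, not Kafka code):
{code}
import scala.collection.JavaConverters._
import scala.util.{Failure, Success, Try}
import com.yammer.metrics.Metrics
import com.yammer.metrics.core.Gauge

// Walk the default registry, tolerating gauges that throw after their log was deleted.
Metrics.defaultRegistry().allMetrics().asScala.foreach {
  case (name, gauge: Gauge[_]) =>
    Try(gauge.value()) match {
      case Success(v)                         => println(s"$name = $v")
      case Failure(_: NoSuchElementException) => // log deleted; skip this gauge
      case Failure(other)                     => throw other
    }
  case _ => // counters, meters, timers etc. are unaffected; ignored in this sketch
}
{code}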



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2132) Move Log4J appender to clients module

2015-05-10 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537070#comment-14537070
 ] 

Stevo Slavic commented on KAFKA-2132:
-

Please consider changing the issue title and editing the description to match 
the decisions made.

 Move Log4J appender to clients module
 -

 Key: KAFKA-2132
 URL: https://issues.apache.org/jira/browse/KAFKA-2132
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira
Assignee: Ashish K Singh
 Attachments: KAFKA-2132.patch, KAFKA-2132_2015-04-27_19:59:46.patch, 
 KAFKA-2132_2015-04-30_12:22:02.patch, KAFKA-2132_2015-04-30_15:53:17.patch


 Log4j appender is just a producer.
 Since we have a new producer in the clients module, no need to keep Log4J 
 appender in core and force people to package all of Kafka with their apps.
 Lets move the Log4jAppender to clients module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1460) NoReplicaOnlineException: No replica for partition

2015-02-23 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14334066#comment-14334066
 ] 

Stevo Slavic commented on KAFKA-1460:
-

Just encountered this issue in an integration test which starts embedded 
ZooKeeper and Kafka on the same node.

 NoReplicaOnlineException: No replica for partition
 --

 Key: KAFKA-1460
 URL: https://issues.apache.org/jira/browse/KAFKA-1460
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Artur Denysenko
Priority: Critical
 Attachments: state-change.log


 We have a standalone kafka server.
 After several days of running we get:
 {noformat}
 kafka.common.NoReplicaOnlineException: No replica for partition 
 [gk.q.module,1] is alive. Live brokers are: [Set()], Assigned replicas are: 
 [List(0)]
   at 
 kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:61)
   at 
 kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:336)
   at 
 kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:185)
   at 
 kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:99)
   at 
 kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:96)
   at 
 scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:743)
   at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
   at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
   at scala.collection.Iterator$class.foreach(Iterator.scala:772)
   at 
 scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:157)
   at 
 scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:190)
   at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:45)
   at scala.collection.mutable.HashMap.foreach(HashMap.scala:95)
   at 
 scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:742)
   at 
 kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:96)
   at 
 kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:68)
   at 
 kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:312)
   at 
 kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:162)
   at 
 kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:63)
   at 
 kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply$mcZ$sp(KafkaController.scala:1068)
   at 
 kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply(KafkaController.scala:1066)
   at 
 kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply(KafkaController.scala:1066)
   at kafka.utils.Utils$.inLock(Utils.scala:538)
   at 
 kafka.controller.KafkaController$SessionExpirationListener.handleNewSession(KafkaController.scala:1066)
   at org.I0Itec.zkclient.ZkClient$4.run(ZkClient.java:472)
   at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
 {noformat}
 Please see attached [state-change.log]
 You can find all server logs (450mb) here: 
 http://46.4.114.35:/deploy/kafka-logs.2014-05-14-16.tgz
 On client we get:
 {noformat}
 16:28:36,843 [ool-12-thread-2] WARN  ZookeeperConsumerConnector - 
 [dev_dev-1400257716132-e7b8240c], no brokers found when trying to rebalance.
 {noformat}
 If we try to send message using 'kafka-console-producer.sh':
 {noformat}
 [root@dev kafka]# /srv/kafka/bin/kafka-console-producer.sh --broker-list 
 localhost:9092 --topic test
 message
 SLF4J: Failed to load class org.slf4j.impl.StaticLoggerBinder.
 SLF4J: Defaulting to no-operation (NOP) logger implementation
 SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
 details.
 [2014-05-16 19:45:30,950] WARN Fetching topic metadata with correlation id 0 
 for topics [Set(test)] from broker [id:0,host:localhost,port:9092] failed 
 (kafka.client.ClientUtils$)
 java.net.SocketTimeoutException
 at 
 sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:229)
 at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
 at 
 java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
 at kafka.utils.Utils$.read(Utils.scala:375)
 at 
 kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
 at 

[jira] [Created] (KAFKA-1938) [doc] Quick start example should reference appropriate Kafka version

2015-02-09 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-1938:
---

 Summary: [doc] Quick start example should reference appropriate 
Kafka version
 Key: KAFKA-1938
 URL: https://issues.apache.org/jira/browse/KAFKA-1938
 Project: Kafka
  Issue Type: Improvement
  Components: website
Affects Versions: 0.8.2
Reporter: Stevo Slavic
Priority: Trivial


Kafka 0.8.2.0 documentation, quick start example on 
https://kafka.apache.org/documentation.html#quickstart in step 1 links and 
instructs reader to download Kafka 0.8.1.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1764) ZookeeperConsumerConnector could put multiple shutdownCommand to the same data chunk queue.

2014-11-14 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14212011#comment-14212011
 ] 

Stevo Slavic commented on KAFKA-1764:
-

Is this issue a duplicate of KAFKA-1716?

 ZookeeperConsumerConnector could put multiple shutdownCommand to the same 
 data chunk queue.
 ---

 Key: KAFKA-1764
 URL: https://issues.apache.org/jira/browse/KAFKA-1764
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Attachments: KAFKA-1764.patch, KAFKA-1764_2014-11-12_14:05:35.patch, 
 KAFKA-1764_2014-11-13_23:57:51.patch


 In ZookeeperConsumerConnector shutdown(), we could potentially put multiple 
 shutdownCommand into the same data chunk queue, provided the topics are 
 sharing the same data chunk queue in topicThreadIdAndQueues.
 From email thread to document:
 In ZookeeperConsumerConnector shutdown(), we could potentially put
 multiple shutdownCommand into the same data chunk queue, provided the
 topics are sharing the same data chunk queue in topicThreadIdAndQueues.
 In our case, we only have 1 consumer stream for all the topics, the data
 chunk queue capacity is set to 1. The execution sequence causing problem is
 as below:
 1. ZookeeperConsumerConnector shutdown() is called, it tries to put
 shutdownCommand for each queue in topicThreadIdAndQueues. Since we only
 have 1 queue, multiple shutdownCommand will be put into the queue.
 2. In sendShutdownToAllQueues(), between queue.clean() and
 queue.put(shutdownCommand), consumer iterator receives the shutdownCommand
 and put it back into the data chunk queue. After that,
 ZookeeperConsumerConnector tries to put another shutdownCommand into the
 data chunk queue but will block forever.
 The thread stack trace is as below:
 {code}
 Thread-23 #58 prio=5 os_prio=0 tid=0x7ff440004800 nid=0x40a waiting
 on condition [0x7ff4f0124000]
java.lang.Thread.State: WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 - parking to wait for  0x000680b96bf0 (a
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 at
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at
 java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:350)
 at
 kafka.consumer.ZookeeperConsumerConnector$$anonfun$sendShutdownToAllQueues$1.apply(ZookeeperConsumerConnector.scala:262)
 at
 kafka.consumer.ZookeeperConsumerConnector$$anonfun$sendShutdownToAllQueues$1.apply(ZookeeperConsumerConnector.scala:259)
 at scala.collection.Iterator$class.foreach(Iterator.scala:727)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
 at
 scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
 at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
 at
 kafka.consumer.ZookeeperConsumerConnector.sendShutdownToAllQueues(ZookeeperConsumerConnector.scala:259)
 at
 kafka.consumer.ZookeeperConsumerConnector.liftedTree1$1(ZookeeperConsumerConnector.scala:199)
 at
 kafka.consumer.ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:192)
 - locked 0x000680dd5848 (a java.lang.Object)
 at
 kafka.tools.MirrorMaker$$anonfun$cleanShutdown$1.apply(MirrorMaker.scala:185)
 at
 kafka.tools.MirrorMaker$$anonfun$cleanShutdown$1.apply(MirrorMaker.scala:185)
 at scala.collection.immutable.List.foreach(List.scala:318)
 at kafka.tools.MirrorMaker$.cleanShutdown(MirrorMaker.scala:185)
 {code}
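
For illustration, a tiny standalone sketch of the blocking behaviour described above (plain JDK queue, not Kafka code; a blocking {{put()}} in the same situation would simply never return):
{code}
import java.util.concurrent.{LinkedBlockingQueue, TimeUnit}

val queue = new LinkedBlockingQueue[String](1)   // capacity 1, like the shared data chunk queue
queue.put("shutdownCommand")                     // the connector enqueues the first marker
// in Kafka the consumer iterator would take the marker and put it straight back,
// so the queue is still full when the connector tries to enqueue the next one:
val accepted = queue.offer("shutdownCommand", 1, TimeUnit.SECONDS)
println(s"second shutdownCommand accepted: $accepted")   // false - a blocking put() would hang here
{code}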



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1716) hang during shutdown of ZookeeperConsumerConnector

2014-11-14 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14212082#comment-14212082
 ] 

Stevo Slavic commented on KAFKA-1716:
-

Is this issue related to KAFKA-1764? That one has a patch.

 hang during shutdown of ZookeeperConsumerConnector
 --

 Key: KAFKA-1716
 URL: https://issues.apache.org/jira/browse/KAFKA-1716
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.8.1.1
Reporter: Sean Fay
Assignee: Neha Narkhede

 It appears to be possible for {{ZookeeperConsumerConnector.shutdown()}} to 
 wedge in the case that some consumer fetcher threads receive messages during 
 the shutdown process.
 Shutdown thread:
 {code}-- Parking to wait for: 
 java/util/concurrent/CountDownLatch$Sync@0x2aaaf3ef06d0
 at jrockit/vm/Locks.park0(J)V(Native Method)
 at jrockit/vm/Locks.park(Locks.java:2230)
 at sun/misc/Unsafe.park(ZJ)V(Native Method)
 at java/util/concurrent/locks/LockSupport.park(LockSupport.java:156)
 at 
 java/util/concurrent/locks/AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
 at 
 java/util/concurrent/locks/AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:969)
 at 
 java/util/concurrent/locks/AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1281)
 at java/util/concurrent/CountDownLatch.await(CountDownLatch.java:207)
 at kafka/utils/ShutdownableThread.shutdown(ShutdownableThread.scala:36)
 at 
 kafka/server/AbstractFetcherThread.shutdown(AbstractFetcherThread.scala:71)
 at 
 kafka/server/AbstractFetcherManager$$anonfun$closeAllFetchers$2.apply(AbstractFetcherManager.scala:121)
 at 
 kafka/server/AbstractFetcherManager$$anonfun$closeAllFetchers$2.apply(AbstractFetcherManager.scala:120)
 at 
 scala/collection/TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
 at 
 scala/collection/mutable/HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala/collection/mutable/HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala/collection/mutable/HashTable$class.foreachEntry(HashTable.scala:226)
 at scala/collection/mutable/HashMap.foreachEntry(HashMap.scala:39)
 at scala/collection/mutable/HashMap.foreach(HashMap.scala:98)
 at 
 scala/collection/TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
 at 
 kafka/server/AbstractFetcherManager.closeAllFetchers(AbstractFetcherManager.scala:120)
 ^-- Holding lock: java/lang/Object@0x2aaaebcc7318[thin lock]
 at 
 kafka/consumer/ConsumerFetcherManager.stopConnections(ConsumerFetcherManager.scala:148)
 at 
 kafka/consumer/ZookeeperConsumerConnector.liftedTree1$1(ZookeeperConsumerConnector.scala:171)
 at 
 kafka/consumer/ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:167){code}
 ConsumerFetcherThread:
 {code}-- Parking to wait for: 
 java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject@0x2aaaebcc7568
 at jrockit/vm/Locks.park0(J)V(Native Method)
 at jrockit/vm/Locks.park(Locks.java:2230)
 at sun/misc/Unsafe.park(ZJ)V(Native Method)
 at java/util/concurrent/locks/LockSupport.park(LockSupport.java:156)
 at 
 java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
 at 
 java/util/concurrent/LinkedBlockingQueue.put(LinkedBlockingQueue.java:306)
 at kafka/consumer/PartitionTopicInfo.enqueue(PartitionTopicInfo.scala:60)
 at 
 kafka/consumer/ConsumerFetcherThread.processPartitionData(ConsumerFetcherThread.scala:49)
 at 
 kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:130)
 at 
 kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:111)
 at scala/collection/immutable/HashMap$HashMap1.foreach(HashMap.scala:224)
 at 
 scala/collection/immutable/HashMap$HashTrieMap.foreach(HashMap.scala:403)
 at 
 kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1.apply$mcV$sp(AbstractFetcherThread.scala:111)
 at 
 kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:111)
 at 
 kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:111)
 at kafka/utils/Utils$.inLock(Utils.scala:538)
 at 
 kafka/server/AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:110)
 at 
 kafka/server/AbstractFetcherThread.doWork(AbstractFetcherThread.scala:88)
 at kafka/utils/ShutdownableThread.run(ShutdownableThread.scala:51)
 at 

[jira] [Created] (KAFKA-1761) num.partitions documented default is 1 while actual default is 2

2014-11-10 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-1761:
---

 Summary: num.partitions documented default is 1 while actual 
default is 2
 Key: KAFKA-1761
 URL: https://issues.apache.org/jira/browse/KAFKA-1761
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.8.1.1
Reporter: Stevo Slavic
Assignee: Jay Kreps
Priority: Minor


Default {{num.partitions}} documented in 
http://kafka.apache.org/08/configuration.html is 1, while server configuration 
defaults same parameter to 2 (see 
https://github.com/apache/kafka/blob/0.8.1/config/server.properties#L63 )

Please have this inconsistency fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1761) num.partitions documented default is 1 while actual default is 2

2014-11-10 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205277#comment-14205277
 ] 

Stevo Slavic commented on KAFKA-1761:
-

Changed in 
[commit|https://github.com/apache/kafka/commit/b428d8cc48237099af648de12d18be78d54446eb#diff-fe795615cd3ca9a55e864ad330f3344c]
 for KAFKA-1531

 num.partitions documented default is 1 while actual default is 2
 -

 Key: KAFKA-1761
 URL: https://issues.apache.org/jira/browse/KAFKA-1761
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.8.1.1
Reporter: Stevo Slavic
Assignee: Jay Kreps
Priority: Minor

 Default {{num.partitions}} documented in 
 http://kafka.apache.org/08/configuration.html is 1, while server 
 configuration defaults same parameter to 2 (see 
 https://github.com/apache/kafka/blob/0.8.1/config/server.properties#L63 )
 Please have this inconsistency fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1737) Document required ZkSerializer for ZkClient used with AdminUtils

2014-10-29 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-1737:
---

 Summary: Document required ZkSerializer for ZkClient used with 
AdminUtils
 Key: KAFKA-1737
 URL: https://issues.apache.org/jira/browse/KAFKA-1737
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Affects Versions: 0.8.1.1
Reporter: Stevo Slavic
Priority: Minor


{{ZkClient}} instances passed to {{AdminUtils}} calls must have 
{{kafka.utils.ZKStringSerializer}} set as {{ZkSerializer}}. Otherwise commands 
executed via {{AdminUtils}} may not be seen/recognizable to broker, producer or 
consumer. E.g. producer (with auto topic creation turned off) will not be able 
to send messages to a topic created via {{AdminUtils}}, it will throw 
{{UnknownTopicOrPartitionException}}.

Please consider at least documenting this requirement in {{AdminUtils}} 
scaladoc.

For more info see [related discussion on Kafka user mailing 
list|http://mail-archives.apache.org/mod_mbox/kafka-users/201410.mbox/%3CCAAUywg-oihNiXuQRYeS%3D8Z3ymsmEHo6ghLs%3Dru4nbm%2BdHVz6TA%40mail.gmail.com%3E].
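
A minimal sketch of the requirement (0.8.1.x-era API; the connection string, timeouts and topic name are placeholders):
{code}
import kafka.admin.AdminUtils
import kafka.utils.ZKStringSerializer
import org.I0Itec.zkclient.ZkClient

// The ZKStringSerializer argument is the easy-to-miss part; without it ZkClient falls back
// to Java serialization and writes znode data in a format the broker does not expect.
val zkClient = new ZkClient("localhost:2181", 10000, 10000, ZKStringSerializer)
try {
  AdminUtils.createTopic(zkClient, "example-topic", 1, 1)   // 1 partition, replication factor 1
} finally {
  zkClient.close()
}
{code}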



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1485) Upgrade to Zookeeper 3.4.6 and create shim for ZKCLI so system tests can run

2014-10-18 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14176177#comment-14176177
 ] 

Stevo Slavic commented on KAFKA-1485:
-

[~junrao], yes, it's optional (needed if Netty-based connection factories are 
used instead of NIO ones), but it is not labeled as optional in the ZK pom.xml 
(see, and please consider voting for, ZOOKEEPER-1681).

 Upgrade to Zookeeper 3.4.6 and create shim for ZKCLI so system tests can run
 --

 Key: KAFKA-1485
 URL: https://issues.apache.org/jira/browse/KAFKA-1485
 Project: Kafka
  Issue Type: Wish
Affects Versions: 0.8.1.1
Reporter: Machiel Groeneveld
Assignee: Gwen Shapira
  Labels: newbie
 Fix For: 0.8.2

 Attachments: KAFKA-1485.2.patch, KAFKA-1485.3.patch, 
 KAFKA-1485.4.patch, KAFKA-1485.patch


 I can't run projects alongside Kafka that use zookeeper 3.4 jars. 3.4 has 
 been out for 2.5 years and seems to be ready for adoption.
 In particular Apache Storm will upgrade to Zookeeper 3.4.x in their next 
 0.9.2 release. I can't run both versions in my tests at the same time. 
 The only compile problem I saw was in EmbeddedZookeeper.scala 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (KAFKA-1328) Add new consumer APIs

2014-10-10 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-1328:

Comment: was deleted

(was: Affects 0.9.0 ?!)

 Add new consumer APIs
 -

 Key: KAFKA-1328
 URL: https://issues.apache.org/jira/browse/KAFKA-1328
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Affects Versions: 0.8.0, 0.8.1
Reporter: Neha Narkhede
Assignee: Neha Narkhede
 Fix For: 0.8.2

 Attachments: KAFKA-1328.patch, KAFKA-1328_2014-04-10_17:13:24.patch, 
 KAFKA-1328_2014-04-10_18:30:48.patch, KAFKA-1328_2014-04-11_10:54:19.patch, 
 KAFKA-1328_2014-04-11_11:16:44.patch, KAFKA-1328_2014-04-12_18:30:22.patch, 
 KAFKA-1328_2014-04-12_19:12:12.patch, KAFKA-1328_2014-05-05_11:35:07.patch, 
 KAFKA-1328_2014-05-05_11:35:41.patch, KAFKA-1328_2014-05-09_17:18:55.patch, 
 KAFKA-1328_2014-05-16_11:46:02.patch, KAFKA-1328_2014-05-20_15:55:01.patch, 
 KAFKA-1328_2014-05-20_16:34:37.patch


 New consumer API discussion is here - 
 http://mail-archives.apache.org/mod_mbox/kafka-users/201402.mbox/%3CCAOG_4QYBHwyi0xN=hl1fpnrtkvfjzx14ujfntft3nn_mw3+...@mail.gmail.com%3E
 This JIRA includes reviewing and checking in the new consumer APIs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1674) auto.create.topics.enable docs are misleading

2014-10-06 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-1674:
---

 Summary: auto.create.topics.enable docs are misleading
 Key: KAFKA-1674
 URL: https://issues.apache.org/jira/browse/KAFKA-1674
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Stevo Slavic
Priority: Minor


{{auto.create.topics.enable}} is currently 
[documented|http://kafka.apache.org/08/configuration.html] with
{quote}
Enable auto creation of topic on the server. If this is set to true then 
attempts to produce, consume, or fetch metadata for a non-existent topic will 
automatically create it with the default replication factor and number of 
partitions.
{quote}

In Kafka 0.8.1.1 reality, topics are only created when trying to publish a 
message on non-existing topic.

After 
[discussion|http://mail-archives.apache.org/mod_mbox/kafka-users/201410.mbox/%3CCAFbh0Q1WXLUDO-im1fQ1yEvrMduxmXbj5HXVc3Cq8B%3DfeMso9g%40mail.gmail.com%3E]
 with [~junrao] conclusion was that it's documentation issue which needs to be 
fixed.

Please check once more if this is just non-working functionality. If it is docs 
only issue, and implicit topic creation functionality should work only for 
producer, consider moving configuration property (docs only, but maybe code 
also?) from broker configuration options to producer configuration options.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1674) auto.create.topics.enable docs are misleading

2014-10-06 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-1674:

Description: 
{{auto.create.topics.enable}} is currently 
[documented|http://kafka.apache.org/08/configuration.html] with
{quote}
Enable auto creation of topic on the server. If this is set to true then 
attempts to produce, consume, or fetch metadata for a non-existent topic will 
automatically create it with the default replication factor and number of 
partitions.
{quote}

In Kafka 0.8.1.1 reality, topics are only created when trying to publish a 
message on non-existing topic.

After 
[discussion|http://mail-archives.apache.org/mod_mbox/kafka-users/201410.mbox/%3CCAFbh0Q1WXLUDO-im1fQ1yEvrMduxmXbj5HXVc3Cq8B%3DfeMso9g%40mail.gmail.com%3E]
 with [~junrao] conclusion was that it's documentation issue which needs to be 
fixed.

Please check once more if this is just non-working functionality. If it is docs 
only issue, and implicit topic creation functionality should work only for 
producer, consider moving {{auto.create.topics.enable}} and other topic auto 
creation related configuration properties (docs only, but maybe code also?) 
from broker configuration options to producer configuration options.

  was:
{{auto.create.topics.enable}} is currently 
[documented|http://kafka.apache.org/08/configuration.html] with
{quote}
Enable auto creation of topic on the server. If this is set to true then 
attempts to produce, consume, or fetch metadata for a non-existent topic will 
automatically create it with the default replication factor and number of 
partitions.
{quote}

In Kafka 0.8.1.1 reality, topics are only created when trying to publish a 
message on non-existing topic.

After 
[discussion|http://mail-archives.apache.org/mod_mbox/kafka-users/201410.mbox/%3CCAFbh0Q1WXLUDO-im1fQ1yEvrMduxmXbj5HXVc3Cq8B%3DfeMso9g%40mail.gmail.com%3E]
 with [~junrao] conclusion was that it's documentation issue which needs to be 
fixed.

Please check once more if this is just non-working functionality. If it is docs 
only issue, and implicit topic creation functionality should work only for 
producer, consider moving configuration property (docs only, but maybe code 
also?) from broker configuration options to producer configuration options.


 auto.create.topics.enable docs are misleading
 -

 Key: KAFKA-1674
 URL: https://issues.apache.org/jira/browse/KAFKA-1674
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Stevo Slavic
Priority: Minor

 {{auto.create.topics.enable}} is currently 
 [documented|http://kafka.apache.org/08/configuration.html] with
 {quote}
 Enable auto creation of topic on the server. If this is set to true then 
 attempts to produce, consume, or fetch metadata for a non-existent topic will 
 automatically create it with the default replication factor and number of 
 partitions.
 {quote}
 In Kafka 0.8.1.1 reality, topics are only created when trying to publish a 
 message on non-existing topic.
 After 
 [discussion|http://mail-archives.apache.org/mod_mbox/kafka-users/201410.mbox/%3CCAFbh0Q1WXLUDO-im1fQ1yEvrMduxmXbj5HXVc3Cq8B%3DfeMso9g%40mail.gmail.com%3E]
  with [~junrao] conclusion was that it's documentation issue which needs to 
 be fixed.
 Please check once more if this is just non-working functionality. If it is 
 docs only issue, and implicit topic creation functionality should work only 
 for producer, consider moving {{auto.create.topics.enable}} and other topic 
 auto creation related configuration properties (docs only, but maybe code 
 also?) from broker configuration options to producer configuration options.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1674) auto.create.topics.enable docs are misleading

2014-10-06 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-1674:

Description: 
{{auto.create.topics.enable}} is currently 
[documented|http://kafka.apache.org/08/configuration.html] with
{quote}
Enable auto creation of topic on the server. If this is set to true then 
attempts to produce, consume, or fetch metadata for a non-existent topic will 
automatically create it with the default replication factor and number of 
partitions.
{quote}

In Kafka 0.8.1.1 reality, topics are only created when trying to publish a 
message on non-existing topic.

After 
[discussion|http://mail-archives.apache.org/mod_mbox/kafka-users/201410.mbox/%3CCAFbh0Q1WXLUDO-im1fQ1yEvrMduxmXbj5HXVc3Cq8B%3DfeMso9g%40mail.gmail.com%3E]
 with [~junrao] conclusion was that it's documentation issue which needs to be 
fixed.

Please check once more if this is just non-working functionality. If it is docs 
only issue, and implicit topic creation functionality should work only for 
producer, consider moving {{auto.create.topics.enable}} and maybe also 
{{num.partitions}}, {{default.replication.factor}} and any other topic auto 
creation related configuration properties (docs only, but maybe code also?) 
from broker configuration options to producer configuration options.

  was:
{{auto.create.topics.enable}} is currently 
[documented|http://kafka.apache.org/08/configuration.html] with
{quote}
Enable auto creation of topic on the server. If this is set to true then 
attempts to produce, consume, or fetch metadata for a non-existent topic will 
automatically create it with the default replication factor and number of 
partitions.
{quote}

In Kafka 0.8.1.1 reality, topics are only created when trying to publish a 
message on non-existing topic.

After 
[discussion|http://mail-archives.apache.org/mod_mbox/kafka-users/201410.mbox/%3CCAFbh0Q1WXLUDO-im1fQ1yEvrMduxmXbj5HXVc3Cq8B%3DfeMso9g%40mail.gmail.com%3E]
 with [~junrao] conclusion was that it's documentation issue which needs to be 
fixed.

Please check once more if this is just non-working functionality. If it is docs 
only issue, and implicit topic creation functionality should work only for 
producer, consider moving {{auto.create.topics.enable}} and other topic auto 
creation related configuration properties (docs only, but maybe code also?) 
from broker configuration options to producer configuration options.


 auto.create.topics.enable docs are misleading
 -

 Key: KAFKA-1674
 URL: https://issues.apache.org/jira/browse/KAFKA-1674
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Stevo Slavic
Priority: Minor

 {{auto.create.topics.enable}} is currently 
 [documented|http://kafka.apache.org/08/configuration.html] with
 {quote}
 Enable auto creation of topic on the server. If this is set to true then 
 attempts to produce, consume, or fetch metadata for a non-existent topic will 
 automatically create it with the default replication factor and number of 
 partitions.
 {quote}
 In Kafka 0.8.1.1 reality, topics are only created when trying to publish a 
 message on non-existing topic.
 After 
 [discussion|http://mail-archives.apache.org/mod_mbox/kafka-users/201410.mbox/%3CCAFbh0Q1WXLUDO-im1fQ1yEvrMduxmXbj5HXVc3Cq8B%3DfeMso9g%40mail.gmail.com%3E]
  with [~junrao] conclusion was that it's documentation issue which needs to 
 be fixed.
 Please check once more if this is just non-working functionality. If it is 
 docs only issue, and implicit topic creation functionality should work only 
 for producer, consider moving {{auto.create.topics.enable}} and maybe also 
 {{num.partitions}}, {{default.replication.factor}} and any other topic auto 
 creation related configuration properties (docs only, but maybe code also?) 
 from broker configuration options to producer configuration options.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1674) auto.create.topics.enable docs are misleading

2014-10-06 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-1674:

Description: 
{{auto.create.topics.enable}} is currently 
[documented|http://kafka.apache.org/08/configuration.html] with
{quote}
Enable auto creation of topic on the server. If this is set to true then 
attempts to produce, consume, or fetch metadata for a non-existent topic will 
automatically create it with the default replication factor and number of 
partitions.
{quote}

In Kafka 0.8.1.1 reality, topics are only created when trying to publish a 
message on non-existing topic.

After 
[discussion|http://mail-archives.apache.org/mod_mbox/kafka-users/201410.mbox/%3CCAFbh0Q1WXLUDO-im1fQ1yEvrMduxmXbj5HXVc3Cq8B%3DfeMso9g%40mail.gmail.com%3E]
 with [~junrao] conclusion was that it's documentation issue which needs to be 
fixed.

Before fixing docs, please check once more if this is just non-working 
functionality.

  was:
{{auto.create.topics.enable}} is currently 
[documented|http://kafka.apache.org/08/configuration.html] with
{quote}
Enable auto creation of topic on the server. If this is set to true then 
attempts to produce, consume, or fetch metadata for a non-existent topic will 
automatically create it with the default replication factor and number of 
partitions.
{quote}

In Kafka 0.8.1.1 reality, topics are only created when trying to publish a 
message on non-existing topic.

After 
[discussion|http://mail-archives.apache.org/mod_mbox/kafka-users/201410.mbox/%3CCAFbh0Q1WXLUDO-im1fQ1yEvrMduxmXbj5HXVc3Cq8B%3DfeMso9g%40mail.gmail.com%3E]
 with [~junrao] conclusion was that it's documentation issue which needs to be 
fixed.

Please check once more if this is just non-working functionality. If it is docs 
only issue, and implicit topic creation functionality should work only for 
producer, consider moving {{auto.create.topics.enable}} and maybe also 
{{num.partitions}}, {{default.replication.factor}} and any other topic auto 
creation related configuration properties (docs only, but maybe code also?) 
from broker configuration options to producer configuration options.


 auto.create.topics.enable docs are misleading
 -

 Key: KAFKA-1674
 URL: https://issues.apache.org/jira/browse/KAFKA-1674
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Stevo Slavic
Priority: Minor

 {{auto.create.topics.enable}} is currently 
 [documented|http://kafka.apache.org/08/configuration.html] with
 {quote}
 Enable auto creation of topic on the server. If this is set to true then 
 attempts to produce, consume, or fetch metadata for a non-existent topic will 
 automatically create it with the default replication factor and number of 
 partitions.
 {quote}
 In Kafka 0.8.1.1 reality, topics are only created when trying to publish a 
 message on non-existing topic.
 After 
 [discussion|http://mail-archives.apache.org/mod_mbox/kafka-users/201410.mbox/%3CCAFbh0Q1WXLUDO-im1fQ1yEvrMduxmXbj5HXVc3Cq8B%3DfeMso9g%40mail.gmail.com%3E]
  with [~junrao] conclusion was that it's documentation issue which needs to 
 be fixed.
 Before fixing docs, please check once more if this is just non-working 
 functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1624) building on JDK 8 fails

2014-09-16 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136314#comment-14136314
 ] 

Stevo Slavic commented on KAFKA-1624:
-

If I'm not mistaken, this error is caused by Scala 2.10 (and older) being 
incompatible with (reading) Java 8 class files.
{{./gradlew clean test_core_2_11}} with Java 8 passes successfully for me on 
current trunk, although even Scala 2.11 only has experimental support for Java 
8.

Scala 2.12 will require Java 8 (see [Scala 2.12 
roadmap|http://www.scala-lang.org/news/2.12-roadmap]).

 building on JDK 8 fails
 ---

 Key: KAFKA-1624
 URL: https://issues.apache.org/jira/browse/KAFKA-1624
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
  Labels: newbie
 Fix For: 0.9.0


 {code}
 Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
 support was removed in 8.0
 error: error while loading CharSequence, class file 
 '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/CharSequence.class)' is 
 broken
 (class java.lang.RuntimeException/bad constant pool tag 18 at byte 10)
 error: error while loading Comparator, class file 
 '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Comparator.class)' is 
 broken
 (class java.lang.RuntimeException/bad constant pool tag 18 at byte 20)
 error: error while loading AnnotatedElement, class file 
 '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)'
  is broken
 (class java.lang.RuntimeException/bad constant pool tag 18 at byte 76)
 error: error while loading Arrays, class file 
 '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Arrays.class)' is broken
 (class java.lang.RuntimeException/bad constant pool tag 18 at byte 765)
 /tmp/sbt_53783b12/xsbt/ExtractAPI.scala:395: error: java.util.Comparator does 
 not take type parameters
   private[this] val sortClasses = new Comparator[Symbol] {
 ^
 5 errors found
 :core:compileScala FAILED
 FAILURE: Build failed with an exception.
 * What went wrong:
 Execution failed for task ':core:compileScala'.
  org.gradle.messaging.remote.internal.PlaceholderException (no error message)
 * Try:
 Run with --stacktrace option to get the stack trace. Run with --info or 
 --debug option to get more log output.
 BUILD FAILED
 Total time: 1 mins 48.298 secs
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1438) Migrate kafka client tools

2014-08-26 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14110437#comment-14110437
 ] 

Stevo Slavic commented on KAFKA-1438:
-

Can one of the committers please reopen this ticket?

 Migrate kafka client tools
 --

 Key: KAFKA-1438
 URL: https://issues.apache.org/jira/browse/KAFKA-1438
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Sriharsha Chintalapani
  Labels: newbie, tools, usability
 Fix For: 0.8.2

 Attachments: KAFKA-1438-windows_bat.patch, KAFKA-1438.patch, 
 KAFKA-1438.patch, KAFKA-1438_2014-05-27_11:45:29.patch, 
 KAFKA-1438_2014-05-27_12:16:00.patch, KAFKA-1438_2014-05-27_17:08:59.patch, 
 KAFKA-1438_2014-05-28_08:32:46.patch, KAFKA-1438_2014-05-28_08:36:28.patch, 
 KAFKA-1438_2014-05-28_08:40:22.patch, KAFKA-1438_2014-05-30_11:36:01.patch, 
 KAFKA-1438_2014-05-30_11:38:46.patch, KAFKA-1438_2014-05-30_11:42:32.patch


 Currently the console/perf client tools scatter across different packages, 
 we'd better to:
 1. Move Consumer/ProducerPerformance and SimpleConsumerPerformance to tools 
 and remove the perf sub-project.
 2. Move ConsoleConsumer from kafka.consumer to kafka.tools.
 3. Move other consumer related tools from kafka.consumer to kafka.tools.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (KAFKA-1419) cross build for scala 2.11

2014-08-25 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-1419:


Attachment: KAFKA-1419-scalaBinaryVersion.diff

Attaching again [^KAFKA-1419-scalaBinaryVersion.diff]

 cross build for scala 2.11
 --

 Key: KAFKA-1419
 URL: https://issues.apache.org/jira/browse/KAFKA-1419
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 0.8.1
Reporter: Scott Clasen
Assignee: Ivan Lyutov
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1419-scalaBinaryVersion.diff, 
 KAFKA-1419-scalaBinaryVersion.diff, KAFKA-1419.patch, KAFKA-1419.patch, 
 KAFKA-1419_2014-07-28_15:05:16.patch, KAFKA-1419_2014-07-29_15:13:43.patch, 
 KAFKA-1419_2014-08-04_14:43:26.patch, KAFKA-1419_2014-08-05_12:51:16.patch, 
 KAFKA-1419_2014-08-07_10:17:34.patch, KAFKA-1419_2014-08-07_10:52:18.patch


 Please publish builds for scala 2.11, hopefully just needs a small tweak to 
 the gradle conf?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (KAFKA-1419) cross build for scala 2.11

2014-08-25 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-1419:


Attachment: KAFKA-1419-scalaBinaryVersion.patch

Attaching [^KAFKA-1419-scalaBinaryVersion.patch]

 cross build for scala 2.11
 --

 Key: KAFKA-1419
 URL: https://issues.apache.org/jira/browse/KAFKA-1419
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 0.8.1
Reporter: Scott Clasen
Assignee: Ivan Lyutov
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1419-scalaBinaryVersion.diff, 
 KAFKA-1419-scalaBinaryVersion.diff, KAFKA-1419-scalaBinaryVersion.patch, 
 KAFKA-1419.patch, KAFKA-1419.patch, KAFKA-1419_2014-07-28_15:05:16.patch, 
 KAFKA-1419_2014-07-29_15:13:43.patch, KAFKA-1419_2014-08-04_14:43:26.patch, 
 KAFKA-1419_2014-08-05_12:51:16.patch, KAFKA-1419_2014-08-07_10:17:34.patch, 
 KAFKA-1419_2014-08-07_10:52:18.patch


 Please publish builds for scala 2.11, hopefully just needs a small tweak to 
 the gradle conf?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Issue Comment Deleted] (KAFKA-1419) cross build for scala 2.11

2014-08-25 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-1419:


Comment: was deleted

(was: Attaching again [^KAFKA-1419-scalaBinaryVersion.diff])

 cross build for scala 2.11
 --

 Key: KAFKA-1419
 URL: https://issues.apache.org/jira/browse/KAFKA-1419
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 0.8.1
Reporter: Scott Clasen
Assignee: Ivan Lyutov
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1419-scalaBinaryVersion.patch, KAFKA-1419.patch, 
 KAFKA-1419.patch, KAFKA-1419_2014-07-28_15:05:16.patch, 
 KAFKA-1419_2014-07-29_15:13:43.patch, KAFKA-1419_2014-08-04_14:43:26.patch, 
 KAFKA-1419_2014-08-05_12:51:16.patch, KAFKA-1419_2014-08-07_10:17:34.patch, 
 KAFKA-1419_2014-08-07_10:52:18.patch


 Please publish builds for scala 2.11, hopefully just needs a small tweak to 
 the gradle conf?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (KAFKA-1419) cross build for scala 2.11

2014-08-25 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-1419:


Attachment: (was: KAFKA-1419-scalaBinaryVersion.diff)

 cross build for scala 2.11
 --

 Key: KAFKA-1419
 URL: https://issues.apache.org/jira/browse/KAFKA-1419
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 0.8.1
Reporter: Scott Clasen
Assignee: Ivan Lyutov
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1419-scalaBinaryVersion.patch, KAFKA-1419.patch, 
 KAFKA-1419.patch, KAFKA-1419_2014-07-28_15:05:16.patch, 
 KAFKA-1419_2014-07-29_15:13:43.patch, KAFKA-1419_2014-08-04_14:43:26.patch, 
 KAFKA-1419_2014-08-05_12:51:16.patch, KAFKA-1419_2014-08-07_10:17:34.patch, 
 KAFKA-1419_2014-08-07_10:52:18.patch


 Please publish builds for scala 2.11, hopefully just needs a small tweak to 
 the gradle conf?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (KAFKA-1419) cross build for scala 2.11

2014-08-25 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-1419:


Attachment: (was: KAFKA-1419-scalaBinaryVersion.diff)

 cross build for scala 2.11
 --

 Key: KAFKA-1419
 URL: https://issues.apache.org/jira/browse/KAFKA-1419
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 0.8.1
Reporter: Scott Clasen
Assignee: Ivan Lyutov
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1419-scalaBinaryVersion.patch, KAFKA-1419.patch, 
 KAFKA-1419.patch, KAFKA-1419_2014-07-28_15:05:16.patch, 
 KAFKA-1419_2014-07-29_15:13:43.patch, KAFKA-1419_2014-08-04_14:43:26.patch, 
 KAFKA-1419_2014-08-05_12:51:16.patch, KAFKA-1419_2014-08-07_10:17:34.patch, 
 KAFKA-1419_2014-08-07_10:52:18.patch


 Please publish builds for scala 2.11, hopefully just needs a small tweak to 
 the gradle conf?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (KAFKA-1419) cross build for scala 2.11

2014-08-25 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14108832#comment-14108832
 ] 

Stevo Slavic commented on KAFKA-1419:
-

I saw binary content too when opening the attached file with the .diff extension. 
I have now attached the same change with a .patch extension, and it shows up as a 
text file. Locally the .diff file also opens as plain text, so JIRA may be doing 
something odd with the .diff content type.

Anyway, please use the attached patch.

Also, please consider accepting GitHub pull requests, as many other Apache 
projects already do (just [google apache project pull 
request|https://www.google.com/search?q=apache+project+pull+request]). That 
would make things easier for both committers and contributors, and would avoid 
issues like this one.

 cross build for scala 2.11
 --

 Key: KAFKA-1419
 URL: https://issues.apache.org/jira/browse/KAFKA-1419
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 0.8.1
Reporter: Scott Clasen
Assignee: Ivan Lyutov
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1419-scalaBinaryVersion.patch, KAFKA-1419.patch, 
 KAFKA-1419.patch, KAFKA-1419_2014-07-28_15:05:16.patch, 
 KAFKA-1419_2014-07-29_15:13:43.patch, KAFKA-1419_2014-08-04_14:43:26.patch, 
 KAFKA-1419_2014-08-05_12:51:16.patch, KAFKA-1419_2014-08-07_10:17:34.patch, 
 KAFKA-1419_2014-08-07_10:52:18.patch


 Please publish builds for scala 2.11, hopefully just needs a small tweak to 
 the gradle conf?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (KAFKA-1438) Migrate kafka client tools

2014-08-22 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106864#comment-14106864
 ] 

Stevo Slavic commented on KAFKA-1438:
-

The Windows batch scripts {{kafka-console-consumer.bat}} and 
{{kafka-console-producer.bat}} haven't been updated with the new 
{{ConsoleConsumer}} and {{ConsoleProducer}} package paths.
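
For illustration, a minimal sketch of the kind of one-line change involved, 
shown as the Unix wrapper invocation that the Windows wrappers would need to 
mirror; the exact wrapper contents are an assumption here, only the 
{{kafka.consumer}} to {{kafka.tools}} package move comes from this issue:

{noformat}
#!/bin/bash
# Hypothetical sketch of the class-name change (shown for the .sh wrapper;
# the .bat wrappers need the equivalent edit). Only the package move
# kafka.consumer.ConsoleConsumer -> kafka.tools.ConsoleConsumer comes from
# this issue; the surrounding invocation line is illustrative.
#
# old invocation:
#   exec $(dirname $0)/kafka-run-class.sh kafka.consumer.ConsoleConsumer "$@"
# new invocation:
exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleConsumer "$@"
{noformat}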

 Migrate kafka client tools
 --

 Key: KAFKA-1438
 URL: https://issues.apache.org/jira/browse/KAFKA-1438
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Sriharsha Chintalapani
  Labels: newbie, tools, usability
 Fix For: 0.8.2

 Attachments: KAFKA-1438.patch, KAFKA-1438.patch, 
 KAFKA-1438_2014-05-27_11:45:29.patch, KAFKA-1438_2014-05-27_12:16:00.patch, 
 KAFKA-1438_2014-05-27_17:08:59.patch, KAFKA-1438_2014-05-28_08:32:46.patch, 
 KAFKA-1438_2014-05-28_08:36:28.patch, KAFKA-1438_2014-05-28_08:40:22.patch, 
 KAFKA-1438_2014-05-30_11:36:01.patch, KAFKA-1438_2014-05-30_11:38:46.patch, 
 KAFKA-1438_2014-05-30_11:42:32.patch


 Currently the console/perf client tools are scattered across different 
 packages; we should:
 1. Move Consumer/ProducerPerformance and SimpleConsumerPerformance to tools 
 and remove the perf sub-project.
 2. Move ConsoleConsumer from kafka.consumer to kafka.tools.
 3. Move other consumer related tools from kafka.consumer to kafka.tools.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (KAFKA-1438) Migrate kafka client tools

2014-08-22 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-1438:


Attachment: KAFKA-1438-windows_bat.patch

Attached [^KAFKA-1438-windows_bat.patch], which fixes the Windows batch files 
issue. The patch assumes that the KAFKA-1419 patch for the similar Windows batch 
files issue has been applied first - otherwise this change cannot be tested.

 Migrate kafka client tools
 --

 Key: KAFKA-1438
 URL: https://issues.apache.org/jira/browse/KAFKA-1438
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Sriharsha Chintalapani
  Labels: newbie, tools, usability
 Fix For: 0.8.2

 Attachments: KAFKA-1438-windows_bat.patch, KAFKA-1438.patch, 
 KAFKA-1438.patch, KAFKA-1438_2014-05-27_11:45:29.patch, 
 KAFKA-1438_2014-05-27_12:16:00.patch, KAFKA-1438_2014-05-27_17:08:59.patch, 
 KAFKA-1438_2014-05-28_08:32:46.patch, KAFKA-1438_2014-05-28_08:36:28.patch, 
 KAFKA-1438_2014-05-28_08:40:22.patch, KAFKA-1438_2014-05-30_11:36:01.patch, 
 KAFKA-1438_2014-05-30_11:38:46.patch, KAFKA-1438_2014-05-30_11:42:32.patch


 Currently the console/perf client tools are scattered across different 
 packages; we should:
 1. Move Consumer/ProducerPerformance and SimpleConsumerPerformance to tools 
 and remove the perf sub-project.
 2. Move ConsoleConsumer from kafka.consumer to kafka.tools.
 3. Move other consumer related tools from kafka.consumer to kafka.tools.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (KAFKA-1419) cross build for scala 2.11

2014-08-21 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105299#comment-14105299
 ] 

Stevo Slavic commented on KAFKA-1419:
-

{{bin/windows/kafka-run-class.bat}} still references 2.8.0.

Both {{kafka-run-class.sh}} and {{kafka-run-class.bat}} are broken for Scala 
versions whose full version differs from the binary version, e.g. 2.10.1, whose 
binary version is 2.10.

More specifically, the classpath addition for the kafka core jar is wrong.
In {{kafka-run-class.bat}} instead of
{noformat}
%BASE_DIR%\core\build\libs\kafka_%SCALA_VERSION%*.jar
{noformat}

there should be something like:

{noformat}
%BASE_DIR%\core\build\libs\kafka_%SCALA_BINARY_VERSION%*.jar
{noformat}

Similarly, in {{kafka-run-class.sh}} instead of
{noformat}
for file in $base_dir/core/build/libs/kafka_${SCALA_VERSION}*.jar;
do
  CLASSPATH=$CLASSPATH:$file
done
{noformat}

there should be something like
{noformat}
for file in $base_dir/core/build/libs/kafka_${SCALA_BINARY_VERSION}*.jar;
do
  CLASSPATH=$CLASSPATH:$file
done
{noformat}



This will require adding one more variable for the Scala binary version to both 
of the mentioned scripts.

e.g. in {{kafka-run-class.sh}} from

{noformat}
if [ -z $SCALA_VERSION ]; then
SCALA_VERSION=2.10.1
fi
{noformat}

to

{noformat}
if [ -z $SCALA_VERSION ]; then
SCALA_VERSION=2.10.1
fi

if [ -z $SCALA_BINARY_VERSION ]; then
SCALA_BINARY_VERSION=2.10
fi
{noformat}

and in {{kafka-run-class.bat}}, from

{noformat}
IF [%SCALA_VERSION%] EQU [] (
  set SCALA_VERSION=2.10.1
)
{noformat}

to

{noformat}
IF [%SCALA_VERSION%] EQU [] (
  set SCALA_VERSION=2.10.1
)

IF [%SCALA_BINARY_VERSION%] EQU [] (
  set SCALA_BINARY_VERSION=2.10
)
{noformat}
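
As a possible refinement, not part of the snippets above and purely a 
suggestion, {{kafka-run-class.sh}} could derive the binary version from the 
full version instead of hard-coding both defaults. A minimal bash sketch, 
assuming the same variable names as above:

{noformat}
# Hypothetical alternative for kafka-run-class.sh: derive the binary version
# from the full version so only one default needs to be maintained.
if [ -z "$SCALA_VERSION" ]; then
  SCALA_VERSION=2.10.1
fi

if [ -z "$SCALA_BINARY_VERSION" ]; then
  # drop the last ".x" component, e.g. 2.10.1 -> 2.10, 2.11.0 -> 2.11
  SCALA_BINARY_VERSION=${SCALA_VERSION%.*}
fi
{noformat}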

 cross build for scala 2.11
 --

 Key: KAFKA-1419
 URL: https://issues.apache.org/jira/browse/KAFKA-1419
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 0.8.1
Reporter: Scott Clasen
Assignee: Ivan Lyutov
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1419.patch, KAFKA-1419.patch, 
 KAFKA-1419_2014-07-28_15:05:16.patch, KAFKA-1419_2014-07-29_15:13:43.patch, 
 KAFKA-1419_2014-08-04_14:43:26.patch, KAFKA-1419_2014-08-05_12:51:16.patch, 
 KAFKA-1419_2014-08-07_10:17:34.patch, KAFKA-1419_2014-08-07_10:52:18.patch


 Please publish builds for scala 2.11, hopefully just needs a small tweak to 
 the gradle conf?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

