Re: [DISCUSS] KIP-49: Fair Partition Assignment Strategy

2016-02-26 Thread Joel Koshy
Hi Andrew,

Thanks for the wiki. Just a couple of comments:

   - The disruptive config change issue that you mentioned is pretty much a
   non-issue in the new consumer due to central assignment.
   - Optional, but it may be helpful to add a concrete example.
   - More of an orthogonal observation than a comment: with heavily skewed
   subscriptions, fairness is sort of moot, i.e., people generally scale
   subscription counts up or down with the express purpose of
   reducing/increasing load on those instances.
   - WRT roundrobin, we later realized a significant flaw in the way we lay
   out partitions: we originally wanted to randomize the partition layout to
   reduce the likelihood of most partitions of the same topic ending up on a
   given consumer, which is important if you have a few very large topics.
   Unfortunately we used hashCode - which does a splendid job of clumping
   partitions from the same topic together :( (see the sketch below). We can
   probably just "fix" that in the new consumer's roundrobin assignor.
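
To illustrate the clumping, here is a minimal sketch (plain String hash
codes stand in for the actual topic-partition hashCode, and the topic names
are made up):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.List;

    public class HashCodeLayoutDemo {
        public static void main(String[] args) {
            List<String> layout = new ArrayList<>();
            for (String topic : Arrays.asList("big-topic-a", "big-topic-b"))
                for (int p = 0; p < 8; p++)
                    layout.add(topic + "-" + p);
            // Sorting by hashCode was meant to "randomize" the layout...
            layout.sort(Comparator.comparingInt(String::hashCode));
            // ...but strings differing only in the trailing partition number
            // have nearly identical hash codes, so all partitions of a topic
            // end up adjacent rather than interleaved across consumers.
            System.out.println(layout);
        }
    }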

Thanks,

Joel


On Fri, Feb 26, 2016 at 2:32 PM, Olson,Andrew  wrote:

> Here is a proposal for a new partition assignment strategy,
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-49+-+Fair+Partition+Assignment+Strategy
>
> This KIP corresponds to these two pending pull requests,
> https://github.com/apache/kafka/pull/146
> https://github.com/apache/kafka/pull/979
>
> thanks,
> Andrew
>
>


Re: [DISCUSS] KIP-48 Support for delegation tokens as an authentication mechanism

2016-02-26 Thread Ashish Singh
Parth,

Thanks again for the awesome write-up. Following our discussion on the
JIRA, I think it will be easier to compare the various alternatives if they
are listed together. I am stating below a few alternatives along with the
current proposal.
(Current proposal) Store the Delegation Token (DT) in ZK.

   1. Client authenticates with a broker.
   2. Once a client is authenticated, it will make a broker side call to
   issue a delegation token.
   3. The broker generates a shared secret based on HMAC-SHA256(a
   Password/Secret shared between all brokers, randomly generated tokenId);
   a minimal sketch follows this list.
   4. The broker stores this token in its in-memory cache. The broker also
   stores the DelegationToken, without the hmac, in ZooKeeper.
   5. All brokers will have a cache backed by ZooKeeper, so they will all
   be notified whenever a new token is generated and will update their
   local cache whenever token state changes.
   6. Broker returns the token to Client.
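
As a rough sketch of step 3, assuming HMAC-SHA256 from javax.crypto (the
secret value and the tokenId format below are illustrative, not from the
KIP):

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;
    import java.util.UUID;

    public class DelegationTokenHmacSketch {
        // HMAC-SHA256(secret shared between all brokers, randomly generated
        // tokenId), as in step 3 above.
        static byte[] hmac(byte[] masterSecret, String tokenId) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(masterSecret, "HmacSHA256"));
            return mac.doFinal(tokenId.getBytes(StandardCharsets.UTF_8));
        }

        public static void main(String[] args) throws Exception {
            byte[] masterSecret = "password-shared-by-all-brokers".getBytes(StandardCharsets.UTF_8);
            String tokenId = UUID.randomUUID().toString();
            System.out.println(Base64.getEncoder().encodeToString(hmac(masterSecret, tokenId)));
        }
    }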

Probable issues and fixes

   1. Probable race condition: a client tries to authenticate with a broker
   that is yet to be updated with the newly generated DT. This can probably
   be dealt with by making the DT request block until all brokers have
   updated their DT cache; a ZK barrier or similar mechanism can be used.
   However, all such mechanisms will increase complexity.
   2. Using a static secret key from the config file requires yet another
   config and relies on a static secret key, whereas it is advisable to
   rotate secret keys periodically. This can be avoided by having the
   controller generate the secretKey and pass it to brokers periodically.
   However, this will require brokers to maintain a certain number of
   secretKeys.

(Alternative 1) Have controller generate delegation token.

   1. Client authenticates with a broker.
   2. Once a client is authenticated, it will make a broker side call to
   issue a delegation token.
   3. Broker forwards the request to controller.
   4. Controller generates a DT and broadcasts to all brokers.
   5. The broker stores this token in its in-memory cache.
   6. The controller responds to the broker’s DT request.
   7. Broker returns the token to Client.

Probable issues and fixes

   1. We will have to add new APIs to support controller pushing tokens to
   brokers on top of the minimal APIs that are currently proposed.
   2. We will also have to add APIs to support the bootstrapping case, i.e.,
   when a new broker comes up, it will have to get all delegation tokens from
   the controller.
   3. In catastrophic failures where all brokers go down, the tokens will
   be lost even if servers are restarted as tokens are not persisted anywhere.
   If this happens, then there are more important things to worry about and
   maybe it is better to re-authenticate.

(Alternative 2) Do not distribute DT to other brokers at all.

   1. Client authenticates with a broker.
   2. Once a client is authenticated, it will make a broker side call to
   issue a delegation token.
   3. The broker generates a DT of the form [hmac + (owner, renewer,
   maxLifeTime, id, hmac, expirationTime)] and passes this DT back to the
   client. The hmac is generated via {HMAC-SHA256(owner, renewer, maxLifeTime,
   id, hmac, expirationTime) using SecretKey}. Note that all brokers have this
   SecretKey.
   4. The client then goes to any broker and sends the DT to authenticate.
   The broker recalculates the hmac using the (owner, renewer, maxLifeTime,
   id, hmac, expirationTime) info from the DT and its SecretKey. If it matches
   the hmac of the DT, the client is authenticated. Yes, the broker will also
   do the other obvious checks such as timestamp expiry (a rough sketch
   follows below).

Note that the secret key will be generated by the controller and passed to
the brokers periodically.
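
As a rough sketch of the stateless check in step 4 (the field serialization
is illustrative, the hmac field is left out of its own input, and
MessageDigest.isEqual gives a constant-time comparison):

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class TokenValidationSketch {
        static boolean validate(byte[] secretKey, String owner, String renewer,
                                long maxLifeTime, String id, long expirationTime,
                                byte[] presentedHmac) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
            // Recalculate the hmac over the token's cleartext fields.
            String fields = owner + "|" + renewer + "|" + maxLifeTime + "|" + id
                    + "|" + expirationTime;
            byte[] expected = mac.doFinal(fields.getBytes(StandardCharsets.UTF_8));
            // Constant-time comparison plus the obvious expiry check.
            return MessageDigest.isEqual(expected, presentedHmac)
                    && System.currentTimeMillis() < expirationTime;
        }
    }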
Probable issues and fixes

   1. How to delete a DT? Yes, that is a downside here. However, this can
   be handled by brokers maintaining a blacklist of revoked DTs; entries
   can be removed from this list after expiry.
   2. In catastrophic failures where all brokers go down, the tokens will
   be lost even if servers are restarted as tokens are not persisted anywhere.
   If this happens, then there are more important things to worry about and
   maybe it is better to re-authenticate.


On Fri, Feb 26, 2016 at 1:58 PM, Parth Brahmbhatt <
pbrahmbh...@hortonworks.com> wrote:

> Hi,
>
> I have filed KIP-48 so we can offer Hadoop-like delegation tokens in
> Kafka. You can review the design at
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka.
> This KIP depends on KIP-43 and we have also discussed an alternative to the
> proposed design here<
> https://issues.apache.org/jira/browse/KAFKA-1696?focusedCommentId=15167800&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15167800
> >.
>
> Thanks
> Parth
>



-- 

Regards,
Ashish


Build failed in Jenkins: kafka-trunk-jdk7 #1071

2016-02-26 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3278: concatenate thread name to clientId when producer and

--
[...truncated 1907 lines...]
at kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:65)
at kafka.zk.ZKPathTest.setUp(ZKPathTest.scala:26)

kafka.zk.ZKPathTest > testCreatePersistentSequentialExists FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.<init>(EmbeddedZookeeper.scala:36)
at kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:65)
at kafka.zk.ZKPathTest.setUp(ZKPathTest.scala:26)

kafka.zk.ZKPathTest > testCreateEphemeralPathExists FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.<init>(EmbeddedZookeeper.scala:36)
at kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:65)
at kafka.zk.ZKPathTest.setUp(ZKPathTest.scala:26)

kafka.zk.ZKPathTest > testCreatePersistentPath FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.<init>(EmbeddedZookeeper.scala:36)
at kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:65)
at kafka.zk.ZKPathTest.setUp(ZKPathTest.scala:26)

kafka.zk.ZKPathTest > testMakeSurePersistsPathExistsThrowsException FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.<init>(EmbeddedZookeeper.scala:36)
at kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:65)
at kafka.zk.ZKPathTest.setUp(ZKPathTest.scala:26)

kafka.zk.ZKPathTest > testCreateEphemeralPathThrowsException FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.<init>(EmbeddedZookeeper.scala:36)
at kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:65)
at kafka.zk.ZKPathTest.setUp(ZKPathTest.scala:26)

kafka.zk.ZKPathTest > testCreatePersistentPathThrowsException FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at 

Re: Upgrading to Kafka 0.9.x

2016-02-26 Thread Joel Koshy
The 0.9 release still has the old consumer, as Jay mentioned, but this
specific release is a little unusual in that it also provides a completely
new consumer client.

Based on what I understand, users of Kafka need to upgrade their brokers to
> Kafka 0.9.x first, before they upgrade their clients to Kafka 0.9.x.



However, that presents a problem to other projects that integrate with
> Kafka (Spark, Flume, Storm, etc.).


This is true and we faced a similar issue at LinkedIn - there are scenarios
where it is useful/necessary to allow the client to be upgraded before the
broker. This improvement can help with that, although if users want to
leverage newer server-side features they would obviously need to upgrade the
brokers.
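
For reference, here is a minimal sketch of the manual assignment/seek API in
the new 0.9 consumer discussed in this thread (bootstrap address, group id,
topic, and offset are placeholders):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class NewConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "example-group");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                TopicPartition tp = new TopicPartition("Topic1", 0);
                // Low-level control: no ZK access, no group rebalance.
                consumer.assign(Collections.singletonList(tp));
                consumer.seek(tp, 42L); // explicit offset control
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records)
                    System.out.printf("%s-%d@%d: %s%n", record.topic(),
                            record.partition(), record.offset(), record.value());
            }
        }
    }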

Thanks,

Joel

On Fri, Feb 26, 2016 at 4:22 PM, Mark Grover  wrote:

> Thanks Jay. Yeah, if we were able to use the old consumer API from 0.9
> clients to work with 0.8 brokers that would have been super helpful here. I
> am just trying to avoid a scenario where Spark cares about new features
> from every new major release of Kafka (which is a good thing) but ends up
> having to keep multiple profiles/artifacts for it - one for 0.8.x, one for
> 0.9.x and another one, once 0.10.x gets released.
>
> So, anything that the Kafka community can do to alleviate the situation
> down the road would be great. Thanks again!
>
> On Fri, Feb 26, 2016 at 11:36 AM, Jay Kreps  wrote:
>
> > Hey, yeah, we'd really like to make this work well for you guys.
> >
> > I think there are actually maybe two questions here:
> > 1. How should this work in steady state?
> > 2. Given that there was a major reworking of the kafka consumer java
> > library for 0.9 how does that impact things right now? (
> >
> >
> http://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0.9-consumer-client
> > )
> >
> > Quick recap of how we do compatibility, just so everyone is on the same
> > page:
> > 1. The protocol is versioned and the cluster supports multiple versions.
> > 2. As we evolve Kafka we always continue to support older versions of the
> > protocol and hence older clients continue to work with newer Kafka
> versions.
> > 3. In general we don't try to have the clients support older versions of
> > Kafka since, after all, the whole point of the new client is to add
> > features which often require those features to be in the broker.
> >
> > So I think in steady state the answer is to choose a conservative version
> > to build against and it's on us to keep that working as Kafka evolves. As
> > always there will be some tradeoff between using the newest features and
> > being compatible with old stuff.
> >
> > But that steady state question ignores the fact that we did a complete
> > rewrite of the consumer in 0.9. The old consumer is still there,
> supported,
> > and still works as before but the new consumer is the path forward and
> what
> > we are adding features to. At some point you will want to migrate to this
> > new api, which will be a non-trivial change to your code.
> >
> > This api has a couple of advantages for you guys (1) it supports
> security,
> > (2) It allows low-level control over partition assignment and offsets
> > without the crazy fiddliness of the old "simple consumer" api, (3) it no
> > longer directly accesses ZK, (4) no scala dependency and no dependency on
> > Kafka core. I think all four of these should be desirable for Spark et
> al.
> >
> > One thing we could discuss is the possibility of doing forwards and
> > backwards compatibility in the clients. I'm not sure this would actually
> > make things better, that would probably depend on the details of how it
> > worked.
> >
> > -Jay
> >
> >
> > On Fri, Feb 26, 2016 at 9:46 AM, Mark Grover  wrote:
> >
> > > Hi Kafka devs,
> > > I come to you with a dilemma and a request.
> > >
> > > Based on what I understand, users of Kafka need to upgrade their
> brokers
> > to
> > > Kafka 0.9.x first, before they upgrade their clients to Kafka 0.9.x.
> > >
> > > However, that presents a problem to other projects that integrate with
> > > Kafka (Spark, Flume, Storm, etc.). From here on, I will speak for
> Spark +
> > > Kafka, since that's the one I am most familiar with.
> > >
> > > In the light of compatibility (or the lack thereof) between 0.8.x and
> > > 0.9.x, Spark is faced with a problem of what version(s) of Kafka to be
> > > compatible with, and has 2 options (discussed in this PR
> > > ):
> > > 1. We either upgrade to Kafka 0.9, dropping support for 0.8. Storm and
> > > Flume are already on this path.
> > > 2. We introduce complexity in our code to support both 0.8 and 0.9 for
> > the
> > > entire duration of our next major release (Apache Spark 2.x).
> > >
> > > I'd love to hear your thoughts on which option, you recommend.
> > >
> > > Long 

Build failed in Jenkins: kafka-trunk-jdk8 #398

2016-02-26 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3278: concatenate thread name to clientId when producer and

--
[...truncated 1512 lines...]

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testTopicMetadataRequest 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.message.MessageCompressionTest > testCompressSize PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.MessageWriterTest > testWithNoCompressionAttribute PASSED

kafka.message.MessageWriterTest > testWithCompressionAttribute PASSED

kafka.message.MessageWriterTest > testBufferingOutputStream PASSED

kafka.message.MessageWriterTest > testWithKey PASSED

kafka.message.MessageTest > testChecksum PASSED

kafka.message.MessageTest > testInvalidTimestamp PASSED

kafka.message.MessageTest > testIsHashable PASSED

kafka.message.MessageTest > testInvalidTimestampAndMagicValueCombination PASSED

kafka.message.MessageTest > testExceptionMapping PASSED

kafka.message.MessageTest > testFieldValues PASSED

kafka.message.MessageTest > testInvalidMagicByte PASSED

kafka.message.MessageTest > testEquality PASSED

kafka.message.MessageTest > testMessageFormatConversion PASSED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytes PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression PASSED

kafka.message.ByteBufferMessageSetTest > 
testOffsetAssignmentAfterMessageFormatConversion PASSED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testAbsoluteOffsetAssignment PASSED

kafka.message.ByteBufferMessageSetTest > testCreateTime PASSED

kafka.message.ByteBufferMessageSetTest > testInvalidCreateTime PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testLogAppendTime PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED


[GitHub] kafka pull request: MINOR: Add vagrant up wrapper for simple paral...

2016-02-26 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/982

MINOR: Add vagrant up wrapper for simple parallel bringup on aws

The main impediment to bringing up aws machines in parallel using vagrant 
was the interaction between `vagrant-hostmanager` and `vagrant-aws`. If you 
disable hostmanager during the `up` phase, and run it after the cluster is up, 
parallel bringup is possible. The only caveat is that machines must be brought 
up in small-ish batches to prevent rate limit errors from AWS since 
`vagrant-aws` doesn't seem to have a mechanism to throttle its requests.

This PR:
- disables `vagrant-hostmanager` during bringup
- adds a wrapper script to make it convenient to bring machines up in 
batches on aws

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka vagrant-disable-hostmanager

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/982.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #982


commit f94aa2caf483d6d53becc2b13eb38198a349df57
Author: Geoff Anderson 
Date:   2016-02-26T22:33:56Z

Add vagrant up wrapper for easier parallel brinup on aws




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Upgrading to Kafka 0.9.x

2016-02-26 Thread Mark Grover
Thanks Jay. Yeah, if we were able to use the old consumer API from 0.9
clients to work with 0.8 brokers that would have been super helpful here. I
am just trying to avoid a scenario where Spark cares about new features
from every new major release of Kafka (which is a good thing) but ends up
having to keep multiple profiles/artifacts for it - one for 0.8.x, one for
0.9.x and another one, once 0.10.x gets released.

So, anything that the Kafka community can do to alleviate the situation
down the road would be great. Thanks again!

On Fri, Feb 26, 2016 at 11:36 AM, Jay Kreps  wrote:

> Hey, yeah, we'd really like to make this work well for you guys.
>
> I think there are actually maybe two questions here:
> 1. How should this work in steady state?
> 2. Given that there was a major reworking of the kafka consumer java
> library for 0.9 how does that impact things right now? (
>
> http://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0.9-consumer-client
> )
>
> Quick recap of how we do compatibility, just so everyone is on the same
> page:
> 1. The protocol is versioned and the cluster supports multiple versions.
> 2. As we evolve Kafka we always continue to support older versions of the
> protocol and hence older clients continue to work with newer Kafka versions.
> 3. In general we don't try to have the clients support older versions of
> Kafka since, after all, the whole point of the new client is to add
> features which often require those features to be in the broker.
>
> So I think in steady state the answer is to choose a conservative version
> to build against and it's on us to keep that working as Kafka evolves. As
> always there will be some tradeoff between using the newest features and
> being compatible with old stuff.
>
> But that steady state question ignores the fact that we did a complete
> rewrite of the consumer in 0.9. The old consumer is still there, supported,
> and still works as before but the new consumer is the path forward and what
> we are adding features to. At some point you will want to migrate to this
> new api, which will be a non-trivial change to your code.
>
> This api has a couple of advantages for you guys (1) it supports security,
> (2) It allows low-level control over partition assignment and offsets
> without the crazy fiddliness of the old "simple consumer" api, (3) it no
> longer directly accesses ZK, (4) no scala dependency and no dependency on
> Kafka core. I think all four of these should be desirable for Spark et al.
>
> One thing we could discuss is the possibility of doing forwards and
> backwards compatibility in the clients. I'm not sure this would actually
> make things better, that would probably depend on the details of how it
> worked.
>
> -Jay
>
>
> On Fri, Feb 26, 2016 at 9:46 AM, Mark Grover  wrote:
>
> > Hi Kafka devs,
> > I come to you with a dilemma and a request.
> >
> > Based on what I understand, users of Kafka need to upgrade their brokers
> to
> > Kafka 0.9.x first, before they upgrade their clients to Kafka 0.9.x.
> >
> > However, that presents a problem to other projects that integrate with
> > Kafka (Spark, Flume, Storm, etc.). From here on, I will speak for Spark +
> > Kafka, since that's the one I am most familiar with.
> >
> > In the light of compatibility (or the lack thereof) between 0.8.x and
> > 0.9.x, Spark is faced with a problem of what version(s) of Kafka to be
> > compatible with, and has 2 options (discussed in this PR
> > ):
> > 1. We either upgrade to Kafka 0.9, dropping support for 0.8. Storm and
> > Flume are already on this path.
> > 2. We introduce complexity in our code to support both 0.8 and 0.9 for
> the
> > entire duration of our next major release (Apache Spark 2.x).
> >
> > I'd love to hear your thoughts on which option, you recommend.
> >
> > Long term, I'd really appreciate it if Kafka could do something that
> > doesn't make Spark have to support two, or even more, versions of Kafka.
> > And, if
> > there is something that I, personally, and Spark project can do in your
> > next release candidate phase to make things easier, please do let us
> know.
> >
> > Thanks!
> > Mark
> >
>


[jira] [Commented] (KAFKA-3299) KafkaConnect: DistributedHerder shouldn't wait forever to read configs after rebalance

2016-02-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170172#comment-15170172
 ] 

ASF GitHub Bot commented on KAFKA-3299:
---

GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/981

KAFKA-3299: Ensure that reading config log on rebalance doesn't hang the 
herder



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-3299

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/981.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #981


commit 319672d563f31fb7df85645fd802acc4e1151f75
Author: Gwen Shapira 
Date:   2016-02-27T00:07:38Z

Ensure that reading config log on rebalance doesn't hang the herder




> KafkaConnect: DistributedHerder shouldn't wait forever to read configs after 
> rebalance
> --
>
> Key: KAFKA-3299
> URL: https://issues.apache.org/jira/browse/KAFKA-3299
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Right now, the handleRebalance code calls readConfigToEnd with timeout of 
> MAX_INT if it isn't the leader.
> The normal workerSyncTimeoutMs is probably sufficient.
> At least this allows a worker to time-out, get back to the tick() loop and 
> check the "stopping" flag to see if it should shut down, to prevent it from 
> hanging forever.
> It doesn't resolve the question of what we should do with a worker that 
> repeatedly fails to read configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3299: Ensure that reading config log on ...

2016-02-26 Thread gwenshap
GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/981

KAFKA-3299: Ensure that reading config log on rebalance doesn't hang the 
herder



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-3299

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/981.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #981


commit 319672d563f31fb7df85645fd802acc4e1151f75
Author: Gwen Shapira 
Date:   2016-02-27T00:07:38Z

Ensure that reading config log on rebalance doesn't hang the herder




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #1070

2016-02-26 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3214: Added system tests for compressed topics

--
[...truncated 5600 lines...]

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateShouldOverride PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateRetriableFailure PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeOverridesValueSetBySameWorker PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
readConnectorState PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeConnectorIgnoresStaleStatus PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorState PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetConnectorStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteConnectorStatus PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testStartStop PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testPutTaskConfigs 
PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testRestore PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testSlowTaskStart PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testPollsInBackground 
PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testFailureInPoll PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommitFailure FAILED
java.lang.AssertionError: 
  Expectation failure on verify:
Listener.onStartup(job-0): expected: 1, actual: 1
Listener.onShutdown(job-0): expected: 1, actual: 0
at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
at org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
at org.apache.kafka.connect.runtime.WorkerSourceTaskTest.testCommitFailure(WorkerSourceTaskTest.java:256)

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > 
testSendRecordsConvertsData PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testSendRecordsRetries 
PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionRevocation PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionAssignment PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testPollRedelivery PASSED

org.apache.kafka.connect.runtime.AbstractHerderTest > connectorStatus PASSED

org.apache.kafka.connect.runtime.AbstractHerderTest > taskStatus PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > stopBeforeStarting PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > standardStartup PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > cancelBeforeStopping PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotSynced PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigsConnectorNotFound PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #397

2016-02-26 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3214: Added system tests for compressed topics

--
[...truncated 5025 lines...]
org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameTopic PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testTopicGroupsByStateStore PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithDuplicates PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithWrongParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkConnectedWithMultipleParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithWrongParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testAddStateStore 
PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkConnectedWithParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithNonExistingProcessor PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testLockStateDirectory PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testGetStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testClose PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testChangeLogOffsets PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testNoTopic PASSED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
testTracking PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
testStorePartitions PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > testUpdateKTable 
PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
testUpdateNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > testUpdate PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
PASSED

org.apache.kafka.streams.processor.internals.PartitionGroupTest > 
testTimeTracking PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testStickiness PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithoutStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.AssginmentInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexingTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingStatefulTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testTopologyMetadata PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUnite PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUniteMany 
PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithNewTasks PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStates PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignBasic PASSED


[jira] [Commented] (KAFKA-3201) Add system test for KIP-31 and KIP-32 - Upgrade Test

2016-02-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170091#comment-15170091
 ] 

ASF GitHub Bot commented on KAFKA-3201:
---

GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/980

KAFKA-3201: Added rolling upgrade system tests from 0.8 and 0.9 to 0.10

Three main tests:
1. Setup: Producer (0.8) → Kafka Cluster → Consumer (0.8)
First rolling bounce: Set inter.broker.protocol.version = 0.8 and 
message.format.version = 0.8
Second rolling bounce: use latest (default) inter.broker.protocol.version 
and message.format.version
2. Setup: Producer (0.9) → Kafka Cluster → Consumer (0.9)
First rolling bounce: Set inter.broker.protocol.version = 0.9 and 
message.format.version = 0.9
Second rolling bounce: use latest (default) inter.broker.protocol.version 
and message.format.version
3. Setup: Producer (0.9) → Kafka Cluster → Consumer (0.9)
First rolling bounce: Set inter.broker.protocol.version = 0.9 and 
message.format.version = 0.9
Second rolling bounce: use inter.broker.protocol.version = 0.10 and 
message.format.version = 0.9

Plus a couple of variations of these tests using the old/new consumer and no 
compression / snappy compression. 

Also added optional extra verification to ProduceConsumeValidate test to 
verify that all acks received by producer are successful. 
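
For concreteness, the first-bounce overrides in test 2 would look roughly
like the following (a sketch: I am assuming the broker property names land
as inter.broker.protocol.version and log.message.format.version, and the
values are illustrative):

    import java.util.Properties;

    public class UpgradeBounceConfigSketch {
        // Broker overrides for the first rolling bounce: keep the
        // inter-broker protocol and the message format pinned to the old
        // version while the binaries are upgraded.
        public static Properties firstBounceOverrides() {
            Properties props = new Properties();
            props.put("inter.broker.protocol.version", "0.9.0");
            props.put("log.message.format.version", "0.9.0");
            return props;
        }
    }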

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka-3201-02

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/980.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #980


commit 35e7362e316419675cf6614787fcc2d12fae6e74
Author: Anna Povzner 
Date:   2016-02-26T22:09:14Z

KAFKA-3201: Added rolling upgrade system tests from 0.8 and 0.9 to 0.10

commit 208a50458ecff8ef1bf9b601c1162e796ad7de28
Author: Anna Povzner 
Date:   2016-02-26T22:59:22Z

Upgrade system tests ensure all producer acks are successful

commit dce6ff016c575aae30587c92f71159886158972c
Author: Anna Povzner 
Date:   2016-02-26T23:18:37Z

Using one producer in upgrade test, because --prefixValue is only supported 
in verifiable producer in trunk




> Add system test for KIP-31 and KIP-32 - Upgrade Test
> 
>
> Key: KAFKA-3201
> URL: https://issues.apache.org/jira/browse/KAFKA-3201
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Jiangjie Qin
>Assignee: Anna Povzner
> Fix For: 0.10.0.0
>
>
> This system test should test the procedure to upgrade a Kafka broker from 
> 0.8.x and 0.9.0 to 0.10.0
> The procedure is documented in KIP-32:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-32+-+Add+timestamps+to+Kafka+message



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3201: Added rolling upgrade system tests...

2016-02-26 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/980

KAFKA-3201: Added rolling upgrade system tests from 0.8 and 0.9 to 0.10

Three main tests:
1. Setup: Producer (0.8) → Kafka Cluster → Consumer (0.8)
First rolling bounce: Set inter.broker.protocol.version = 0.8 and 
message.format.version = 0.8
Second rolling bounce: use latest (default) inter.broker.protocol.version 
and message.format.version
2. Setup: Producer (0.9) → Kafka Cluster → Consumer (0.9)
First rolling bounce: Set inter.broker.protocol.version = 0.9 and 
message.format.version = 0.9
Second rolling bounce: use latest (default) inter.broker.protocol.version 
and message.format.version
3. Setup: Producer (0.9) → Kafka Cluster → Consumer (0.9)
First rolling bounce: Set inter.broker.protocol.version = 0.9 and 
message.format.version = 0.9
Second rolling bounce: use inter.broker.protocol.version = 0.10 and 
message.format.version = 0.9

Plus a couple of variations of these tests using the old/new consumer and no 
compression / snappy compression. 

Also added optional extra verification to ProduceConsumeValidate test to 
verify that all acks received by producer are successful. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka-3201-02

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/980.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #980


commit 35e7362e316419675cf6614787fcc2d12fae6e74
Author: Anna Povzner 
Date:   2016-02-26T22:09:14Z

KAFKA-3201: Added rolling upgrade system tests from 0.8 and 0.9 to 0.10

commit 208a50458ecff8ef1bf9b601c1162e796ad7de28
Author: Anna Povzner 
Date:   2016-02-26T22:59:22Z

Upgrade system tests ensure all producer acks are successful

commit dce6ff016c575aae30587c92f71159886158972c
Author: Anna Povzner 
Date:   2016-02-26T23:18:37Z

Using one producer in upgrade test, because --prefixValue is only supported 
in verifiable producer in trunk




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3299) KafkaConnect: DistributedHerder shouldn't wait forever to read configs after rebalance

2016-02-26 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-3299:
---

 Summary: KafkaConnect: DistributedHerder shouldn't wait forever to 
read configs after rebalance
 Key: KAFKA-3299
 URL: https://issues.apache.org/jira/browse/KAFKA-3299
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira


Right now, the handleRebalance code calls readConfigToEnd with timeout of 
MAX_INT if it isn't the leader.

The normal workerSyncTimeoutMs is probably sufficient.

At least this allows a worker to time-out, get back to the tick() loop and 
check the "stopping" flag to see if it should shut down, to prevent it from 
hanging forever.

It doesn't resolve the question of what we should do with a worker that 
repeatedly fails to read configuration.
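
Roughly the intended shape (an illustrative sketch, not the actual
DistributedHerder code; the names and timeout value are placeholders):

    import java.util.concurrent.TimeoutException;

    public class HerderLoopSketch {
        private volatile boolean stopping = false;
        private final long workerSyncTimeoutMs = 3000L; // placeholder value

        public void run() {
            while (!stopping) {
                try {
                    // Bounded read instead of MAX_INT, so a timeout returns
                    // control to the loop.
                    readConfigToEnd(workerSyncTimeoutMs);
                } catch (TimeoutException e) {
                    continue; // loop back and re-check the "stopping" flag
                }
                // ... normal tick() work: rebalance handling, etc. ...
            }
        }

        public void stop() { stopping = true; }

        private void readConfigToEnd(long timeoutMs) throws TimeoutException {
            // placeholder: read the config log up to its current end offset
        }
    }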



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3278) clientId is not unique in producer/consumer registration leads to mbean warning

2016-02-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170045#comment-15170045
 ] 

ASF GitHub Bot commented on KAFKA-3278:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/978


> clientId is not unique in producer/consumer registration leads to mbean 
> warning
> ---
>
> Key: KAFKA-3278
> URL: https://issues.apache.org/jira/browse/KAFKA-3278
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Affects Versions: 0.9.0.1
> Environment: Mac OS
>Reporter: Tom Dearman
>Assignee: Tom Dearman
>Priority: Minor
> Fix For: 0.10.0.0
>
>
> The clientId passed through to StreamThread is not unique, and it is used to 
> create consumers and producers, which in turn try to register mbeans; this 
> leads to a warning that the mbean is already registered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3278 concatenate thread name to clientId...

2016-02-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/978


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3278) clientId is not unique in producer/consumer registration leads to mbean warning

2016-02-26 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3278:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 978
[https://github.com/apache/kafka/pull/978]

> clientId is not unique in producer/consumer registration leads to mbean 
> warning
> ---
>
> Key: KAFKA-3278
> URL: https://issues.apache.org/jira/browse/KAFKA-3278
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Affects Versions: 0.9.0.1
> Environment: Mac OS
>Reporter: Tom Dearman
>Assignee: Tom Dearman
>Priority: Minor
> Fix For: 0.10.0.0
>
>
> The clientId passed through to StreamThread is not unique, and it is used to 
> create consumers and producers, which in turn try to register mbeans; this 
> leads to a warning that the mbean is already registered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[DISCUSS] KIP-49: Fair Partition Assignment Strategy

2016-02-26 Thread Olson,Andrew
Here is a proposal for a new partition assignment strategy,
https://cwiki.apache.org/confluence/display/KAFKA/KIP-49+-+Fair+Partition+Assignment+Strategy

This KIP corresponds to these two pending pull requests,
https://github.com/apache/kafka/pull/146
https://github.com/apache/kafka/pull/979

thanks,
Andrew



[DISCUSS] KIP-48 Support for delegation tokens as an authentication mechanism

2016-02-26 Thread Parth Brahmbhatt
Hi,

I have filed KIP-48 so we can offer Hadoop-like delegation tokens in Kafka. You 
can review the design at 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka.
 This KIP depends on KIP-43 and we have also discussed an alternative to the 
proposed design 
here.

Thanks
Parth


[GitHub] kafka pull request: KAFKA-3214: Added system tests for compressed ...

2016-02-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/958


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3214) Add consumer system tests for compressed topics

2016-02-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169889#comment-15169889
 ] 

ASF GitHub Bot commented on KAFKA-3214:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/958


> Add consumer system tests for compressed topics
> ---
>
> Key: KAFKA-3214
> URL: https://issues.apache.org/jira/browse/KAFKA-3214
> Project: Kafka
>  Issue Type: Test
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Anna Povzner
> Fix For: 0.10.0.0
>
>
> As far as I can tell, we don't have any ducktape tests which verify 
> correctness when compression is enabled. If we did, we might have caught 
> KAFKA-3179 earlier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk7 #1069

2016-02-26 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-3296) All consumer reads hang indefinitely

2016-02-26 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169673#comment-15169673
 ] 

Jason Gustafson commented on KAFKA-3296:


From the log, it looks like the consumer cannot find the coordinator. Is it 
possible that there is a race condition in the integration test with broker 
shutdown?

> All consumer reads hang indefinitely
> 
>
> Key: KAFKA-3296
> URL: https://issues.apache.org/jira/browse/KAFKA-3296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Simon Cooper
>Priority: Critical
>
> We've got several integration tests that bring up systems on VMs for testing. 
> We've recently upgraded to 0.9, and very occasionally we see an 
> issue where every consumer that tries to read from the broker hangs, spamming 
> the following in their logs:
> {code}2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.NetworkClient 
> [pool-10-thread-1] | Sending metadata request 
> ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21905,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537856, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10954 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,857 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537857, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@28edb273,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21906,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537856, sendTimeMs=1456489537856), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21907,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537956, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10955 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,957 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537957, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@40cee8cc,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21908,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537956, sendTimeMs=1456489537956), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21909,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489538056, sendTimeMs=0) to node 1
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10956 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:38,057 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489538057, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@439e25fb,
>  
> 

[jira] [Resolved] (KAFKA-3298) Document unclean.leader.election.enable as a valid topic-level config

2016-02-26 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy resolved KAFKA-3298.
---
Resolution: Duplicate

Thanks for the report. KAFKA-3234 addresses this and other such inconsistencies.

> Document unclean.leader.election.enable as a valid topic-level config
> -
>
> Key: KAFKA-3298
> URL: https://issues.apache.org/jira/browse/KAFKA-3298
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Reporter: Andrew Olson
>Priority: Minor
>
> The http://kafka.apache.org/documentation.html#topic-config section does not 
> currently include {{unclean.leader.election.enable}}. That is a valid 
> topic-level configuration property as demonstrated by this [1] test.
> [1] 
> https://github.com/apache/kafka/blob/0.9.0.1/core/src/test/scala/unit/kafka/integration/UncleanLeaderElectionTest.scala#L127
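
For example, the property can be supplied as a per-topic override at creation time; a minimal sketch, assuming a 0.9-era deployment with ZooKeeper at localhost and a placeholder topic name:

{code}
$ bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic my-topic \
    --partitions 1 --replication-factor 1 \
    --config unclean.leader.election.enable=false
{code}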





[jira] [Updated] (KAFKA-3298) Document unclean.leader.election.enable as a valid topic-level config

2016-02-26 Thread Andrew Olson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Olson updated KAFKA-3298:

Description: 
The http://kafka.apache.org/documentation.html#topic-config section does not 
currently include {{unclean.leader.election.enable}}. That is a valid 
topic-level configuration property as demonstrated by this [1] test.

[1] 
https://github.com/apache/kafka/blob/0.9.0.1/core/src/test/scala/unit/kafka/integration/UncleanLeaderElectionTest.scala#L127

  was:The http://kafka.apache.org/documentation.html#topic-config section does 
not currently include {{unclean.leader.election.enable}}. This is a valid 
topic-level configuration property.


> Document unclean.leader.election.enable as a valid topic-level config
> -
>
> Key: KAFKA-3298
> URL: https://issues.apache.org/jira/browse/KAFKA-3298
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Reporter: Andrew Olson
>Priority: Minor
>
> The http://kafka.apache.org/documentation.html#topic-config section does not 
> currently include {{unclean.leader.election.enable}}. That is a valid 
> topic-level configuration property as demonstrated by this [1] test.
> [1] 
> https://github.com/apache/kafka/blob/0.9.0.1/core/src/test/scala/unit/kafka/integration/UncleanLeaderElectionTest.scala#L127





Re: Upgrading to Kafka 0.9.x

2016-02-26 Thread Jay Kreps
Hey, yeah, we'd really like to make this work well for you guys.

I think there are actually maybe two questions here:
1. How should this work in steady state?
2. Given that there was a major reworking of the Kafka consumer Java
library for 0.9, how does that impact things right now? (
http://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0.9-consumer-client
)

Quick recap of how we do compatibility, just so everyone is on the same
page:
1. The protocol is versioned and the cluster supports multiple versions.
2. As we evolve Kafka we always continue to support older versions of the
protocol and hence older clients continue to work with newer Kafka versions.
3. In general we don't try to have the clients support older versions of
Kafka since, after all, the whole point of the new client is to add
features which often require those features to be in the broker.

So I think in steady state the answer is to choose a conservative version
to build against and it's on us to keep that working as Kafka evolves. As
always there will be some tradeoff between using the newest features and
being compatible with old stuff.

But that steady state question ignores the fact that we did a complete
rewrite of the consumer in 0.9. The old consumer is still there, supported,
and still works as before, but the new consumer is the path forward and what
we are adding features to. At some point you will want to migrate to this
new api, which will be a non-trivial change to your code.

This api has a few advantages for you guys: (1) it supports security,
(2) it allows low-level control over partition assignment and offsets
without the crazy fiddliness of the old "simple consumer" api, (3) it no
longer directly accesses ZK, (4) no scala dependency and no dependency on
Kafka core. I think all four of these should be desirable for Spark et al.
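
To make (2) and (3) concrete, a minimal sketch against the new consumer API; the broker address, topic name, and starting offset below are placeholders:

{code}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ManualAssignExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        TopicPartition tp = new TopicPartition("Topic1", 0);

        // (2)/(3): direct assignment and offset control, with no group
        // coordination and no ZooKeeper access from the client.
        consumer.assign(Collections.singletonList(tp));
        consumer.seek(tp, 0L); // start from an explicit offset

        ConsumerRecords<String, String> records = consumer.poll(1000);
        System.out.println("fetched " + records.count() + " records");
        consumer.close();
    }
}
{code}

This assign/seek pattern is the replacement for the offset management you'd previously have done with the old "simple consumer" api.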

One thing we could discuss is the possibility of doing forwards and
backwards compatibility in the clients. I'm not sure this would actually
make things better; that would probably depend on the details of how it
worked.

-Jay


On Fri, Feb 26, 2016 at 9:46 AM, Mark Grover  wrote:

> Hi Kafka devs,
> I come to you with a dilemma and a request.
>
> Based on what I understand, users of Kafka need to upgrade their brokers to
> Kafka 0.9.x first, before they upgrade their clients to Kafka 0.9.x.
>
> However, that presents a problem to other projects that integrate with
> Kafka (Spark, Flume, Storm, etc.). From here on, I will speak for Spark +
> Kafka, since that's the one I am most familiar with.
>
> In the light of compatibility (or the lack thereof) between 0.8.x and
> 0.9.x, Spark is faced with a problem of what version(s) of Kafka to be
> compatible with, and has 2 options (discussed in this PR
> ):
> 1. We either upgrade to Kafka 0.9, dropping support for 0.8. Storm and
> Flume are already on this path.
> 2. We introduce complexity in our code to support both 0.8 and 0.9 for the
> entire duration of our next major release (Apache Spark 2.x).
>
> I'd love to hear your thoughts on which option you recommend.
>
> Long term, I'd really appreciate it if Kafka could do something that doesn't
> require Spark to support two, or even more, versions of Kafka. And, if
> there is something that I personally, or the Spark project, can do in your
> next release candidate phase to make things easier, please do let us know.
>
> Thanks!
> Mark
>


[jira] [Commented] (KAFKA-1215) Rack-Aware replica assignment option

2016-02-26 Thread Allen Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169592#comment-15169592
 ] 

Allen Wang commented on KAFKA-1215:
---

[~aauradkar] Yes it is ready for review.


> Rack-Aware replica assignment option
> 
>
> Key: KAFKA-1215
> URL: https://issues.apache.org/jira/browse/KAFKA-1215
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.8.0
>Reporter: Joris Van Remoortere
>Assignee: Allen Wang
> Fix For: 0.10.0.0
>
> Attachments: rack_aware_replica_assignment_v1.patch, 
> rack_aware_replica_assignment_v2.patch
>
>
> Adding a rack-id to kafka config. This rack-id can be used during replica 
> assignment by using the max-rack-replication argument in the admin scripts 
> (create topic, etc.). By default the original replication assignment 
> algorithm is used because max-rack-replication defaults to -1. 
> max-rack-replication > -1 is not honored if you are doing manual replica 
> assignment (preferred).
> If this looks good I can add some test cases specific to the rack-aware 
> assignment.
> I can also port this to trunk. We are currently running 0.8.0 in production 
> and need this, so I wrote the patch against that.
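
For reference, the rack id in the implementation that ultimately shipped (KIP-36, matching the 0.10.0.0 fix version above) is a broker-level property; a minimal sketch, noting that the max-rack-replication argument described above is specific to the attached patches:

{code}
# server.properties (set per broker): the rack id consulted by rack-aware
# replica assignment in the released implementation (KIP-36).
broker.rack=rack-1
{code}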





[jira] [Created] (KAFKA-3298) Document unclean.leader.election.enable as a valid topic-level config

2016-02-26 Thread Andrew Olson (JIRA)
Andrew Olson created KAFKA-3298:
---

 Summary: Document unclean.leader.election.enable as a valid 
topic-level config
 Key: KAFKA-3298
 URL: https://issues.apache.org/jira/browse/KAFKA-3298
 Project: Kafka
  Issue Type: Bug
  Components: website
Reporter: Andrew Olson
Priority: Minor


The http://kafka.apache.org/documentation.html#topic-config section does not 
currently include {{unclean.leader.election.enable}}. This is a valid 
topic-level configuration property.





[GitHub] kafka pull request: KAFKA-3243: Fix Kafka basic ops documentation ...

2016-02-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/923




[jira] [Commented] (KAFKA-3243) Fix Kafka basic ops documentation for Mirror maker, blacklist is not supported for new consumers

2016-02-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169497#comment-15169497
 ] 

ASF GitHub Bot commented on KAFKA-3243:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/923


> Fix Kafka basic ops documentation for Mirror maker, blacklist is not 
> supported for new consumers
> 
>
> Key: KAFKA-3243
> URL: https://issues.apache.org/jira/browse/KAFKA-3243
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.10.0.0
>
>
> Fix Kafka basic ops documentation for Mirror maker, blacklist is not 
> supported for new consumers





[jira] [Updated] (KAFKA-3243) Fix Kafka basic ops documentation for Mirror maker, blacklist is not supported for new consumers

2016-02-26 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3243:
-
   Resolution: Fixed
Fix Version/s: 0.10.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 923
[https://github.com/apache/kafka/pull/923]

> Fix Kafka basic ops documentation for Mirror maker, blacklist is not 
> supported for new consumers
> 
>
> Key: KAFKA-3243
> URL: https://issues.apache.org/jira/browse/KAFKA-3243
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.10.0.0
>
>
> Fix Kafka basic ops documentation for Mirror maker, blacklist is not 
> supported for new consumers





[jira] [Commented] (KAFKA-1995) JMS to Kafka: Inbuilt JMSAdaptor/JMSProxy/JMSBridge (Client can speak JMS but hit Kafka)

2016-02-26 Thread James Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169426#comment-15169426
 ] 

James Cheng commented on KAFKA-1995:


Now that Kafka Connect has been released, a good way to do this would be by 
writing a JMS Kafka Connector.

Someone mentioned MQTT. I don't know if that's the same thing as JMS, but 
someone released an MQTT Kafka Connector. See 
http://www.confluent.io/developers/connectors
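
To sketch what that could look like: a hypothetical skeleton on the Connect source API, where the JNDI wiring and all config keys ("jms.connection.factory", "jms.queue", "kafka.topic") are assumptions rather than any existing connector:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

// Hypothetical skeleton of a JMS source task; no such connector ships with
// Kafka, and the config keys used here are invented for illustration.
public class JmsSourceTask extends SourceTask {
    private Connection connection;
    private MessageConsumer consumer;
    private String kafkaTopic;

    @Override
    public String version() {
        return "0.1-sketch";
    }

    @Override
    public void start(Map<String, String> props) {
        kafkaTopic = props.get("kafka.topic");
        try {
            // Standard JMS bootstrap via JNDI; assumes the factory and queue
            // are registered under the configured names.
            InitialContext ctx = new InitialContext();
            ConnectionFactory factory =
                    (ConnectionFactory) ctx.lookup(props.get("jms.connection.factory"));
            connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Destination queue = (Destination) ctx.lookup(props.get("jms.queue"));
            consumer = session.createConsumer(queue);
        } catch (Exception e) {
            throw new ConnectException("Failed to connect to JMS", e);
        }
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        List<SourceRecord> records = new ArrayList<>();
        try {
            Message msg = consumer.receive(1000); // wait up to 1s for a message
            if (msg instanceof TextMessage) {
                // A plain JMS queue has no replayable offset, so the source
                // partition/offset are left null in this sketch.
                records.add(new SourceRecord(null, null, kafkaTopic,
                        Schema.STRING_SCHEMA, ((TextMessage) msg).getText()));
            }
        } catch (JMSException e) {
            throw new ConnectException(e);
        }
        return records;
    }

    @Override
    public void stop() {
        try {
            if (connection != null)
                connection.close();
        } catch (JMSException e) {
            // ignore on shutdown
        }
    }
}
{code}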



> JMS to Kafka: Inbuilt JMSAdaptor/JMSProxy/JMSBridge (Client can speak JMS but 
> hit Kafka)
> 
>
> Key: KAFKA-1995
> URL: https://issues.apache.org/jira/browse/KAFKA-1995
> Project: Kafka
>  Issue Type: Wish
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Rekha Joshi
>
> Kafka is a great alternative to JMS, providing high performance and throughput 
> as a scalable, distributed pub-sub/commit log service.
> However there always exist traditional systems running on JMS.
> Rather than rewriting, it would be great if we just had an inbuilt 
> JMSAdaptor/JMSProxy/JMSBridge by which client can speak JMS but hit Kafka 
> behind-the-scene.
> Something like Chukwa's o.a.h.chukwa.datacollection.adaptor.jms.JMSAdaptor, 
> which receives msg off JMS queue and transforms to a Chukwa chunk?
> I have come across folks talking of this need in the past as well. Is it 
> considered and/or part of the roadmap?
> http://grokbase.com/t/kafka/users/131cst8xpv/stomp-binding-for-kafka
> http://grokbase.com/t/kafka/users/148dm4247q/consuming-messages-from-kafka-and-pushing-on-to-a-jms-queue
> http://grokbase.com/t/kafka/users/143hjepbn2/request-kafka-zookeeper-jms-details
> Looking for inputs on the correct way to approach this so as to retain all the 
> good features of Kafka while still not rewriting the entire application. Possible?





Upgrading to Kafka 0.9.x

2016-02-26 Thread Mark Grover
Hi Kafka devs,
I come to you with a dilemma and a request.

Based on what I understand, users of Kafka need to upgrade their brokers to
Kafka 0.9.x first, before they upgrade their clients to Kafka 0.9.x.

However, that presents a problem to other projects that integrate with
Kafka (Spark, Flume, Storm, etc.). From here on, I will speak for Spark +
Kafka, since that's the one I am most familiar with.

In the light of compatibility (or the lack thereof) between 0.8.x and
0.9.x, Spark is faced with a problem of what version(s) of Kafka to be
compatible with, and has 2 options (discussed in this PR
):
1. We either upgrade to Kafka 0.9, dropping support for 0.8. Storm and
Flume are already on this path.
2. We introduce complexity in our code to support both 0.8 and 0.9 for the
entire duration of our next major release (Apache Spark 2.x).

I'd love to hear your thoughts on which option you recommend.

Long term, I'd really appreciate it if Kafka could do something that doesn't
require Spark to support two, or even more, versions of Kafka. And, if
there is something that I personally, or the Spark project, can do in your
next release candidate phase to make things easier, please do let us know.

Thanks!
Mark


[jira] [Commented] (KAFKA-2818) Clean up Controller Object on forced Resignation

2016-02-26 Thread Matthew Bruce (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169173#comment-15169173
 ] 

Matthew Bruce commented on KAFKA-2818:
--

[~fpj] You would definitely know that code better than me.  If you think that's 
the route to go then it sounds good to me.

> Clean up Controller Object on forced Resignation
> 
>
> Key: KAFKA-2818
> URL: https://issues.apache.org/jira/browse/KAFKA-2818
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
>Reporter: Matthew Bruce
>Assignee: Flavio Junqueira
>Priority: Minor
> Attachments: KAFKA-2818.patch
>
>
> Currently if the controller does a forced resignation (if an exception is 
> caught during updateLeaderEpochAndSendRequest, SendUpdateMetadataRequest or 
> shutdownBroker), the Zookeeper resignation callback function 
> OnControllerResignation doesn't get a chance to execute which leaves some 
> artifacts laying around.  In particular the Sensors dont get cleaned up and 
> if the Kafka broker ever gets re-elected as Controller it will fail due to 
> some metrics already existing.  An Error and stack trace of such an event is 
> below.
> A forced resignation situation can be induced with a mis-config in 
> broker.properties fairly easily, by setting only SASL_PLAINTEXT listeners and 
> setting inter.broker.protocol.version=0.8.2.X
> {code}
> listeners=SASL_PLAINTEXT://:9092
> inter.broker.protocol.version=0.8.2.X
> security.inter.broker.protocol=SASL_PLAINTEXT
> {code}
> {code}
> [2015-11-09 16:33:47,510] ERROR Error while electing or becoming leader on 
> broker 182050300 (kafka.server.ZookeeperLeaderElector)
> java.lang.IllegalArgumentException: A metric named 'MetricName 
> [name=connection-close-rate, group=controller-channel-metrics, 
> description=Connections closed per second in the window., 
> tags={broker-id=182050300}]' already exists, can't register another one.
> at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:285)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
> at 
> org.apache.kafka.common.network.Selector$SelectorMetrics.<init>(Selector.java:578)
> at org.apache.kafka.common.network.Selector.<init>(Selector.java:112)
> at 
> kafka.controller.ControllerChannelManager.kafka$controller$ControllerChannelManager$$addNewBroker(ControllerChannelManager.scala:91)
> at 
> kafka.controller.ControllerChannelManager$$anonfun$1.apply(ControllerChannelManager.scala:43)
> at 
> kafka.controller.ControllerChannelManager$$anonfun$1.apply(ControllerChannelManager.scala:43)
> at scala.collection.immutable.Set$Set1.foreach(Set.scala:74)
> at 
> kafka.controller.ControllerChannelManager.<init>(ControllerChannelManager.scala:43)
> at 
> kafka.controller.KafkaController.startChannelManager(KafkaController.scala:819)
> at 
> kafka.controller.KafkaController.initializeControllerContext(KafkaController.scala:747)
> at 
> kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:330)
> at 
> kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:163)
> at 
> kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:84)
> at 
> kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply$mcZ$sp(ZookeeperLeaderElector.scala:146)
> at 
> kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:141)
> at 
> kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:141)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at 
> kafka.server.ZookeeperLeaderElector$LeaderChangeListener.handleDataDeleted(ZookeeperLeaderElector.scala:141)
> at org.I0Itec.zkclient.ZkClient$9.run(ZkClient.java:823)
> at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {code}





[jira] [Commented] (KAFKA-3297) More optimally balanced partition assignment strategy (new consumer)

2016-02-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169074#comment-15169074
 ] 

ASF GitHub Bot commented on KAFKA-3297:
---

GitHub user noslowerdna opened a pull request:

https://github.com/apache/kafka/pull/979

KAFKA-3297: Fair consumer partition assignment strategy (new consumer)

Pull request for https://issues.apache.org/jira/browse/KAFKA-3297

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/noslowerdna/kafka KAFKA-3297

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/979.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #979


commit 34f113453f03bf2e791181fa0db5d41ea985022a
Author: Andrew Olson 
Date:   2016-02-26T14:15:05Z

KAFKA-3297: Fair consumer partition assignment strategy (new consumer)




> More optimally balanced partition assignment strategy (new consumer)
> 
>
> Key: KAFKA-3297
> URL: https://issues.apache.org/jira/browse/KAFKA-3297
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Andrew Olson
>Assignee: Andrew Olson
>
> While the roundrobin partition assignment strategy is an improvement over the 
> range strategy, when the consumer topic subscriptions are not identical 
> (previously disallowed but will be possible as of KAFKA-2172) it can produce 
> heavily skewed assignments. As suggested 
> [here|https://issues.apache.org/jira/browse/KAFKA-2172?focusedCommentId=14530767&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14530767]
>  it would be nice to have a strategy that attempts to assign an equal number 
> of partitions to each consumer in a group, regardless of how similar their 
> individual topic subscriptions are. We can accomplish this by tracking the 
> number of partitions assigned to each consumer, and having the partition 
> assignment loop assign each partition to a consumer interested in that topic 
> with the least number of partitions assigned. 
> Additionally, we can optimize the distribution fairness by adjusting the 
> partition assignment order:
> * Topics with fewer consumers are assigned first.
> * In the event of a tie for least consumers, the topic with more partitions 
> is assigned first.
> The general idea behind these two rules is to keep the most flexible 
> assignment choices available as long as possible by starting with the most 
> constrained partitions/consumers.
> This JIRA addresses the new consumer. For the original high-level consumer, 
> see KAFKA-2435.
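
To illustrate the algorithm, a rough sketch of that greedy loop in Java; this is not the code from PR #979, and every name below is invented:

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.kafka.common.TopicPartition;

// Hypothetical sketch of the greedy fair-assignment loop described above.
public class FairAssignmentSketch {

    static Map<String, List<TopicPartition>> assign(
            final Map<String, Integer> partitionsPerTopic,        // topic -> partition count
            final Map<String, List<String>> consumersPerTopic) {  // topic -> interested consumers

        final Map<String, List<TopicPartition>> assignment = new HashMap<>();
        for (List<String> consumers : consumersPerTopic.values())
            for (String consumer : consumers)
                if (!assignment.containsKey(consumer))
                    assignment.put(consumer, new ArrayList<TopicPartition>());

        // Assignment order: topics with fewer consumers first, ties broken by
        // more partitions first, so the most constrained choices happen early.
        List<String> topics = new ArrayList<>(consumersPerTopic.keySet());
        Collections.sort(topics, new Comparator<String>() {
            public int compare(String a, String b) {
                int byConsumers = consumersPerTopic.get(a).size() - consumersPerTopic.get(b).size();
                return byConsumers != 0 ? byConsumers
                        : partitionsPerTopic.get(b) - partitionsPerTopic.get(a);
            }
        });

        for (String topic : topics) {
            for (int partition = 0; partition < partitionsPerTopic.get(topic); partition++) {
                // Give each partition to the interested consumer that
                // currently holds the fewest partitions overall.
                String least = null;
                for (String candidate : consumersPerTopic.get(topic))
                    if (least == null || assignment.get(candidate).size() < assignment.get(least).size())
                        least = candidate;
                assignment.get(least).add(new TopicPartition(topic, partition));
            }
        }
        return assignment;
    }
}
{code}

With identical subscriptions this reduces to an even split (counts differ by at most one); the two ordering rules only come into play when subscriptions differ.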





[GitHub] kafka pull request: KAFKA-3297: Fair consumer partition assignment...

2016-02-26 Thread noslowerdna
GitHub user noslowerdna opened a pull request:

https://github.com/apache/kafka/pull/979

KAFKA-3297: Fair consumer partition assignment strategy (new consumer)

Pull request for https://issues.apache.org/jira/browse/KAFKA-3297

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/noslowerdna/kafka KAFKA-3297

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/979.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #979


commit 34f113453f03bf2e791181fa0db5d41ea985022a
Author: Andrew Olson 
Date:   2016-02-26T14:15:05Z

KAFKA-3297: Fair consumer partition assignment strategy (new consumer)






[jira] [Work started] (KAFKA-3297) More optimally balanced partition assignment strategy (new consumer)

2016-02-26 Thread Andrew Olson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3297 started by Andrew Olson.
---
> More optimally balanced partition assignment strategy (new consumer)
> 
>
> Key: KAFKA-3297
> URL: https://issues.apache.org/jira/browse/KAFKA-3297
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Andrew Olson
>Assignee: Andrew Olson
>
> While the roundrobin partition assignment strategy is an improvement over the 
> range strategy, when the consumer topic subscriptions are not identical 
> (previously disallowed but will be possible as of KAFKA-2172) it can produce 
> heavily skewed assignments. As suggested 
> [here|https://issues.apache.org/jira/browse/KAFKA-2172?focusedCommentId=14530767&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14530767]
>  it would be nice to have a strategy that attempts to assign an equal number 
> of partitions to each consumer in a group, regardless of how similar their 
> individual topic subscriptions are. We can accomplish this by tracking the 
> number of partitions assigned to each consumer, and having the partition 
> assignment loop assign each partition to a consumer interested in that topic 
> with the least number of partitions assigned. 
> Additionally, we can optimize the distribution fairness by adjusting the 
> partition assignment order:
> * Topics with fewer consumers are assigned first.
> * In the event of a tie for least consumers, the topic with more partitions 
> is assigned first.
> The general idea behind these two rules is to keep the most flexible 
> assignment choices available as long as possible by starting with the most 
> constrained partitions/consumers.
> This JIRA addresses the new consumer. For the original high-level consumer, 
> see KAFKA-2435.





[jira] [Updated] (KAFKA-3297) More optimally balanced partition assignment strategy (new consumer)

2016-02-26 Thread Andrew Olson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Olson updated KAFKA-3297:

Description: 
While the roundrobin partition assignment strategy is an improvement over the 
range strategy, when the consumer topic subscriptions are not identical 
(previously disallowed but will be possible as of KAFKA-2172) it can produce 
heavily skewed assignments. As suggested 
[here|https://issues.apache.org/jira/browse/KAFKA-2172?focusedCommentId=14530767&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14530767]
 it would be nice to have a strategy that attempts to assign an equal number of 
partitions to each consumer in a group, regardless of how similar their 
individual topic subscriptions are. We can accomplish this by tracking the 
number of partitions assigned to each consumer, and having the partition 
assignment loop assign each partition to a consumer interested in that topic 
with the least number of partitions assigned. 

Additionally, we can optimize the distribution fairness by adjusting the 
partition assignment order:
* Topics with fewer consumers are assigned first.
* In the event of a tie for least consumers, the topic with more partitions is 
assigned first.

The general idea behind these two rules is to keep the most flexible assignment 
choices available as long as possible by starting with the most constrained 
partitions/consumers.

This JIRA addresses the new consumer. For the original high-level consumer, see 
KAFKA-2435.

  was:
While the roundrobin partition assignment strategy is an improvement over the 
range strategy, when the consumer topic subscriptions are not identical 
(previously disallowed but will be possible as of KAFKA-2172) it can produce 
heavily skewed assignments. As suggested 
[here|https://issues.apache.org/jira/browse/KAFKA-2172?focusedCommentId=14530767&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14530767]
 it would be nice to have a strategy that attempts to assign an equal number of 
partitions to each consumer in a group, regardless of how similar their 
individual topic subscriptions are. We can accomplish this by tracking the 
number of partitions assigned to each consumer, and having the partition 
assignment loop assign each partition to a consumer interested in that topic 
with the least number of partitions assigned. 

Additionally, we can optimize the distribution fairness by adjusting the 
partition assignment order:
* Topics with fewer consumers are assigned first.
* In the event of a tie for least consumers, the topic with more partitions is 
assigned first.

The general idea behind these two rules is to keep the most flexible assignment 
choices available as long as possible by starting with the most constrained 
partitions/consumers.

This JIRA addresses the new consumer.


> More optimally balanced partition assignment strategy (new consumer)
> 
>
> Key: KAFKA-3297
> URL: https://issues.apache.org/jira/browse/KAFKA-3297
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Andrew Olson
>Assignee: Andrew Olson
>
> While the roundrobin partition assignment strategy is an improvement over the 
> range strategy, when the consumer topic subscriptions are not identical 
> (previously disallowed but will be possible as of KAFKA-2172) it can produce 
> heavily skewed assignments. As suggested 
> [here|https://issues.apache.org/jira/browse/KAFKA-2172?focusedCommentId=14530767&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14530767]
>  it would be nice to have a strategy that attempts to assign an equal number 
> of partitions to each consumer in a group, regardless of how similar their 
> individual topic subscriptions are. We can accomplish this by tracking the 
> number of partitions assigned to each consumer, and having the partition 
> assignment loop assign each partition to a consumer interested in that topic 
> with the least number of partitions assigned. 
> Additionally, we can optimize the distribution fairness by adjusting the 
> partition assignment order:
> * Topics with fewer consumers are assigned first.
> * In the event of a tie for least consumers, the topic with more partitions 
> is assigned first.
> The general idea behind these two rules is to keep the most flexible 
> assignment choices available as long as possible by starting with the most 
> constrained partitions/consumers.
> This JIRA addresses the new consumer. For the original high-level consumer, 
> see KAFKA-2435.





[jira] [Updated] (KAFKA-2435) More optimally balanced partition assignment strategy

2016-02-26 Thread Andrew Olson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Olson updated KAFKA-2435:

Description: 
While the roundrobin partition assignment strategy is an improvement over the 
range strategy, when the consumer topic subscriptions are not identical 
(previously disallowed but will be possible as of KAFKA-2172) it can produce 
heavily skewed assignments. As suggested 
[here|https://issues.apache.org/jira/browse/KAFKA-2172?focusedCommentId=14530767&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14530767]
 it would be nice to have a strategy that attempts to assign an equal number of 
partitions to each consumer in a group, regardless of how similar their 
individual topic subscriptions are. We can accomplish this by tracking the 
number of partitions assigned to each consumer, and having the partition 
assignment loop assign each partition to a consumer interested in that topic 
with the least number of partitions assigned. 

Additionally, we can optimize the distribution fairness by adjusting the 
partition assignment order:
* Topics with fewer consumers are assigned first.
* In the event of a tie for least consumers, the topic with more partitions is 
assigned first.

The general idea behind these two rules is to keep the most flexible assignment 
choices available as long as possible by starting with the most constrained 
partitions/consumers.

This JIRA addresses the original high-level consumer.

  was:
While the roundrobin partition assignment strategy is an improvement over the 
range strategy, when the consumer topic subscriptions are not identical 
(previously disallowed but will be possible as of KAFKA-2172) it can produce 
heavily skewed assignments. As suggested 
[here|https://issues.apache.org/jira/browse/KAFKA-2172?focusedCommentId=14530767&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14530767]
 it would be nice to have a strategy that attempts to assign an equal number of 
partitions to each consumer in a group, regardless of how similar their 
individual topic subscriptions are. We can accomplish this by tracking the 
number of partitions assigned to each consumer, and having the partition 
assignment loop assign each partition to a consumer interested in that topic 
with the least number of partitions assigned. 

Additionally, we can optimize the distribution fairness by adjusting the 
partition assignment order:
* Topics with fewer consumers are assigned first.
* In the event of a tie for least consumers, the topic with more partitions is 
assigned first.

The general idea behind these two rules is to keep the most flexible assignment 
choices available as long as possible by starting with the most constrained 
partitions/consumers.


> More optimally balanced partition assignment strategy
> -
>
> Key: KAFKA-2435
> URL: https://issues.apache.org/jira/browse/KAFKA-2435
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Andrew Olson
>Assignee: Andrew Olson
> Attachments: KAFKA-2435.patch
>
>
> While the roundrobin partition assignment strategy is an improvement over the 
> range strategy, when the consumer topic subscriptions are not identical 
> (previously disallowed but will be possible as of KAFKA-2172) it can produce 
> heavily skewed assignments. As suggested 
> [here|https://issues.apache.org/jira/browse/KAFKA-2172?focusedCommentId=14530767&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14530767]
>  it would be nice to have a strategy that attempts to assign an equal number 
> of partitions to each consumer in a group, regardless of how similar their 
> individual topic subscriptions are. We can accomplish this by tracking the 
> number of partitions assigned to each consumer, and having the partition 
> assignment loop assign each partition to a consumer interested in that topic 
> with the least number of partitions assigned. 
> Additionally, we can optimize the distribution fairness by adjusting the 
> partition assignment order:
> * Topics with fewer consumers are assigned first.
> * In the event of a tie for least consumers, the topic with more partitions 
> is assigned first.
> The general idea behind these two rules is to keep the most flexible 
> assignment choices available as long as possible by starting with the most 
> constrained partitions/consumers.
> This JIRA addresses the original high-level consumer.





[jira] [Updated] (KAFKA-2435) More optimally balanced partition assignment strategy

2016-02-26 Thread Andrew Olson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Olson updated KAFKA-2435:

Description: 
While the roundrobin partition assignment strategy is an improvement over the 
range strategy, when the consumer topic subscriptions are not identical 
(previously disallowed but will be possible as of KAFKA-2172) it can produce 
heavily skewed assignments. As suggested 
[here|https://issues.apache.org/jira/browse/KAFKA-2172?focusedCommentId=14530767&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14530767]
 it would be nice to have a strategy that attempts to assign an equal number of 
partitions to each consumer in a group, regardless of how similar their 
individual topic subscriptions are. We can accomplish this by tracking the 
number of partitions assigned to each consumer, and having the partition 
assignment loop assign each partition to a consumer interested in that topic 
with the least number of partitions assigned. 

Additionally, we can optimize the distribution fairness by adjusting the 
partition assignment order:
* Topics with fewer consumers are assigned first.
* In the event of a tie for least consumers, the topic with more partitions is 
assigned first.

The general idea behind these two rules is to keep the most flexible assignment 
choices available as long as possible by starting with the most constrained 
partitions/consumers.

This JIRA addresses the original high-level consumer. For the new consumer, see 
KAFKA-3297.

  was:
While the roundrobin partition assignment strategy is an improvement over the 
range strategy, when the consumer topic subscriptions are not identical 
(previously disallowed but will be possible as of KAFKA-2172) it can produce 
heavily skewed assignments. As suggested 
[here|https://issues.apache.org/jira/browse/KAFKA-2172?focusedCommentId=14530767&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14530767]
 it would be nice to have a strategy that attempts to assign an equal number of 
partitions to each consumer in a group, regardless of how similar their 
individual topic subscriptions are. We can accomplish this by tracking the 
number of partitions assigned to each consumer, and having the partition 
assignment loop assign each partition to a consumer interested in that topic 
with the least number of partitions assigned. 

Additionally, we can optimize the distribution fairness by adjusting the 
partition assignment order:
* Topics with fewer consumers are assigned first.
* In the event of a tie for least consumers, the topic with more partitions is 
assigned first.

The general idea behind these two rules is to keep the most flexible assignment 
choices available as long as possible by starting with the most constrained 
partitions/consumers.

This JIRA addresses the original high-level consumer.


> More optimally balanced partition assignment strategy
> -
>
> Key: KAFKA-2435
> URL: https://issues.apache.org/jira/browse/KAFKA-2435
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Andrew Olson
>Assignee: Andrew Olson
> Attachments: KAFKA-2435.patch
>
>
> While the roundrobin partition assignment strategy is an improvement over the 
> range strategy, when the consumer topic subscriptions are not identical 
> (previously disallowed but will be possible as of KAFKA-2172) it can produce 
> heavily skewed assignments. As suggested 
> [here|https://issues.apache.org/jira/browse/KAFKA-2172?focusedCommentId=14530767&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14530767]
>  it would be nice to have a strategy that attempts to assign an equal number 
> of partitions to each consumer in a group, regardless of how similar their 
> individual topic subscriptions are. We can accomplish this by tracking the 
> number of partitions assigned to each consumer, and having the partition 
> assignment loop assign each partition to a consumer interested in that topic 
> with the least number of partitions assigned. 
> Additionally, we can optimize the distribution fairness by adjusting the 
> partition assignment order:
> * Topics with fewer consumers are assigned first.
> * In the event of a tie for least consumers, the topic with more partitions 
> is assigned first.
> The general idea behind these two rules is to keep the most flexible 
> assignment choices available as long as possible by starting with the most 
> constrained partitions/consumers.
> This JIRA addresses the original high-level consumer. For the new consumer, 
> see KAFKA-3297.





[jira] [Updated] (KAFKA-3297) More optimally balanced partition assignment strategy (new consumer)

2016-02-26 Thread Andrew Olson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Olson updated KAFKA-3297:

Description: 
While the roundrobin partition assignment strategy is an improvement over the 
range strategy, when the consumer topic subscriptions are not identical 
(previously disallowed but will be possible as of KAFKA-2172) it can produce 
heavily skewed assignments. As suggested 
[here|https://issues.apache.org/jira/browse/KAFKA-2172?focusedCommentId=14530767&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14530767]
 it would be nice to have a strategy that attempts to assign an equal number of 
partitions to each consumer in a group, regardless of how similar their 
individual topic subscriptions are. We can accomplish this by tracking the 
number of partitions assigned to each consumer, and having the partition 
assignment loop assign each partition to a consumer interested in that topic 
with the least number of partitions assigned. 

Additionally, we can optimize the distribution fairness by adjusting the 
partition assignment order:
* Topics with fewer consumers are assigned first.
* In the event of a tie for least consumers, the topic with more partitions is 
assigned first.

The general idea behind these two rules is to keep the most flexible assignment 
choices available as long as possible by starting with the most constrained 
partitions/consumers.

This JIRA addresses the new consumer.

  was:
While the roundrobin partition assignment strategy is an improvement over the 
range strategy, when the consumer topic subscriptions are not identical 
(previously disallowed but will be possible as of KAFKA-2172) it can produce 
heavily skewed assignments. As suggested 
[here|https://issues.apache.org/jira/browse/KAFKA-2172?focusedCommentId=14530767&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14530767]
 it would be nice to have a strategy that attempts to assign an equal number of 
partitions to each consumer in a group, regardless of how similar their 
individual topic subscriptions are. We can accomplish this by tracking the 
number of partitions assigned to each consumer, and having the partition 
assignment loop assign each partition to a consumer interested in that topic 
with the least number of partitions assigned. 

Additionally, we can optimize the distribution fairness by adjusting the 
partition assignment order:
* Topics with fewer consumers are assigned first.
* In the event of a tie for least consumers, the topic with more partitions is 
assigned first.

The general idea behind these two rules is to keep the most flexible assignment 
choices available as long as possible by starting with the most constrained 
partitions/consumers.


> More optimally balanced partition assignment strategy (new consumer)
> 
>
> Key: KAFKA-3297
> URL: https://issues.apache.org/jira/browse/KAFKA-3297
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Andrew Olson
>Assignee: Andrew Olson
>
> While the roundrobin partition assignment strategy is an improvement over the 
> range strategy, when the consumer topic subscriptions are not identical 
> (previously disallowed but will be possible as of KAFKA-2172) it can produce 
> heavily skewed assignments. As suggested 
> [here|https://issues.apache.org/jira/browse/KAFKA-2172?focusedCommentId=14530767&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14530767]
>  it would be nice to have a strategy that attempts to assign an equal number 
> of partitions to each consumer in a group, regardless of how similar their 
> individual topic subscriptions are. We can accomplish this by tracking the 
> number of partitions assigned to each consumer, and having the partition 
> assignment loop assign each partition to a consumer interested in that topic 
> with the least number of partitions assigned. 
> Additionally, we can optimize the distribution fairness by adjusting the 
> partition assignment order:
> * Topics with fewer consumers are assigned first.
> * In the event of a tie for least consumers, the topic with more partitions 
> is assigned first.
> The general idea behind these two rules is to keep the most flexible 
> assignment choices available as long as possible by starting with the most 
> constrained partitions/consumers.
> This JIRA addresses the new consumer.





[jira] [Created] (KAFKA-3297) More optimally balanced partition assignment strategy (new consumer)

2016-02-26 Thread Andrew Olson (JIRA)
Andrew Olson created KAFKA-3297:
---

 Summary: More optimally balanced partition assignment strategy 
(new consumer)
 Key: KAFKA-3297
 URL: https://issues.apache.org/jira/browse/KAFKA-3297
 Project: Kafka
  Issue Type: Improvement
Reporter: Andrew Olson
Assignee: Andrew Olson


While the roundrobin partition assignment strategy is an improvement over the 
range strategy, when the consumer topic subscriptions are not identical 
(previously disallowed but will be possible as of KAFKA-2172) it can produce 
heavily skewed assignments. As suggested 
[here|https://issues.apache.org/jira/browse/KAFKA-2172?focusedCommentId=14530767&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14530767]
 it would be nice to have a strategy that attempts to assign an equal number of 
partitions to each consumer in a group, regardless of how similar their 
individual topic subscriptions are. We can accomplish this by tracking the 
number of partitions assigned to each consumer, and having the partition 
assignment loop assign each partition to a consumer interested in that topic 
with the least number of partitions assigned. 

Additionally, we can optimize the distribution fairness by adjusting the 
partition assignment order:
* Topics with fewer consumers are assigned first.
* In the event of a tie for least consumers, the topic with more partitions is 
assigned first.

The general idea behind these two rules is to keep the most flexible assignment 
choices available as long as possible by starting with the most constrained 
partitions/consumers.





[jira] [Commented] (KAFKA-3296) All consumer reads hang indefinitely

2016-02-26 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169032#comment-15169032
 ] 

Ismael Juma commented on KAFKA-3296:


[~hachikuji] any thoughts?

> All consumer reads hang indefinitely
> 
>
> Key: KAFKA-3296
> URL: https://issues.apache.org/jira/browse/KAFKA-3296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Simon Cooper
>Priority: Critical
>
> We've got several integration tests that bring up systems on VMs for testing. 
> We've recently upgraded to 0.9, and very occasionally we see an 
> issue where every consumer that tries to read from the broker hangs, spamming 
> the following in their logs:
> {code}2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.NetworkClient 
> [pool-10-thread-1] | Sending metadata request 
> ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21905,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537856, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10954 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,857 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537857, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@28edb273,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21906,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537856, sendTimeMs=1456489537856), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21907,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537956, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10955 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,957 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537957, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@40cee8cc,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21908,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537956, sendTimeMs=1456489537956), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21909,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489538056, sendTimeMs=0) to node 1
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10956 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:38,057 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489538057, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@439e25fb,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21910,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489538056, sendTimeMs=1456489538056), 
> 

[jira] [Comment Edited] (KAFKA-3296) All consumer reads hang indefinitely

2016-02-26 Thread Simon Cooper (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169007#comment-15169007
 ] 

Simon Cooper edited comment on KAFKA-3296 at 2/26/16 1:26 PM:
--

Example code that breaks; it gets the start and end offsets of the topic:

{code}Consumer c = ...
TopicPartition tp = ...
c.assign(ImmutableList.of(tp));
...
c.seekToEnd(tp);
long endOffset = c.position(tp);
c.seekToBeginning(tp);
long startOffset = c.position(tp);{code}

We also observe this behaviour with other uses of Consumer in our 
codebase; it looks like any call to position() or poll() hangs.

We also get the same hang when using kafka-console-consumer (no messages are 
received when using the new consumer; the old consumer works fine).


was (Author: thecoop1984):
Example code that breaks that gets the start and end offset of the topic:

{code}Consumer c = ...
TopicPartition tp = ...
c.assign(ImmutableList.of(tp));
...
c.seekToEnd(tp);
long endOffset = _c.position(tp);
c.seekToBeginning(tp);
long startOffset = _c.position(tp);{code}

Although we also observe this behaviour with other uses of Consumer in our 
codebase. It looks like any access of position() or poll() hangs.

> All consumer reads hang indefinitely
> 
>
> Key: KAFKA-3296
> URL: https://issues.apache.org/jira/browse/KAFKA-3296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Simon Cooper
>Priority: Critical
>
> We've got several integration tests that bring up systems on VMs for testing. 
> We've recently upgraded to 0.9, and very occasionally we see an 
> issue where every consumer that tries to read from the broker hangs, spamming 
> the following in their logs:
> {code}2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.NetworkClient 
> [pool-10-thread-1] | Sending metadata request 
> ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21905,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537856, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10954 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,857 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537857, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@28edb273,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21906,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537856, sendTimeMs=1456489537856), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21907,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537956, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10955 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,957 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537957, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@40cee8cc,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21908,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537956, sendTimeMs=1456489537956), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21909,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> 

[jira] [Commented] (KAFKA-3296) All consumer reads hang indefinitely

2016-02-26 Thread Simon Cooper (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169007#comment-15169007
 ] 

Simon Cooper commented on KAFKA-3296:
-

Example code that breaks; it gets the start and end offsets of the topic:

{code}Consumer c = ...
TopicPartition tp = ...
c.assign(ImmutableList.of(tp));
...
c.seekToEnd(tp);
long endOffset = _c.position(tp);
c.seekToBeginning(tp);
long startOffset = _c.position(tp);{code}

We also observe this behaviour with other uses of Consumer in our 
codebase; it looks like any access of position() or poll() hangs.

> All consumer reads hang indefinitely
> 
>
> Key: KAFKA-3296
> URL: https://issues.apache.org/jira/browse/KAFKA-3296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Simon Cooper
>Priority: Critical
>
> We've got several integration tests that bring up systems on VMs for testing. 
> We've recently upgraded to 0.9, and very occasionally we see an 
> issue where every consumer that tries to read from the broker hangs, spamming 
> the following in their logs:
> {code}2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.NetworkClient 
> [pool-10-thread-1] | Sending metadata request 
> ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21905,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537856, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10954 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,857 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537857, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@28edb273,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21906,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537856, sendTimeMs=1456489537856), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21907,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537956, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10955 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,957 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537957, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@40cee8cc,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21908,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537956, sendTimeMs=1456489537956), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21909,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489538056, sendTimeMs=0) to node 1
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10956 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:38,057 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489538057, 

[jira] [Commented] (KAFKA-3296) All consumer reads hang indefinitely

2016-02-26 Thread Simon Cooper (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169000#comment-15169000
 ] 

Simon Cooper commented on KAFKA-3296:
-

Looks like this breaks for every topic on the broker (only tested with 
single-broker clusters though)

> All consumer reads hang indefinitely
> 
>
> Key: KAFKA-3296
> URL: https://issues.apache.org/jira/browse/KAFKA-3296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Simon Cooper
>Priority: Critical
>
> [quoted issue description and debug log omitted; the full text appears in the 
> issue description below]

[jira] [Commented] (KAFKA-3296) All consumer reads hang indefinitely

2016-02-26 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168995#comment-15168995
 ] 

Ismael Juma commented on KAFKA-3296:


Can you share your code please?

> All consumer reads hang indefinitely
> 
>
> Key: KAFKA-3296
> URL: https://issues.apache.org/jira/browse/KAFKA-3296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Simon Cooper
>Priority: Critical
>
> [quoted issue description and debug log omitted; the full text appears in the 
> issue description below]

[jira] [Comment Edited] (KAFKA-3296) All consumer reads hang indefinitely

2016-02-26 Thread Simon Cooper (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168952#comment-15168952
 ] 

Simon Cooper edited comment on KAFKA-3296 at 2/26/16 1:07 PM:
--

We've observed the same behaviour with a 0.9.0.0 consumer and a 0.9.0.1 broker, 
and with a 0.9.0.1 consumer and a 0.9.0.1 broker. The consumer was using manual 
partition assignment to a single-partition topic (no consumer groups or 
anything like that), and the hang was on a position() call. This only happens 
very occasionally, on the order of several days between triggers, when starting 
>10 VMs a day.


was (Author: thecoop1984):
We've observed the same behaviour with a 0.9.0.0 consumer and a 0.9.0.1 broker, 
and with a 0.9.0.1 consumer and a 0.9.0.1 broker. The consumer was using manual 
partition assignment to a single-partition topic (no consumer groups or 
anything like that), and the hang was on a position() call. This only happens 
very occasionally, on the order of several days between triggers, when starting 
up >10 VMs a day.

> All consumer reads hang indefinitely
> 
>
> Key: KAFKA-3296
> URL: https://issues.apache.org/jira/browse/KAFKA-3296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Simon Cooper
>Priority: Critical
>
> [quoted issue description and debug log omitted; the full text appears in the 
> issue description below]

[jira] [Commented] (KAFKA-3296) All consumer reads hang indefinitely

2016-02-26 Thread Simon Cooper (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168952#comment-15168952
 ] 

Simon Cooper commented on KAFKA-3296:
-

We've observed the same behaviour with a 0.9.0.0 consumer and a 0.9.0.1 broker, 
and with a 0.9.0.1 consumer and a 0.9.0.1 broker. The consumer was using manual 
partition assignment to a single-partition topic (no consumer groups or 
anything like that), and the hang was on a position() call. This only happens 
very occasionally, on the order of several days between triggers, when starting 
up >10 VMs a day.

> All consumer reads hang indefinitely
> 
>
> Key: KAFKA-3296
> URL: https://issues.apache.org/jira/browse/KAFKA-3296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Simon Cooper
>Priority: Critical
>
> [quoted issue description and debug log omitted; the full text appears in the 
> issue description below]

[jira] [Commented] (KAFKA-3296) All consumer reads hang indefinitely

2016-02-26 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168944#comment-15168944
 ] 

Ismael Juma commented on KAFKA-3296:


Thanks for the report [~thecoop1984]. Can you please confirm that this is 
happening with 0.9.0.1? And if so, would you be able to provide some code so 
that we can try to reproduce it?

> All consumer reads hang indefinitely
> 
>
> Key: KAFKA-3296
> URL: https://issues.apache.org/jira/browse/KAFKA-3296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Simon Cooper
>Priority: Critical
>
> [quoted issue description and debug log omitted; the full text appears in the 
> issue description below]

[jira] [Updated] (KAFKA-3296) All consumer reads hang indefinitely

2016-02-26 Thread Simon Cooper (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Cooper updated KAFKA-3296:

Description: 
We've got several integration tests that bring up systems on VMs for testing. 
We've recently upgraded to 0.9, and very occasionally we see an issue where 
every consumer that tries to read from the broker hangs, spamming the following 
in their logs:

{code}2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.NetworkClient 
[pool-10-thread-1] | Sending metadata request 
ClientRequest(expectResponse=true, callback=null, 
request=RequestSend(header={api_key=3,api_version=0,correlation_id=21905,client_id=consumer-1},
 body={topics=[Topic1]}), isInitiatedByNetworkClient, 
createdTimeMs=1456489537856, sendTimeMs=0) to node 1
2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | Updated 
cluster metadata version 10954 to Cluster(nodes = [Node(1, server.internal, 
9092)], partitions = [Partition(topic = Topic1, partition = 0, leader = 1, 
replicas = [1,], isr = [1,]])
2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
[pool-10-thread-1] | Issuing group metadata request to broker 1
2016-02-26T12:25:37,857 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
[pool-10-thread-1] | Group metadata response 
ClientResponse(receivedTimeMs=1456489537857, disconnected=false, 
request=ClientRequest(expectResponse=true, 
callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@28edb273,
 
request=RequestSend(header={api_key=10,api_version=0,correlation_id=21906,client_id=consumer-1},
 body={group_id=}), createdTimeMs=1456489537856, sendTimeMs=1456489537856), 
responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
Sending metadata request ClientRequest(expectResponse=true, callback=null, 
request=RequestSend(header={api_key=3,api_version=0,correlation_id=21907,client_id=consumer-1},
 body={topics=[Topic1]}), isInitiatedByNetworkClient, 
createdTimeMs=1456489537956, sendTimeMs=0) to node 1
2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | Updated 
cluster metadata version 10955 to Cluster(nodes = [Node(1, server.internal, 
9092)], partitions = [Partition(topic = Topic1, partition = 0, leader = 1, 
replicas = [1,], isr = [1,]])
2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
[pool-10-thread-1] | Issuing group metadata request to broker 1
2016-02-26T12:25:37,957 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
[pool-10-thread-1] | Group metadata response 
ClientResponse(receivedTimeMs=1456489537957, disconnected=false, 
request=ClientRequest(expectResponse=true, 
callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@40cee8cc,
 
request=RequestSend(header={api_key=10,api_version=0,correlation_id=21908,client_id=consumer-1},
 body={group_id=}), createdTimeMs=1456489537956, sendTimeMs=1456489537956), 
responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
Sending metadata request ClientRequest(expectResponse=true, callback=null, 
request=RequestSend(header={api_key=3,api_version=0,correlation_id=21909,client_id=consumer-1},
 body={topics=[Topic1]}), isInitiatedByNetworkClient, 
createdTimeMs=1456489538056, sendTimeMs=0) to node 1
2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | Updated 
cluster metadata version 10956 to Cluster(nodes = [Node(1, server.internal, 
9092)], partitions = [Partition(topic = Topic1, partition = 0, leader = 1, 
replicas = [1,], isr = [1,]])
2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
[pool-10-thread-1] | Issuing group metadata request to broker 1
2016-02-26T12:25:38,057 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
[pool-10-thread-1] | Group metadata response 
ClientResponse(receivedTimeMs=1456489538057, disconnected=false, 
request=ClientRequest(expectResponse=true, 
callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@439e25fb,
 
request=RequestSend(header={api_key=10,api_version=0,correlation_id=21910,client_id=consumer-1},
 body={group_id=}), createdTimeMs=1456489538056, sendTimeMs=1456489538056), 
responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}}){code}

This persists for any 0.9 consumer trying to read from the topic (we haven't 
confirmed if this is for a single topic or for any topic on the broker). 0.8 
consumers can read from the broker without issues. This is fixed by a broker 
restart.

This was observed on a single-broker cluster. There were no suspicious log 
messages on the server.

  was:
We've got several integration tests that bring up systems on VMs for testing. 
We've recently upgraded to 0.9, and very occasionally we 

[jira] [Created] (KAFKA-3296) All consumer reads hang indefinitely

2016-02-26 Thread Simon Cooper (JIRA)
Simon Cooper created KAFKA-3296:
---

 Summary: All consumer reads hang indefinitely
 Key: KAFKA-3296
 URL: https://issues.apache.org/jira/browse/KAFKA-3296
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.1, 0.9.0.0
Reporter: Simon Cooper
Priority: Critical


We've got several integration tests that bring up systems on VMs for testing. 
We've recently upgraded to 0.9, and very occasionally we see an issue where 
every consumer that tries to read from the broker hangs, spamming the following 
in their logs:

[debug log omitted; identical to the log excerpt in the updated issue 
description above]

This persists for any 0.9 consumer trying to read from the topic (we haven't 
confirmed if this is for a single topic or for any topic on the broker). 0.8 
consumers can read from the broker without issues. This is fixed by a broker 
restart.





[jira] [Commented] (KAFKA-3278) clientId is not unique in producer/consumer registration leads to mbean warning

2016-02-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168803#comment-15168803
 ] 

ASF GitHub Bot commented on KAFKA-3278:
---

GitHub user tomdearman opened a pull request:

https://github.com/apache/kafka/pull/978

KAFKA-3278 concatenate thread name to clientId when producer and consumers 
config is created

@guozhangwang, I made the changes as requested. I reverted my original commit, 
and that seems to have closed the other pull request - sorry if that mucks up 
the process a bit.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tomdearman/kafka KAFKA-3278

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/978.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #978


commit 172230a76033d22ed9712fa55f454d32fa37286b
Author: tomdearman 
Date:   2016-02-26T10:32:05Z

KAFKA-3278 concatenate thread name to clientId when producer and consumers 
config is created




> clientId is not unique in producer/consumer registration leads to mbean 
> warning
> ---
>
> Key: KAFKA-3278
> URL: https://issues.apache.org/jira/browse/KAFKA-3278
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Affects Versions: 0.9.0.1
> Environment: Mac OS
>Reporter: Tom Dearman
>Assignee: Tom Dearman
>Priority: Minor
> Fix For: 0.10.0.0
>
>
> The clientId passed through to StreamThread is not unique, and it is used to 
> create consumers and producers, which in turn try to register mbeans; this 
> leads to a warning that the mbean is already registered.
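
A hypothetical sketch of the idea behind the fix (illustrative only - the 
helper name and the exact clientId format below are assumptions, not the actual 
patch): derive a per-thread client.id before building the producer/consumer 
configs, so each client registers its mbeans under a distinct name:

{code}
import java.util.Properties;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative helper, not the code from the pull request.
public class ClientIds {
    private static final AtomicInteger THREAD_SEQ = new AtomicInteger();

    // Returns a copy of the base config whose client.id is unique per thread,
    // e.g. "my-app-StreamThread-1-consumer".
    public static Properties withUniqueClientId(Properties base,
                                                String applicationClientId,
                                                String role) {
        Properties copy = new Properties();
        copy.putAll(base);
        String threadName = "StreamThread-" + THREAD_SEQ.incrementAndGet();
        copy.put("client.id", applicationClientId + "-" + threadName + "-" + role);
        return copy;
    }
}
{code}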





[GitHub] kafka pull request: KAFKA-3278 concatenate thread name to clientId...

2016-02-26 Thread tomdearman
GitHub user tomdearman opened a pull request:

https://github.com/apache/kafka/pull/978

KAFKA-3278 concatenate thread name to clientId when producer and consumers 
config is created

@guozhangwang, I made the changes as requested. I reverted my original commit, 
and that seems to have closed the other pull request - sorry if that mucks up 
the process a bit.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tomdearman/kafka KAFKA-3278

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/978.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #978


commit 172230a76033d22ed9712fa55f454d32fa37286b
Author: tomdearman 
Date:   2016-02-26T10:32:05Z

KAFKA-3278 concatenate thread name to clientId when producer and consumers 
config is created






[jira] [Commented] (KAFKA-3278) clientId is not unique in producer/consumer registration leads to mbean warning

2016-02-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168748#comment-15168748
 ] 

ASF GitHub Bot commented on KAFKA-3278:
---

Github user tomdearman closed the pull request at:

https://github.com/apache/kafka/pull/965


> clientId is not unique in producer/consumer registration leads to mbean 
> warning
> ---
>
> Key: KAFKA-3278
> URL: https://issues.apache.org/jira/browse/KAFKA-3278
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Affects Versions: 0.9.0.1
> Environment: Mac OS
>Reporter: Tom Dearman
>Assignee: Tom Dearman
>Priority: Minor
> Fix For: 0.10.0.0
>
>
> The clientId passed through to StreamThread is not unique, and it is used to 
> create consumers and producers, which in turn try to register mbeans; this 
> leads to a warning that the mbean is already registered.





[GitHub] kafka pull request: KAFKA-3278 add thread number to clientId passe...

2016-02-26 Thread tomdearman
Github user tomdearman closed the pull request at:

https://github.com/apache/kafka/pull/965




[jira] [Comment Edited] (KAFKA-1995) JMS to Kafka: Inbuilt JMSAdaptor/JMSProxy/JMSBridge (Client can speak JMS but hit Kafka)

2016-02-26 Thread Christian Ferrari (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168649#comment-15168649
 ] 

Christian Ferrari edited comment on KAFKA-1995 at 2/26/16 8:22 AM:
---

A JMS bridge would be very helpful to support this pattern:
[MQTT client] -(publish)-> [MQTT/JMS broker] -(Kafka JMS bridge)-> [Kafka] -> 
...



was (Author: christian.ferr...@generali.com):
A JMS bridge would be very helpful to support this pattern:
[MQTT client] ---(publish)---> [MQTT/JMS broker] ---(Kafka JMS bridge)--> 
[Kafka] --> ...


> JMS to Kafka: Inbuilt JMSAdaptor/JMSProxy/JMSBridge (Client can speak JMS but 
> hit Kafka)
> 
>
> Key: KAFKA-1995
> URL: https://issues.apache.org/jira/browse/KAFKA-1995
> Project: Kafka
>  Issue Type: Wish
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Rekha Joshi
>
> Kafka is a great alternative to JMS, providing high performance and throughput 
> as a scalable, distributed pub-sub/commit-log service.
> However, there will always be traditional systems running on JMS.
> Rather than rewriting them, it would be great if we just had an inbuilt 
> JMSAdaptor/JMSProxy/JMSBridge by which a client can speak JMS but hit Kafka 
> behind the scenes.
> Something like Chukwa's o.a.h.chukwa.datacollection.adaptor.jms.JMSAdaptor, 
> which receives messages off a JMS queue and transforms them into Chukwa chunks?
> I have come across folks talking of this need in the past as well. Is it 
> considered and/or part of the roadmap?
> http://grokbase.com/t/kafka/users/131cst8xpv/stomp-binding-for-kafka
> http://grokbase.com/t/kafka/users/148dm4247q/consuming-messages-from-kafka-and-pushing-on-to-a-jms-queue
> http://grokbase.com/t/kafka/users/143hjepbn2/request-kafka-zookeeper-jms-details
> Looking for input on the correct way to approach this so as to retain all the 
> good features of Kafka while not rewriting the entire application. Possible?
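
As a concrete illustration of what such a bridge could look like (a minimal 
sketch, not an existing Kafka component; the listener wiring, serializers, and 
key choice are assumptions): a JMS MessageListener that republishes each 
TextMessage to a Kafka topic:

{code}
import java.util.Properties;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Minimal JMS-to-Kafka bridge sketch: register this listener on a JMS
// consumer and it forwards every text message to the given Kafka topic.
public class JmsToKafkaBridge implements MessageListener, AutoCloseable {
    private final KafkaProducer<String, String> producer;
    private final String kafkaTopic;

    public JmsToKafkaBridge(String bootstrapServers, String kafkaTopic) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
        this.kafkaTopic = kafkaTopic;
    }

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String body = ((TextMessage) message).getText();
                // Use the JMS message ID as the Kafka key for a stable identity.
                producer.send(new ProducerRecord<>(kafkaTopic,
                        message.getJMSMessageID(), body));
            }
        } catch (JMSException e) {
            throw new RuntimeException("Failed to bridge JMS message", e);
        }
    }

    @Override
    public void close() {
        producer.close();
    }
}
{code}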





[jira] [Comment Edited] (KAFKA-1995) JMS to Kafka: Inbuilt JMSAdaptor/JMSProxy/JMSBridge (Client can speak JMS but hit Kafka)

2016-02-26 Thread Christian Ferrari (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168649#comment-15168649
 ] 

Christian Ferrari edited comment on KAFKA-1995 at 2/26/16 8:23 AM:
---

A JMS bridge would be very helpful to support this pattern:
[MQTT client] ...(publish)...> [MQTT/JMS broker] ...(Kafka JMS bridge)...> 
[Kafka] ...> ...



was (Author: christian.ferr...@generali.com):
A JMS bridge would be very helpful to support this pattern:
[MQTT client] -(publish)-> [MQTT/JMS broker] -(Kafka JMS bridge)-> [Kafka] -> 
...


> JMS to Kafka: Inbuilt JMSAdaptor/JMSProxy/JMSBridge (Client can speak JMS but 
> hit Kafka)
> 
>
> Key: KAFKA-1995
> URL: https://issues.apache.org/jira/browse/KAFKA-1995
> Project: Kafka
>  Issue Type: Wish
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Rekha Joshi
>
> [issue description omitted; quoted in full above]





[jira] [Commented] (KAFKA-1995) JMS to Kafka: Inbuilt JMSAdaptor/JMSProxy/JMSBridge (Client can speak JMS but hit Kafka)

2016-02-26 Thread Christian Ferrari (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168649#comment-15168649
 ] 

Christian Ferrari commented on KAFKA-1995:
--

A JMS bridge would be very helpful to support this pattern:
[MQTT client] ---(publish)---> [MQTT/JMS broker] ---(Kafka JMS bridge)--> 
[Kafka] --> ...


> JMS to Kafka: Inbuilt JMSAdaptor/JMSProxy/JMSBridge (Client can speak JMS but 
> hit Kafka)
> 
>
> Key: KAFKA-1995
> URL: https://issues.apache.org/jira/browse/KAFKA-1995
> Project: Kafka
>  Issue Type: Wish
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Rekha Joshi
>
> [issue description omitted; quoted in full above]





[jira] [Created] (KAFKA-3295) test submit jira issue

2016-02-26 Thread andy (JIRA)
andy created KAFKA-3295:
---

 Summary: test submit jira issue
 Key: KAFKA-3295
 URL: https://issues.apache.org/jira/browse/KAFKA-3295
 Project: Kafka
  Issue Type: Test
Affects Versions: 0.9.0.1
Reporter: andy
Priority: Minor


test submit


