[jira] [Commented] (KAFKA-4254) Questionable handling of unknown partitions in KafkaProducer

2016-10-04 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547518#comment-15547518
 ] 

Ismael Juma commented on KAFKA-4254:


Do we really want to have more cases where we fail slowly on the off chance 
that some very rare operation is taking place? Adding partitions to a topic is 
pretty rare, so I think the second option may be better.

> Questionable handling of unknown partitions in KafkaProducer
> 
>
> Key: KAFKA-4254
> URL: https://issues.apache.org/jira/browse/KAFKA-4254
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.1.1
>
>
> Currently the producer will raise an {{IllegalArgumentException}} if the user 
> attempts to write to a partition which has just been created. This is caused 
> by the fact that the producer does not attempt to refetch topic metadata in 
> this case, which means that its check for partition validity is based on 
> stale metadata.
> If the topic for the partition did not already exist, it works fine because 
> the producer will block until it has metadata for the topic, so this case is 
> primarily hit when the number of partitions is dynamically increased. 
> A couple options to fix this that come to mind:
> 1. We could treat unknown partitions just as we do unknown topics. If the 
> partition doesn't exist, we refetch metadata and try again (timing out when 
> max.block.ms is reached).
> 2. We can at least throw a more specific exception so that users can handle 
> the error. Raising {{IllegalArgumentException}} is not helpful in practice 
> because it can also be caused by other errors.
> My inclination is to do the first one, since it seems incorrect for the 
> producer to tell the user that the partition is invalid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4254) Questionable handling of unknown partitions in KafkaProducer

2016-10-04 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-4254:
--

 Summary: Questionable handling of unknown partitions in 
KafkaProducer
 Key: KAFKA-4254
 URL: https://issues.apache.org/jira/browse/KAFKA-4254
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Reporter: Jason Gustafson
Assignee: Jason Gustafson
 Fix For: 0.10.1.1


Currently the producer will raise an {{IllegalArgumentException}} if the user 
attempts to write to a partition which has just been created. This is caused by 
the fact that the producer does not attempt to refetch topic metadata in this 
case, which means that its check for partition validity is based on stale 
metadata.

If the topic for the partition did not already exist, it works fine because the 
producer will block until it has metadata for the topic, so this case is 
primarily hit when the number of partitions is dynamically increased. 

A couple options to fix this that come to mind:

1. We could treat unknown partitions just as we do unknown topics. If the 
partition doesn't exist, we refetch metadata and try again (timing out when 
max.block.ms is reached).
2. We can at least throw a more specific exception so that users can handle the 
error. Raising {{IllegalArgumentException}} is not helpful in practice because 
it can also be caused by other errors.

My inclination is to do the first one, since it seems incorrect for the 
producer to tell the user that the partition is invalid.
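
For reference, a minimal sketch of the failure mode from the user's side (the
topic name and partition counts below are made up):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.*;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    Producer<String, String> producer = new KafkaProducer<>(props);

    // The producer cached metadata when "my-topic" had 5 partitions; an admin
    // has since increased the count to 6. Sending to the new partition 5
    // currently fails with IllegalArgumentException because the partition-range
    // check runs against the stale cached metadata:
    producer.send(new ProducerRecord<>("my-topic", 5, "key", "value"));
    // Under option 1, the producer would instead refetch metadata and retry,
    // timing out (TimeoutException) only once max.block.ms is reached.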




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3838) Bump zkclient and Zookeeper versions

2016-10-04 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3838:
---
Fix Version/s: 0.10.1.0

> Bump zkclient and Zookeeper versions
> 
>
> Key: KAFKA-3838
> URL: https://issues.apache.org/jira/browse/KAFKA-3838
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Affects Versions: 0.10.0.1
>Reporter: FIlipe Azevedo
> Fix For: 0.10.1.0
>
>
> Zookeeper 3.4.8 has some improvements; specifically, it handles DNS 
> re-resolution when a connection to ZooKeeper fails. This potentially allows 
> round-robin DNS without the need to hardcode the IP addresses in the config. 
> http://zookeeper.apache.org/doc/r3.4.8/releasenotes.html
> ZkClient has a new 0.9 release that uses ZooKeeper 3.4.8, which is already 
> marked as stable.
> Tests are passing.
> Here is the PR: https://github.com/apache/kafka/pull/1504



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1602

2016-10-04 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-4251: fix test driver not launching in Vagrant 1.8.6

--
[...truncated 2421 lines...]
kafka.api.AdminClientTest > testListGroups STARTED

kafka.api.AdminClientTest > testListGroups PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroupForNonExistentGroup STARTED

kafka.api.AdminClientTest > testDescribeConsumerGroupForNonExistentGroup PASSED

kafka.api.ProducerBounceTest > testBrokerFailure STARTED

kafka.api.ProducerBounceTest > testBrokerFailure PASSED

kafka.api.ClientIdQuotaTest > testProducerConsumerOverrideUnthrottled STARTED

kafka.api.ClientIdQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.ClientIdQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.ClientIdQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.ClientIdQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.ClientIdQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.test.ProducerCompressionTest > testCompression[0] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[0] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[1] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[1] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[2] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[2] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[3] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[3] PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures STARTED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures STARTED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures PASSED

kafka.api.SaslPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslMultiMechanismConsumerTest > testMultipleBrokerMechanisms STARTED

kafka.api.SaslMultiMechanismConsumerTest > testMultipleBrokerMechanisms PASSED

kafka.api.SaslMultiMechanismConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslMultiMechanismConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslMultiMechanismConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslMultiMechanismConsumerTest > testSimpleConsumption PASSED

kafka.api.FetchRequestTest > testShuffleWithSingleTopic STARTED

kafka.api.FetchRequestTest > testShuffleWithSingleTopic PASSED

kafka.api.FetchRequestTest > testShuffle STARTED

kafka.api.FetchRequestTest > testShuffle PASSED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled 
STARTED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.UserClientIdQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.UserClientIdQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.SslProducerSendTest > 

Re: Snazzy new look to our website

2016-10-04 Thread Jason Gustafson
Huge improvement. Thanks Derrick and Gwen!

On Tue, Oct 4, 2016 at 5:54 PM, Becket Qin  wrote:

> Much fancier now :)
>
> On Tue, Oct 4, 2016 at 5:51 PM, Ali Akhtar  wrote:
>
> > Just noticed this on pulling up the documentation. Oh yeah! This new look
> > is fantastic.
> >
> > On Wed, Oct 5, 2016 at 4:31 AM, Vahid S Hashemian <
> > vahidhashem...@us.ibm.com
> > > wrote:
> >
> > > +1
> > >
> > > Thank you for the much needed new design.
> > > At first glance, it looks great, and more professional.
> > >
> > > --Vahid
> > >
> > >
> > >
> > > From:   Gwen Shapira 
> > > To: dev@kafka.apache.org, Users 
> > > Cc: Derrick Or 
> > > Date:   10/04/2016 04:13 PM
> > > Subject:Snazzy new look to our website
> > >
> > >
> > >
> > > Hi Team Kafka,
> > >
> > > I just merged PR 20 to our website - which gives it a new (and IMO
> > > pretty snazzy) look and feel. Thanks to Derrick Or for contributing
> > > the update.
> > >
> > > I had to do a hard-refresh (shift-f5 on my mac) to get the new look to
> > > load properly - so if stuff looks off, try it.
> > >
> > > Comments and contributions to the site are welcome.
> > >
> > > Gwen
> > >
> > >
> > >
> > >
> > >
> > >
> >
>


Re: [DISCUSS] Fault injection tests for Kafka

2016-10-04 Thread radai
For "small" failures (local failures on a single node, like socket
disconnection, disk read errors, out of memory, etc.) I've used Byteman
before - http://byteman.jboss.org/
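
As a concrete illustration, a minimal Byteman rule injecting a disk read error
could look like the following (a hypothetical rule written for this thread, not
one taken from an existing test suite):

    RULE inject disk read fault
    CLASS java.io.FileInputStream
    METHOD read
    AT ENTRY
    IF TRUE
    DO throw new java.io.IOException("byteman: injected disk fault")
    ENDRULE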

On Tue, Oct 4, 2016 at 5:46 PM, Joel Koshy  wrote:

> Hi Gwen,
>
> I've also seen suggestions of using Jepsen for fault injection, but
> > I'm not familiar with this framework.
> >
> > What do you guys think? Write our own failure injection? or write
> > Kafka tests in Jepsen?
> >
>
> This would definitely add a lot of value and save a lot on release
> validation overheads. I have heard of Jepsen (via the blog), but haven't
> used it. At LinkedIn a couple of infra teams have been using Simoorg
>  which being python-based would
> perhaps be easier to use for system test writers than Clojure (under
> Jepsen). The Ambry  project at LinkedIn
> uses it extensively (and I think has added several more failure scenarios
> which don't seem to be reflected in the github repo). Anyway, I think we
> should at least enumerate what we want to test and evaluate the
> alternatives before reinventing.
>
> Thanks,
>
> Joel
>


Re: Snazzy new look to our website

2016-10-04 Thread Joel Koshy
Looks great!

On Tue, Oct 4, 2016 at 4:38 PM, Guozhang Wang  wrote:

> The new look is great, thanks Derrick and Gwen!
>
> I'm wondering if we should still consider breaking "document.html" into
> multiple pages, indexed as sub-topics on the left bar?
>
>
> Guozhang
>
>
> On Tue, Oct 4, 2016 at 4:13 PM, Gwen Shapira  wrote:
>
> > Hi Team Kafka,
> >
> > I just merged PR 20 to our website - which gives it a new (and IMO
> > pretty snazzy) look and feel. Thanks to Derrick Or for contributing
> > the update.
> >
> > I had to do a hard-refresh (shift-f5 on my mac) to get the new look to
> > load properly - so if stuff looks off, try it.
> >
> > Comments and contributions to the site are welcome.
> >
> > Gwen
> >
>
>
>
> --
> -- Guozhang
>


Re: [DISCUSS] Fault injection tests for Kafka

2016-10-04 Thread Joel Koshy
Hi Gwen,

I've also seen suggestions of using Jepsen for fault injection, but
> I'm not familiar with this framework.
>
> What do you guys think? Write our own failure injection? or write
> Kafka tests in Jepsen?
>

This would definitely add a lot of value and save a lot on release
validation overheads. I have heard of Jepsen (via the blog), but haven't
used it. At LinkedIn a couple of infra teams have been using Simoorg
 which being python-based would
perhaps be easier to use for system test writers than Clojure (under
Jepsen). The Ambry  project at LinkedIn
uses it extensively (and I think has added several more failure scenarios
which don't seem to be reflected in the github repo). Anyway, I think we
should at least enumerate what we want to test and evaluate the
alternatives before reinventing.

Thanks,

Joel


Jenkins build is back to normal : kafka-0.10.1-jdk7 #46

2016-10-04 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk8 #947

2016-10-04 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-4251: fix test driver not launching in Vagrant 1.8.6

--
[...truncated 14049 lines...]
org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValueDefaultSerde PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testNotSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testKTable STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testValueGetter 
STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.internals.KTableMapKeysTest > 
testMapKeysConvertingToStream STARTED

org.apache.kafka.streams.kstream.internals.KTableMapKeysTest > 
testMapKeysConvertingToStream PASSED

org.apache.kafka.streams.kstream.internals.KStreamForeachTest > testForeach 
STARTED

org.apache.kafka.streams.kstream.internals.KStreamForeachTest > testForeach 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > testJoin 
STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testNotSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testOuterJoin STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testOuterJoin PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > testJoin 
STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testWindowing STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testWindowing PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testAsymetricWindowingBefore STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testAsymetricWindowingBefore PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testAsymetricWindowingAfter STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testAsymetricWindowingAfter PASSED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
testFlatMapValues STARTED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
testFlatMapValues PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > testJoin 
STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testNotSendingOldValues STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testNotSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testSendingOldValues STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testAggBasic 
STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testAggBasic 
PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testCount 
STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testCount 
PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggCoalesced STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggCoalesced PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggRepartition STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggRepartition PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testRemoveOldBeforeAddNew STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testRemoveOldBeforeAddNew PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testCountCoalesced STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testCountCoalesced PASSED

org.apache.kafka.streams.kstream.internals.KStreamFilterTest > testFilterNot 
STARTED

org.apache.kafka.streams.kstream.internals.KStreamFilterTest > testFilterNot 
PASSED


[jira] [Commented] (KAFKA-3559) Task creation time taking too long in rebalance callback

2016-10-04 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547089#comment-15547089
 ] 

Guozhang Wang commented on KAFKA-3559:
--

Here are some more thoughts on this issue and how we can improve the situation:

Currently with Kafka Streams each rebalance is expensive, even if it is only 
"partial" (i.e. only a few of the non-leader members in the consumer group have 
decided to re-join, which will not trigger a full rebalance but will only cause 
the coordinator to send back the assignment again), since {{onPartitionRevoked}} 
and {{onPartitionAssigned}} will be triggered anyway, closing and 
(re-)constructing the tasks. For example, on my (very small) local laptop, with 
a complex topology containing 10+ stores and 15+ internal topics and 3 threads, 
a rebalance could take up to 20 seconds.

On the other hand, we want to close the tasks in {{onPartitionRevoked}} before 
the synchronization barrier only because threads may hold some file locks 
related to these tasks. And since tasks are all committed right before closing, 
I think it is safe to delay the destruction of tasks so that we may be able to 
save the time of closing / reconstructing such tasks. More specifically:

1. In {{onPartitionRevoked}}, instead of closing the tasks, we only need to 
commit the tasks and "pause" them by calling their topology processors' newly 
added {{flush}} calls, releasing the corresponding file locks of the tasks; in 
fact, the pausing happens automatically, since we will not process any messages 
during the rebalance anyway.
2. Then in {{onPartitionAssigned}}, we can check whether any tasks have really 
been migrated out of the thread; for those tasks, close them (and note that 
since these tasks are already committed in {{onPartitionRevoked}}, closing them 
will only involve calling the topology processor's {{close}} function, as well 
as closing the state stores); otherwise, "resume" processing.

We need to think through some minor issues, such as the above-mentioned file 
locks for persistent state stores and how clean-up will work without 
introducing deadlocks, but I think in general this solution should work.
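
As a rough Java sketch, the revised callbacks could look like the following (a
sketch only: StreamTask methods such as commitAndPause() / resume() and the
helper migratedAway() are assumptions, not existing streams internals):

    import java.util.Collection;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.common.TopicPartition;

    // Hypothetical sketch of the proposed rebalance handling:
    ConsumerRebalanceListener listener = new ConsumerRebalanceListener() {
        public void onPartitionsRevoked(Collection<TopicPartition> revoked) {
            for (StreamTask task : activeTasks())
                task.commitAndPause(); // commit, flush, release file locks; do not close
        }
        public void onPartitionsAssigned(Collection<TopicPartition> assigned) {
            for (StreamTask task : activeTasks()) {
                if (migratedAway(task, assigned))
                    task.close();      // already committed, so only close processors and stores
                else
                    task.resume();     // keep the task, skipping re-construction
            }
        }
    };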

> Task creation time taking too long in rebalance callback
> 
>
> Key: KAFKA-3559
> URL: https://issues.apache.org/jira/browse/KAFKA-3559
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Eno Thereska
>  Labels: architecture
> Fix For: 0.10.2.0
>
>
> Currently in Kafka Streams, we create stream tasks upon getting newly 
> assigned partitions in the rebalance callback function {code} onPartitionAssigned 
> {code}, which involves initialization of the processor state stores as well 
> (including opening RocksDB, restoring the store from the changelog, etc., which 
> takes time).
> With a large number of state stores, the initialization time itself could 
> take tens of seconds, which usually is larger than the consumer session 
> timeout. As a result, when the callback is completed, the consumer has already 
> been treated as failed by the coordinator and rebalances again.
> We need to consider whether we can optimize the initialization process, or move 
> it out of the callback function, and, while initializing the stores one-by-one, 
> use poll calls to send heartbeats to avoid being kicked out by the coordinator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Snazzy new look to our website

2016-10-04 Thread Guozhang Wang
The new look is great, thanks Derrick and Gwen!

I'm wondering if we should still consider breaking "document.html" into
multiple pages, indexed as sub-topics on the left bar?


Guozhang


On Tue, Oct 4, 2016 at 4:13 PM, Gwen Shapira  wrote:

> Hi Team Kafka,
>
> I just merged PR 20 to our website - which gives it a new (and IMO
> pretty snazzy) look and feel. Thanks to Derrick Or for contributing
> the update.
>
> I had to do a hard-refresh (shift-f5 on my mac) to get the new look to
> load properly - so if stuff looks off, try it.
>
> Comments and contributions to the site are welcome.
>
> Gwen
>



-- 
-- Guozhang


Re: Snazzy new look to our website

2016-10-04 Thread Vahid S Hashemian
+1

Thank you for the much needed new design.
At first glance, it looks great, and more professional.

--Vahid 



From:   Gwen Shapira 
To: dev@kafka.apache.org, Users 
Cc: Derrick Or 
Date:   10/04/2016 04:13 PM
Subject:Snazzy new look to our website



Hi Team Kafka,

I just merged PR 20 to our website - which gives it a new (and IMO
pretty snazzy) look and feel. Thanks to Derrick Or for contributing
the update.

I had to do a hard-refresh (shift-f5 on my mac) to get the new look to
load properly - so if stuff looks off, try it.

Comments and contributions to the site are welcome.

Gwen







Re: Streams support for Serdes

2016-10-04 Thread Hojjat Jafarpour
Hi Jeyhun,

You are right. As long as the types in Tuple-n are known types (e.g.,
primitives) and the serdes for them are available, you can remove the need
for providing serdes and infer them. Indeed, we have a project in which we use
a similar approach to eliminate the need for users to provide the serdes.

Thanks.
--Hojjat
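
For illustration, here is a minimal sketch of such a composed serde (Tuple2 is
a made-up POJO, and the lambda form assumes a clients version in which
Serializer/Deserializer have default configure()/close() methods):

    import java.nio.ByteBuffer;
    import org.apache.kafka.common.serialization.Serde;
    import org.apache.kafka.common.serialization.Serdes;

    class Tuple2<A, B> {
        final A first; final B second;
        Tuple2(A first, B second) { this.first = first; this.second = second; }
    }

    static <A, B> Serde<Tuple2<A, B>> tuple2Serde(Serde<A> aSerde, Serde<B> bSerde) {
        return Serdes.serdeFrom(
            (topic, t) -> {
                byte[] a = aSerde.serializer().serialize(topic, t.first);
                byte[] b = bSerde.serializer().serialize(topic, t.second);
                // 4-byte length prefix for the first field, then both payloads
                return ByteBuffer.allocate(4 + a.length + b.length)
                                 .putInt(a.length).put(a).put(b).array();
            },
            (topic, bytes) -> {
                ByteBuffer buf = ByteBuffer.wrap(bytes);
                byte[] a = new byte[buf.getInt()]; buf.get(a);
                byte[] b = new byte[buf.remaining()]; buf.get(b);
                return new Tuple2<>(aSerde.deserializer().deserialize(topic, a),
                                    bSerde.deserializer().deserialize(topic, b));
            });
    }

A Tuple2<String, Long> serde would then simply be
tuple2Serde(Serdes.String(), Serdes.Long()).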

On Tue, Oct 4, 2016 at 4:07 PM, Guozhang Wang  wrote:

>
> -- Forwarded message --
> From: Jeyhun Karimov 
> Date: Mon, Sep 19, 2016 at 1:15 AM
> Subject: Streams support for Serdes
> To: "dev@kafka.apache.org" 
>
>
> Hi community,
>
> When using kafka-streams with POJO data types we write our own
> de/serializers. However, I think if we had built-in Serdes support for
> Tuple-n data types (ex: Serdes.Tuple2) we could easily
> use Tuples, and the built-in Serdes would help to reduce the development cycle.
> Please correct me if I am wrong, or if there is a similar solution within the
> library, please let me know.
>
> Cheers
> Jeyhun
> --
> -Cheers
>
> Jeyhun
>
>
>
> --
> -- Guozhang
>


Snazzy new look to our website

2016-10-04 Thread Gwen Shapira
Hi Team Kafka,

I just merged PR 20 to our website - which gives it a new (and IMO
pretty snazzy) look and feel. Thanks to Derrick Or for contributing
the update.

I had to do a hard-refresh (shift-f5 on my mac) to get the new look to
load properly - so if stuff looks off, try it.

Comments and contributions to the site are welcome.

Gwen


Re: [VOTE] 0.10.1.0 RC0

2016-10-04 Thread Jason Gustafson
One clarification: this is a minor release, not a major one.

-Jason

On Tue, Oct 4, 2016 at 4:01 PM, Jason Gustafson  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 0.10.1.0. This is
> a major release that includes great new features including throttled
> replication, secure quotas, time-based log searching, and queryable state
> for Kafka Streams. A full list of the content can be found here:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.1. Since
> this is a major release, we will give people more time to try it out and
> give feedback.
>
> Release notes for the 0.10.1.0 release:
> http://home.apache.org/~jgus/kafka-0.10.1.0-rc0/RELEASE_NOTES.html
>
> *** Please download, test and vote by Monday, Oct 10, 9am PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~jgus/kafka-0.10.1.0-rc0/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~jgus/kafka-0.10.1.0-rc0/javadoc/
>
> * Tag to be voted upon (off 0.10.1 branch) is the 0.10.1.0 tag:
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> b86130bad1a1a4a3d1dbe5c486977e6968b3ebc6
>
> * Documentation:
> http://kafka.apache.org/0101/documentation.html
>
> * Protocol:
> http://kafka.apache.org/0101/protocol.html
>
> Note that integration/system testing on Jenkins has been a major problem
> this release cycle. In order to validate this RC, we need to get these
> tests stable again. Any help we can get from the community will be greatly
> appreciated.
>
> Thanks,
>
> Jason
>


[jira] [Commented] (KAFKA-4244) Update our website look & feel

2016-10-04 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546983#comment-15546983
 ] 

Gwen Shapira commented on KAFKA-4244:
-

#3 is done - merged PR 20 to the site with the snazzy new look.

I'll do another push for validation once the formatting changes are merged to 
the code site.

> Update our website look & feel
> --
>
> Key: KAFKA-4244
> URL: https://issues.apache.org/jira/browse/KAFKA-4244
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>
> Our website deserves a facelift.
> This will be a multi-part change:
> 1. Changes to the web pages in our normal GitHub repo: new headers, fixing some 
> missing tags, etc.
> 2. Changes to the auto-gen code to get protocol.html correct too
> 3. Deploy changes to the website + update the header/footer/CSS on the website 
> to actually cause the facelift.
> Please do not deploy changes to the website from our GitHub after #1 is done 
> but before #3 is complete. Hopefully, I'll be all done by Monday.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[VOTE] 0.10.1.0 RC0

2016-10-04 Thread Jason Gustafson
Hello Kafka users, developers and client-developers,

This is the first candidate for release of Apache Kafka 0.10.1.0. This is a
major release that includes great new features including throttled
replication, secure quotas, time-based log searching, and queryable state
for Kafka Streams. A full list of the content can be found here:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.1. Since
this is a major release, we will give people more time to try it out and
give feedback.

Release notes for the 0.10.1.0 release:
http://home.apache.org/~jgus/kafka-0.10.1.0-rc0/RELEASE_NOTES.html

*** Please download, test and vote by Monday, Oct 10, 9am PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~jgus/kafka-0.10.1.0-rc0/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/

* Javadoc:
http://home.apache.org/~jgus/kafka-0.10.1.0-rc0/javadoc/

* Tag to be voted upon (off 0.10.1 branch) is the 0.10.1.0 tag:
https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=b86130bad1a1a4a3d1dbe5c486977e6968b3ebc6

* Documentation:
http://kafka.apache.org/0101/documentation.html

* Protocol:
http://kafka.apache.org/0101/protocol.html

Note that integration/system testing on Jenkins has been a major problem
this release cycle. In order to validate this RC, we need to get these
tests stable again. Any help we can get from the community will be greatly
appreciated.

Thanks,

Jason


[GitHub] kafka-site pull request #20: new design

2016-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/20


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3182) Failure in kafka.network.SocketServerTest.testSocketsCloseOnShutdown

2016-10-04 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546925#comment-15546925
 ] 

Vahid Hashemian commented on KAFKA-3182:


Another incident: 
https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/6051/testReport/junit/kafka.network/SocketServerTest/testSocketsCloseOnShutdown/

> Failure in kafka.network.SocketServerTest.testSocketsCloseOnShutdown
> 
>
> Key: KAFKA-3182
> URL: https://issues.apache.org/jira/browse/KAFKA-3182
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>  Labels: transient-unit-test-failure
>
> {code}
> Stacktrace
> org.scalatest.junit.JUnitTestFailedError: expected exception when writing to 
> closed trace socket
>   at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:102)
>   at 
> org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:79)
>   at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
>   at org.scalatest.junit.JUnitSuite.fail(JUnitSuite.scala:79)
>   at 
> kafka.network.SocketServerTest.testSocketsCloseOnShutdown(SocketServerTest.scala:180)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>  

[GitHub] kafka-site issue #20: new design

2016-10-04 Thread gwenshap
Github user gwenshap commented on the issue:

https://github.com/apache/kafka-site/pull/20
  
LGTM


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-0.10.1-jdk7 #45

2016-10-04 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4165; Add 0.10.0.1 as a source for compatibility tests

--
[...truncated 6285 lines...]

kafka.api.AuthorizerIntegrationTest > testPatternSubscriptionWithNoTopicAccess 
PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionWithTopicDescribeOnlyAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionWithTopicDescribeOnlyAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > testDeleteWithWildCardAuth STARTED

kafka.api.AuthorizerIntegrationTest > testDeleteWithWildCardAuth PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead PASSED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionMatchingInternalTopicWithDescribeOnlyPermission STARTED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionMatchingInternalTopicWithDescribeOnlyPermission PASSED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithOffsetLookupAndNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithOffsetLookupAndNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testAuthorizationWithTopicNotExisting 
STARTED

kafka.api.AuthorizerIntegrationTest > testAuthorizationWithTopicNotExisting 
PASSED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionNotMatchingInternalTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionNotMatchingInternalTopic PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithoutDescribe 
STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithoutDescribe 
PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithoutTopicDescribeAccess 
STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithoutTopicDescribeAccess 
PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > 

Re: [UPDATE] 0.10.1 Release Progress

2016-10-04 Thread Becket Qin
Thanks, Jason!

On Tue, Oct 4, 2016 at 1:57 PM, Guozhang Wang  wrote:

> Thanks for running the release Jason!
>
> On Mon, Oct 3, 2016 at 9:06 PM, Jason Gustafson 
> wrote:
>
> > Hi Everyone,
> >
> > The code freeze is upon us! We've made incredible progress fixing bugs
> and
> > improving testing. If your feature or bug fix didn't get in this time,
> > don't worry since the next release will be here in a few short months.
> Now
> > the focus is on verifying the release candidates, the first of which will
> > be cut tomorrow. Only blocking bugs or significant regressions will
> result
> > in new release candidates. The goal is to have a stable release by Oct.
> 17.
> >
> > Thanks to everyone who has contributed thus far!
> >
> > -Jason
> >
> > On Tue, Sep 27, 2016 at 6:29 PM, Sriram Subramanian 
> > wrote:
> >
> > > Thanks Jason!
> > >
> > > On Tue, Sep 27, 2016 at 5:38 PM, Ismael Juma 
> wrote:
> > >
> > > > Thanks for the update Jason. :)
> > > >
> > > > Ismael
> > > >
> > > > On Wed, Sep 28, 2016 at 1:28 AM, Jason Gustafson  >
> > > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > Pardon all the JIRA noise, but I've been busy reducing the scope to
> > > match
> > > > > the available time since we're now 6 days from the code freeze.
> I've
> > > > pruned
> > > > > the list to about 30 tickets, some of which probably won't get in:
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/
> > Release+Plan+0.10.1.
> > > > > Other
> > > > > than a few important bug fixes which are nearing completion, the
> main
> > > > > remaining items are documentation improvements, additional system
> > > tests,
> > > > > and transient test failures. If you are looking to help out,
> > addressing
> > > > the
> > > > > transient test failures are the biggest need since I'm sure you've
> > > > noticed
> > > > > all the build failures. Other than that, I think we're on track for
> > the
> > > > > code freeze. Keep up the great work!
> > > > >
> > > > > Thanks,
> > > > > Jason
> > > > >
> > > > > On Tue, Sep 20, 2016 at 3:18 PM, Mayuresh Gharat <
> > > > > gharatmayures...@gmail.com
> > > > > > wrote:
> > > > >
> > > > > > Great !
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Mayuresh
> > > > > >
> > > > > > On Tue, Sep 20, 2016 at 2:43 PM, Becket Qin <
> becket@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > > > Awesome!
> > > > > > >
> > > > > > > On Mon, Sep 19, 2016 at 11:42 PM, Neha Narkhede <
> > n...@confluent.io
> > > >
> > > > > > wrote:
> > > > > > >
> > > > > > > > Nice!
> > > > > > > > On Mon, Sep 19, 2016 at 11:33 PM Ismael Juma <
> > ism...@juma.me.uk>
> > > > > > wrote:
> > > > > > > >
> > > > > > > > > Well done everyone. :)
> > > > > > > > >
> > > > > > > > > On 20 Sep 2016 5:23 am, "Jason Gustafson" <
> > ja...@confluent.io>
> > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Thanks everyone for the hard work! The 0.10.1 release
> > branch
> > > > has
> > > > > > been
> > > > > > > > > > created. We're now entering the stabilization phase of
> this
> > > > > release
> > > > > > > > which
> > > > > > > > > > means we'll focus on bug fixes and testing.
> > > > > > > > > >
> > > > > > > > > > -Jason
> > > > > > > > > >
> > > > > > > > > > On Fri, Sep 16, 2016 at 5:00 PM, Jason Gustafson <
> > > > > > ja...@confluent.io
> > > > > > > >
> > > > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > > Hi All,
> > > > > > > > > > >
> > > > > > > > > > > Thanks everyone for the hard work! Here's an update on
> > the
> > > > > > > remaining
> > > > > > > > > KIPs
> > > > > > > > > > > that we are hoping to include:
> > > > > > > > > > >
> > > > > > > > > > > KIP-78 (clusterId): Review is basically complete.
> > Assuming
> > > no
> > > > > > > > problems
> > > > > > > > > > > emerge, Ismael is planning to merge today.
> > > > > > > > > > > KIP-74 (max fetch size): Review is nearing completion,
> > > just a
> > > > > few
> > > > > > > > minor
> > > > > > > > > > > issues remain. This will probably be merged tomorrow or
> > > > Sunday.
> > > > > > > > > > > KIP-55 (secure quotas): The patch has been rebased and
> > > > probably
> > > > > > > needs
> > > > > > > > > one
> > > > > > > > > > > more review pass before merging. Jun is confident it
> can
> > > get
> > > > in
> > > > > > > > before
> > > > > > > > > > > Monday.
> > > > > > > > > > >
> > > > > > > > > > > As for KIP-79, I've made one review pass, but to make
> it
> > > in,
> > > > > > we'll
> > > > > > > > need
> > > > > > > > > > 1)
> > > > > > > > > > > some more votes on the vote thread, and 2) a few review
> > > > > > iterations.
> > > > > > > > > It's
> > > > > > > > > > > looking a bit doubtful, but let's see how it goes!
> > > > > > > > > > >
> > > > > > > > > > > Since we are nearing the feature freeze, it would be
> > > helpful
> > > > if
> > > > > > > > people
> > > > > > > > > > > begin setting some priorities on the 

[GitHub] kafka pull request #1969: MINOR: missing fullstop in doc for `max.partition....

2016-10-04 Thread shikhar
GitHub user shikhar opened a pull request:

https://github.com/apache/kafka/pull/1969

MINOR: missing fullstop in doc for `max.partition.fetch.bytes`



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shikhar/kafka patch-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1969.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1969


commit 3a9568911ca226979f4058129a3f238d1f0187c1
Author: Shikhar Bhushan 
Date:   2016-10-04T22:43:21Z

MINOR: missing fullstop in doc for `max.partition.fetch.bytes`




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #1968: MINOR: missing whitespace in doc for `ssl.cipher.s...

2016-10-04 Thread shikhar
GitHub user shikhar opened a pull request:

https://github.com/apache/kafka/pull/1968

MINOR: missing whitespace in doc for `ssl.cipher.suites`



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shikhar/kafka patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1968.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1968


commit 44001af72e226a3d683ab8169f8816e7cdf67a49
Author: Shikhar Bhushan 
Date:   2016-10-04T22:42:01Z

MINOR: missing whitespace in doc for `ssl.cipher.suites`




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-82 - Add Record Headers

2016-10-04 Thread radai
Another potential benefit of headers is that they would reduce the number of API
changes required to support future features (as those could be implemented as
plugins). That would greatly accelerate the rate at which Kafka can be extended.
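
To make the wire-format discussion below concrete, here is a sketch of one
possible framing for an [int, byte[]] header set (the count and 4-byte prefixes
are illustrative assumptions, not the KIP's final format):

    import java.nio.ByteBuffer;
    import java.util.Map;

    static byte[] encodeHeaders(Map<Integer, byte[]> headers) {
        int size = 4; // room for the header count
        for (byte[] v : headers.values()) size += 4 + 4 + v.length;
        ByteBuffer buf = ByteBuffer.allocate(size);
        buf.putInt(headers.size());
        for (Map.Entry<Integer, byte[]> h : headers.entrySet()) {
            buf.putInt(h.getKey());          // int header key
            buf.putInt(h.getValue().length); // value length prefix
            buf.put(h.getValue());           // opaque value bytes
        }
        return buf.array();
    }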

On Mon, Oct 3, 2016 at 12:46 PM, Michael Pearce 
wrote:

> Opposite way around v4 instead of v3 ;)
> 
> From: Michael Pearce
> Sent: Monday, October 3, 2016 8:45 PM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-82 - Add Record Headers
>
> Thanks guys for spotting this, i have updated KIP-82 to state v3 instead
> of v4.
>
> Mike.
> 
> From: Becket Qin 
> Sent: Monday, October 3, 2016 6:18 PM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-82 - Add Record Headers
>
> Yes, KIP-74 has already been checked in. The new FetchRequest/Response
> version should be V4. :)
>
> On Mon, Oct 3, 2016 at 10:14 AM, Sean McCauliff <
> smccaul...@linkedin.com.invalid> wrote:
>
> > Change to public interfaces:
> >
> > "Add ProduceRequest/ProduceResponse V3 which uses the new message format.
> > Add FetchRequest/FetchResponse V3 which uses the new message format."
> >
> > When I look at org.apache.kafka.common.requests.FetchResponse on
> > master I see that there is already a version 3.  Seems like this is
> > from a recent commit about implementing KIP-74.  Do we need to
> > coordinate these changes with KIP-74?
> >
> >
> > "The serialisation of the [int, bye[]] header set will on the wire
> > using a strict format"  bye[] -> byte[]
> >
> > Sean
> > --
> > Sean McCauliff
> > Staff Software Engineer
> > Kafka
> >
> > smccaul...@linkedin.com
> > linkedin.com/in/sean-mccauliff-b563192
> >
> >
> > On Fri, Sep 30, 2016 at 3:43 PM, radai 
> wrote:
> > > I think headers are a great idea.
> > >
> > > Right now, people who are trying to implement any sort of org-wide
> > > functionality like monitoring, tracing, profiling, etc. pretty much have to
> > > define their own wrapper layers, which probably leads to everyone
> > > implementing their own variants of the same underlying functionality.
> > >
> > > I think a common base for headers would allow implementing a lot of this
> > > functionality only once, in a way that different header-based capabilities
> > > could be shared and composed, and open the door to a wide range of possible
> > > Kafka middleware that's simply impossible to write against the current API.
> > >
> > > Here's a list of things that could be implemented as "plugins" on top of a
> > > header mechanism (full list here -
> > > https://cwiki.apache.org/confluence/display/KAFKA/A+Case+for+Kafka+Headers).
> > >
> > > A lot of these already exist within LinkedIn and could, for example, be
> > > open-sourced if Kafka had headers. I'm fairly certain the same is true in
> > > other organizations using Kafka.
> > >
> > >
> > >
> > > On Thu, Sep 22, 2016 at 12:31 PM, Michael Pearce <
> michael.pea...@ig.com>
> > > wrote:
> > >
> > >> Hi All,
> > >>
> > >>
> > >> I would like to discuss the following KIP proposal:
> > >>
> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >> 82+-+Add+Record+Headers
> > >>
> > >>
> > >>
> > >> I have some initial ?drafts of roughly the changes that would be
> needed.
> > >> This is no where finalized and look forward to the discussion
> > especially as
> > >> some bits I'm personally in two minds about.
> > >>
> > >> https://github.com/michaelandrepearce/kafka/tree/
> > kafka-headers-properties
> > >>
> > >>
> > >>
> > >> Here is a link to a alternative option mentioned in the kip but one i
> > >> would personally would discard (disadvantages mentioned in kip)
> > >>
> > >> https://github.com/michaelandrepearce/kafka/tree/kafka-headers-full?
> > >>
> > >>
> > >> Thanks
> > >>
> > >> Mike
> > >>
> > >>
> > >>
> > >>
> > >>

[jira] [Commented] (KAFKA-4244) Update our website look & feel

2016-10-04 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546873#comment-15546873
 ] 

Gwen Shapira commented on KAFKA-4244:
-

Update:

Change #1 is pending review and change #2 proved to be unnecessary.

> Update our website look & feel
> --
>
> Key: KAFKA-4244
> URL: https://issues.apache.org/jira/browse/KAFKA-4244
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>
> Our website deserves a facelift.
> This will be a multi-part change:
> 1. Changes to the web pages in our normal GitHub repo: new headers, fixing some 
> missing tags, etc.
> 2. Changes to the auto-gen code to get protocol.html correct too
> 3. Deploy changes to the website + update the header/footer/CSS on the website 
> to actually cause the facelift.
> Please do not deploy changes to the website from our GitHub after #1 is done 
> but before #3 is complete. Hopefully, I'll be all done by Monday.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4253) Fix Kafka Stream thread shutting down process ordering

2016-10-04 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4253:
-
Assignee: (was: Matthias J. Sax)

> Fix Kafka Stream thread shutting down process ordering
> --
>
> Key: KAFKA-4253
> URL: https://issues.apache.org/jira/browse/KAFKA-4253
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Guozhang Wang
>
> Currently we close the stream thread in the following way:
> 0. Commit all tasks.
> 1. Close producer.
> 2. Close consumer.
> 3. Close restore consumer.
> 4. For each task, close its topology processors one-by-one following the 
> topology order by calling {{processor.close()}}.
> 5. For each task, close its state manager as well as flushing and closing all 
> its associated state stores.
> We choose to close the producer / consumer clients before shutting down the 
> tasks because we need to make sure all sent records have been acked so that we 
> have the right log-end-offset when closing the store and checkpointing the 
> offset of the changelog. However, there is also an issue with this ordering: 
> if users choose to write more records in their {{processor.close()}} 
> calls, this will cause an RTE since the producer has already been closed, and 
> no changelog records will be written.
> Thinking about this issue, a more appropriate ordering would be:
> 1. For each task, close its topology processors following the topology 
> order by calling {{processor.close()}}.
> 2. For each task, commit its state by calling {{task.commit()}}. At this time 
> all sent records should be acked, since {{producer.flush()}} is called.
> 3. For each task, close its {{ProcessorStateManager}}.
> 4. Close all embedded clients, i.e. producer / consumer / restore consumer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4253) Fix Kafka Stream thread shutting down process ordering

2016-10-04 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-4253:


 Summary: Fix Kafka Stream thread shutting down process ordering
 Key: KAFKA-4253
 URL: https://issues.apache.org/jira/browse/KAFKA-4253
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.1.0
Reporter: Guozhang Wang
Assignee: Matthias J. Sax


Currently we close the stream thread in the following way:

0. Commit all tasks.
1. Close producer.
2. Close consumer.
3. Close restore consumer.
4. For each task, close its topology processors one-by-one following the 
topology order by calling {{processor.close()}}.
5. For each task, close its state manager as well as flushing and closing all 
its associated state stores.

We choose to close the producer / consumer clients before shutting down the 
tasks because we need to make sure all sent records have been acked, so that 
we have the right log-end-offset when closing the store and checkpointing the 
offset of the changelog. However, there is also an issue with this ordering: 
if users choose to write more records in their {{processor.close()}} calls, 
this will cause a RuntimeException since the producer has already been closed, 
and no changelog records will be written.

Thinking about this issue, a more appropriate ordering would be (sketched in 
code below):

1. For each task, close its topology processors following the topology order 
by calling {{processor.close()}}.
2. For each task, commit its state by calling {{task.commit()}}. At this time 
all sent records should be acked since {{producer.flush()}} is called.
3. For each task, close its {{ProcessorStateManager}}.
4. Close all embedded clients, i.e. producer / consumer / restore consumer.
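
For illustration, a minimal Java sketch of this proposed ordering. The 
{{Task}} handle and its methods here are hypothetical stand-ins for the 
StreamThread internals, not the actual API:

{code}
import java.util.List;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.producer.Producer;

// Hypothetical task handle, standing in for the StreamThread internals.
interface Task {
    void closeTopology();     // calls processor.close() in topology order
    void commit();            // flushes the producer, so pending sends are acked
    void closeStateManager(); // flushes/closes state stores and checkpoints
}

class ShutdownSketch {
    static void shutdown(List<Task> tasks,
                         Producer<byte[], byte[]> producer,
                         Consumer<byte[], byte[]> consumer,
                         Consumer<byte[], byte[]> restoreConsumer) {
        // 1. Close processors first, while the producer is still open, so
        //    records written inside processor.close() do not hit a closed client.
        for (Task task : tasks)
            task.closeTopology();
        // 2. Commit each task; the producer.flush() inside commit() ensures
        //    all sent records (including those from close()) are acked.
        for (Task task : tasks)
            task.commit();
        // 3. Log-end-offsets are now final: close the state managers, which
        //    flush and close the stores and write the checkpoints.
        for (Task task : tasks)
            task.closeStateManager();
        // 4. Only then close the embedded clients.
        producer.close();
        consumer.close();
        restoreConsumer.close();
    }
}
{code}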



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4244) Update our website look & feel

2016-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546828#comment-15546828
 ] 

ASF GitHub Bot commented on KAFKA-4244:
---

GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/1967

cherry-picking KAFKA-4244 to 0.10.1.0 branch

fixing a few minor conflicts

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-4244-010

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1967.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1967


commit f74a967a54755248908c46faa8e2f9b8895766e8
Author: Gwen Shapira 
Date:   2016-10-04T21:43:17Z

fixing formating issues in docs. missing headers and lots of paragraph 
misformatting




> Update our website look & feel
> --
>
> Key: KAFKA-4244
> URL: https://issues.apache.org/jira/browse/KAFKA-4244
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>
> Our website deserves a facelift.
> This will be a multi-part change:
> 1. Changes to the web pages in our normal GitHub repo to add new headers, 
> fix some missing tags, etc.
> 2. Changes to the auto-gen code so that protocol.html is correct too
> 3. Deploy changes to the website + update the header/footer/CSS on the 
> website to actually cause the facelift.
> Please do not deploy changes to the website from our GitHub repo after #1 
> is done but before #3 is complete. Hopefully, I'll be all done by Monday.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1967: cherry-picking KAFKA-4244 to 0.10.1.0 branch

2016-10-04 Thread gwenshap
GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/1967

cherry-picking KAFKA-4244 to 0.10.1.0 branch

fixing a few minor conflicts

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-4244-010

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1967.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1967


commit f74a967a54755248908c46faa8e2f9b8895766e8
Author: Gwen Shapira 
Date:   2016-10-04T21:43:17Z

fixing formating issues in docs. missing headers and lots of paragraph 
misformatting




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4244) Update our website look & feel

2016-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546760#comment-15546760
 ] 

ASF GitHub Bot commented on KAFKA-4244:
---

GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/1966

KAFKA-4244: fixing formating issues in docs. missing headers and lots of 
paragrap…

…h misformatting

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-4244

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1966.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1966


commit d738a4e6ab434431ec12e0727a5fac720a6d0cf9
Author: Gwen Shapira 
Date:   2016-10-04T21:43:17Z

fixing formating issues in docs. missing headers and lots of paragraph 
misformatting




> Update our website look & feel
> --
>
> Key: KAFKA-4244
> URL: https://issues.apache.org/jira/browse/KAFKA-4244
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>
> Our website deserves a facelift.
> This will be a multi-part change:
> 1. Changes to the web pages in our normal GitHub repo to add new headers, 
> fix some missing tags, etc.
> 2. Changes to the auto-gen code so that protocol.html is correct too
> 3. Deploy changes to the website + update the header/footer/CSS on the 
> website to actually cause the facelift.
> Please do not deploy changes to the website from our GitHub repo after #1 
> is done but before #3 is complete. Hopefully, I'll be all done by Monday.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4251) Test driver not launching in Vagrant 1.8.6

2016-10-04 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4251.
--
   Resolution: Fixed
Fix Version/s: 0.10.2.0
   0.10.1.0

Issue resolved by pull request 1962
[https://github.com/apache/kafka/pull/1962]

> Test driver not launching in Vagrant 1.8.6
> --
>
> Key: KAFKA-4251
> URL: https://issues.apache.org/jira/browse/KAFKA-4251
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Reporter: Xavier Léauté
> Fix For: 0.10.1.0, 0.10.2.0
>
>
> The custom IP resolver in the test driver makes an incorrect assumption when 
> calling vm.communicate.execute, causing the driver to fail to launch with 
> Vagrant 1.8.6.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1966: KAFKA-4244: fixing formating issues in docs. missi...

2016-10-04 Thread gwenshap
GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/1966

KAFKA-4244: fixing formating issues in docs. missing headers and lots of 
paragrap…

…h misformatting

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-4244

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1966.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1966


commit d738a4e6ab434431ec12e0727a5fac720a6d0cf9
Author: Gwen Shapira 
Date:   2016-10-04T21:43:17Z

fixing formating issues in docs. missing headers and lots of paragraph 
misformatting




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #1962: KAFKA-4251: fix test driver not launching in Vagra...

2016-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1962


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4176) Node stopped receiving heartbeat responses once another node started within the same group

2016-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546751#comment-15546751
 ] 

ASF GitHub Bot commented on KAFKA-4176:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/1965

KAFKA-4176: Only call printStream.flush for System.out



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka 
K4176-only-flush-for-standardoutput

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1965.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1965


commit 87c8d45ee42218921ebec55101d4722245bcf1a2
Author: Guozhang Wang 
Date:   2016-10-04T21:37:54Z

Only call flush for System.out




> Node stopped receiving heartbeat responses once another node started within 
> the same group
> --
>
> Key: KAFKA-4176
> URL: https://issues.apache.org/jira/browse/KAFKA-4176
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.1
> Environment: Centos 7: 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 
> 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
> Java: java version "1.8.0_101"
> Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
> Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
>Reporter: Marek Svitok
>
> I have 3 nodes working in the same group. I started them one after the other. 
> As I can see from the log, once a node is started it receives heartbeat 
> responses for the group it is part of. However, once I start another node, 
> the former one stops receiving these responses and the new one keeps 
> receiving them. Moreover, it stops consuming any messages from previously 
> assigned partitions:
> Node0
> 03:14:36.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:39.223 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:39.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:39.429 [main-SendThread(mujsignal-03:2182)] DEBUG 
> org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
> 0x256bc1ce8c30170 after 0ms
> 03:14:39.462 [main-SendThread(mujsignal-03:2182)] DEBUG 
> org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
> 0x256bc1ce8c30171 after 0ms
> 03:14:42.224 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:42.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:45.224 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:45.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:48.224 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - Attempt 
> to heart beat failed for group test_streams_id since it is rebalancing.
> 03:14:48.224 [StreamThread-2] INFO  o.a.k.c.c.i.ConsumerCoordinator - 
> Revoking previously assigned partitions [StreamTopic-2] for group 
> test_streams_id
> 03:14:48.224 [StreamThread-2] INFO  o.a.k.s.p.internals.StreamThread - 
> Removing a task 0_2
> Node1
> 03:22:18.710 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:18.716 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:21.709 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:21.716 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:24.710 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:24.717 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:24.872 [main-SendThread(mujsignal-03:2182)] DEBUG 
> org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
> 0x256bc1ce8c30172 after 0ms
> 03:22:24.992 [main-SendThread(mujsignal-03:2182)] DEBUG 
> org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
> 

[GitHub] kafka pull request #1965: KAFKA-4176: Only call printStream.flush for System...

2016-10-04 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/1965

KAFKA-4176: Only call printStream.flush for System.out



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka 
K4176-only-flush-for-standardoutput

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1965.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1965


commit 87c8d45ee42218921ebec55101d4722245bcf1a2
Author: Guozhang Wang 
Date:   2016-10-04T21:37:54Z

Only call flush for System.out




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-80: Kafka REST Server

2016-10-04 Thread Edoardo Comar
Harsha,
thanks for opening the discussion on this KIP.

While I understand the founding members' stand that the Kafka project 
cannot expand its surface to a large number of clients,
I strongly agree with your well-explained points below and support your 
KIP.

A REST API is not just on the same level as any client; it's a basic 
building block of open web technologies.
It's the API that most of our first-time users want to try out (or that 
would-be users ask for and expect to be there).

A REST API for Kafka and a robust server implementation 
under the open governance of the Apache community would be most welcome.

+1

Edo
--
Edoardo Comar
IBM MessageHub

IBM United Kingdom Limited Registered in England and Wales with number 
741598 Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 
3AU

Harsha Chintalapani  wrote on 02/10/2016 21:23:15:

> From: Harsha Chintalapani 
> To: dev@kafka.apache.org
> Date: 02/10/2016 21:23
> Subject: Re: [DISCUSS] KIP-80: Kafka REST Server
> 
> Neha & Jay,
>  We did look at the open source alternatives. Our concern is 
> what patch acceptance and adding features/bug-fixes to the existing 
> project under a GitHub org would look like (although it's licensed under 
> Apache 2.0).
> It would be great if that project were made available under Apache and 
> driven by the community. Adding to the above, not all Kafka users are 
> interested in using the Java client API; they would like to have a simple 
> REST API which they can code against using any language. I do believe 
> this adds value to Apache Kafka in itself.
> 
> "For 1, I don't think there is value in giving in to the NIH syndrome and 
> reinventing the wheel. What I'm looking for is a detailed comparison of 
> the gaps and why those can't be improved in the REST proxy that already 
> exists and is actively maintained."
> 
> We are not looking at this as NIH. What we are worried about is a project 
> that's not maintained in a community, so the process of accepting patches 
> and setting priorities is not clear, and it's not developed in the Apache 
> community.
> Not only that, the existing REST API project doesn't support the new 
> client API, and hence there is no security support either.
> We don't know the timeline when that's made available. We would like to 
> add admin functionality into the REST API. So the roadmap of that project 
> is not driven by Apache.
> 
> "This doesn't materially have an impact on expanding the usability of 
> Kafka. In my experience, REST proxy + Java clients only cover ~50% of all 
> Kafka users, and maybe 10% of those are the ones who will use the REST 
> proxy. The remaining 50% are non-Java client users (C, python, go, node, 
> etc.)."
> 
> A REST API is the most often requested feature in my interactions with 
> Kafka users.
> In an organization, there will be independent teams who will write their 
> Kafka clients using the different language libraries available today, and 
> there is no way to standardize this. Instead of supporting several 
> different client libraries, users will be interested in using a REST API 
> server. The need for a REST API will only increase as more and more users 
> start using Kafka.
> 
> "More surface area means more work to keep things consistent. Failure 
> to do that has, in fact, hurt the user experience."
> Having myriad Kafka client GitHub projects that support different 
> languages hurts the user experience and pushes the burden of maintaining 
> these libraries.
> A REST API is a simple code base that uses the existing Java client 
> libraries to make life easier for the users.
> 
> Thanks,
> Harsha
> 
> On Sat, Oct 1, 2016 at 10:41 AM Neha Narkhede  wrote:
> 
> > Manikumar,
> >
> > Thanks for sharing the proposal. I think there are 2 parts to this
> > discussion -
> >
> > 1. Should we rewrite a REST proxy given that there is a 
> > feature-complete, open-source and actively maintained REST proxy in 
> > the community?
> > 2. Does adding a REST proxy to Apache Kafka make us more agile and 
> > maintain the high-quality experience that Kafka users have today?
> >
> > For 1, I don't think there is value in giving in to the NIH syndrome 
> > and reinventing the wheel. What I'm looking for is a detailed 
> > comparison of the gaps and why those can't be improved in the REST 
> > proxy that already exists and is actively maintained. For example, we 
> > depend on zkClient and have found as well as fixed several bugs by 
> > working closely with the people who maintain zkClient. This should be 
> > possible for REST proxy as well, right?
> >
> > For 2, I'd like us to review our history of expanding the surface area 
> > to add more clients in the past. Here is a summary -
> >
> >- This doesn't materially have an impact on expanding the usability of 
> >Kafka. In my experience, REST proxy + Java clients only cover ~50% of 
> >all Kafka users, and maybe 10% of those are the 

[jira] [Commented] (KAFKA-4010) ConfigDef.toRst() should create sections for each group

2016-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546720#comment-15546720
 ] 

ASF GitHub Bot commented on KAFKA-4010:
---

GitHub user shikhar opened a pull request:

https://github.com/apache/kafka/pull/1964

KAFKA-4010: add ConfigDef toEnrichedRst() for additional fields in output

followup on https://github.com/apache/kafka/pull/1696

cc @rekhajoshm 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shikhar/kafka kafka-4010

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1964.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1964


commit 630fd4c5220ca5c934492318037f8a493a305b01
Author: Joshi 
Date:   2016-08-03T04:08:41Z

KAFKA-4010; ConfigDef.toEnrichedRst() to have grouped sections with 
dependents info

commit b7d4e80f32714de351b3af3a26e34817258be0cc
Author: Joshi 
Date:   2016-09-08T22:24:25Z

KAFKA-4010; updated for review comments

commit 70cb9ff98f075376c1537feb8abc8fe41bea1b83
Author: Joshi 
Date:   2016-09-09T00:59:18Z

KAFKA-4010; updated for review comments

commit 4fabee350b0a7279c891d1a291fa04c346258703
Author: Joshi 
Date:   2016-09-15T19:22:55Z

KAFKA-4010; updated for review comments

commit fff244e6e523d6b506656fbacf252b6730d5ed98
Author: Joshi 
Date:   2016-09-16T05:32:52Z

KAFKA-4010; updated for review comments

commit ffc35f0f5a7fd8727465cbb5e481dfabe8c6b438
Author: Shikhar Bhushan 
Date:   2016-10-04T21:18:25Z

Merge branch 'KAFKA-4010' of https://github.com/rekhajoshm/kafka into 
kafka-4010

commit d8f1d8122a2b24442bf586e6be130970cdfda016
Author: Shikhar Bhushan 
Date:   2016-10-04T21:25:02Z

tweaks and tests




> ConfigDef.toRst() should create sections for each group
> ---
>
> Key: KAFKA-4010
> URL: https://issues.apache.org/jira/browse/KAFKA-4010
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Shikhar Bhushan
>Assignee: Rekha Joshi
>Priority: Minor
>
> Currently the ordering seems a bit arbitrary. There is a logical grouping 
> that connectors are now able to specify with the 'group' field, which we 
> should use as section headers. Also it would be good to generate {{:ref:}} 
> for each section.
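
As a sketch of what grouped definitions look like on the defining side, using 
the grouped {{define}} overloads of {{ConfigDef}} (the config names below are 
made-up examples, not from any real connector):

{code}
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigDef.Importance;
import org.apache.kafka.common.config.ConfigDef.Type;
import org.apache.kafka.common.config.ConfigDef.Width;

public class GroupedConfigExample {
    public static void main(String[] args) {
        ConfigDef def = new ConfigDef()
            // "Connection" group -> should render as one RST section
            .define("hostname", Type.STRING, Importance.HIGH,
                    "Host to connect to.", "Connection", 1, Width.MEDIUM, "Hostname")
            .define("port", Type.INT, 5432, Importance.HIGH,
                    "Port to connect to.", "Connection", 2, Width.SHORT, "Port")
            // "Batching" group -> another section
            .define("batch.size", Type.INT, 100, Importance.MEDIUM,
                    "Rows per batch.", "Batching", 1, Width.SHORT, "Batch size");

        // Grouped, sectioned output is what this ticket asks for.
        System.out.println(def.toRst());
    }
}
{code}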



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1964: KAFKA-4010: add ConfigDef toEnrichedRst() for addi...

2016-10-04 Thread shikhar
GitHub user shikhar opened a pull request:

https://github.com/apache/kafka/pull/1964

KAFKA-4010: add ConfigDef toEnrichedRst() for additional fields in output

followup on https://github.com/apache/kafka/pull/1696

cc @rekhajoshm 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shikhar/kafka kafka-4010

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1964.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1964


commit 630fd4c5220ca5c934492318037f8a493a305b01
Author: Joshi 
Date:   2016-08-03T04:08:41Z

KAFKA-4010; ConfigDef.toEnrichedRst() to have grouped sections with 
dependents info

commit b7d4e80f32714de351b3af3a26e34817258be0cc
Author: Joshi 
Date:   2016-09-08T22:24:25Z

KAFKA-4010; updated for review comments

commit 70cb9ff98f075376c1537feb8abc8fe41bea1b83
Author: Joshi 
Date:   2016-09-09T00:59:18Z

KAFKA-4010; updated for review comments

commit 4fabee350b0a7279c891d1a291fa04c346258703
Author: Joshi 
Date:   2016-09-15T19:22:55Z

KAFKA-4010; updated for review comments

commit fff244e6e523d6b506656fbacf252b6730d5ed98
Author: Joshi 
Date:   2016-09-16T05:32:52Z

KAFKA-4010; updated for review comments

commit ffc35f0f5a7fd8727465cbb5e481dfabe8c6b438
Author: Shikhar Bhushan 
Date:   2016-10-04T21:18:25Z

Merge branch 'KAFKA-4010' of https://github.com/rekhajoshm/kafka into 
kafka-4010

commit d8f1d8122a2b24442bf586e6be130970cdfda016
Author: Shikhar Bhushan 
Date:   2016-10-04T21:25:02Z

tweaks and tests




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3985) Transient system test failure ZooKeeperSecurityUpgradeTest.test_zk_security_upgrade.security_protocol

2016-10-04 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546706#comment-15546706
 ] 

Rajini Sivaram commented on KAFKA-3985:
---

[~fpj] Would it be possible to attach the full logs of the failing test to the 
JIRA? Thanks.

> Transient system test failure 
> ZooKeeperSecurityUpgradeTest.test_zk_security_upgrade.security_protocol
> -
>
> Key: KAFKA-3985
> URL: https://issues.apache.org/jira/browse/KAFKA-3985
> Project: Kafka
>  Issue Type: Test
>  Components: system tests
>Affects Versions: 0.10.0.0
>Reporter: Jason Gustafson
>
> Found this in the nightly build on the 0.10.0 branch. Full details here: 
> http://testing.confluent.io/confluent-kafka-0-10-0-system-test-results/?prefix=2016-07-22--001.1469199875--apache--0.10.0--71a598a/.
>   
> {code}
> test_id:
> 2016-07-22--001.kafkatest.tests.core.zookeeper_security_upgrade_test.ZooKeeperSecurityUpgradeTest.test_zk_security_upgrade.security_protocol=SSL
> status: FAIL
> run time:   5 minutes 14.067 seconds
> 292 acked message did not make it to the Consumer. They are: 11264, 
> 11265, 11266, 11267, 11268, 11269, 11270, 11271, 11272, 11273, 11274, 11275, 
> 11276, 11277, 11278, 11279, 11280, 11281, 11282, 11283, ...plus 252 more. 
> Total Acked: 11343, Total Consumed: 11054. We validated that the first 272 of 
> these missing messages correctly made it into Kafka's data files. This 
> suggests they were lost on their way to the consumer.
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka-0.10.0/kafka/venv/local/lib/python2.7/site-packages/ducktape/tests/runner.py",
>  line 106, in run_all_tests
> data = self.run_single_test()
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka-0.10.0/kafka/venv/local/lib/python2.7/site-packages/ducktape/tests/runner.py",
>  line 162, in run_single_test
> return self.current_test_context.function(self.current_test)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka-0.10.0/kafka/venv/local/lib/python2.7/site-packages/ducktape/mark/_mark.py",
>  line 331, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka-0.10.0/kafka/tests/kafkatest/tests/core/zookeeper_security_upgrade_test.py",
>  line 115, in test_zk_security_upgrade
> self.run_produce_consume_validate(self.run_zk_migration)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka-0.10.0/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 79, in run_produce_consume_validate
> raise e
> AssertionError: 292 acked message did not make it to the Consumer. They are: 
> 11264, 11265, 11266, 11267, 11268, 11269, 11270, 11271, 11272, 11273, 11274, 
> 11275, 11276, 11277, 11278, 11279, 11280, 11281, 11282, 11283, ...plus 252 
> more. Total Acked: 11343, Total Consumed: 11054. We validated that the first 
> 272 of these missing messages correctly made it into Kafka's data files. This 
> suggests they were lost on their way to the consumer.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4176) Node stopped receiving heartbeat responses once another node started within the same group

2016-10-04 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546686#comment-15546686
 ] 

Guozhang Wang commented on KAFKA-4176:
--

[~MarekSvitok] Thanks for filing this issue. I looked into the {{print}} 
function and found a minor issue that causes us to try to close 
{{System.out}}, which is a blocking call.

I will submit a PR to fix this issue.
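
For reference, a minimal sketch of the shape of such a fix (the field name is 
hypothetical, not the actual Streams internals):

{code}
import java.io.PrintStream;

class PrintProcessorSketch {
    private final PrintStream printStream; // either System.out or a stream we opened

    PrintProcessorSketch(PrintStream printStream) {
        this.printStream = printStream;
    }

    public void close() {
        if (printStream == System.out) {
            // Never close the shared stdout (the close can block);
            // a flush is sufficient here.
            printStream.flush();
        } else {
            // A stream we opened ourselves is safe to close (close() flushes).
            printStream.close();
        }
    }
}
{code}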

> Node stopped receiving heartbeat responses once another node started within 
> the same group
> --
>
> Key: KAFKA-4176
> URL: https://issues.apache.org/jira/browse/KAFKA-4176
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.1
> Environment: Centos 7: 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 
> 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
> Java: java version "1.8.0_101"
> Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
> Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
>Reporter: Marek Svitok
>
> I have 3 nodes working in the same group. I started them one after the other. 
> As I can see from the log, once a node is started it receives heartbeat 
> responses for the group it is part of. However, once I start another node, 
> the former one stops receiving these responses and the new one keeps 
> receiving them. Moreover, it stops consuming any messages from previously 
> assigned partitions:
> Node0
> 03:14:36.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:39.223 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:39.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:39.429 [main-SendThread(mujsignal-03:2182)] DEBUG 
> org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
> 0x256bc1ce8c30170 after 0ms
> 03:14:39.462 [main-SendThread(mujsignal-03:2182)] DEBUG 
> org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
> 0x256bc1ce8c30171 after 0ms
> 03:14:42.224 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:42.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:45.224 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:45.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:48.224 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - Attempt 
> to heart beat failed for group test_streams_id since it is rebalancing.
> 03:14:48.224 [StreamThread-2] INFO  o.a.k.c.c.i.ConsumerCoordinator - 
> Revoking previously assigned partitions [StreamTopic-2] for group 
> test_streams_id
> 03:14:48.224 [StreamThread-2] INFO  o.a.k.s.p.internals.StreamThread - 
> Removing a task 0_2
> Node1
> 03:22:18.710 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:18.716 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:21.709 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:21.716 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:24.710 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:24.717 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:24.872 [main-SendThread(mujsignal-03:2182)] DEBUG 
> org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
> 0x256bc1ce8c30172 after 0ms
> 03:22:24.992 [main-SendThread(mujsignal-03:2182)] DEBUG 
> org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
> 0x256bc1ce8c30173 after 0ms
> 03:22:27.710 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:27.717 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:30.710 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:30.716 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful 

[GitHub] kafka pull request #1963: MINOR: Update documentation for 0.10.1 release

2016-10-04 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1963

MINOR: Update documentation for 0.10.1 release



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka update-docs-for-0.10.1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1963.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1963


commit 6aea8fb67dd50dbbb85173b36c233c947511f01b
Author: Jason Gustafson 
Date:   2016-10-04T20:56:33Z

MINOR: Update documentation for 0.10.1 release




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [UPDATE] 0.10.1 Release Progress

2016-10-04 Thread Guozhang Wang
Thanks for running the release Jason!

On Mon, Oct 3, 2016 at 9:06 PM, Jason Gustafson  wrote:

> Hi Everyone,
>
> The code freeze is upon us! We've made incredible progress fixing bugs and
> improving testing. If your feature or bug fix didn't get in this time,
> don't worry since the next release will be here in a few short months. Now
> the focus is on verifying the release candidates, the first of which will
> be cut tomorrow. Only blocking bugs or significant regressions will result
> in new release candidates. The goal is to have a stable release by Oct. 17.
>
> Thanks to everyone who has contributed thus far!
>
> -Jason
>
> On Tue, Sep 27, 2016 at 6:29 PM, Sriram Subramanian 
> wrote:
>
> > Thanks Jason!
> >
> > On Tue, Sep 27, 2016 at 5:38 PM, Ismael Juma  wrote:
> >
> > > Thanks for the update Jason. :)
> > >
> > > Ismael
> > >
> > > On Wed, Sep 28, 2016 at 1:28 AM, Jason Gustafson 
> > > wrote:
> > >
> > > > Hi All,
> > > >
> > > > Pardon all the JIRA noise, but I've been busy reducing the scope to 
> > > > match the available time since we're now 6 days from the code freeze. 
> > > > I've pruned the list to about 30 tickets, some of which probably 
> > > > won't get in: 
> > > > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.1.
> > > > Other than a few important bug fixes which are nearing completion, 
> > > > the main remaining items are documentation improvements, additional 
> > > > system tests, and transient test failures. If you are looking to help 
> > > > out, addressing the transient test failures is the biggest need since 
> > > > I'm sure you've noticed all the build failures. Other than that, I 
> > > > think we're on track for the code freeze. Keep up the great work!
> > > >
> > > > Thanks,
> > > > Jason
> > > >
> > > > On Tue, Sep 20, 2016 at 3:18 PM, Mayuresh Gharat <
> > > > gharatmayures...@gmail.com
> > > > > wrote:
> > > >
> > > > > Great !
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Mayuresh
> > > > >
> > > > > On Tue, Sep 20, 2016 at 2:43 PM, Becket Qin 
> > > > wrote:
> > > > >
> > > > > > Awesome!
> > > > > >
> > > > > > On Mon, Sep 19, 2016 at 11:42 PM, Neha Narkhede <
> n...@confluent.io
> > >
> > > > > wrote:
> > > > > >
> > > > > > > Nice!
> > > > > > > On Mon, Sep 19, 2016 at 11:33 PM Ismael Juma <
> ism...@juma.me.uk>
> > > > > wrote:
> > > > > > >
> > > > > > > > Well done everyone. :)
> > > > > > > >
> > > > > > > > On 20 Sep 2016 5:23 am, "Jason Gustafson" <
> ja...@confluent.io>
> > > > > wrote:
> > > > > > > >
> > > > > > > > > Thanks everyone for the hard work! The 0.10.1 release
> branch
> > > has
> > > > > been
> > > > > > > > > created. We're now entering the stabilization phase of this
> > > > release
> > > > > > > which
> > > > > > > > > means we'll focus on bug fixes and testing.
> > > > > > > > >
> > > > > > > > > -Jason
> > > > > > > > >
> > > > > > > > > On Fri, Sep 16, 2016 at 5:00 PM, Jason Gustafson <
> > > > > ja...@confluent.io
> > > > > > >
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Hi All,
> > > > > > > > > >
> > > > > > > > > > Thanks everyone for the hard work! Here's an update on
> the
> > > > > > remaining
> > > > > > > > KIPs
> > > > > > > > > > that we are hoping to include:
> > > > > > > > > >
> > > > > > > > > > KIP-78 (clusterId): Review is basically complete.
> Assuming
> > no
> > > > > > > problems
> > > > > > > > > > emerge, Ismael is planning to merge today.
> > > > > > > > > > KIP-74 (max fetch size): Review is nearing completion,
> > just a
> > > > few
> > > > > > > minor
> > > > > > > > > > issues remain. This will probably be merged tomorrow or
> > > Sunday.
> > > > > > > > > > KIP-55 (secure quotas): The patch has been rebased and
> > > probably
> > > > > > needs
> > > > > > > > one
> > > > > > > > > > more review pass before merging. Jun is confident it can
> > get
> > > in
> > > > > > > before
> > > > > > > > > > Monday.
> > > > > > > > > >
> > > > > > > > > > As for KIP-79, I've made one review pass, but to make it
> > in,
> > > > > we'll
> > > > > > > need
> > > > > > > > > 1)
> > > > > > > > > > some more votes on the vote thread, and 2) a few review
> > > > > iterations.
> > > > > > > > It's
> > > > > > > > > > looking a bit doubtful, but let's see how it goes!
> > > > > > > > > >
> > > > > > > > > > Since we are nearing the feature freeze, it would be
> > helpful
> > > if
> > > > > > > people
> > > > > > > > > > begin setting some priorities on the bugs that need to
> get
> > in
> > > > > > before
> > > > > > > > the
> > > > > > > > > > code freeze. I am going to make an effort to prune the
> list
> > > > early
> > > > > > > next
> > > > > > > > > > week, so if there are any critical issues you know about,
> > > make
> > > > > sure
> > > > > > > > they
> > > > > > > > > > are marked as such.
> > > > > > > > > >
> > > > > > > > > > Thanks,
> > > > > > > 

[jira] [Updated] (KAFKA-4252) Missing ProducerRequestPurgatory

2016-10-04 Thread Narendra Bidari (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Narendra Bidari updated KAFKA-4252:
---
Attachment: Screen Shot 2016-10-04 at 1.34.00 PM.png

> Missing ProducerRequestPurgatory
> 
>
> Key: KAFKA-4252
> URL: https://issues.apache.org/jira/browse/KAFKA-4252
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, documentation, metrics, producer 
>Affects Versions: 0.9.0.1, 0.10.0.0, 0.10.0.1
>Reporter: Narendra Bidari
>Priority: Minor
> Attachments: Screen Shot 2016-10-04 at 1.34.00 PM.png
>
>
> Most of the docs indicate that ProducerRequestPurgatory and 
> FetchRequestPurgatory MBeans should be present. But when I look at the 
> MBeans there is no bean by that name. The name which is present is 
> *DelayedOperationPurgatory*.
> Are DelayedOperationPurgatory and ProducerRequestPurgatory the same?
> https://github.com/apache/kafka/blob/d2a267b111d23d6b98f2784382095b9ae5ddf886/docs/ops.html
> http://docs.confluent.io/1.0/kafka/monitoring.html
> https://kafka.apache.org/08/ops.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4252) Missing ProducerRequestPurgatory

2016-10-04 Thread Narendra Bidari (JIRA)
Narendra Bidari created KAFKA-4252:
--

 Summary: Missing ProducerRequestPurgatory
 Key: KAFKA-4252
 URL: https://issues.apache.org/jira/browse/KAFKA-4252
 Project: Kafka
  Issue Type: Bug
  Components: consumer, documentation, metrics, producer 
Affects Versions: 0.10.0.1, 0.10.0.0, 0.9.0.1
Reporter: Narendra Bidari
Priority: Minor


Most of the docs indicate that ProducerRequestPurgatory and FetchRequestPurgatory 
MBeans should be present. But when I look at the MBeans there is no bean by that 
name. The name which is present is *DelayedOperationPurgatory*.

Are DelayedOperationPurgatory and ProducerRequestPurgatory the same?


https://github.com/apache/kafka/blob/d2a267b111d23d6b98f2784382095b9ae5ddf886/docs/ops.html
http://docs.confluent.io/1.0/kafka/monitoring.html
https://kafka.apache.org/08/ops.html
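
For anyone checking this over JMX, a small sketch. The exact ObjectName is an 
assumption based on the bean names visible on 0.9+/0.10 brokers; it reads the 
produce purgatory size in-process via the platform MBean server (for a remote 
broker you would attach via {{JMXConnectorFactory}} and the broker's JMX port 
instead):

{code}
import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;

public class PurgatorySizeCheck {
    public static void main(String[] args) throws Exception {
        // Assumed name of the produce purgatory gauge on newer brokers; the
        // ProducerRequestPurgatory bean from the 0.8 docs is no longer registered.
        ObjectName name = new ObjectName(
                "kafka.server:type=DelayedOperationPurgatory,"
                + "delayedOperation=Produce,name=PurgatorySize");

        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Yammer gauges expose their reading via the "Value" attribute.
        System.out.println("PurgatorySize = " + server.getAttribute(name, "Value"));
    }
}
{code}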




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #946

2016-10-04 Thread Apache Jenkins Server
See 

Changes:

[jason] MINOR: Tweak upgrade note on KIP-62 to include request.timeout.ms

[jason] KAFKA-4165; Add 0.10.0.1 as a source for compatibility tests

--
[...truncated 14050 lines...]
org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValueDefaultSerde PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testNotSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testKTable STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testValueGetter 
STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.internals.KTableMapKeysTest > 
testMapKeysConvertingToStream STARTED

org.apache.kafka.streams.kstream.internals.KTableMapKeysTest > 
testMapKeysConvertingToStream PASSED

org.apache.kafka.streams.kstream.internals.KStreamForeachTest > testForeach 
STARTED

org.apache.kafka.streams.kstream.internals.KStreamForeachTest > testForeach 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > testJoin 
STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testNotSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testOuterJoin STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testOuterJoin PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > testJoin 
STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testWindowing STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testWindowing PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testAsymetricWindowingBefore STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testAsymetricWindowingBefore PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testAsymetricWindowingAfter STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testAsymetricWindowingAfter PASSED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
testFlatMapValues STARTED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
testFlatMapValues PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > testJoin 
STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testNotSendingOldValues STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testNotSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testSendingOldValues STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testAggBasic 
STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testAggBasic 
PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testCount 
STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testCount 
PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggCoalesced STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggCoalesced PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggRepartition STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggRepartition PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testRemoveOldBeforeAddNew STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testRemoveOldBeforeAddNew PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testCountCoalesced STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testCountCoalesced PASSED

org.apache.kafka.streams.kstream.internals.KStreamFilterTest > testFilterNot 
STARTED


Build failed in Jenkins: kafka-0.10.1-jdk7 #44

2016-10-04 Thread Apache Jenkins Server
See 

Changes:

[jason] MINOR: Tweak upgrade note on KIP-62 to include request.timeout.ms

--
[...truncated 3814 lines...]

kafka.coordinator.GroupMetadataManagerTest > testExpireGroup STARTED

kafka.coordinator.GroupMetadataManagerTest > testExpireGroup PASSED

kafka.coordinator.GroupMetadataManagerTest > testAddGroup STARTED

kafka.coordinator.GroupMetadataManagerTest > testAddGroup PASSED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffset STARTED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffset PASSED

kafka.coordinator.GroupMetadataManagerTest > testStoreGroupErrorMapping STARTED

kafka.coordinator.GroupMetadataManagerTest > testStoreGroupErrorMapping PASSED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffsetFailure STARTED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffsetFailure PASSED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffset STARTED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffset PASSED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffsetsWithActiveGroup 
STARTED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffsetsWithActiveGroup 
PASSED

kafka.coordinator.GroupMetadataManagerTest > testStoreEmptyGroup STARTED

kafka.coordinator.GroupMetadataManagerTest > testStoreEmptyGroup PASSED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols STARTED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols PASSED

kafka.coordinator.MemberMetadataTest > testMetadata STARTED

kafka.coordinator.MemberMetadataTest > testMetadata PASSED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol 
STARTED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol 
PASSED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol STARTED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol PASSED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
STARTED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.coordinator.GroupMetadataTest > testDeadToAwaitingSyncIllegalTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testDeadToAwaitingSyncIllegalTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testOffsetCommitFailure STARTED

kafka.coordinator.GroupMetadataTest > testOffsetCommitFailure PASSED

kafka.coordinator.GroupMetadataTest > 
testPreparingRebalanceToStableIllegalTransition STARTED

kafka.coordinator.GroupMetadataTest > 
testPreparingRebalanceToStableIllegalTransition PASSED

kafka.coordinator.GroupMetadataTest > testStableToDeadTransition STARTED

kafka.coordinator.GroupMetadataTest > testStableToDeadTransition PASSED

kafka.coordinator.GroupMetadataTest > testInitNextGenerationEmptyGroup STARTED

kafka.coordinator.GroupMetadataTest > testInitNextGenerationEmptyGroup PASSED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenDead STARTED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenDead PASSED

kafka.coordinator.GroupMetadataTest > testInitNextGeneration STARTED

kafka.coordinator.GroupMetadataTest > testInitNextGeneration PASSED

kafka.coordinator.GroupMetadataTest > testPreparingRebalanceToEmptyTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testPreparingRebalanceToEmptyTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testSelectProtocol STARTED

kafka.coordinator.GroupMetadataTest > testSelectProtocol PASSED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenPreparingRebalance 
STARTED

kafka.coordinator.GroupMetadataTest > testCannotRebalanceWhenPreparingRebalance 
PASSED

kafka.coordinator.GroupMetadataTest > 
testDeadToPreparingRebalanceIllegalTransition STARTED

kafka.coordinator.GroupMetadataTest > 
testDeadToPreparingRebalanceIllegalTransition PASSED

kafka.coordinator.GroupMetadataTest > testCanRebalanceWhenAwaitingSync STARTED

kafka.coordinator.GroupMetadataTest > testCanRebalanceWhenAwaitingSync PASSED

kafka.coordinator.GroupMetadataTest > 
testAwaitingSyncToPreparingRebalanceTransition STARTED

kafka.coordinator.GroupMetadataTest > 
testAwaitingSyncToPreparingRebalanceTransition PASSED

kafka.coordinator.GroupMetadataTest > testStableToAwaitingSyncIllegalTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testStableToAwaitingSyncIllegalTransition 
PASSED

kafka.coordinator.GroupMetadataTest > testEmptyToDeadTransition STARTED

kafka.coordinator.GroupMetadataTest > testEmptyToDeadTransition PASSED

kafka.coordinator.GroupMetadataTest > testSelectProtocolRaisesIfNoMembers 
STARTED

kafka.coordinator.GroupMetadataTest > testSelectProtocolRaisesIfNoMembers PASSED

kafka.coordinator.GroupMetadataTest > testStableToPreparingRebalanceTransition 
STARTED

kafka.coordinator.GroupMetadataTest > testStableToPreparingRebalanceTransition 
PASSED


[jira] [Commented] (KAFKA-4251) Test driver not launching in Vagrant 1.8.6

2016-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546304#comment-15546304
 ] 

ASF GitHub Bot commented on KAFKA-4251:
---

GitHub user xvrl opened a pull request:

https://github.com/apache/kafka/pull/1962

KAFKA-4251: fix test driver not launching in Vagrant 1.8.6

The custom IP resolver in the test driver makes an incorrect assumption when 
calling vm.communicate.execute, causing the driver to fail to launch with 
Vagrant 1.8.6, due to https://github.com/mitchellh/vagrant/pull/7676

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xvrl/kafka fix-vagrant-resolver

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1962.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1962


commit eb0924184d5b5d29e1f818e2223944e74eb8e8ce
Author: Xavier Léauté 
Date:   2016-10-04T18:44:09Z

KAFKA-4251: fix incorrect assumptions when calling execute in Vagrantfile




> Test driver not launching in Vagrant 1.8.6
> --
>
> Key: KAFKA-4251
> URL: https://issues.apache.org/jira/browse/KAFKA-4251
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Reporter: Xavier Léauté
>
> The custom IP resolver in the test driver makes an incorrect assumption when 
> calling vm.communicate.execute, causing the driver to fail to launch with 
> Vagrant 1.8.6.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1962: KAFKA-4251: fix test driver not launching in Vagra...

2016-10-04 Thread xvrl
GitHub user xvrl opened a pull request:

https://github.com/apache/kafka/pull/1962

KAFKA-4251: fix test driver not launching in Vagrant 1.8.6

The custom IP resolver in the test driver makes an incorrect assumption when 
calling vm.communicate.execute, causing the driver to fail to launch with 
Vagrant 1.8.6, due to https://github.com/mitchellh/vagrant/pull/7676

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xvrl/kafka fix-vagrant-resolver

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1962.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1962


commit eb0924184d5b5d29e1f818e2223944e74eb8e8ce
Author: Xavier Léauté 
Date:   2016-10-04T18:44:09Z

KAFKA-4251: fix incorrect assumptions when calling execute in Vagrantfile




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-4251) Test driver not launching in Vagrant 1.8.6

2016-10-04 Thread JIRA
Xavier Léauté created KAFKA-4251:


 Summary: Test driver not launching in Vagrant 1.8.6
 Key: KAFKA-4251
 URL: https://issues.apache.org/jira/browse/KAFKA-4251
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Reporter: Xavier Léauté


The custom IP resolver in the test driver makes an incorrect assumption when 
calling vm.communicate.execute, causing the driver to fail to launch with 
Vagrant 1.8.6.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-4246) Discretionary partition assignment on the consumer side not functional

2016-10-04 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546229#comment-15546229
 ] 

Vahid Hashemian edited comment on KAFKA-4246 at 10/4/16 6:29 PM:
-

I'll try to run this in my environment to see if I can reproduce the issue. In 
the meantime, can you confirm if you have been able to reproduce the warning 
message in a fresh cluster?

Also, on your last point about the potential conflict between subscribing to 
topic and assigning partitions to the same consumer, there is at least [one 
unit 
test|https://github.com/apache/kafka/blob/a23859e5686bf93ed7e0d310f949757694d47a1b/clients/src/test/java/org/apache/kafka/clients/consumer/KafkaConsumerTest.java#L144]
 in the code that does both for the same consumer without an issue. The only 
thing necessary between the two actions is an {{unsubscribe()}} call.
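
To make that concrete, a minimal sketch of the pattern (the topic and 
partition are hypothetical, and the consumer is assumed to be configured 
elsewhere):

{code}
import java.util.Arrays;
import java.util.Collections;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SubscribeThenAssign {
    static void switchModes(KafkaConsumer<String, String> consumer) {
        // Group-managed subscription first...
        consumer.subscribe(Collections.singletonList("my-topic"));
        consumer.poll(100);

        // ...then the one call needed before switching to manual assignment.
        consumer.unsubscribe();

        consumer.assign(Arrays.asList(new TopicPartition("my-topic", 0)));
        consumer.poll(100);
        consumer.commitSync();
    }
}
{code}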


was (Author: vahid):
I'll try to run this in my environment to see if I can reproduce the issue. In 
the meantime, can you confirm if you have been able to reproduce the warning 
message in a fresh cluster?

Also, on your last point about the potential conflict between subscribing to 
topic and assigning partitions to the same consumer, there is at least [one 
unit 
test|https://github.com/apache/kafka/blob/trunk/clients/src/test/java/org/apache/kafka/clients/consumer/KafkaConsumerTest.java#L144]
 in the code that does both for the same consumer without an issue. The only 
thing necessary between the two actions is an {{unsubscribe()}} call.

> Discretionary partition assignment on the consumer side not functional
> --
>
> Key: KAFKA-4246
> URL: https://issues.apache.org/jira/browse/KAFKA-4246
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.1
>Reporter: Alexandru Ionita
>
> Trying to manually assign partitions/topics to a consumer does not work 
> correctly. The consumer is able to fetch records from the given partitions, 
> but the first commit fails with the following message:
> {code}
> 2016-10-03 13:44:50.673 DEBUG 11757 --- [pool-9-thread-1] 
> o.a.k.c.c.internals.ConsumerCoordinator  : Offset commit for group XX 
> failed: The coordinator is not aware of this member.
> 2016-10-03 13:44:50.673  WARN 11757 --- [pool-9-thread-1] 
> o.a.k.c.c.internals.ConsumerCoordinator  : Auto offset commit failed for 
> group XX: Commit cannot be completed since the group has already 
> rebalanced and assigned the partitions to another member. This means that the 
> time between subsequent calls to poll() was longer than the configured 
> session.timeout.ms, which typically implies that the poll loop is spending 
> too much time message processing. You can address this either by increasing 
> the session timeout or by reducing the maximum size of batches returned in 
> poll() with max.poll.records.
> {code}.
> All this while the consumer continues to poll records from the Kafka 
> cluster, but every commit fails with the same message.
> I tried setting {{session.timeout.ms}} to values like 5, but I was 
> getting the same outcome => no successful commits.
> If I only switch from {{consumer.assign( subscribedPartitions )}} to 
> {{consumer.subscribe( topics )}}, everything works as expected. No other 
> client configuration needs to be changed to make it work.
> Am I missing something here?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4250) make ProducerRecord and ConsumerRecord extensible

2016-10-04 Thread radai rosenblatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546226#comment-15546226
 ] 

radai rosenblatt commented on KAFKA-4250:
-

PR up - https://github.com/apache/kafka/pull/1961

> make ProducerRecord and ConsumerRecord extensible
> -
>
> Key: KAFKA-4250
> URL: https://issues.apache.org/jira/browse/KAFKA-4250
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.10.0.1
>Reporter: radai rosenblatt
>
> KafkaProducer and KafkaConsumer implement interfaces and are designed to be 
> extensible (or at least to allow it).
> ProducerRecord and ConsumerRecord, however, are final, making extending the 
> producer/consumer very difficult.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4250) make ProducerRecord and ConsumerRecord extensible

2016-10-04 Thread radai rosenblatt (JIRA)
radai rosenblatt created KAFKA-4250:
---

 Summary: make ProducerRecord and ConsumerRecord extensible
 Key: KAFKA-4250
 URL: https://issues.apache.org/jira/browse/KAFKA-4250
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 0.10.0.1
Reporter: radai rosenblatt


KafkaProducer and KafkaConsumer implement interfaces and are designed to be 
extensible (or at least to allow it).

ProducerRecord and ConsumerRecord, however, are final, making extending the 
producer/consumer very difficult.
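
Until that changes, one common workaround is composition rather than 
inheritance; a sketch (the envelope class below is made up for illustration, 
not a proposed Kafka API):

{code}
import org.apache.kafka.clients.producer.ProducerRecord;

// Wraps the final ProducerRecord instead of extending it.
public class AuditedRecord<K, V> {
    private final ProducerRecord<K, V> record;
    private final String auditId;

    public AuditedRecord(ProducerRecord<K, V> record, String auditId) {
        this.record = record;
        this.auditId = auditId;
    }

    public ProducerRecord<K, V> record() { return record; }
    public String auditId() { return auditId; }
}
{code}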



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)