[GitHub] kafka pull request: MINOR: Use `EasyMock.newCapture` instead of de...

2015-08-18 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/149

MINOR: Use `EasyMock.newCapture` instead of deprecated `new Capture`



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka fix-easy-mock-deprecations

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/149.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #149


commit 1b4172e666ca3b7714a2b3e4d836d8b709e504d6
Author: Ismael Juma ism...@juma.me.uk
Date:   2015-08-18T10:26:00Z

MINOR: Use `EasyMock.newCapture` instead of deprecated `new Capture`




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2439) Add MirrorMakerService to ducktape system tests

2015-08-18 Thread Geoffrey Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Anderson updated KAFKA-2439:
-
Reviewer: Ewen Cheslack-Postava

 Add MirrorMakerService to ducktape system tests
 ---

 Key: KAFKA-2439
 URL: https://issues.apache.org/jira/browse/KAFKA-2439
 Project: Kafka
  Issue Type: Sub-task
  Components: system tests
Reporter: Geoffrey Anderson
Assignee: Geoffrey Anderson
 Fix For: 0.8.3






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2439: Add MirrorMaker service class for ...

2015-08-18 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/148

KAFKA-2439: Add MirrorMaker service class for system tests

Added MirrorMaker service and a few corresponding sanity checks, as well as 
necessary config template files. A few additional updates to accommodate the 
change in wait_until from ducktape 0.2.0 to 0.3.0
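The `wait_until` utility the description refers to is a generic polling helper. The sketch below illustrates the pattern only; it is not ducktape's actual implementation, and the exact signature (`condition`, `timeout_sec`, `backoff_sec`, `err_msg`) is an assumption.

```python
import time

def wait_until(condition, timeout_sec, backoff_sec=0.5, err_msg="Condition not met"):
    """Poll `condition` until it returns truthy or `timeout_sec` elapses.

    Raises TimeoutError with `err_msg` if the condition never becomes true.
    (Illustrative sketch of a ducktape-style helper, not the real API.)
    """
    deadline = time.time() + timeout_sec
    while time.time() < deadline:
        if condition():
            return
        time.sleep(backoff_sec)
    raise TimeoutError(err_msg)
```

A system-test service would call this after starting a process, e.g. to block until a MirrorMaker node reports itself alive before running sanity checks.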

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka KAFKA-2439

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/148.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #148


commit 1b4b04935eafa2e93583ea5683c2e8851ed43476
Author: Geoff Anderson ge...@confluent.io
Date:   2015-08-18T08:15:00Z

Added MirrorMaker service and a few corresponding sanity checks, as well as 
necessary config template files. A few additional updates to accommodate the 
change in wait_until from ducktape 0.2.0 to 0.3.0






[jira] [Commented] (KAFKA-2439) Add MirrorMakerService to ducktape system tests

2015-08-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14700909#comment-14700909
 ] 

ASF GitHub Bot commented on KAFKA-2439:
---

GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/148

KAFKA-2439: Add MirrorMaker service class for system tests

Added MirrorMaker service and a few corresponding sanity checks, as well as 
necessary config template files. A few additional updates to accommodate the 
change in wait_until from ducktape 0.2.0 to 0.3.0

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka KAFKA-2439

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/148.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #148


commit 1b4b04935eafa2e93583ea5683c2e8851ed43476
Author: Geoff Anderson ge...@confluent.io
Date:   2015-08-18T08:15:00Z

Added MirrorMaker service and a few corresponding sanity checks, as well as 
necessary config template files. A few additional updates to accommodate the 
change in wait_until from ducktape 0.2.0 to 0.3.0




 Add MirrorMakerService to ducktape system tests
 ---

 Key: KAFKA-2439
 URL: https://issues.apache.org/jira/browse/KAFKA-2439
 Project: Kafka
  Issue Type: Sub-task
  Components: system tests
Reporter: Geoffrey Anderson
Assignee: Geoffrey Anderson
 Fix For: 0.8.3








Re: [DISCUSS] Client-side Assignment for New Consumer

2015-08-18 Thread Jason Gustafson
Hi Jun,

Answers below:

1. When there are multiple common protocols in the JoinGroupRequest, which
one would the coordinator pick?

I was intending to use the list to indicate preference. If all group
members support protocols [A, B] in that order, then we will choose
A. If some support [B, A], then we would either choose based on
respective counts or just randomly. The main use case of supporting the
list is for rolling upgrades when a change is made to the assignment
strategy. In that case, the new assignment strategy would be listed first
in the upgraded client. I think it's debatable whether this feature would
get much use in practice, so we might consider dropping it.
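The preference-order selection described above can be sketched as follows. The function name and the rank-summing tie-break are illustrative stand-ins, not the coordinator's actual logic.

```python
from collections import defaultdict

def choose_protocol(member_preferences):
    """Pick a protocol supported by every member, preferring protocols
    listed earlier in each member's ordered preference list.

    member_preferences: one ordered list of protocol names per member.
    Returns the common protocol with the lowest total preference rank
    (sum of list positions); raises ValueError if no protocol is common.
    """
    common = set(member_preferences[0])
    for prefs in member_preferences[1:]:
        common &= set(prefs)
    if not common:
        raise ValueError("no common protocol among group members")
    # Sum each protocol's index across members; a lower total means the
    # group as a whole prefers it. Ties break on protocol name.
    rank = defaultdict(int)
    for prefs in member_preferences:
        for i, proto in enumerate(prefs):
            if proto in common:
                rank[proto] += i
    return min(common, key=lambda p: (rank[p], p))
```

During a rolling upgrade, upgraded members list the new strategy first; once a majority has upgraded, the rank sum tips the selection toward the new strategy while older members can still participate.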

2. If the protocols don't agree, the group construction fails. What exactly
does it mean? Do we send an error in every JoinGroupResponse and remove all
members in the group in the coordinator?

Yes, that is right. It would be handled similarly to inconsistent
assignment strategies in the current protocol. The coordinator returns an
error in each join group response, and the client propagates the error to
the user.

3. Consumer embedded protocol: The proposal has two different formats of
subscription depending on whether wildcards are used or not. This seems a
bit complicated. Would it be better to always use the metadata hash? The
clients know the subscribed topics already. This way, the client code
behaves the same whether wildcards are used or not.

Yeah, I think this is possible (Neha also suggested it). I haven't updated
the wiki yet, but the patch I started working on uses only the metadata
hash. In the case that an explicit topic list is provided, the hash just
covers the metadata for those topics.
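The metadata-hash idea can be sketched like this. The choice of SHA-256 and the representation of metadata as a topic-to-partition-count mapping are assumptions for illustration, not the wire format the proposal specifies.

```python
import hashlib

def metadata_hash(topic_metadata):
    """Deterministic digest of subscribed-topic metadata.

    topic_metadata: dict mapping topic name -> partition count (a stand-in
    for whatever metadata fields the real protocol would cover). Entries
    are sorted so members with identical metadata views produce identical
    digests, regardless of how the subscription was expressed.
    """
    h = hashlib.sha256()
    for topic in sorted(topic_metadata):
        h.update(topic.encode("utf-8"))
        h.update(b"\x00")  # field separator avoids ambiguous concatenation
        h.update(str(topic_metadata[topic]).encode("utf-8"))
        h.update(b"\x00")
    return h.hexdigest()
```

Whether the subscription is an explicit topic list or a wildcard, members can compare digests to detect divergent metadata views, which is what lets the client code behave the same in both cases.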


Thanks,
Jason



On Tue, Aug 18, 2015 at 10:06 AM, Jun Rao j...@confluent.io wrote:

 Jason,

 Thanks for the writeup. A few comments below.

 1. When there are multiple common protocols in the JoinGroupRequest, which
 one would the coordinator pick?
 2. If the protocols don't agree, the group construction fails. What exactly
 does it mean? Do we send an error in every JoinGroupResponse and remove all
 members in the group in the coordinator?
 3. Consumer embedded protocol: The proposal has two different formats of
 subscription depending on whether wildcards are used or not. This seems a
 bit complicated. Would it be better to always use the metadata hash? The
 clients know the subscribed topics already. This way, the client code
 behaves the same whether wildcards are used or not.

 Jiangjie,

 With respect to rebalance churn due to topics being created/deleted: with
 the new consumer, the rebalance can probably settle within 200ms when there
 is a topic change. So, as long as we are not changing topics more than 5
 times per second, there shouldn't be constant churn, right?

 Thanks,

 Jun



 On Tue, Aug 11, 2015 at 1:19 PM, Jason Gustafson ja...@confluent.io
 wrote:

  Hi Kafka Devs,
 
  One of the nagging issues in the current design of the new consumer has
  been the need to support a variety of assignment strategies. We've
  encountered this in particular in the design of copycat and the
 processing
  framework (KIP-28). From what I understand, Samza also has a number of
 use
  cases with custom assignment needs. The new consumer protocol supports
 new
  assignment strategies by hooking them into the broker. For many
  environments, this is a major pain and in some cases, a non-starter. It
  also challenges the validation that the coordinator can provide. For
  example, some assignment strategies call for partitions to be assigned
  multiple times, which means that the coordinator can only check that
  partitions have been assigned at least once.
 
  To solve these issues, we'd like to propose moving assignment to the
  client. I've written a wiki which outlines some protocol changes to
 achieve
  this:
 
 
 https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal
  .
  To summarize briefly, instead of the coordinator assigning the partitions
  itself, all subscriptions are forwarded to each member of the group which
  then decides independently which partitions it should consume. The
 protocol
  provides a mechanism for the coordinator to validate that all consumers
 use
  the same assignment strategy, but it does not ensure that the resulting
  assignment is correct. This provides a powerful capability for users to
  control the full data flow on the client side. They control how data is
  written to partitions through the Partitioner interface and they control
  how data is consumed through the assignment strategy, all without
 touching
  the server.
 
  Of course nothing comes for free. In particular, this change removes the
  ability of the coordinator to validate that commits are made by consumers
  who were assigned the respective partition. This might not be too bad
 since
  we retain the ability to validate the generation id, but it is a
 potential
  concern. We have considered 

Re: [DISCUSSION] Kafka 0.8.2.2 release?

2015-08-18 Thread Edward Ribeiro
I tend to agree with Daniel Nelson on this issue: cut a 0.8.2.2 release but
don't include much besides the Snappy fixes. I mean, include a couple of
additional critical bug fixes if really urgent, and that's it.

On Tue, Aug 18, 2015 at 3:25 PM, Daniel Nelson daniel.nel...@vungle.com
wrote:

 I am strongly in favor of cutting a 0.8.2.2 release, but I don’t think
 that it needs to include anything other than the fix for Snappy that kicked
 off this discussion in the first place. If there are additional critical
 issues that can be included without delaying the process, I see no downside.

  On Aug 18, 2015, at 10:54 AM, Neha Narkhede n...@confluent.io wrote:
 
  How about looking at the scope for the 0.8.3 release first before we cut
  yet another point release off of 0.8.2.2? Each release includes some
  overhead and if there is a larger release in the works, it might be worth
  working on getting that. My take is that the 2 things the community has
  been waiting for is SSL support and the new consumer and we have been
  promising to get 0.8.3 with both those features for several months now.
 
  Looking at the progress on both, it seems we are very close to getting
 both
  those checked in and it looks like we should get there in another 5-6
  weeks. Furthermore, both of these features are large and I anticipate us
  receiving feedback and bugs that will require a couple of point releases
 on
  top of 0.8.3 anyway. One possibility is to work on 0.8.3 together now and
  get the community to use the newly released features, gather feedback and
  do point releases incorporating that feedback and iterate on it.
 
  We could absolutely do both 0.8.2.2 and 0.8.3. What I'd ask for is for us
  to look at the 0.8.3 timeline too and make a call whether 0.8.2.2 still
  makes sense.
 
  Thanks,
  Neha
 
  On Tue, Aug 18, 2015 at 10:24 AM, Gwen Shapira g...@confluent.io
 wrote:
 
  Thanks Jun.
 
  I updated the list with your suggestions.
  If anyone feels we are missing a critical patch for 0.8.2.2, please
 speak
  up.
 
  Gwen
 
  On Mon, Aug 17, 2015 at 5:40 PM, Jun Rao j...@confluent.io wrote:
 
  Hi, Grant,
 
  I took a look at that list. None of those is really critical as you
 said.
  So, I'd suggest that we not include those to minimize the scope of the
  release.
 
  Thanks,
 
  Jun
 
  On Mon, Aug 17, 2015 at 5:16 PM, Grant Henke ghe...@cloudera.com
  wrote:
 
  Thanks Gwen.
 
  I updated a few small things on the wiki page.
 
  Below is a list of jiras I think could also be marked as included. All
  of
  these, though not super critical, seem like fairly small and low risk
  changes that help avoid potentially confusing issues or errors for
  users.
 
  KAFKA-2012
  KAFKA-972
  KAFKA-2337 & KAFKA-2393
  KAFKA-1867
  KAFKA-2407
  KAFKA-2234
  KAFKA-1866
  KAFKA-2345 & KAFKA-2355
 
  thoughts?
 
  Thank you,
  Grant
 
  On Mon, Aug 17, 2015 at 4:56 PM, Gwen Shapira g...@confluent.io
  wrote:
 
  Thanks for creating a list, Grant!
 
  I placed it on the wiki with a quick evaluation of the content and
  whether
  it should be in 0.8.2.2:
 
 
 
 
 
 https://cwiki.apache.org/confluence/display/KAFKA/Proposed+patches+for+0.8.2.2
 
  I'm attempting to only cherrypick fixes that are both important for
  large
  number of users (or very critical to some users) and very safe
  (mostly
  judged by the size of the change, but not only)
 
  If your favorite bugfix is missing from the list, or is there but
  marked
  No, please let us know (in this thread) what we are missing and why
  it
  is
  both important and safe.
  Also, if I accidentally included something you consider unsafe, speak
  up!
 
  Gwen
 
  On Mon, Aug 17, 2015 at 8:17 AM, Grant Henke ghe...@cloudera.com
  wrote:
 
  +dev
 
  Adding dev list back in. Somehow it got dropped.
 
 
  On Mon, Aug 17, 2015 at 10:16 AM, Grant Henke ghe...@cloudera.com
 
  wrote:
 
  Below is a list of candidate bug fix jiras marked fixed for
  0.8.3.
  I
  don't
  suspect all of these will (or should) make it into the release
  but
  this
  should be a relatively complete list to work from:
 
- KAFKA-2114 
  https://issues.apache.org/jira/browse/KAFKA-2114
  :
  Unable
to change min.insync.replicas default
- KAFKA-1702 
  https://issues.apache.org/jira/browse/KAFKA-1702
  :
Messages silently Lost by producer
- KAFKA-2012 
  https://issues.apache.org/jira/browse/KAFKA-2012
  :
Broker should automatically handle corrupt index files
- KAFKA-2406 
  https://issues.apache.org/jira/browse/KAFKA-2406
  :
  ISR
propagation should be throttled to avoid overwhelming
  controller.
- KAFKA-2336 
  https://issues.apache.org/jira/browse/KAFKA-2336
  :
Changing offsets.topic.num.partitions after the offset topic
  is
  created
breaks consumer group partition assignment
- KAFKA-2337 
  https://issues.apache.org/jira/browse/KAFKA-2337
  :
  Verify
that metric names will not collide when creating new topics
- KAFKA-2393 
  

[jira] [Updated] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-08-18 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2308:

Fix Version/s: 0.8.2.2

 New producer + Snappy face un-compression errors after broker restart
 -

 Key: KAFKA-2308
 URL: https://issues.apache.org/jira/browse/KAFKA-2308
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.8.3, 0.8.2.2

 Attachments: KAFKA-2308.patch


 Looks like the new producer, when used with Snappy, is sending messages 
 that the brokers can't decompress following a broker restart. This issue was 
 discussed in a few mailing list threads, but I don't think we ever resolved it.
 I can reproduce with trunk and Snappy 1.1.1.7. 
 To reproduce:
 1. Start 3 brokers
 2. Create a topic with 3 partitions and 3 replicas each.
 3. Start the performance producer with --new-producer --compression-codec 2 (and 
 set the number of messages fairly high, to give you time. I went with 10M)
 4. Bounce one of the brokers
 5. The log of one of the surviving nodes should contain errors like:
 {code}
 2015-07-02 13:45:59,300 ERROR kafka.server.ReplicaManager: [Replica Manager 
 on Broker 66]: Error processing append operation on partition [t3,0]
 kafka.common.KafkaException:
 at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:94)
 at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:64)
 at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at kafka.message.ByteBufferMessageSet$$anon$2.innerDone(ByteBufferMessageSet.scala:177)
 at kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:218)
 at kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:173)
 at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
 at scala.collection.Iterator$class.foreach(Iterator.scala:727)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
 at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
 at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
 at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
 at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
 at scala.collection.AbstractIterator.to(Iterator.scala:1157)
 at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
 at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
 at kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:267)
 at kafka.log.Log.liftedTree1$1(Log.scala:327)
 at kafka.log.Log.append(Log.scala:326)
 at kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:423)
 at kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:409)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
 at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:409)
 at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:365)
 at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:350)
 at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
 at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
 at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
 at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:350)
 at kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:286)
 at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:270)
 at 

[jira] [Updated] (KAFKA-2235) LogCleaner offset map overflow

2015-08-18 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2235:

Fix Version/s: 0.8.2.2

 LogCleaner offset map overflow
 --

 Key: KAFKA-2235
 URL: https://issues.apache.org/jira/browse/KAFKA-2235
 Project: Kafka
  Issue Type: Bug
  Components: core, log
Affects Versions: 0.8.1, 0.8.2.0
Reporter: Ivan Simoneko
Assignee: Ivan Simoneko
 Fix For: 0.8.3, 0.8.2.2

 Attachments: KAFKA-2235_v1.patch, KAFKA-2235_v2.patch


 We've seen log cleaning generate an error for a topic with lots of small 
 messages. It seems that a cleanup map overflow is possible if a log segment 
 contains more unique keys than there are empty slots in the offsetMap. Checking 
 baseOffset and map utilization before processing a segment seems not to be 
 enough because it doesn't take the segment size into account (the number of 
 unique messages in the segment).
 I suggest estimating the upper bound of keys in a segment as the number of 
 messages in the segment and comparing it with the number of available slots in 
 the map (keeping in mind the desired load factor). This works in cases where 
 an empty map is capable of holding all the keys for a single segment. If even a 
 single segment is not able to fit into an empty map, the cleanup process will 
 still fail. Probably there should be a limit on the log segment entry count?
 Here is the stack trace for this error:
 2015-05-19 16:52:48,758 ERROR [kafka-log-cleaner-thread-0] 
 kafka.log.LogCleaner - [kafka-log-cleaner-thread-0], Error due to
 java.lang.IllegalArgumentException: requirement failed: Attempt to add a new 
 entry to a full offset map.
	at scala.Predef$.require(Predef.scala:233)
	at kafka.log.SkimpyOffsetMap.put(OffsetMap.scala:79)
	at kafka.log.Cleaner$$anonfun$kafka$log$Cleaner$$buildOffsetMapForSegment$1.apply(LogCleaner.scala:543)
	at kafka.log.Cleaner$$anonfun$kafka$log$Cleaner$$buildOffsetMapForSegment$1.apply(LogCleaner.scala:538)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:32)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at kafka.message.MessageSet.foreach(MessageSet.scala:67)
	at kafka.log.Cleaner.kafka$log$Cleaner$$buildOffsetMapForSegment(LogCleaner.scala:538)
	at kafka.log.Cleaner$$anonfun$buildOffsetMap$3.apply(LogCleaner.scala:515)
	at kafka.log.Cleaner$$anonfun$buildOffsetMap$3.apply(LogCleaner.scala:512)
	at scala.collection.immutable.Stream.foreach(Stream.scala:547)
	at kafka.log.Cleaner.buildOffsetMap(LogCleaner.scala:512)
	at kafka.log.Cleaner.clean(LogCleaner.scala:307)
	at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:221)
	at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:199)
	at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
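Ivan's suggested pre-check, bounding a segment's unique keys by its message count and comparing against the map's free slots under the load factor, can be sketched as below. The function name, parameter names, and the 0.75 default are hypothetical.

```python
def can_clean_segment(segment_message_count, map_slots, load_factor=0.75,
                      used_slots=0):
    """Conservatively decide whether a segment's keys can fit the offset map.

    Upper-bounds the number of unique keys by the segment's message count
    (in the worst case every message carries a distinct key) and compares
    that bound with the slots still available under the desired load factor.
    Returns False when cleaning the segment could overflow the map.
    """
    available = int(map_slots * load_factor) - used_slots
    return segment_message_count <= available
```

The cleaner would stop adding segments to a cleaning round once this check fails; as the description notes, a single segment with more keys than an empty map can hold would still need a separate limit on segment entry count.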





[jira] [Updated] (KAFKA-2189) Snappy compression of message batches less efficient in 0.8.2.1

2015-08-18 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2189:

Fix Version/s: 0.8.2.2

 Snappy compression of message batches less efficient in 0.8.2.1
 ---

 Key: KAFKA-2189
 URL: https://issues.apache.org/jira/browse/KAFKA-2189
 Project: Kafka
  Issue Type: Bug
  Components: build, compression, log
Affects Versions: 0.8.2.1
Reporter: Olson,Andrew
Assignee: Ismael Juma
Priority: Blocker
  Labels: trivial
 Fix For: 0.8.3, 0.8.2.2

 Attachments: KAFKA-2189.patch


 We are using snappy compression and noticed a fairly substantial increase 
 (about 2.25x) in log filesystem space consumption after upgrading a Kafka 
 cluster from 0.8.1.1 to 0.8.2.1. We found that this is caused by messages 
 being seemingly recompressed individually (or possibly with a much smaller 
 buffer or dictionary?) instead of as a batch as sent by producers. We 
 eventually tracked down the change in compression ratio/scope to this [1] 
 commit that updated the snappy version from 1.0.5 to 1.1.1.3. The Kafka 
 client version does not appear to be relevant as we can reproduce this with 
 both the 0.8.1.1 and 0.8.2.1 Producer.
 Here are the log files from our troubleshooting that contain the same set of 
 1000 messages, for batch sizes of 1, 10, 100, and 1000. f9d9b was the last 
 commit with 0.8.1.1-like behavior prior to f5ab8 introducing the issue.
 {noformat}
 -rw-rw-r-- 1 kafka kafka 404967 May 12 11:45 
 /var/kafka2/f9d9b-batch-1-0/.log
 -rw-rw-r-- 1 kafka kafka 119951 May 12 11:45 
 /var/kafka2/f9d9b-batch-10-0/.log
 -rw-rw-r-- 1 kafka kafka  89645 May 12 11:45 
 /var/kafka2/f9d9b-batch-100-0/.log
 -rw-rw-r-- 1 kafka kafka  88279 May 12 11:45 
 /var/kafka2/f9d9b-batch-1000-0/.log
 -rw-rw-r-- 1 kafka kafka 402837 May 12 11:41 
 /var/kafka2/f5ab8-batch-1-0/.log
 -rw-rw-r-- 1 kafka kafka 382437 May 12 11:41 
 /var/kafka2/f5ab8-batch-10-0/.log
 -rw-rw-r-- 1 kafka kafka 364791 May 12 11:41 
 /var/kafka2/f5ab8-batch-100-0/.log
 -rw-rw-r-- 1 kafka kafka 380693 May 12 11:41 
 /var/kafka2/f5ab8-batch-1000-0/.log
 {noformat}
 [1] 
 https://github.com/apache/kafka/commit/f5ab8e1780cf80f267906e3259ad4f9278c32d28
  
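The effect described above, per-message recompression losing the redundancy that spans messages, is codec-generic. The sketch below illustrates it with zlib from the standard library as a stand-in for Snappy; the message contents are made up for the demonstration.

```python
import zlib

# 1000 small, structurally similar messages, like the batches in the report.
messages = [("user-%05d clicked page /home" % i).encode() for i in range(1000)]

# Batch path: one compression context sees the redundancy across all messages.
batch = zlib.compress(b"".join(messages))

# Per-message path: each message is compressed in isolation, so the shared
# structure between messages cannot be exploited and per-call framing
# overhead is paid 1000 times.
per_message = b"".join(zlib.compress(m) for m in messages)

ratio = len(per_message) / len(batch)
```

Running this shows the per-message form several times larger than the batch form, the same direction as the ~2.25x filesystem growth observed after the snappy upgrade changed the effective compression scope.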





Re: [DISCUSS] Client-side Assignment for New Consumer

2015-08-18 Thread Jun Rao
Jason,

Thanks for the writeup. A few comments below.

1. When there are multiple common protocols in the JoinGroupRequest, which
one would the coordinator pick?
2. If the protocols don't agree, the group construction fails. What exactly
does it mean? Do we send an error in every JoinGroupResponse and remove all
members in the group in the coordinator?
3. Consumer embedded protocol: The proposal has two different formats of
subscription depending on whether wildcards are used or not. This seems a
bit complicated. Would it be better to always use the metadata hash? The
clients know the subscribed topics already. This way, the client code
behaves the same whether wildcards are used or not.

Jiangjie,

With respect to rebalance churn due to topics being created/deleted: with
the new consumer, the rebalance can probably settle within 200ms when there
is a topic change. So, as long as we are not changing topics more than 5
times per second, there shouldn't be constant churn, right?

Thanks,

Jun



On Tue, Aug 11, 2015 at 1:19 PM, Jason Gustafson ja...@confluent.io wrote:

 Hi Kafka Devs,

 One of the nagging issues in the current design of the new consumer has
 been the need to support a variety of assignment strategies. We've
 encountered this in particular in the design of copycat and the processing
 framework (KIP-28). From what I understand, Samza also has a number of use
 cases with custom assignment needs. The new consumer protocol supports new
 assignment strategies by hooking them into the broker. For many
 environments, this is a major pain and in some cases, a non-starter. It
 also challenges the validation that the coordinator can provide. For
 example, some assignment strategies call for partitions to be assigned
 multiple times, which means that the coordinator can only check that
 partitions have been assigned at least once.

 To solve these issues, we'd like to propose moving assignment to the
 client. I've written a wiki which outlines some protocol changes to achieve
 this:

 https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal
 .
 To summarize briefly, instead of the coordinator assigning the partitions
 itself, all subscriptions are forwarded to each member of the group which
 then decides independently which partitions it should consume. The protocol
 provides a mechanism for the coordinator to validate that all consumers use
 the same assignment strategy, but it does not ensure that the resulting
 assignment is correct. This provides a powerful capability for users to
 control the full data flow on the client side. They control how data is
 written to partitions through the Partitioner interface and they control
 how data is consumed through the assignment strategy, all without touching
 the server.
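The client-side flow described above, where every member sees all subscriptions and computes the same assignment locally, can be sketched as a deterministic round-robin. This is an illustration of the idea only, not the proposal's actual assignment code; all names are hypothetical.

```python
def assign(member_ids, subscriptions, partitions_per_topic):
    """Deterministic round-robin assignment each member computes locally.

    member_ids: ids of all group members (every member sees the same list).
    subscriptions: member_id -> set of subscribed topic names.
    partitions_per_topic: topic name -> partition count.
    Because the inputs are identical and every iteration order is sorted,
    each member independently derives the identical assignment, with no
    coordinator-side assignment step.
    """
    assignment = {m: [] for m in member_ids}
    for topic in sorted(partitions_per_topic):
        subscribers = sorted(m for m in member_ids if topic in subscriptions[m])
        if not subscribers:
            continue
        for p in range(partitions_per_topic[topic]):
            owner = subscribers[p % len(subscribers)]
            assignment[owner].append((topic, p))
    return assignment
```

Note the trade-off discussed in the thread: nothing here stops a buggy strategy from assigning a partition twice or zero times, which is exactly the validation the coordinator gives up.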

 Of course nothing comes for free. In particular, this change removes the
 ability of the coordinator to validate that commits are made by consumers
 who were assigned the respective partition. This might not be too bad since
 we retain the ability to validate the generation id, but it is a potential
 concern. We have considered alternative protocols which add a second
 round-trip to the protocol in order to give the coordinator the ability to
 confirm the assignment. As mentioned above, the coordinator is somewhat
 limited in what it can actually validate, but this would return its ability
 to validate commits. The tradeoff is that it increases the protocol's
 complexity which means more ways for the protocol to fail and consequently
 more edge cases in the code.

 It also misses an opportunity to generalize the group membership protocol
 for additional use cases. In fact, after you've gone to the trouble of
 moving assignment to the client, the main thing that is left in this
 protocol is basically a general group management capability. This is
 exactly what is needed for a few cases that are currently under discussion
 (e.g. copycat or single-writer producer). We've taken this further step in
 the proposal and attempted to envision what that general protocol might
 look like and how it could be used both by the consumer and for some of
 these other cases.

 Anyway, since time is running out on the new consumer, we have perhaps one
 last chance to consider a significant change in the protocol like this, so
 have a look at the wiki and share your thoughts. I've no doubt that some
 ideas seem clearer in my mind than they do on paper, so ask questions if
 there is any confusion.

 Thanks!
 Jason



[jira] [Updated] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-18 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-1690:
--
Attachment: KAFKA-1690_2015-08-18_11:24:46.patch

 Add SSL support to Kafka Broker, Producer and Consumer
 --

 Key: KAFKA-1690
 URL: https://issues.apache.org/jira/browse/KAFKA-1690
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
 KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
 KAFKA-1690_2015-05-11_16:09:36.patch, KAFKA-1690_2015-05-12_16:20:08.patch, 
 KAFKA-1690_2015-05-15_07:18:21.patch, KAFKA-1690_2015-05-20_14:54:35.patch, 
 KAFKA-1690_2015-05-21_10:37:08.patch, KAFKA-1690_2015-06-03_18:52:29.patch, 
 KAFKA-1690_2015-06-23_13:18:20.patch, KAFKA-1690_2015-07-20_06:10:42.patch, 
 KAFKA-1690_2015-07-20_11:59:57.patch, KAFKA-1690_2015-07-25_12:10:55.patch, 
 KAFKA-1690_2015-08-16_20:41:02.patch, KAFKA-1690_2015-08-17_08:12:50.patch, 
 KAFKA-1690_2015-08-17_09:28:52.patch, KAFKA-1690_2015-08-17_12:20:53.patch, 
 KAFKA-1690_2015-08-18_11:24:46.patch








[jira] [Commented] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-18 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14701735#comment-14701735
 ] 

Sriharsha Chintalapani commented on KAFKA-1690:
---

Updated reviewboard https://reviews.apache.org/r/33620/diff/
 against branch origin/trunk

 Add SSL support to Kafka Broker, Producer and Consumer
 --

 Key: KAFKA-1690
 URL: https://issues.apache.org/jira/browse/KAFKA-1690
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
 KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
 KAFKA-1690_2015-05-11_16:09:36.patch, KAFKA-1690_2015-05-12_16:20:08.patch, 
 KAFKA-1690_2015-05-15_07:18:21.patch, KAFKA-1690_2015-05-20_14:54:35.patch, 
 KAFKA-1690_2015-05-21_10:37:08.patch, KAFKA-1690_2015-06-03_18:52:29.patch, 
 KAFKA-1690_2015-06-23_13:18:20.patch, KAFKA-1690_2015-07-20_06:10:42.patch, 
 KAFKA-1690_2015-07-20_11:59:57.patch, KAFKA-1690_2015-07-25_12:10:55.patch, 
 KAFKA-1690_2015-08-16_20:41:02.patch, KAFKA-1690_2015-08-17_08:12:50.patch, 
 KAFKA-1690_2015-08-17_09:28:52.patch, KAFKA-1690_2015-08-17_12:20:53.patch, 
 KAFKA-1690_2015-08-18_11:24:46.patch








Re: Review Request 33620: Patch for KAFKA-1690

2015-08-18 Thread Sriharsha Chintalapani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33620/
---

(Updated Aug. 18, 2015, 6:24 p.m.)


Review request for kafka.


Bugs: KAFKA-1690
https://issues.apache.org/jira/browse/KAFKA-1690


Repository: kafka


Description (updated)
---

KAFKA-1690. new java producer needs ssl support as a client.


KAFKA-1690. new java producer needs ssl support as a client.


KAFKA-1690. new java producer needs ssl support as a client.


KAFKA-1690. new java producer needs ssl support as a client. SSLFactory tests.


KAFKA-1690. new java producer needs ssl support as a client. Added 
PrincipalBuilder.


KAFKA-1690. new java producer needs ssl support as a client. Addressing reviews.


KAFKA-1690. new java producer needs ssl support as a client. Addressing reviews.


KAFKA-1690. new java producer needs ssl support as a client. Addressing reviews.


KAFKA-1690. new java producer needs ssl support as a client. Fixed minor issues 
with the patch.


KAFKA-1690. new java producer needs ssl support as a client. Fixed minor issues 
with the patch.


KAFKA-1690. new java producer needs ssl support as a client.


KAFKA-1690. new java producer needs ssl support as a client.


Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1


KAFKA-1690. Broker side ssl changes.


KAFKA-1684. SSL for socketServer.


KAFKA-1690. Added SSLProducerSendTest and fixes to get right port for SSL.


Merge branch 'trunk' into KAFKA-1690-V1


KAFKA-1690. Post merge fixes.


KAFKA-1690. Added SSLProducerSendTest.


KAFKA-1690. Minor fixes based on patch review comments.


Merge commit


KAFKA-1690. Added SSL Consumer Test.


KAFKA-1690. SSL Support.


KAFKA-1690. Addressing reviews.


Merge branch 'trunk' into KAFKA-1690-V1


Merge branch 'trunk' into KAFKA-1690-V1


KAFKA-1690. Addressing reviews. Removed interestOps from SSLTransportLayer.


KAFKA-1690. Addressing reviews.


KAFKA-1690. added staged receives to selector.


KAFKA-1690. Addressing reviews.


Merge branch 'trunk' into KAFKA-1690-V1


KAFKA-1690. Addressing reviews.


KAFKA-1690. Add SSL support to broker, producer and consumer.


Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1


KAFKA-1690. Add SSL support to Kafka Broker, Producer & Client.


KAFKA-1690. Add SSL support for Kafka Brokers, Producers and Consumers.


Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1


KAFKA-1690. Add SSL support for Kafka Brokers, Producers and Consumers.


KAFKA-1690. Add SSL Support Kafka Broker, Producer and Consumer.


KAFKA-1690. Add SSL support for Kafka Broker, Producer and Consumer.


Diffs (updated)
-

  build.gradle 983587fd0b7604c3a26fcbb6a1d63e5e470d23fe 
  checkstyle/import-control.xml e3f4f84c6becfd9087627f018690e1e2fc2b3bba 
  clients/src/main/java/org/apache/kafka/clients/ClientUtils.java 
0d68bf1e1e90fe9d5d4397ddf817b9a9af8d9f7a 
  clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
2c421f42ed3fc5d61cf9c87a7eaa7bb23e26f63b 
  clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
0e51d7bd461d253f4396a5b6ca7cd391658807fa 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
d35b421a515074d964c7fccb73d260b847ea5f00 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
be46b6c213ad8c6c09ad22886a5f36175ab0e13a 
  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
03b8dd23df63a8d8a117f02eabcce4a2d48c44f7 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
aa264202f2724907924985a5ecbe74afc4c6c04b 
  clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java 
6c317480a181678747bfb6b77e315b08668653c5 
  clients/src/main/java/org/apache/kafka/common/config/SSLConfigs.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/Authenticator.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/ByteBufferSend.java 
df0e6d5105ca97b7e1cb4d334ffb7b443506bd0b 
  clients/src/main/java/org/apache/kafka/common/network/ChannelBuilder.java 
PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/network/DefaultAuthenticator.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/KafkaChannel.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/NetworkReceive.java 
3ca0098b8ec8cfdf81158465b2d40afc47eb6f80 
  
clients/src/main/java/org/apache/kafka/common/network/PlaintextChannelBuilder.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/SSLChannelBuilder.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/SSLTransportLayer.java 
PRE-CREATION 
  

Re: [DISCUSSION] Kafka 0.8.2.2 release?

2015-08-18 Thread Neha Narkhede
How about looking at the scope for the 0.8.3 release first before we cut
yet another point release off of 0.8.2.2? Each release includes some
overhead and if there is a larger release in the works, it might be worth
working on getting that out. My take is that the two things the community has
been waiting for are SSL support and the new consumer, and we have been
promising to get 0.8.3 with both those features for several months now.

Looking at the progress on both, it seems we are very close to getting both
those checked in and it looks like we should get there in another 5-6
weeks. Furthermore, both of these features are large and I anticipate us
receiving feedback and bugs that will require a couple of point releases on
top of 0.8.3 anyway. One possibility is to work on 0.8.3 together now and
get the community to use the newly released features, gather feedback and
do point releases incorporating that feedback and iterate on it.

We could absolutely do both 0.8.2.2 and 0.8.3. What I'd ask for is for us
to look at the 0.8.3 timeline too and make a call whether 0.8.2.2 still
makes sense.

Thanks,
Neha

On Tue, Aug 18, 2015 at 10:24 AM, Gwen Shapira g...@confluent.io wrote:

 Thanks Jun.

 I updated the list with your suggestions.
 If anyone feels we are missing a critical patch for 0.8.2.2, please speak
 up.

 Gwen

 On Mon, Aug 17, 2015 at 5:40 PM, Jun Rao j...@confluent.io wrote:

  Hi, Grant,
 
  I took a look at that list. None of those is really critical as you said.
  So, I'd suggest that we not include those to minimize the scope of the
  release.
 
  Thanks,
 
  Jun
 
  On Mon, Aug 17, 2015 at 5:16 PM, Grant Henke ghe...@cloudera.com
 wrote:
 
   Thanks Gwen.
  
   I updated a few small things on the wiki page.
  
   Below is a list of jiras I think could also be marked as included. All
 of
   these, though not super critical, seem like fairly small and low risk
   changes that help avoid potentially confusing issues or errors for
 users.
  
   KAFKA-2012
   KAFKA-972
    KAFKA-2337 & KAFKA-2393
   KAFKA-1867
   KAFKA-2407
   KAFKA-2234
   KAFKA-1866
    KAFKA-2345 & KAFKA-2355
  
   thoughts?
  
   Thank you,
   Grant
  
   On Mon, Aug 17, 2015 at 4:56 PM, Gwen Shapira g...@confluent.io
 wrote:
  
Thanks for creating a list, Grant!
   
I placed it on the wiki with a quick evaluation of the content and
   whether
it should be in 0.8.2.2:
   
   
  
 
 https://cwiki.apache.org/confluence/display/KAFKA/Proposed+patches+for+0.8.2.2
   
I'm attempting to only cherrypick fixes that are both important for
  large
number of users (or very critical to some users) and very safe
 (mostly
judged by the size of the change, but not only)
   
If your favorite bugfix is missing from the list, or is there but
  marked
No, please let us know (in this thread) what we are missing and why
  it
   is
both important and safe.
Also, if I accidentally included something you consider unsafe, speak
  up!
   
Gwen
   
On Mon, Aug 17, 2015 at 8:17 AM, Grant Henke ghe...@cloudera.com
   wrote:
   
 +dev

 Adding dev list back in. Somehow it got dropped.


 On Mon, Aug 17, 2015 at 10:16 AM, Grant Henke ghe...@cloudera.com
 
wrote:

  Below is a list of candidate bug fix jiras marked fixed for
 0.8.3.
  I
 don't
  suspect all of these will (or should) make it into the release
 but
   this
  should be a relatively complete list to work from:
 
 - KAFKA-2114 
 https://issues.apache.org/jira/browse/KAFKA-2114
  :
 Unable
 to change min.insync.replicas default
 - KAFKA-1702 
 https://issues.apache.org/jira/browse/KAFKA-1702
  :
 Messages silently Lost by producer
 - KAFKA-2012 
 https://issues.apache.org/jira/browse/KAFKA-2012
  :
 Broker should automatically handle corrupt index files
 - KAFKA-2406 
 https://issues.apache.org/jira/browse/KAFKA-2406
  :
ISR
 propagation should be throttled to avoid overwhelming
  controller.
 - KAFKA-2336 
 https://issues.apache.org/jira/browse/KAFKA-2336
  :
 Changing offsets.topic.num.partitions after the offset topic
 is
 created
 breaks consumer group partition assignment
 - KAFKA-2337 
 https://issues.apache.org/jira/browse/KAFKA-2337
  :
 Verify
 that metric names will not collide when creating new topics
 - KAFKA-2393 
 https://issues.apache.org/jira/browse/KAFKA-2393
  :
 Correctly Handle InvalidTopicException in
KafkaApis.getTopicMetadata()
 - KAFKA-2189 
 https://issues.apache.org/jira/browse/KAFKA-2189
  :
 Snappy
 compression of message batches less efficient in 0.8.2.1
 - KAFKA-2308 
 https://issues.apache.org/jira/browse/KAFKA-2308
  :
New
 producer + Snappy face un-compression errors after broker
  restart
 - KAFKA-2042 
 https://issues.apache.org/jira/browse/KAFKA-2042
  :
New
 producer metadata 

[jira] [Created] (KAFKA-2442) QuotasTest should not fail when cpu is busy

2015-08-18 Thread Dong Lin (JIRA)
Dong Lin created KAFKA-2442:
---

 Summary: QuotasTest should not fail when cpu is busy
 Key: KAFKA-2442
 URL: https://issues.apache.org/jira/browse/KAFKA-2442
 Project: Kafka
  Issue Type: Bug
Reporter: Dong Lin
Assignee: Dong Lin


We observed that testThrottledProducerConsumer in QuotasTest may fail or 
succeed randomly. It appears that the test may fail when the system is slow. We 
can add a timed wait in the integration test to avoid this random failure.

See an example failure at 
https://builds.apache.org/job/kafka-trunk-git-pr/166/console for patch 
https://github.com/apache/kafka/pull/142.
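
A common way to harden timing-sensitive integration tests like this is to poll for the expected condition with a deadline rather than asserting after a fixed sleep. A minimal sketch of such a helper (the name `waitUntilTrue` is illustrative of the idea, not necessarily the utility used in the patch):

```java
import java.util.function.BooleanSupplier;

public class WaitUtil {
    // Polls `condition` until it returns true or `timeoutMs` elapses.
    // Returns true if the condition was met within the deadline, so a slow
    // CI machine gets extra time instead of causing a spurious failure.
    public static boolean waitUntilTrue(BooleanSupplier condition, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
            if (condition.getAsBoolean())
                return true;
            if (System.currentTimeMillis() >= deadline)
                return false;
            try {
                Thread.sleep(10); // back off briefly between checks
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
    }

    public static void main(String[] args) {
        // Condition already true: returns immediately.
        System.out.println(waitUntilTrue(() -> true, 5000));
        // Condition never true: gives up once the deadline passes.
        System.out.println(waitUntilTrue(() -> false, 50));
    }
}
```

The test then asserts on the condition once it holds, with the timeout as the only time-dependent parameter.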





[jira] [Updated] (KAFKA-1883) NullPointerException in RequestSendThread

2015-08-18 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1883:

Fix Version/s: 0.8.2.2

 NullPointerException in RequestSendThread
 -

 Key: KAFKA-1883
 URL: https://issues.apache.org/jira/browse/KAFKA-1883
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: jaikiran pai
Assignee: jaikiran pai
 Fix For: 0.8.3, 0.8.2.2

 Attachments: KAFKA-1883.patch


 I often see the following exception while running some tests
 (ProducerFailureHandlingTest.testNoResponse is one such instance):
 {code}
 [2015-01-19 22:30:24,257] ERROR [Controller-0-to-broker-1-send-thread],
 Controller 0 fails to send a request to broker
 id:1,host:localhost,port:56729 (kafka.controller.RequestSendThread:103)
 java.lang.NullPointerException
 at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.
 scala:150)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
 {code}
 Looking at the code in question, I can see that the NPE can be triggered
 when the receive is null, which can happen if isRunning is false
 (i.e. a shutdown has been requested).





[jira] [Updated] (KAFKA-1836) metadata.fetch.timeout.ms set to zero blocks forever

2015-08-18 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1836:

Fix Version/s: 0.8.2.2

 metadata.fetch.timeout.ms set to zero blocks forever
 

 Key: KAFKA-1836
 URL: https://issues.apache.org/jira/browse/KAFKA-1836
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.8.2.0
Reporter: Paul Pearcy
Priority: Minor
  Labels: newbie
 Fix For: 0.8.3, 0.8.2.2

 Attachments: KAFKA-1836-new-patch.patch, KAFKA-1836.patch


 You can easily work around this by setting the timeout value to 1ms, but 0ms 
 should mean 0ms or at least have the behavior documented. 
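
A plausible cause for this class of bug is that Java's `Object.wait(0)` blocks indefinitely, so a wait loop that passes its remaining timeout straight into a blocking wait silently turns 0 into "forever". A deadline-based sketch in which 0 means "fail immediately" (the name `awaitReady` is illustrative, not the producer's actual internals):

```java
import java.util.function.BooleanSupplier;

public class MetadataWait {
    // Simulates waiting for metadata: `ready` reports whether the data is
    // available; returns true if it became ready within `timeoutMs`.
    // With timeoutMs == 0 this returns at once rather than blocking forever,
    // because we check the deadline before ever entering a blocking wait.
    public static boolean awaitReady(BooleanSupplier ready, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!ready.getAsBoolean()) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0)
                return false; // timed out; never passes 0 to a blocking call
            try {
                Thread.sleep(Math.min(remaining, 10));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(awaitReady(() -> false, 0)); // returns immediately
        System.out.println(awaitReady(() -> true, 0));  // already ready
    }
}
```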





Re: [DISCUSSION] Kafka 0.8.2.2 release?

2015-08-18 Thread Andrew Otto
I agree: keep it simple :)  The latest stable version of Kafka right now has a 
critical bug in it.  Fixing that would be good enough.  0.8.2.2 should probably 
just be a maintenance/bugfix release.


 On Aug 18, 2015, at 14:29, Edward Ribeiro edward.ribe...@gmail.com wrote:
 
 I sort of follow Daniel Nelson on this issue: cut a 0.8.2.2 but not include
 much besides the Snappy fixes. I mean, include a couple of additional
 critical bug fixes, if really urgent, and that's it.
 
 On Tue, Aug 18, 2015 at 3:25 PM, Daniel Nelson daniel.nel...@vungle.com
 wrote:
 
 I am strongly in favor of cutting a 0.8.2.2 release, but I don’t think
 that it needs to include anything other than the fix for Snappy that kicked
 off this discussion in the first place. If there are additional critical
 issues that can be included without delaying the process, I see no downside.
 
 On Aug 18, 2015, at 10:54 AM, Neha Narkhede n...@confluent.io wrote:
 
 How about looking at the scope for the 0.8.3 release first before we cut
 yet another point release off of 0.8.2.2? Each release includes some
 overhead and if there is a larger release in the works, it might be worth
 working on getting that out. My take is that the two things the community has
 been waiting for are SSL support and the new consumer, and we have been
 promising to get 0.8.3 with both those features for several months now.
 
 Looking at the progress on both, it seems we are very close to getting
 both
 those checked in and it looks like we should get there in another 5-6
 weeks. Furthermore, both of these features are large and I anticipate us
 receiving feedback and bugs that will require a couple of point releases
 on
 top of 0.8.3 anyway. One possibility is to work on 0.8.3 together now and
 get the community to use the newly released features, gather feedback and
 do point releases incorporating that feedback and iterate on it.
 
 We could absolutely do both 0.8.2.2 and 0.8.3. What I'd ask for is for us
 to look at the 0.8.3 timeline too and make a call whether 0.8.2.2 still
 makes sense.
 
 Thanks,
 Neha
 
 On Tue, Aug 18, 2015 at 10:24 AM, Gwen Shapira g...@confluent.io
 wrote:
 
 Thanks Jun.
 
 I updated the list with your suggestions.
 If anyone feels we are missing a critical patch for 0.8.2.2, please
 speak
 up.
 
 Gwen
 
 On Mon, Aug 17, 2015 at 5:40 PM, Jun Rao j...@confluent.io wrote:
 
 Hi, Grant,
 
 I took a look at that list. None of those is really critical as you
 said.
 So, I'd suggest that we not include those to minimize the scope of the
 release.
 
 Thanks,
 
 Jun
 
 On Mon, Aug 17, 2015 at 5:16 PM, Grant Henke ghe...@cloudera.com
 wrote:
 
 Thanks Gwen.
 
 I updated a few small things on the wiki page.
 
 Below is a list of jiras I think could also be marked as included. All
 of
 these, though not super critical, seem like fairly small and low risk
 changes that help avoid potentially confusing issues or errors for
 users.
 
 KAFKA-2012
 KAFKA-972
  KAFKA-2337 & KAFKA-2393
 KAFKA-1867
 KAFKA-2407
 KAFKA-2234
 KAFKA-1866
  KAFKA-2345 & KAFKA-2355
 
 thoughts?
 
 Thank you,
 Grant
 
 On Mon, Aug 17, 2015 at 4:56 PM, Gwen Shapira g...@confluent.io
 wrote:
 
 Thanks for creating a list, Grant!
 
 I placed it on the wiki with a quick evaluation of the content and
 whether
 it should be in 0.8.2.2:
 
 
 
 
 
 https://cwiki.apache.org/confluence/display/KAFKA/Proposed+patches+for+0.8.2.2
 
 I'm attempting to only cherrypick fixes that are both important for
 large
 number of users (or very critical to some users) and very safe
 (mostly
 judged by the size of the change, but not only)
 
 If your favorite bugfix is missing from the list, or is there but
 marked
 No, please let us know (in this thread) what we are missing and why
 it
 is
 both important and safe.
 Also, if I accidentally included something you consider unsafe, speak
 up!
 
 Gwen
 
 On Mon, Aug 17, 2015 at 8:17 AM, Grant Henke ghe...@cloudera.com
 wrote:
 
 +dev
 
 Adding dev list back in. Somehow it got dropped.
 
 
 On Mon, Aug 17, 2015 at 10:16 AM, Grant Henke ghe...@cloudera.com
 
 wrote:
 
 Below is a list of candidate bug fix jiras marked fixed for
 0.8.3.
 I
 don't
 suspect all of these will (or should) make it into the release
 but
 this
 should be a relatively complete list to work from:
 
  - KAFKA-2114 
 https://issues.apache.org/jira/browse/KAFKA-2114
 :
 Unable
  to change min.insync.replicas default
  - KAFKA-1702 
 https://issues.apache.org/jira/browse/KAFKA-1702
 :
  Messages silently Lost by producer
  - KAFKA-2012 
 https://issues.apache.org/jira/browse/KAFKA-2012
 :
  Broker should automatically handle corrupt index files
  - KAFKA-2406 
 https://issues.apache.org/jira/browse/KAFKA-2406
 :
 ISR
  propagation should be throttled to avoid overwhelming
 controller.
  - KAFKA-2336 
 https://issues.apache.org/jira/browse/KAFKA-2336
 :
  Changing offsets.topic.num.partitions after the offset topic
 is
 created
  breaks consumer group partition assignment
  - KAFKA-2337 
 

Re: Kafka KIP meeting Aug. 18

2015-08-18 Thread Jun Rao
The following are the notes from today's KIP discussion.

* client-side assignment strategy: We discussed concerns about rebalancing
time due to metadata inconsistency, especially when lots of topics are
subscribed. Will discuss a bit more on the mailing list.

* CopyCat data api: The discussions are in KAFKA-2367 for people who are
interested.

* 0.8.2.2: We want to make this a low risk bug fix release since 0.8.3 is
coming. So, will only include a small number of critical and small fixes.

* 0.8.3: The main features will be security and the new consumer. We will
be cutting a release branch when the major pieces for these new features
have been committed.

The link to the recording will be added to
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
in a day or two.

Thanks,

Jun





On Mon, Aug 17, 2015 at 2:54 PM, Jun Rao j...@confluent.io wrote:

 Hi, Everyone,

 We will have a Kafka KIP meeting tomorrow at 11am PST. If you plan to
 attend but haven't received an invite, please let me know. The following is
 the agenda.

 Agenda
 1. Discuss the proposal on client-side assignment strategy (
 https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal
 )
 2. Discuss comments on CopyCat data api (KAFKA-2367)
 3. Kafka 0.8.2.2 release
 4. Review Backlog

 https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20%3D%20%22Patch%20Available%22%20ORDER%20BY%20updated%20DESC


 Thanks,

 Jun



[jira] [Updated] (KAFKA-1057) Trim whitespaces from user specified configs

2015-08-18 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1057:

Fix Version/s: 0.8.2.2

 Trim whitespaces from user specified configs
 

 Key: KAFKA-1057
 URL: https://issues.apache.org/jira/browse/KAFKA-1057
 Project: Kafka
  Issue Type: Bug
  Components: config
Reporter: Neha Narkhede
Assignee: Manikumar Reddy
  Labels: newbie
 Fix For: 0.8.3, 0.8.2.2

 Attachments: KAFKA-1057.patch, KAFKA-1057_2014-10-04_20:15:32.patch


 Whitespace in configs is a common problem that leads to config errors. It 
 would be nice if Kafka could trim whitespace from configs automatically.





[jira] [Updated] (KAFKA-1758) corrupt recovery file prevents startup

2015-08-18 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1758:

Fix Version/s: 0.8.2.2

 corrupt recovery file prevents startup
 --

 Key: KAFKA-1758
 URL: https://issues.apache.org/jira/browse/KAFKA-1758
 Project: Kafka
  Issue Type: Bug
  Components: log
Reporter: Jason Rosenberg
Assignee: Manikumar Reddy
  Labels: newbie
 Fix For: 0.8.3, 0.8.2.2

 Attachments: KAFKA-1758.patch, KAFKA-1758_2015-05-09_12:29:20.patch


 Hi,
 We recently had a kafka node go down suddenly. When it came back up, it 
 apparently had a corrupt recovery file, and refused to startup:
 {code}
 2014-11-06 08:17:19,299  WARN [main] server.KafkaServer - Error starting up 
 KafkaServer
 java.lang.NumberFormatException: For input string: 
 ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
 ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Integer.parseInt(Integer.java:481)
 at java.lang.Integer.parseInt(Integer.java:527)
 at 
 scala.collection.immutable.StringLike$class.toInt(StringLike.scala:229)
 at scala.collection.immutable.StringOps.toInt(StringOps.scala:31)
 at kafka.server.OffsetCheckpoint.read(OffsetCheckpoint.scala:76)
 at 
 kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:106)
 at 
 kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:105)
 at 
 scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
 at 
 scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
 at kafka.log.LogManager.loadLogs(LogManager.scala:105)
 at kafka.log.LogManager.init(LogManager.scala:57)
 at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:275)
 at kafka.server.KafkaServer.startup(KafkaServer.scala:72)
 {code}
 And the app is under a monitor (so it was repeatedly restarting and failing 
 with this error for several minutes before we got to it)…
 We moved the ‘recovery-point-offset-checkpoint’ file out of the way, and it 
 then restarted cleanly (but of course re-synced all it’s data from replicas, 
 so we had no data loss).
 Anyway, I’m wondering if that’s the expected behavior? Or should it not 
 declare it corrupt and then proceed automatically to an unclean restart?
 Should this NumberFormatException be handled a bit more gracefully?
 We saved the corrupt file if it’s worth inspecting (although I doubt it will 
 be useful!)….
 The corrupt files appeared to be all zeroes.
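
For anyone curious what a more forgiving read path could look like: a hedged sketch that treats an unparseable checkpoint header as "no checkpoint", letting startup fall back to recovery from replicas instead of dying on NumberFormatException. This is a simplification — the real checkpoint file also carries an entry count and one offset line per partition:

```java
import java.util.List;
import java.util.Optional;

public class CheckpointReader {
    // Returns the parsed version number from the checkpoint's first line,
    // or empty if the content is corrupt (e.g. all zero bytes, as in the
    // report above), signalling the caller to rebuild state rather than
    // crash with an unhandled NumberFormatException.
    public static Optional<Integer> readVersion(List<String> lines) {
        if (lines.isEmpty())
            return Optional.empty();
        try {
            return Optional.of(Integer.parseInt(lines.get(0).trim()));
        } catch (NumberFormatException e) {
            return Optional.empty(); // corrupt header -> treat as missing checkpoint
        }
    }

    public static void main(String[] args) {
        System.out.println(readVersion(List.of("0", "2")));   // a healthy file
        System.out.println(readVersion(List.of("\0\0\0\0"))); // all-zeroes corruption
    }
}
```

With this shape, an unclean restart proceeds (with a logged warning) instead of looping under the process monitor.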





[jira] [Updated] (KAFKA-2114) Unable to change min.insync.replicas default

2015-08-18 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2114:

Fix Version/s: 0.8.2.2

 Unable to change min.insync.replicas default
 

 Key: KAFKA-2114
 URL: https://issues.apache.org/jira/browse/KAFKA-2114
 Project: Kafka
  Issue Type: Bug
Reporter: Bryan Baugher
Assignee: Gwen Shapira
 Fix For: 0.8.3, 0.8.2.2

 Attachments: KAFKA-2114.patch


 Following the comment here[1] I was unable to change the min.insync.replicas 
 default value. I tested this by setting up a 3-node cluster, writing to a topic 
 with a replication factor of 3 using request.required.acks=-1, and setting 
 min.insync.replicas=2 in the broker's server.properties. I then shut down 2 
 brokers but was still able to write successfully. Only after running the 
 alter-topic command setting min.insync.replicas=2 on the topic did I see 
 write failures.
 [1] - 
 http://mail-archives.apache.org/mod_mbox/kafka-users/201504.mbox/%3CCANZ-JHF71yqKE6%2BKKhWe2EGUJv6R3bTpoJnYck3u1-M35sobgg%40mail.gmail.com%3E





[jira] [Updated] (KAFKA-1668) TopicCommand doesn't warn if --topic argument doesn't match any topics

2015-08-18 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1668:

Fix Version/s: 0.8.2.2

 TopicCommand doesn't warn if --topic argument doesn't match any topics
 --

 Key: KAFKA-1668
 URL: https://issues.apache.org/jira/browse/KAFKA-1668
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Ryan Berdeen
Assignee: Manikumar Reddy
Priority: Minor
  Labels: newbie
 Fix For: 0.8.3, 0.8.2.2

 Attachments: KAFKA-1668.patch


 Running {{kafka-topics.sh --alter}} with an invalid {{--topic}} argument 
 produces no output and exits with 0, indicating success.
 {code}
 $ bin/kafka-topics.sh --topic does-not-exist --alter --config invalid=xxx 
 --zookeeper zkhost:2181
 $ echo $?
 0
 {code}
 An invalid topic name or a regular expression that matches 0 topics should at 
 least print a warning.
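
The shape of a fix could be to resolve the --topic argument against the known topics up front and treat an empty match as an error. A hedged sketch — the class and message text are illustrative, not TopicCommand's actual code:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.regex.Pattern;

public class TopicMatcher {
    // Returns the topics matching `topicSpec`, interpreted as either a
    // literal name or a regular expression. Callers should treat an empty
    // result as a user error rather than silently doing nothing.
    public static List<String> matchingTopics(String topicSpec, Collection<String> allTopics) {
        Pattern p = Pattern.compile(topicSpec);
        List<String> matched = new ArrayList<>();
        for (String t : allTopics)
            if (t.equals(topicSpec) || p.matcher(t).matches())
                matched.add(t);
        return matched;
    }

    public static void main(String[] args) {
        List<String> matched = matchingTopics("does-not-exist", List.of("orders", "clicks"));
        if (matched.isEmpty()) {
            // A real tool would print this and System.exit(1) so that
            // scripts checking $? can detect the mistake.
            System.out.println("Error: topic 'does-not-exist' matched no existing topics");
        }
    }
}
```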





[jira] [Updated] (KAFKA-2096) Enable keepalive socket option for broker to prevent socket leak

2015-08-18 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2096:

Fix Version/s: 0.8.2.2

 Enable keepalive socket option for broker to prevent socket leak
 

 Key: KAFKA-2096
 URL: https://issues.apache.org/jira/browse/KAFKA-2096
 Project: Kafka
  Issue Type: Improvement
  Components: network
Affects Versions: 0.8.2.1
Reporter: Allen Wang
Assignee: Allen Wang
Priority: Critical
 Fix For: 0.8.3, 0.8.2.2

 Attachments: patch.diff


 We run a Kafka 0.8.2.1 cluster in AWS with a large number of producers ( 1). 
 Also, the number of producer instances scales up and down significantly 
 on a daily basis.
 The issue we found is that after 10 days, the open file descriptor count will 
 approach the limit of 32K. An investigation of these open file descriptors 
 shows that a significant portion of these are from client instances that are 
 terminated during scaling down. Somehow they still show as ESTABLISHED in 
 netstat. We suspect that the AWS firewall between the client and broker 
 causes this issue.
 We attempted to use the keepalive socket option to reduce this socket leak on 
 the broker and it appears to be working. Specifically, we added this line to 
 kafka.network.Acceptor.accept():
   socketChannel.socket().setKeepAlive(true)
 Our experiment with this change confirmed that netstat entries for terminated 
 client instances were probed as configured in the operating system. After the 
 configured number of probes, the OS determined that the peer was no longer 
 alive and removed the entry, possibly after Kafka hit an error reading from 
 the channel and closed it. Our experiment also shows that after a few days 
 the instance held a stable, low open file descriptor count, compared with 
 other instances where the low point keeps increasing day to day.
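
The one-line change above in context: a sketch of an accept path that enables SO_KEEPALIVE on each accepted connection, so the OS probes idle peers and eventually closes connections to dead clients instead of leaving them ESTABLISHED (with a leaked fd) forever. This mirrors the idea only; the actual patch places the call inside kafka.network.Acceptor.accept():

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class KeepAliveAcceptor {
    // Accepts one connection and turns on SO_KEEPALIVE for it, the essence
    // of the patch described above.
    public static SocketChannel acceptWithKeepAlive(ServerSocketChannel server) throws IOException {
        SocketChannel channel = server.accept();
        channel.socket().setKeepAlive(true);
        return channel;
    }

    public static void main(String[] args) throws IOException {
        // Demonstrate over loopback that the accepted socket has keepalive set.
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
            try (Socket client = new Socket("127.0.0.1", port);
                 SocketChannel accepted = acceptWithKeepAlive(server)) {
                System.out.println(accepted.socket().getKeepAlive());
            }
        }
    }
}
```

Note that the probe interval and count (e.g. tcp_keepalive_time on Linux) are OS settings, which matches the "as configured in the operating system" observation above.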





Re: [DISCUSSION] Kafka 0.8.2.2 release?

2015-08-18 Thread Gwen Shapira
Thanks Jun.

I updated the list with your suggestions.
If anyone feels we are missing a critical patch for 0.8.2.2, please speak
up.

Gwen

On Mon, Aug 17, 2015 at 5:40 PM, Jun Rao j...@confluent.io wrote:

 Hi, Grant,

 I took a look at that list. None of those is really critical as you said.
 So, I'd suggest that we not include those to minimize the scope of the
 release.

 Thanks,

 Jun

 On Mon, Aug 17, 2015 at 5:16 PM, Grant Henke ghe...@cloudera.com wrote:

  Thanks Gwen.
 
  I updated a few small things on the wiki page.
 
  Below is a list of jiras I think could also be marked as included. All of
  these, though not super critical, seem like fairly small and low risk
  changes that help avoid potentially confusing issues or errors for users.
 
  KAFKA-2012
  KAFKA-972
  KAFKA-2337 & KAFKA-2393
  KAFKA-1867
  KAFKA-2407
  KAFKA-2234
  KAFKA-1866
  KAFKA-2345 & KAFKA-2355
 
  thoughts?
 
  Thank you,
  Grant
 
  On Mon, Aug 17, 2015 at 4:56 PM, Gwen Shapira g...@confluent.io wrote:
 
   Thanks for creating a list, Grant!
  
   I placed it on the wiki with a quick evaluation of the content and
  whether
   it should be in 0.8.2.2:
  
  
 
 https://cwiki.apache.org/confluence/display/KAFKA/Proposed+patches+for+0.8.2.2
  
   I'm attempting to only cherrypick fixes that are both important for
 large
   number of users (or very critical to some users) and very safe (mostly
   judged by the size of the change, but not only)
  
   If your favorite bugfix is missing from the list, or is there but
 marked
   No, please let us know (in this thread) what we are missing and why
 it
  is
   both important and safe.
   Also, if I accidentally included something you consider unsafe, speak
 up!
  
   Gwen
  
   On Mon, Aug 17, 2015 at 8:17 AM, Grant Henke ghe...@cloudera.com
  wrote:
  
+dev
   
Adding dev list back in. Somehow it got dropped.
   
   
On Mon, Aug 17, 2015 at 10:16 AM, Grant Henke ghe...@cloudera.com
   wrote:

 Below is a list of candidate bug fix jiras marked fixed for 0.8.3. I don't
 suspect all of these will (or should) make it into the release but this
 should be a relatively complete list to work from:

 - KAFKA-2114 https://issues.apache.org/jira/browse/KAFKA-2114:
   Unable to change min.insync.replicas default
 - KAFKA-1702 https://issues.apache.org/jira/browse/KAFKA-1702:
   Messages silently Lost by producer
 - KAFKA-2012 https://issues.apache.org/jira/browse/KAFKA-2012:
   Broker should automatically handle corrupt index files
 - KAFKA-2406 https://issues.apache.org/jira/browse/KAFKA-2406:
   ISR propagation should be throttled to avoid overwhelming controller.
 - KAFKA-2336 https://issues.apache.org/jira/browse/KAFKA-2336:
   Changing offsets.topic.num.partitions after the offset topic is created
   breaks consumer group partition assignment
 - KAFKA-2337 https://issues.apache.org/jira/browse/KAFKA-2337:
   Verify that metric names will not collide when creating new topics
 - KAFKA-2393 https://issues.apache.org/jira/browse/KAFKA-2393:
   Correctly Handle InvalidTopicException in KafkaApis.getTopicMetadata()
 - KAFKA-2189 https://issues.apache.org/jira/browse/KAFKA-2189:
   Snappy compression of message batches less efficient in 0.8.2.1
 - KAFKA-2308 https://issues.apache.org/jira/browse/KAFKA-2308:
   New producer + Snappy face un-compression errors after broker restart
 - KAFKA-2042 https://issues.apache.org/jira/browse/KAFKA-2042:
   New producer metadata update always get all topics.
 - KAFKA-1367 https://issues.apache.org/jira/browse/KAFKA-1367:
   Broker topic metadata not kept in sync with ZooKeeper
 - KAFKA-972 https://issues.apache.org/jira/browse/KAFKA-972:
   MetadataRequest returns stale list of brokers
 - KAFKA-1867 https://issues.apache.org/jira/browse/KAFKA-1867:
   liveBroker list not updated on a cluster with no topics
 - KAFKA-1650 https://issues.apache.org/jira/browse/KAFKA-1650:
   Mirror Maker could lose data on unclean shutdown.
 - KAFKA-2009 https://issues.apache.org/jira/browse/KAFKA-2009:
   Fix UncheckedOffset.removeOffset synchronization and trace logging issue
   in mirror maker
 - KAFKA-2407 https://issues.apache.org/jira/browse/KAFKA-2407:
   Only create a log directory when it will be used
 - KAFKA-2327 https://issues.apache.org/jira/browse/KAFKA-2327:
   broker doesn't start if config defines advertised.host but not
   advertised.port
 - KAFKA-1788: producer record can stay in RecordAccumulator forever
   if leader is no available
 - KAFKA-2234 https://issues.apache.org/jira/browse/KAFKA-2234:
   Partition reassignment of a nonexistent topic prevents future
   reassignments

Re: [DISCUSS] Client-side Assignment for New Consumer

2015-08-18 Thread Jiangjie Qin
Jun,

Yes, I agree. If the metadata can be synced quickly, there should not be an
issue. It just occurred to me that there is a proposal to allow consuming
from followers in the ISR, which could potentially cause more frequent
metadata changes for consumers. Would that be an issue?

Thanks,

Jiangjie (Becket) Qin

On Tue, Aug 18, 2015 at 10:22 AM, Jason Gustafson ja...@confluent.io
wrote:

 Hi Jun,

 Answers below:

 1. When there are multiple common protocols in the JoinGroupRequest, which
 one would the coordinator pick?

 I was intending to use the list to indicate preference. If all group
 members support protocols [A, B] in that order, then we will choose
 A. If some support [B, A], then we would either choose based on
 respective counts or just randomly. The main use case of supporting the
 list is for rolling upgrades when a change is made to the assignment
 strategy. In that case, the new assignment strategy would be listed first
 in the upgraded client. I think it's debatable whether this feature would
 get much use in practice, so we might consider dropping it.
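The preference-based selection Jason describes could be sketched roughly as follows. This is a hypothetical helper, not the actual coordinator code: it intersects all members' supported protocols and then prefers the candidate that appears earliest, on average, in the members' ordered lists.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ProtocolSelector {
    // Pick a protocol supported by every member, preferring protocols that
    // appear earlier in the members' ordered preference lists. Returns null
    // if no common protocol exists (group construction fails).
    public static String selectProtocol(List<List<String>> memberPreferences) {
        if (memberPreferences.isEmpty())
            return null;
        // Candidates: protocols supported by all members.
        Set<String> common = new HashSet<>(memberPreferences.get(0));
        for (List<String> prefs : memberPreferences)
            common.retainAll(new HashSet<>(prefs));
        if (common.isEmpty())
            return null;
        // Score each candidate by the sum of its positions across all lists;
        // lower total position means it is preferred by more members.
        String best = null;
        int bestScore = Integer.MAX_VALUE;
        for (String candidate : common) {
            int score = 0;
            for (List<String> prefs : memberPreferences)
                score += prefs.indexOf(candidate);
            if (score < bestScore) {
                bestScore = score;
                best = candidate;
            }
        }
        return best;
    }
}
```

With members preferring [A, B], [B, A], [A, B], this sketch selects A, matching the "respective counts" tie-breaking idea above.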

 2. If the protocols don't agree, the group construction fails. What exactly
 does it mean? Do we send an error in every JoinGroupResponse and remove all
 members in the group in the coordinator?

 Yes, that is right. It would be handled similarly to inconsistent
 assignment strategies in the current protocol. The coordinator returns an
 error in each join group response, and the client propagates the error to
 the user.

 3. Consumer embedded protocol: The proposal has two different formats of
 subscription depending on whether wildcards are used or not. This seems a
 bit complicated. Would it be better to always use the metadata hash? The
 clients know the subscribed topics already. This way, the client code
 behaves the same whether wildcards are used or not.

 Yeah, I think this is possible (Neha also suggested it). I haven't updated
 the wiki yet, but the patch I started working on uses only the metadata
 hash. In the case that an explicit topic list is provided, the hash just
 covers the metadata for those topics.
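The metadata-hash idea can be illustrated with a small sketch. This is an assumption about the shape of the scheme, not the actual patch: every member digests the same sorted (topic, partition count) pairs, so identical metadata yields an identical hash regardless of map ordering, and any divergence between members becomes detectable.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Map;
import java.util.TreeMap;

public class MetadataHash {
    // Compute a deterministic digest over (topic -> partition count),
    // iterating topics in sorted order so every member produces the
    // same bytes for the same metadata.
    public static String hash(Map<String, Integer> topicPartitions) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            for (Map.Entry<String, Integer> e : new TreeMap<>(topicPartitions).entrySet()) {
                md.update(e.getKey().getBytes(StandardCharsets.UTF_8));
                md.update((byte) 0); // separator between topic and count
                md.update(e.getValue().toString().getBytes(StandardCharsets.UTF_8));
            }
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest())
                sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }
}
```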


 Thanks,
 Jason



 On Tue, Aug 18, 2015 at 10:06 AM, Jun Rao j...@confluent.io wrote:

  Jason,
 
  Thanks for the writeup. A few comments below.
 
  1. When there are multiple common protocols in the JoinGroupRequest,
 which
  one would the coordinator pick?
  2. If the protocols don't agree, the group construction fails. What
 exactly
  does it mean? Do we send an error in every JoinGroupResponse and remove
 all
  members in the group in the coordinator?
  3. Consumer embedded protocol: The proposal has two different formats of
  subscription depending on whether wildcards are used or not. This seems a
  bit complicated. Would it be better to always use the metadata hash? The
  clients know the subscribed topics already. This way, the client code
  behaves the same whether wildcards are used or not.
 
  Jiangjie,
 
  With respect to rebalance churns due to topics being created/deleted.
 With
  the new consumer, the rebalance can probably settle within 200ms when
 there
  is a topic change. So, as long as we are not changing topic more than 5
  times per sec, there shouldn't be constant churns, right?
 
  Thanks,
 
  Jun
 
 
 
  On Tue, Aug 11, 2015 at 1:19 PM, Jason Gustafson ja...@confluent.io
  wrote:
 
   Hi Kafka Devs,
  
   One of the nagging issues in the current design of the new consumer has
   been the need to support a variety of assignment strategies. We've
   encountered this in particular in the design of copycat and the
  processing
   framework (KIP-28). From what I understand, Samza also has a number of
  use
   cases with custom assignment needs. The new consumer protocol supports
  new
   assignment strategies by hooking them into the broker. For many
   environments, this is a major pain and in some cases, a non-starter. It
   also challenges the validation that the coordinator can provide. For
   example, some assignment strategies call for partitions to be assigned
   multiple times, which means that the coordinator can only check that
   partitions have been assigned at least once.
  
   To solve these issues, we'd like to propose moving assignment to the
   client. I've written a wiki which outlines some protocol changes to
  achieve
   this:
  
  
 
 https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal
   .
   To summarize briefly, instead of the coordinator assigning the
 partitions
   itself, all subscriptions are forwarded to each member of the group
 which
   then decides independently which partitions it should consume. The
  protocol
   provides a mechanism for the coordinator to validate that all consumers
  use
   the same assignment strategy, but it does not ensure that the resulting
   assignment is correct. This provides a powerful capability for users
 to
   control the full data flow on the client side. They control how data is
   written to partitions 

Re: Review Request 33620: Patch for KAFKA-1690

2015-08-18 Thread Sriharsha Chintalapani


 On Aug. 18, 2015, 4:09 p.m., Jun Rao wrote:
  clients/src/main/java/org/apache/kafka/common/network/SSLTransportLayer.java,
   line 438
  https://reviews.apache.org/r/33620/diff/17/?file=1042291#file1042291line438
 
  Hmm, do we want to break here? It seems that we want to continue in the 
  loop until all bytes in netReadBuffer are read.

Jun, my earlier code was right, so I added it back. Once we have expanded 
appReadBuffer to currentApplicationBufferSize, we want to read the contents 
from appReadBuffer into dst before doing any further unwrapping of the data.
If dst doesn't have any space, we break.
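The transfer being discussed can be sketched as follows. This is a simplified stand-in for the readFromAppBuffer logic, not the actual SSLTransportLayer code: it copies as many bytes as dst can hold and reports how many were moved, so a return of zero tells the caller that dst is full and the unwrap loop should break.

```java
import java.nio.ByteBuffer;

public class BufferTransfer {
    // Copy bytes from appReadBuffer (in write mode) into dst:
    // flip the source, move min(available, space-in-dst) bytes,
    // then compact so the source stays in write mode.
    public static int readFromAppBuffer(ByteBuffer appReadBuffer, ByteBuffer dst) {
        appReadBuffer.flip();
        int limit = Math.min(appReadBuffer.remaining(), dst.remaining());
        for (int i = 0; i < limit; i++)
            dst.put(appReadBuffer.get());
        appReadBuffer.compact();
        return limit; // 0 when dst has no space left -> caller breaks out of the loop
    }
}
```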


- Sriharsha


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33620/#review95671
---


On Aug. 18, 2015, 6:24 p.m., Sriharsha Chintalapani wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33620/
 ---
 
 (Updated Aug. 18, 2015, 6:24 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1690
 https://issues.apache.org/jira/browse/KAFKA-1690
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. SSLFactory tests.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Added 
 PrincipalBuilder.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Fixed minor 
 issues with the patch.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Fixed minor 
 issues with the patch.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Broker side ssl changes.
 
 
 KAFKA-1684. SSL for socketServer.
 
 
 KAFKA-1690. Added SSLProducerSendTest and fixes to get right port for SSL.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Post merge fixes.
 
 
 KAFKA-1690. Added SSLProducerSendTest.
 
 
 KAFKA-1690. Minor fixes based on patch review comments.
 
 
 Merge commit
 
 
 KAFKA-1690. Added SSL Consumer Test.
 
 
 KAFKA-1690. SSL Support.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Addressing reviews. Removed interestOps from SSLTransportLayer.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 KAFKA-1690. added staged receives to selector.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Addressing reviews.
 
 
 KAFKA-1690. Add SSL support to broker, producer and consumer.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Add SSL support to Kafka Broker, Producer  Client.
 
 
 KAFKA-1690. Add SSL support for Kafka Brokers, Producers and Consumers.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Add SSL support for Kafka Brokers, Producers and Consumers.
 
 
 KAFKA-1690. Add SSL Support Kafka Broker, Producer and Consumer.
 
 
 KAFKA-1690. Add SSL support for Kafka Broker, Producer and Consumer.
 
 
 Diffs
 -
 
   build.gradle 983587fd0b7604c3a26fcbb6a1d63e5e470d23fe 
   checkstyle/import-control.xml e3f4f84c6becfd9087627f018690e1e2fc2b3bba 
   clients/src/main/java/org/apache/kafka/clients/ClientUtils.java 
 0d68bf1e1e90fe9d5d4397ddf817b9a9af8d9f7a 
   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
 2c421f42ed3fc5d61cf9c87a7eaa7bb23e26f63b 
   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
 0e51d7bd461d253f4396a5b6ca7cd391658807fa 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
 d35b421a515074d964c7fccb73d260b847ea5f00 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 be46b6c213ad8c6c09ad22886a5f36175ab0e13a 
   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
 03b8dd23df63a8d8a117f02eabcce4a2d48c44f7 
   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
 aa264202f2724907924985a5ecbe74afc4c6c04b 
   clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java 
 6c317480a181678747bfb6b77e315b08668653c5 
   

[jira] [Updated] (KAFKA-2439) Add MirrorMakerService to ducktape system tests

2015-08-18 Thread Geoff Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoff Anderson updated KAFKA-2439:
--
Labels: patch-available  (was: )

 Add MirrorMakerService to ducktape system tests
 ---

 Key: KAFKA-2439
 URL: https://issues.apache.org/jira/browse/KAFKA-2439
 Project: Kafka
  Issue Type: Sub-task
  Components: system tests
Reporter: Geoff Anderson
Assignee: Geoff Anderson
  Labels: patch-available
 Fix For: 0.8.3






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSSION] Kafka 0.8.2.2 release?

2015-08-18 Thread Gwen Shapira
Jun,

KAFKA-2147 doesn't seem to have a commit associated with it, so I can't
cherrypick just this fix.
I suggest leaving this out since there is a 0.8.2.x workaround in the JIRA.

Gwen

On Mon, Aug 17, 2015 at 5:24 PM, Jun Rao j...@confluent.io wrote:

 Gwen,

 Thanks for putting the list together.

 I'd recommend that we exclude the following:
 KAFKA-1702: This is for the old producer and is only a problem if there are
 some unexpected exceptions (e.g. UnknownClass).
 KAFKA-2336: Most people don't change offsets.topic.num.partitions.
 KAFKA-1724: The patch there was never committed since the fix is included in
 another jira (a much larger patch).
 KAFKA-2241: This doesn't seem to be a common problem. It only happens when the
 fetch request blocks on the broker for an extended period of time, which
 should be rare.

 I'd also recommend that we include the following:
 KAFKA-2147: This impacts the memory size of the purgatory and a number of
 people have experienced that. The fix is small and has been tested in
 production usage. It hasn't been committed though since the issue is
 already fixed in trunk and we weren't planning for an 0.8.2.2 release then.

 Thanks,

 Jun

 On Mon, Aug 17, 2015 at 2:56 PM, Gwen Shapira g...@confluent.io wrote:

  Thanks for creating a list, Grant!
 
  I placed it on the wiki with a quick evaluation of the content and
 whether
  it should be in 0.8.2.2:
 
 
 https://cwiki.apache.org/confluence/display/KAFKA/Proposed+patches+for+0.8.2.2
 
  I'm attempting to only cherrypick fixes that are both important for large
  number of users (or very critical to some users) and very safe (mostly
  judged by the size of the change, but not only)
 
  If your favorite bugfix is missing from the list, or is there but marked
  No, please let us know (in this thread) what we are missing and why it
 is
  both important and safe.
  Also, if I accidentally included something you consider unsafe, speak up!
 
  Gwen
 
  On Mon, Aug 17, 2015 at 8:17 AM, Grant Henke ghe...@cloudera.com
 wrote:
 
   +dev
  
   Adding dev list back in. Somehow it got dropped.
  
  
   On Mon, Aug 17, 2015 at 10:16 AM, Grant Henke ghe...@cloudera.com
  wrote:
  
Below is a list of candidate bug fix jiras marked fixed for 0.8.3. I
   don't
suspect all of these will (or should) make it into the release but
 this
should be a relatively complete list to work from:
   
   - KAFKA-2114 https://issues.apache.org/jira/browse/KAFKA-2114:
   Unable
   to change min.insync.replicas default
   - KAFKA-1702 https://issues.apache.org/jira/browse/KAFKA-1702:
   Messages silently Lost by producer
   - KAFKA-2012 https://issues.apache.org/jira/browse/KAFKA-2012:
   Broker should automatically handle corrupt index files
   - KAFKA-2406 https://issues.apache.org/jira/browse/KAFKA-2406:
  ISR
   propagation should be throttled to avoid overwhelming controller.
   - KAFKA-2336 https://issues.apache.org/jira/browse/KAFKA-2336:
   Changing offsets.topic.num.partitions after the offset topic is
   created
   breaks consumer group partition assignment
   - KAFKA-2337 https://issues.apache.org/jira/browse/KAFKA-2337:
   Verify
   that metric names will not collide when creating new topics
   - KAFKA-2393 https://issues.apache.org/jira/browse/KAFKA-2393:
   Correctly Handle InvalidTopicException in
  KafkaApis.getTopicMetadata()
   - KAFKA-2189 https://issues.apache.org/jira/browse/KAFKA-2189:
   Snappy
   compression of message batches less efficient in 0.8.2.1
   - KAFKA-2308 https://issues.apache.org/jira/browse/KAFKA-2308:
  New
   producer + Snappy face un-compression errors after broker restart
   - KAFKA-2042 https://issues.apache.org/jira/browse/KAFKA-2042:
  New
   producer metadata update always get all topics.
   - KAFKA-1367 https://issues.apache.org/jira/browse/KAFKA-1367:
   Broker
   topic metadata not kept in sync with ZooKeeper
   - KAFKA-972 https://issues.apache.org/jira/browse/KAFKA-972:
   MetadataRequest
   returns stale list of brokers
   - KAFKA-1867 https://issues.apache.org/jira/browse/KAFKA-1867:
   liveBroker
   list not updated on a cluster with no topics
   - KAFKA-1650 https://issues.apache.org/jira/browse/KAFKA-1650:
   Mirror
   Maker could lose data on unclean shutdown.
   - KAFKA-2009 https://issues.apache.org/jira/browse/KAFKA-2009:
  Fix
   UncheckedOffset.removeOffset synchronization and trace logging
 issue
   in
   mirror maker
   - KAFKA-2407 https://issues.apache.org/jira/browse/KAFKA-2407:
  Only
   create a log directory when it will be used
   - KAFKA-2327 https://issues.apache.org/jira/browse/KAFKA-2327:
   broker doesn't start if config defines advertised.host but not
   advertised.port
   - KAFKA-1788: producer record can stay in RecordAccumulator
 forever
  if
   leader is no available
   - KAFKA-2234 

Re: Review Request 33620: Patch for KAFKA-1690

2015-08-18 Thread Sriharsha Chintalapani


 On Aug. 18, 2015, 4:09 p.m., Jun Rao wrote:
  clients/src/main/java/org/apache/kafka/common/network/SSLTransportLayer.java,
   line 450
  https://reviews.apache.org/r/33620/diff/17/?file=1042291#file1042291line450
 
  The termination condition doesn't seem quite right. If dst has no 
  remaining space but there are still bytes in netReadBuffer, we should 
  return, right?

Jun, we want to read everything in netReadBuffer into appReadBuffer. 
readFromAppBuffer checks whether dst has any space remaining and only then 
transfers the bytes from appReadBuffer.


- Sriharsha


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33620/#review95671
---


On Aug. 17, 2015, 7:21 p.m., Sriharsha Chintalapani wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33620/
 ---
 
 (Updated Aug. 17, 2015, 7:21 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1690
 https://issues.apache.org/jira/browse/KAFKA-1690
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. SSLFactory tests.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Added 
 PrincipalBuilder.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Fixed minor 
 issues with the patch.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Fixed minor 
 issues with the patch.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Broker side ssl changes.
 
 
 KAFKA-1684. SSL for socketServer.
 
 
 KAFKA-1690. Added SSLProducerSendTest and fixes to get right port for SSL.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Post merge fixes.
 
 
 KAFKA-1690. Added SSLProducerSendTest.
 
 
 KAFKA-1690. Minor fixes based on patch review comments.
 
 
 Merge commit
 
 
 KAFKA-1690. Added SSL Consumer Test.
 
 
 KAFKA-1690. SSL Support.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Addressing reviews. Removed interestOps from SSLTransportLayer.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 KAFKA-1690. added staged receives to selector.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Addressing reviews.
 
 
 KAFKA-1690. Add SSL support to broker, producer and consumer.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Add SSL support to Kafka Broker, Producer  Client.
 
 
 KAFKA-1690. Add SSL support for Kafka Brokers, Producers and Consumers.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Add SSL support for Kafka Brokers, Producers and Consumers.
 
 
 KAFKA-1690. Add SSL Support Kafka Broker, Producer and Consumer.
 
 
 Diffs
 -
 
   build.gradle 983587fd0b7604c3a26fcbb6a1d63e5e470d23fe 
   checkstyle/import-control.xml e3f4f84c6becfd9087627f018690e1e2fc2b3bba 
   clients/src/main/java/org/apache/kafka/clients/ClientUtils.java 
 0d68bf1e1e90fe9d5d4397ddf817b9a9af8d9f7a 
   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
 2c421f42ed3fc5d61cf9c87a7eaa7bb23e26f63b 
   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
 0e51d7bd461d253f4396a5b6ca7cd391658807fa 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
 d35b421a515074d964c7fccb73d260b847ea5f00 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 be46b6c213ad8c6c09ad22886a5f36175ab0e13a 
   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
 03b8dd23df63a8d8a117f02eabcce4a2d48c44f7 
   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
 aa264202f2724907924985a5ecbe74afc4c6c04b 
   clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java 
 6c317480a181678747bfb6b77e315b08668653c5 
   clients/src/main/java/org/apache/kafka/common/config/SSLConfigs.java 
 PRE-CREATION 
   clients/src/main/java/org/apache/kafka/common/network/Authenticator.java 
 PRE-CREATION 
   

[jira] [Updated] (KAFKA-2136) Client side protocol changes to return quota delays

2015-08-18 Thread Aditya A Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya A Auradkar updated KAFKA-2136:
-
Attachment: KAFKA-2136_2015-08-18_13:24:00.patch

 Client side protocol changes to return quota delays
 ---

 Key: KAFKA-2136
 URL: https://issues.apache.org/jira/browse/KAFKA-2136
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
  Labels: quotas
 Attachments: KAFKA-2136.patch, KAFKA-2136_2015-05-06_18:32:48.patch, 
 KAFKA-2136_2015-05-06_18:35:54.patch, KAFKA-2136_2015-05-11_14:50:56.patch, 
 KAFKA-2136_2015-05-12_14:40:44.patch, KAFKA-2136_2015-06-09_10:07:13.patch, 
 KAFKA-2136_2015-06-09_10:10:25.patch, KAFKA-2136_2015-06-30_19:43:55.patch, 
 KAFKA-2136_2015-07-13_13:34:03.patch, KAFKA-2136_2015-08-18_13:19:57.patch, 
 KAFKA-2136_2015-08-18_13:24:00.patch


 As described in KIP-13, evolve the protocol to return a throttle_time_ms in 
 the Fetch and the ProduceResponse objects. Add client side metrics on the new 
 producer and consumer to expose the delay time.
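The delay-time metrics described above could be tracked with something like the sketch below. This is a hypothetical accumulator, not the Kafka sensor API: it records each throttle_time_ms value taken from a Fetch or Produce response and exposes the average and maximum delay.

```java
public class ThrottleTimeMetrics {
    private double max = 0.0;
    private double sum = 0.0;
    private long count = 0;

    // Record the throttle_time_ms value returned in a Fetch/Produce response.
    public void record(double throttleTimeMs) {
        max = Math.max(max, throttleTimeMs);
        sum += throttleTimeMs;
        count++;
    }

    // Average delay over all recorded responses (0 if none recorded yet).
    public double avg() {
        return count == 0 ? 0.0 : sum / count;
    }

    // Maximum delay seen so far.
    public double max() {
        return max;
    }
}
```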



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33378: Patch for KAFKA-2136

2015-08-18 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33378/
---

(Updated Aug. 18, 2015, 8:24 p.m.)


Review request for kafka, Joel Koshy and Jun Rao.


Bugs: KAFKA-2136
https://issues.apache.org/jira/browse/KAFKA-2136


Repository: kafka


Description (updated)
---

Changes are
- Addressing Joel's comments
- protocol changes to the fetch request and response to return the 
throttle_time_ms to clients
- New producer/consumer metrics to expose the avg and max delay time for a 
client
- Test cases.
- Addressed Joel's and Jun's comments


Diffs
-

  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java 
9dc669728e6b052f5c6686fcf1b5696a50538ab4 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
0baf16e55046a2f49f6431e01d52c323c95eddf0 
  clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
3dc8b015afd2347a41c9a9dbc02b8e367da5f75f 
  clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 
df073a0e76cc5cc731861b9604d0e19a928970e0 
  clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java 
eb8951fba48c335095cc43fc3672de1c733e07ff 
  clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java 
715504b32950666e9aa5a260fa99d5f897b2007a 
  clients/src/main/java/org/apache/kafka/common/requests/ProduceResponse.java 
febfc70dabc23671fd8a85cf5c5b274dff1e10fb 
  
clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetcherTest.java
 a7c83cac47d41138d47d7590a3787432d675c1b0 
  
clients/src/test/java/org/apache/kafka/clients/producer/internals/SenderTest.java
 8b1805d3d2bcb9fe2bacb37d870c3236aa9532c4 
  
clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java 
8b2aca85fa738180e5420985fddc39a4bf9681ea 
  core/src/main/scala/kafka/api/FetchRequest.scala 
5b38f8554898e54800abd65a7415dd0ac41fd958 
  core/src/main/scala/kafka/api/FetchResponse.scala 
0b6b33ab6f7a732ff1322b6f48abd4c43e0d7215 
  core/src/main/scala/kafka/api/ProducerRequest.scala 
c866180d3680da03e48d374415f10220f6ca68c4 
  core/src/main/scala/kafka/api/ProducerResponse.scala 
5d1fac4cb8943f5bfaa487f8e9d9d2856efbd330 
  core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
7ebc0405d1f309bed9943e7116051d1d8276f200 
  core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
f84306143c43049e3aa44e42beaffe7eb2783163 
  core/src/main/scala/kafka/server/ClientQuotaManager.scala 
9f8473f1c64d10c04cf8cc91967688e29e54ae2e 
  core/src/main/scala/kafka/server/KafkaApis.scala 
67f0cad802f901f255825aa2158545d7f5e7cc3d 
  core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
fae22d2af8daccd528ac24614290f46ae8f6c797 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
d829e180c3943a90861a12ec184f9b4e4bbafe7d 
  core/src/main/scala/kafka/server/ThrottledResponse.scala 
1f80d5480ccf7c411a02dd90296a7046ede0fae2 
  core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
b4c2a228c3c9872e5817ac58d3022e4903e317b7 
  core/src/test/scala/unit/kafka/server/ClientQuotaManagerTest.scala 
97dcca8c96f955acb3d92b29d7faa1e031ba71d4 
  core/src/test/scala/unit/kafka/server/ThrottledResponseExpirationTest.scala 
14a7f4538041d557c190127e3d5f85edf2a0e7c1 

Diff: https://reviews.apache.org/r/33378/diff/


Testing
---

New tests added


Thanks,

Aditya Auradkar



[jira] [Commented] (KAFKA-2136) Client side protocol changes to return quota delays

2015-08-18 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14701929#comment-14701929
 ] 

Aditya A Auradkar commented on KAFKA-2136:
--

Updated reviewboard https://reviews.apache.org/r/33378/diff/
 against branch origin/trunk

 Client side protocol changes to return quota delays
 ---

 Key: KAFKA-2136
 URL: https://issues.apache.org/jira/browse/KAFKA-2136
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
  Labels: quotas
 Attachments: KAFKA-2136.patch, KAFKA-2136_2015-05-06_18:32:48.patch, 
 KAFKA-2136_2015-05-06_18:35:54.patch, KAFKA-2136_2015-05-11_14:50:56.patch, 
 KAFKA-2136_2015-05-12_14:40:44.patch, KAFKA-2136_2015-06-09_10:07:13.patch, 
 KAFKA-2136_2015-06-09_10:10:25.patch, KAFKA-2136_2015-06-30_19:43:55.patch, 
 KAFKA-2136_2015-07-13_13:34:03.patch, KAFKA-2136_2015-08-18_13:19:57.patch, 
 KAFKA-2136_2015-08-18_13:24:00.patch


 As described in KIP-13, evolve the protocol to return a throttle_time_ms in 
 the Fetch and the ProduceResponse objects. Add client side metrics on the new 
 producer and consumer to expose the delay time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1070) Auto-assign node id

2015-08-18 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14702059#comment-14702059
 ] 

Sriharsha Chintalapani commented on KAFKA-1070:
---

[~paetling] Here, auto-assigning the node id is a one-time operation. Currently users 
have to declare a broker.id in server.properties, and this id is unique per broker in a 
cluster.
What this JIRA addresses is allowing users to skip declaring broker.id; instead, when 
Kafka starts up, it acquires a sequence id from ZooKeeper and writes it into 
meta.properties to use as the broker.id.
Once this id is generated and written to meta.properties, a new id won't be generated. 
So the problem you are trying to solve is not addressed by this JIRA; it just 
allows users not to worry about declaring a broker.id in server.properties.
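The one-time behavior described above can be sketched like this. The helper name and file handling here are illustrative, not Kafka's actual implementation: if meta.properties already holds a broker.id it is reused, otherwise a freshly generated id (which in Kafka comes from a ZooKeeper sequence counter) is persisted and returned.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class BrokerIdStore {
    // Reuse the id persisted in meta.properties if present; otherwise
    // persist the generated sequence id so later startups reuse it.
    public static int getOrCreateBrokerId(Path metaProps, int generatedId) {
        try {
            Properties props = new Properties();
            if (Files.exists(metaProps)) {
                try (InputStream in = Files.newInputStream(metaProps)) {
                    props.load(in);
                }
                String existing = props.getProperty("broker.id");
                if (existing != null)
                    return Integer.parseInt(existing); // already assigned once
            }
            props.setProperty("broker.id", Integer.toString(generatedId));
            try (OutputStream out = Files.newOutputStream(metaProps)) {
                props.store(out, "broker id sketch");
            }
            return generatedId;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```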

 Auto-assign node id
 ---

 Key: KAFKA-1070
 URL: https://issues.apache.org/jira/browse/KAFKA-1070
 Project: Kafka
  Issue Type: Bug
Reporter: Jay Kreps
Assignee: Sriharsha Chintalapani
  Labels: usability
 Fix For: 0.8.3

 Attachments: KAFKA-1070.patch, KAFKA-1070_2014-07-19_16:06:13.patch, 
 KAFKA-1070_2014-07-22_11:34:18.patch, KAFKA-1070_2014-07-24_20:58:17.patch, 
 KAFKA-1070_2014-07-24_21:05:33.patch, KAFKA-1070_2014-08-21_10:26:20.patch, 
 KAFKA-1070_2014-11-20_10:50:04.patch, KAFKA-1070_2014-11-25_20:29:37.patch, 
 KAFKA-1070_2015-01-01_17:39:30.patch, KAFKA-1070_2015-01-12_10:46:54.patch, 
 KAFKA-1070_2015-01-12_18:30:17.patch


 It would be nice to have Kafka brokers auto-assign node ids rather than 
 having that be a configuration. Having a configuration is irritating because 
 (1) you have to generate a custom config for each broker and (2) even though 
 it is in configuration, changing the node id can cause all kinds of bad 
 things to happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSSION] Kafka 0.8.2.2 release?

2015-08-18 Thread Gwen Shapira
Any objections if I leave KAFKA-2114 (setting min.insync.replicas default)
out?

The test code is using changes that were done after 0.8.2.x cut-off, which
makes it difficult to cherry-pick.

Gwen



On Tue, Aug 18, 2015 at 12:16 PM, Gwen Shapira g...@confluent.io wrote:

 Jun,

 KAFKA-2147 doesn't seem to have a commit associated with it, so I can't
 cherrypick just this fix.
 I suggest leaving this out since there is a 0.8.2.x workaround in the JIRA.

 Gwen

 On Mon, Aug 17, 2015 at 5:24 PM, Jun Rao j...@confluent.io wrote:

 Gwen,

 Thanks for putting the list together.

 I'd recommend that we exclude the following:
 KAFKA-1702: This is for the old producer and is only a problem if there
 are
 some unexpected exceptions (e.g. UnknownClass).
 KAFKA-2336: Most people don't change offsets.topic.num.partitions.
 KAFKA-1724: The patch there was never committed since the fix is included
 in
 another jira (a much larger patch).
 KAFKA-2241: This doesn't seem to be a common problem. It only happens when
 the
 fetch request blocks on the broker for an extended period of time, which
 should be rare.

 I'd also recommend that we include the following:
 KAFKA-2147: This impacts the memory size of the purgatory and a number of
 people have experienced that. The fix is small and has been tested in
 production usage. It hasn't been committed though since the issue is
 already fixed in trunk and we weren't planning for an 0.8.2.2 release
 then.

 Thanks,

 Jun

 On Mon, Aug 17, 2015 at 2:56 PM, Gwen Shapira g...@confluent.io wrote:

  Thanks for creating a list, Grant!
 
  I placed it on the wiki with a quick evaluation of the content and
 whether
  it should be in 0.8.2.2:
 
 
 https://cwiki.apache.org/confluence/display/KAFKA/Proposed+patches+for+0.8.2.2
 
  I'm attempting to only cherrypick fixes that are both important for
 large
  number of users (or very critical to some users) and very safe (mostly
  judged by the size of the change, but not only)
 
  If your favorite bugfix is missing from the list, or is there but marked
  No, please let us know (in this thread) what we are missing and why
 it is
  both important and safe.
  Also, if I accidentally included something you consider unsafe, speak
 up!
 
  Gwen
 
  On Mon, Aug 17, 2015 at 8:17 AM, Grant Henke ghe...@cloudera.com
 wrote:
 
   +dev
  
   Adding dev list back in. Somehow it got dropped.
  
  
   On Mon, Aug 17, 2015 at 10:16 AM, Grant Henke ghe...@cloudera.com
  wrote:
  
Below is a list of candidate bug fix jiras marked fixed for 0.8.3. I
   don't
suspect all of these will (or should) make it into the release but
 this
should be a relatively complete list to work from:
   
   - KAFKA-2114 https://issues.apache.org/jira/browse/KAFKA-2114:
   Unable
   to change min.insync.replicas default
   - KAFKA-1702 https://issues.apache.org/jira/browse/KAFKA-1702:
   Messages silently Lost by producer
   - KAFKA-2012 https://issues.apache.org/jira/browse/KAFKA-2012:
   Broker should automatically handle corrupt index files
   - KAFKA-2406 https://issues.apache.org/jira/browse/KAFKA-2406:
  ISR
   propagation should be throttled to avoid overwhelming controller.
   - KAFKA-2336 https://issues.apache.org/jira/browse/KAFKA-2336:
   Changing offsets.topic.num.partitions after the offset topic is
   created
   breaks consumer group partition assignment
   - KAFKA-2337 https://issues.apache.org/jira/browse/KAFKA-2337:
   Verify
   that metric names will not collide when creating new topics
   - KAFKA-2393 https://issues.apache.org/jira/browse/KAFKA-2393:
   Correctly Handle InvalidTopicException in
  KafkaApis.getTopicMetadata()
   - KAFKA-2189 https://issues.apache.org/jira/browse/KAFKA-2189:
   Snappy
   compression of message batches less efficient in 0.8.2.1
   - KAFKA-2308 https://issues.apache.org/jira/browse/KAFKA-2308:
  New
   producer + Snappy face un-compression errors after broker restart
   - KAFKA-2042 https://issues.apache.org/jira/browse/KAFKA-2042:
  New
   producer metadata update always get all topics.
   - KAFKA-1367 https://issues.apache.org/jira/browse/KAFKA-1367:
   Broker
   topic metadata not kept in sync with ZooKeeper
   - KAFKA-972 https://issues.apache.org/jira/browse/KAFKA-972:
   MetadataRequest
   returns stale list of brokers
   - KAFKA-1867 https://issues.apache.org/jira/browse/KAFKA-1867:
   liveBroker
   list not updated on a cluster with no topics
   - KAFKA-1650 https://issues.apache.org/jira/browse/KAFKA-1650:
   Mirror
   Maker could lose data on unclean shutdown.
   - KAFKA-2009 https://issues.apache.org/jira/browse/KAFKA-2009:
  Fix
   UncheckedOffset.removeOffset synchronization and trace logging
 issue
   in
   mirror maker
   - KAFKA-2407 https://issues.apache.org/jira/browse/KAFKA-2407:
  Only
   create a log directory when it will be used
   - KAFKA-2327 

[jira] [Updated] (KAFKA-2136) Client side protocol changes to return quota delays

2015-08-18 Thread Aditya A Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya A Auradkar updated KAFKA-2136:
-
Attachment: KAFKA-2136_2015-08-18_13:19:57.patch

 Client side protocol changes to return quota delays
 ---

 Key: KAFKA-2136
 URL: https://issues.apache.org/jira/browse/KAFKA-2136
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
  Labels: quotas
 Attachments: KAFKA-2136.patch, KAFKA-2136_2015-05-06_18:32:48.patch, 
 KAFKA-2136_2015-05-06_18:35:54.patch, KAFKA-2136_2015-05-11_14:50:56.patch, 
 KAFKA-2136_2015-05-12_14:40:44.patch, KAFKA-2136_2015-06-09_10:07:13.patch, 
 KAFKA-2136_2015-06-09_10:10:25.patch, KAFKA-2136_2015-06-30_19:43:55.patch, 
 KAFKA-2136_2015-07-13_13:34:03.patch, KAFKA-2136_2015-08-18_13:19:57.patch


 As described in KIP-13, evolve the protocol to return a throttle_time_ms in 
 the Fetch and the ProduceResponse objects. Add client side metrics on the new 
 producer and consumer to expose the delay time.
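 As a rough illustration of the protocol evolution described above (the class
 and field names below are assumptions for this sketch, not Kafka's actual
 API), a versioned response could carry the throttle time alongside the
 payload, defaulting to zero for older versions that lack the field:

```java
// Hypothetical sketch of a versioned response carrying the quota delay.
// Names (ThrottledResponseSketch, throttleTimeMs) are illustrative only.
public class ThrottledResponseSketch {
    private final int throttleTimeMs; // ms the broker delayed this response

    public ThrottledResponseSketch(int version, int throttleTimeMs) {
        // Protocol versions before the change have no throttle field;
        // treat them as zero delay so client metrics stay well-defined.
        this.throttleTimeMs = version >= 1 ? throttleTimeMs : 0;
    }

    public int throttleTimeMs() {
        return throttleTimeMs;
    }

    public static void main(String[] args) {
        System.out.println(new ThrottledResponseSketch(1, 250).throttleTimeMs());
        System.out.println(new ThrottledResponseSketch(0, 250).throttleTimeMs());
    }
}
```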



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33378: Patch for KAFKA-2136

2015-08-18 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33378/
---

(Updated Aug. 18, 2015, 8:20 p.m.)


Review request for kafka, Joel Koshy and Jun Rao.


Bugs: KAFKA-2136
https://issues.apache.org/jira/browse/KAFKA-2136


Repository: kafka


Description (updated)
---

kafka-2005; Generate html report for system tests; patched by Ashish Singh; 
reviewed by Jun Rao


kafka-2266; Client Selector can drop idle connections without notifying 
NetworkClient; patched by Jason Gustafson; reviewed by Jun Rao


kafka-2232; make MockProducer generic; patched by Alexander Pakulov; reviewed 
by Jun Rao


kafka-2164; ReplicaFetcherThread: suspicious log message on reset offset; 
patched by Alexey Ozeritski; reviewed by Jun Rao


kafka-2101; Metric metadata-age is reset on a failed update; patched by Tim 
Brooks; reviewed by Jun Rao


kafka-2195; Add versionId to AbstractRequest.getErrorResponse and 
AbstractRequest.getRequest; patched by Andrii Biletskyi; reviewed by Jun Rao


kafka-2270; incorrect package name in unit tests; patched by Proneet Verma; 
reviewed by Jun Rao


kafka-2272; listeners endpoint parsing fails if the hostname has capital 
letter; patched by Sriharsha Chintalapani; reviewed by Jun Rao


kafka-2264; SESSION_TIMEOUT_MS_CONFIG in ConsumerConfig should be int; patched 
by Manikumar Reddy; reviewed by Jun Rao


kafka-2252; Socket connection closing is logged, but not corresponding opening 
of socket; patched by Gwen Shapira; reviewed by Jun Rao


kafka-2262; LogSegmentSize validation should be consistent; patched by 
Manikumar Reddy; reviewed by Jun Rao


trivial fix for stylecheck error on Jenkins


kafka-2249; KafkaConfig does not preserve original Properties; patched by Gwen 
Shapira; reviewed by Jun Rao


kafka-2265; creating a topic with large number of partitions takes a long time; 
patched by Manikumar Reddy; reviewed by Jun Rao


kafka-2234; Partition reassignment of a nonexistent topic prevents future 
reassignments; patched by Manikumar Reddy; reviewed by Jun Rao


trivial change to fix unit test failure introduced in kafka-2234


kafka-1758; corrupt recovery file prevents startup; patched by Manikumar Reddy; 
reviewed by Neha Narkhede and Jun Rao


kafka-1646; Improve consumer read performance for Windows; patched by Honghai 
Chen; reviewed by Jay Kreps and Jun Rao


kafka-2012; Broker should automatically handle corrupt index files;  patched by 
Manikumar Reddy; reviewed by Jun Rao


kafka-2290; OffsetIndex should open RandomAccessFile consistently; patched by 
Chris Black; reviewed by Jun Rao


kafka-2235; LogCleaner offset map overflow; patched by Ivan Simoneko; reviewed 
by Jun Rao


KAFKA-2245; Add response tests for consumer coordinator; reviewed by Joel Koshy


KAFKA-2293; Fix incorrect format specification in Partition.scala; reviewed by 
Joel Koshy


kafka-2168; New consumer poll() can block other calls like position(), 
commit(), and close() indefinitely; patched by Jason Gustafson; reviewed by Jay 
Kreps, Ewen Cheslack-Postava, Guozhang Wang and Jun Rao


KAFKA-2294; javadoc compile error due to illegal p/ , build failing (jdk 8); 
patched by Jeff Maxwell; reviewed by Jakob Homan


KAFKA-2281: avoid unnecessary value copying if logAsString is false; reviewed 
by Guozhang Wang


KAFKA-2168: minor follow-up patch; reviewed by Guozhang Wang


KAFKA-1740: merge offset manager into consumer coordinator; reviewed by Onur 
Karaman and Jason Gustafson


kafka-2248; Use Apache Rat to enforce copyright headers; patched by Ewen 
Cheslack-Postava; reviewed by Gwen Shapira, Joel Joshy and Jun Rao


kafka-2132; Move Log4J appender to a separate module; patched by Ashish Singh; 
reviewed by Gwen Shapira, Aditya Auradkar and Jun Rao


KAFKA-2314: proper MirrorMaker's message handler help message; reviewed by 
Guozhang Wang


kafka-1367; Broker topic metadata not kept in sync with ZooKeeper; patched by 
Ashish Singh; reviewed by Jun Rao


KAFKA-2304 Supported enabling JMX in Kafka Vagrantfile patch by Stevo Slavic 
reviewed by Ewen Cheslack-Postava


KAFKA-2306: add another metric for buffer exhausted; reviewed by Guozhang Wang


KAFKA-2317: follow-up of KAFKA1367; reviewed by Guozhang Wang


KAFKA-2313: javadoc fix for KafkaConsumer deserialization; reviewed by Guozhang 
Wang


KAFKA-2298; Client Selector can drop connections on InvalidReceiveException 
without notifying NetworkClient; reviewed by Jason Gustafson and Joel Koshy


Trivial commit - explicitly exclude build/rat-report.xml from rat check


KAFKA-2308: make MemoryRecords idempotent; reviewed by Guozhang Wang


KAFKA-2316: Drop java 1.6 support; patched by Sriharsha Chintalapani reviewed 
by Ismael Juma and Gwen Shapira


KAFKA-2327; broker doesn't start if config defines advertised.host but not 
advertised.port

Added unit tests as well. These fail without the fix, but pass 

[jira] [Commented] (KAFKA-2136) Client side protocol changes to return quota delays

2015-08-18 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14701925#comment-14701925
 ] 

Aditya A Auradkar commented on KAFKA-2136:
--

Updated reviewboard https://reviews.apache.org/r/33378/diff/
 against branch trunk

 Client side protocol changes to return quota delays
 ---

 Key: KAFKA-2136
 URL: https://issues.apache.org/jira/browse/KAFKA-2136
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
  Labels: quotas
 Attachments: KAFKA-2136.patch, KAFKA-2136_2015-05-06_18:32:48.patch, 
 KAFKA-2136_2015-05-06_18:35:54.patch, KAFKA-2136_2015-05-11_14:50:56.patch, 
 KAFKA-2136_2015-05-12_14:40:44.patch, KAFKA-2136_2015-06-09_10:07:13.patch, 
 KAFKA-2136_2015-06-09_10:10:25.patch, KAFKA-2136_2015-06-30_19:43:55.patch, 
 KAFKA-2136_2015-07-13_13:34:03.patch, KAFKA-2136_2015-08-18_13:19:57.patch


 As described in KIP-13, evolve the protocol to return a throttle_time_ms in 
 the Fetch and the ProduceResponse objects. Add client side metrics on the new 
 producer and consumer to expose the delay time.





[jira] [Commented] (KAFKA-2436) log.retention.hours should be honored by LogManager

2015-08-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14701894#comment-14701894
 ] 

ASF GitHub Bot commented on KAFKA-2436:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/142


 log.retention.hours should be honored by LogManager
 ---

 Key: KAFKA-2436
 URL: https://issues.apache.org/jira/browse/KAFKA-2436
 Project: Kafka
  Issue Type: Bug
Reporter: Dong Lin
Assignee: Dong Lin
Priority: Critical

 Currently log.retention.hours is used to calculate 
 KafkaConfig.logRetentionTimeMillis. But it is not used in LogManager to 
 decide when to delete a log. LogManager is only using the log.retention.ms in 
 the broker configuration.
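 For reference, the precedence the fix implies can be sketched as below
 (this is an illustrative sketch, not Kafka's actual code; method and
 parameter names are assumptions): log.retention.ms, if set, wins over
 log.retention.minutes, which wins over log.retention.hours.

```java
// Sketch of the retention-config precedence: ms > minutes > hours.
// Names are illustrative; hours is assumed non-null (it has a broker default).
public class RetentionConfigSketch {
    static long retentionMs(Long ms, Integer minutes, Integer hours) {
        if (ms != null) return ms;                        // most specific wins
        if (minutes != null) return minutes * 60L * 1000L;
        return hours * 60L * 60L * 1000L;                 // fall back to hours
    }

    public static void main(String[] args) {
        System.out.println(retentionMs(null, null, 168)); // hours-only config
        System.out.println(retentionMs(1000L, 5, 168));   // ms overrides both
    }
}
```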





[GitHub] kafka pull request: KAFKA-2436; log.retention.hours should be hono...

2015-08-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/142


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Kafka KIP meeting Aug. 18

2015-08-18 Thread Flavio Junqueira
Thanks for the notes, Jun. There was a mention of single-writer behavior, if I 
understood correctly, in the context of the new consumer, and I was wondering 
if that's specific to the new consumer work or if it's going to have any 
dependency or relation to KIP-27.

-Flavio

 On 18 Aug 2015, at 20:20, Jun Rao j...@confluent.io wrote:
 
 The following are the notes for today's KIP discussion.
 
 * client-side assignment strategy: We discussed concerns about rebalancing
 time due to metadata inconsistency, especially when lots of topics are
 subscribed. Will discuss a bit more on the mailing list.
 
 * CopyCat data api: The discussions are in KAFKA-2367 for people who are
 interested.
 
 * 0.8.2.2: We want to make this a low risk bug fix release since 0.8.3 is
 coming. So, will only include a small number of critical and small fixes.
 
 * 0.8.3: The main features will be security and the new consumer. We will
 be cutting a release branch when the major pieces for these new features
 have been committed.
 
 The link to the recording will be added to
 https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
 in a day or two.
 
 Thanks,
 
 Jun
 
 
 
 
 
 On Mon, Aug 17, 2015 at 2:54 PM, Jun Rao j...@confluent.io wrote:
 
 Hi, Everyone,
 
 We will have a Kafka KIP meeting tomorrow at 11am PST. If you plan to
 attend but haven't received an invite, please let me know. The following is
 the agenda.
 
 Agenda
 1. Discuss the proposal on client-side assignment strategy (
 https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal
 )
 2. Discuss comments on CopyCat data api (KAFKA-2367)
 3. Kafka 0.8.2.2 release
 4. Review Backlog
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20%3D%20%22Patch%20Available%22%20ORDER%20BY%20updated%20DESC
 
 
 Thanks,
 
 Jun
 



Re: Review Request 33378: Patch for KAFKA-2136

2015-08-18 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33378/
---

(Updated Aug. 18, 2015, 8:24 p.m.)


Review request for kafka, Joel Koshy and Jun Rao.


Bugs: KAFKA-2136
https://issues.apache.org/jira/browse/KAFKA-2136


Repository: kafka


Description (updated)
---

Addressing Joel's comments


Merging


Changing variable name


Addressing Joel's comments


Addressing Joel's comments


Addressing comments


Diffs (updated)
-

  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java 
9dc669728e6b052f5c6686fcf1b5696a50538ab4 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
0baf16e55046a2f49f6431e01d52c323c95eddf0 
  clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
3dc8b015afd2347a41c9a9dbc02b8e367da5f75f 
  clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 
df073a0e76cc5cc731861b9604d0e19a928970e0 
  clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java 
eb8951fba48c335095cc43fc3672de1c733e07ff 
  clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java 
715504b32950666e9aa5a260fa99d5f897b2007a 
  clients/src/main/java/org/apache/kafka/common/requests/ProduceResponse.java 
febfc70dabc23671fd8a85cf5c5b274dff1e10fb 
  
clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetcherTest.java
 a7c83cac47d41138d47d7590a3787432d675c1b0 
  
clients/src/test/java/org/apache/kafka/clients/producer/internals/SenderTest.java
 8b1805d3d2bcb9fe2bacb37d870c3236aa9532c4 
  
clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java 
8b2aca85fa738180e5420985fddc39a4bf9681ea 
  core/src/main/scala/kafka/api/FetchRequest.scala 
5b38f8554898e54800abd65a7415dd0ac41fd958 
  core/src/main/scala/kafka/api/FetchResponse.scala 
0b6b33ab6f7a732ff1322b6f48abd4c43e0d7215 
  core/src/main/scala/kafka/api/ProducerRequest.scala 
c866180d3680da03e48d374415f10220f6ca68c4 
  core/src/main/scala/kafka/api/ProducerResponse.scala 
5d1fac4cb8943f5bfaa487f8e9d9d2856efbd330 
  core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
7ebc0405d1f309bed9943e7116051d1d8276f200 
  core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
f84306143c43049e3aa44e42beaffe7eb2783163 
  core/src/main/scala/kafka/server/ClientQuotaManager.scala 
9f8473f1c64d10c04cf8cc91967688e29e54ae2e 
  core/src/main/scala/kafka/server/KafkaApis.scala 
67f0cad802f901f255825aa2158545d7f5e7cc3d 
  core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
fae22d2af8daccd528ac24614290f46ae8f6c797 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
d829e180c3943a90861a12ec184f9b4e4bbafe7d 
  core/src/main/scala/kafka/server/ThrottledResponse.scala 
1f80d5480ccf7c411a02dd90296a7046ede0fae2 
  core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
b4c2a228c3c9872e5817ac58d3022e4903e317b7 
  core/src/test/scala/unit/kafka/server/ClientQuotaManagerTest.scala 
97dcca8c96f955acb3d92b29d7faa1e031ba71d4 
  core/src/test/scala/unit/kafka/server/ThrottledResponseExpirationTest.scala 
14a7f4538041d557c190127e3d5f85edf2a0e7c1 

Diff: https://reviews.apache.org/r/33378/diff/


Testing
---

New tests added


Thanks,

Aditya Auradkar



[jira] [Commented] (KAFKA-1070) Auto-assign node id

2015-08-18 Thread Alex Etling (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14702030#comment-14702030
 ] 

Alex Etling commented on KAFKA-1070:


Hello Kafka Geniuses,
 I have a question regarding this PR.  Some context: 
We have started playing around with kafka 0.8.2.1 to build a data pipeline 
at the company I work at.  Initially, to get autoscaling with AWS working, we 
were mapping the IP of our boxes to our broker id.  For a while everything was 
good.  Today I realized (please correct me if I am wrong) that kafka assigns 
the replicas to a topic at topic creation time. These replicas are not modified 
later unless you specifically rebalance the cluster(this is different than ISR 
which can go from 0 servers to the set of replicas).  This leads to an 
interesting question on how to cycle in new boxes.  The easiest way seems to be 
to copy all data from one box to another, kill the old box and start the new 
box with the same broker.id.  This is not really easy when you do a direct 
mapping of IP -> broker.id.  
 So now we come to this Jira ticket.  I was wondering if you could 
enumerate for me how this auto-assign node id would deal with the cycling of a 
box.  If I bring down a box that was auto-assigned a broker.id of X and bring 
back up a new box, what will happen?  Will that new box have broker.id X as 
well?  What if I bring down two boxes with broker.id X and broker.id Y, what is 
the broker.id of the new box I spin up?

Thanks for the help,
Alex
 

 Auto-assign node id
 ---

 Key: KAFKA-1070
 URL: https://issues.apache.org/jira/browse/KAFKA-1070
 Project: Kafka
  Issue Type: Bug
Reporter: Jay Kreps
Assignee: Sriharsha Chintalapani
  Labels: usability
 Fix For: 0.8.3

 Attachments: KAFKA-1070.patch, KAFKA-1070_2014-07-19_16:06:13.patch, 
 KAFKA-1070_2014-07-22_11:34:18.patch, KAFKA-1070_2014-07-24_20:58:17.patch, 
 KAFKA-1070_2014-07-24_21:05:33.patch, KAFKA-1070_2014-08-21_10:26:20.patch, 
 KAFKA-1070_2014-11-20_10:50:04.patch, KAFKA-1070_2014-11-25_20:29:37.patch, 
 KAFKA-1070_2015-01-01_17:39:30.patch, KAFKA-1070_2015-01-12_10:46:54.patch, 
 KAFKA-1070_2015-01-12_18:30:17.patch


 It would be nice to have Kafka brokers auto-assign node ids rather than 
 having that be a configuration. Having a configuration is irritating because 
 (1) you have to generate a custom config for each broker and (2) even though 
 it is in configuration, changing the node id can cause all kinds of bad 
 things to happen.





[jira] [Commented] (KAFKA-2367) Add Copycat runtime data API

2015-08-18 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14702063#comment-14702063
 ] 

Neha Narkhede commented on KAFKA-2367:
--

I think there are various tradeoffs, as with most choices that a framework is 
presented with :)

The tradeoffs I see are:
1. Agility vs maturity: The maturity argument is that Avro is an advanced 
serialization library that already exists and in spite of having been through 
various compatibility issues, is now well tested and adopted. The agility 
argument against Avro is that for a new framework like Copycat, we might be 
able to move faster (over several releases) by owning and fixing our runtime 
data model, while not waiting for the Avro community to release a patched 
version. This is a problem we struggled with for ZkClient, codahale-metrics and 
ZooKeeper on the core Kafka side and though one can argue that the Avro 
community is better, this still remains a concern. The success of the Copycat 
framework depends on its ability to always be the present framework for copying 
data to Kafka and as an early project, agility is key.
2. Cost/time savings vs control: The cost/time saving argument goes for 
adopting Avro even if we really need a very small percentage of it. This does 
save us a little time upfront but the downside is that now we end up having 
Copycat depend on Avro (and all its dependencies). I'm totally in favor of 
using a mature open source library but observing the size of the code we need 
to pull from Avro, I couldn't convince myself of the benefit it presents in 
saving some effort upfront. After all, there will be bugs in either codebase, 
we'd have to find the fastest way to fix those.
3. Generic public interface to encourage connector developers: This is a very 
right-brain argument and a subtle one. I agree with [~jkreps] here. Given 
that our goal should be to attract a large ecosystem of connectors, I would 
want us to remove every bit of pain and friction that would cause connector 
developers to either question our choice of Avro or spend time clarifying its 
impact on them. I understand that in practice this isn't a concern and as long as 
we have the right serializers, this will not even be quite so visible but a 
simple SchemaBuilder imported from org.apache.avro can start this discussion 
and distract connector developers who aren't necessarily Avro fans. 

Overall, given the tradeoffs, I'm leaning towards us picking a custom one and 
not depending on all of Avro. 

 Add Copycat runtime data API
 

 Key: KAFKA-2367
 URL: https://issues.apache.org/jira/browse/KAFKA-2367
 Project: Kafka
  Issue Type: Sub-task
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.8.3


 Design the API used for runtime data in Copycat. This API is used to 
 construct schemas and records that Copycat processes. This needs to be a 
 fairly general data model (think Avro, JSON, Protobufs, Thrift) in order to 
 support complex, varied data types that may be input from/output to many data 
 systems.
 This issue should also address the serialization interfaces used 
 within Copycat, which translate the runtime data into serialized byte[] form. 
 It is important that these be considered together because the data format can 
 be used in multiple ways (records, partition IDs, partition offsets), so it 
 and the corresponding serializers must be sufficient for all these use cases.





[jira] [Updated] (KAFKA-2330) Vagrantfile sets global configs instead of per-provider override configs

2015-08-18 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2330:
-
Reviewer: Gwen Shapira  (was: Joe Stein)

 Vagrantfile sets global configs instead of per-provider override configs
 

 Key: KAFKA-2330
 URL: https://issues.apache.org/jira/browse/KAFKA-2330
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2330.patch


 There's a couple of minor incorrect usages of the global configuration object 
 in the Vagrantfile inside provider-specific override blocks where we should 
 be using the override config object. Two end up being harmless since they 
 have no effect on other providers (but should still be corrected). The third 
 results in using rsync when using Virtualbox, which is unnecessary, slower, 
 and requires copying the entire kafka directory to every VM.





[jira] [Updated] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-18 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1690:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~sriharsha], thanks a lot for your endurance. +1 and committed to trunk.

 Add SSL support to Kafka Broker, Producer and Consumer
 --

 Key: KAFKA-1690
 URL: https://issues.apache.org/jira/browse/KAFKA-1690
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
 KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
 KAFKA-1690_2015-05-11_16:09:36.patch, KAFKA-1690_2015-05-12_16:20:08.patch, 
 KAFKA-1690_2015-05-15_07:18:21.patch, KAFKA-1690_2015-05-20_14:54:35.patch, 
 KAFKA-1690_2015-05-21_10:37:08.patch, KAFKA-1690_2015-06-03_18:52:29.patch, 
 KAFKA-1690_2015-06-23_13:18:20.patch, KAFKA-1690_2015-07-20_06:10:42.patch, 
 KAFKA-1690_2015-07-20_11:59:57.patch, KAFKA-1690_2015-07-25_12:10:55.patch, 
 KAFKA-1690_2015-08-16_20:41:02.patch, KAFKA-1690_2015-08-17_08:12:50.patch, 
 KAFKA-1690_2015-08-17_09:28:52.patch, KAFKA-1690_2015-08-17_12:20:53.patch, 
 KAFKA-1690_2015-08-18_11:24:46.patch, KAFKA-1690_2015-08-18_17:24:48.patch








Re: Review Request 33620: Patch for KAFKA-1690

2015-08-18 Thread Sriharsha Chintalapani


 On Aug. 19, 2015, 1:56 a.m., Joel Koshy wrote:
  clients/src/main/java/org/apache/kafka/common/utils/Utils.java, line 520
  https://reviews.apache.org/r/33620/diff/18/?file=1043256#file1043256line520
 
  second flip after `BUFFER_OVERFLOW` in `handshakeWrap` - you had 
  mentioned offline that the javadoc/source indicated that `netWriteBuffer` 
  is untouched if it is going to yield a `BUFFER_OVERFLOW`. I had trouble 
  finding it - can you point me to javadoc/grepcode/equivalent?

http://docs.oracle.com/javase/7/docs/api/javax/net/ssl/SSLEngine.html
For example, unwrap() will return a SSLEngineResult.Status.BUFFER_OVERFLOW 
result if the engine determines that there is not enough destination buffer 
space available. Applications should call SSLSession.getApplicationBufferSize() 
and compare that value with the space available in the destination buffer, 
enlarging the buffer if necessary. Similarly, if unwrap() were to return a 
SSLEngineResult.Status.BUFFER_UNDERFLOW, the application should call 
SSLSession.getPacketBufferSize() to ensure that the source buffer has enough 
room to hold a record (enlarging if necessary), and then obtain more inbound 
data. 
I'll add unit test around this.
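
The buffer-sizing pattern that javadoc describes can be sketched as follows
(the helper name is an assumption for illustration, not the patch's actual
code): on BUFFER_OVERFLOW, grow the destination buffer to the size the session
advertises, preserving any bytes already written, and retry the operation.

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import java.nio.ByteBuffer;

public class BufferOverflowSketch {
    // On BUFFER_OVERFLOW the destination buffer is too small: allocate one of
    // the session-advertised size, copy over what was written, and retry.
    static ByteBuffer ensureCapacity(ByteBuffer buf, int sessionBufferSize) {
        if (buf.capacity() >= sessionBufferSize)
            return buf;
        ByteBuffer enlarged = ByteBuffer.allocate(sessionBufferSize);
        buf.flip();          // switch to reading the bytes written so far
        enlarged.put(buf);   // copy them into the larger buffer
        return enlarged;
    }

    public static void main(String[] args) throws Exception {
        SSLEngine engine = SSLContext.getDefault().createSSLEngine();
        int appSize = engine.getSession().getApplicationBufferSize();
        ByteBuffer grown = ensureCapacity(ByteBuffer.allocate(16), appSize);
        System.out.println(grown.capacity() >= appSize);
    }
}
```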


- Sriharsha


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33620/#review95765
---


On Aug. 19, 2015, 12:24 a.m., Sriharsha Chintalapani wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33620/
 ---
 
 (Updated Aug. 19, 2015, 12:24 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1690
 https://issues.apache.org/jira/browse/KAFKA-1690
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. SSLFactory tests.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Added 
 PrincipalBuilder.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Fixed minor 
 issues with the patch.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Fixed minor 
 issues with the patch.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Broker side ssl changes.
 
 
 KAFKA-1684. SSL for socketServer.
 
 
 KAFKA-1690. Added SSLProducerSendTest and fixes to get right port for SSL.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Post merge fixes.
 
 
 KAFKA-1690. Added SSLProducerSendTest.
 
 
 KAFKA-1690. Minor fixes based on patch review comments.
 
 
 Merge commit
 
 
 KAFKA-1690. Added SSL Consumer Test.
 
 
 KAFKA-1690. SSL Support.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Addressing reviews. Removed interestOps from SSLTransportLayer.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 KAFKA-1690. added staged receives to selector.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Addressing reviews.
 
 
 KAFKA-1690. Add SSL support to broker, producer and consumer.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Add SSL support to Kafka Broker, Producer & Client.
 
 
 KAFKA-1690. Add SSL support for Kafka Brokers, Producers and Consumers.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Add SSL support for Kafka Brokers, Producers and Consumers.
 
 
 KAFKA-1690. Add SSL Support Kafka Broker, Producer and Consumer.
 
 
 KAFKA-1690. Add SSL support for Kafka Broker, Producer and Consumer.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 Diffs
 -
 
   build.gradle 983587fd0b7604c3a26fcbb6a1d63e5e470d23fe 
   checkstyle/import-control.xml e3f4f84c6becfd9087627f018690e1e2fc2b3bba 
   clients/src/main/java/org/apache/kafka/clients/ClientUtils.java 
 0d68bf1e1e90fe9d5d4397ddf817b9a9af8d9f7a 
   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
 2c421f42ed3fc5d61cf9c87a7eaa7bb23e26f63b 
   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
 

Re: Review Request 33620: Patch for KAFKA-1690

2015-08-18 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33620/#review95765
---


I haven't finished going through the latest patch, but discussed concerns so 
far offline. I'll continue tomorrow, but this should probably be checked in, 
given the multiple reviews so far and the effort to rebase.


clients/src/main/java/org/apache/kafka/common/config/SSLConfigs.java (line 21)
https://reviews.apache.org/r/33620/#comment150879

remove EITHER



clients/src/main/java/org/apache/kafka/common/network/KafkaChannel.java (line 
135)
https://reviews.apache.org/r/33620/#comment150890

can drop `long x`

actually.. i'll accumulate these stylistic comments and we'll handle in a 
follow-up.



clients/src/main/java/org/apache/kafka/common/utils/Utils.java (line 520)
https://reviews.apache.org/r/33620/#comment150915

second flip after `BUFFER_OVERFLOW` in `handshakeWrap` - you had mentioned 
offline that the javadoc/source indicated that `netWriteBuffer` is untouched if 
it is going to yield a `BUFFER_OVERFLOW`. I had trouble finding it - can you 
point me to javadoc/grepcode/equivalent?


- Joel Koshy


On Aug. 19, 2015, 12:24 a.m., Sriharsha Chintalapani wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33620/
 ---
 
 (Updated Aug. 19, 2015, 12:24 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1690
 https://issues.apache.org/jira/browse/KAFKA-1690
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. SSLFactory tests.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Added 
 PrincipalBuilder.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Fixed minor 
 issues with the patch.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Fixed minor 
 issues with the patch.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Broker side ssl changes.
 
 
 KAFKA-1684. SSL for socketServer.
 
 
 KAFKA-1690. Added SSLProducerSendTest and fixes to get right port for SSL.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Post merge fixes.
 
 
 KAFKA-1690. Added SSLProducerSendTest.
 
 
 KAFKA-1690. Minor fixes based on patch review comments.
 
 
 Merge commit
 
 
 KAFKA-1690. Added SSL Consumer Test.
 
 
 KAFKA-1690. SSL Support.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Addressing reviews. Removed interestOps from SSLTransportLayer.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 KAFKA-1690. added staged receives to selector.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Addressing reviews.
 
 
 KAFKA-1690. Add SSL support to broker, producer and consumer.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Add SSL support to Kafka Broker, Producer & Client.
 
 
 KAFKA-1690. Add SSL support for Kafka Brokers, Producers and Consumers.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Add SSL support for Kafka Brokers, Producers and Consumers.
 
 
 KAFKA-1690. Add SSL Support Kafka Broker, Producer and Consumer.
 
 
 KAFKA-1690. Add SSL support for Kafka Broker, Producer and Consumer.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 Diffs
 -
 
   build.gradle 983587fd0b7604c3a26fcbb6a1d63e5e470d23fe 
   checkstyle/import-control.xml e3f4f84c6becfd9087627f018690e1e2fc2b3bba 
   clients/src/main/java/org/apache/kafka/clients/ClientUtils.java 
 0d68bf1e1e90fe9d5d4397ddf817b9a9af8d9f7a 
   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
 2c421f42ed3fc5d61cf9c87a7eaa7bb23e26f63b 
   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
 0e51d7bd461d253f4396a5b6ca7cd391658807fa 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
 d35b421a515074d964c7fccb73d260b847ea5f00 
   

[jira] [Commented] (KAFKA-2330) Vagrantfile sets global configs instead of per-provider override configs

2015-08-18 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702443#comment-14702443
 ] 

Ewen Cheslack-Postava commented on KAFKA-2330:
--

[~gwenshap] Reassigned to you since I think you're familiar with some of this 
since you reviewed the system test patch. This is pretty minimal and still 
applies cleanly, should be a quick review. Or maybe [~guozhang] wants to review?

 Vagrantfile sets global configs instead of per-provider override configs
 

 Key: KAFKA-2330
 URL: https://issues.apache.org/jira/browse/KAFKA-2330
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2330.patch


 There's a couple of minor incorrect usages of the global configuration object 
 in the Vagrantfile inside provider-specific override blocks where we should 
 be using the override config object. Two end up being harmless since they 
 have no affect on other providers (but should still be corrected). The third 
 results in using rsync when using Virtualbox, which is unnecessary, slower, 
 and requires copying the entire kafka directory to every VM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33620: Patch for KAFKA-1690

2015-08-18 Thread Sriharsha Chintalapani


 On Aug. 19, 2015, 1:56 a.m., Joel Koshy wrote:
  clients/src/main/java/org/apache/kafka/common/utils/Utils.java, line 520
  https://reviews.apache.org/r/33620/diff/18/?file=1043256#file1043256line520
 
  second flip after `BUFFER_OVERFLOW` in `handshakeWrap` - you had 
  mentioned offline that the javadoc/source indicated that `netWriteBuffer` 
  is untouched if it is going to yield a `BUFFER_OVERFLOW`. I had trouble 
  finding it - can you point me to javadoc/grepcode/equivalent?
 
 Sriharsha Chintalapani wrote:
 http://docs.oracle.com/javase/7/docs/api/javax/net/ssl/SSLEngine.html
 For example, unwrap() will return a 
 SSLEngineResult.Status.BUFFER_OVERFLOW result if the engine determines that 
 there is not enough destination buffer space available. Applications should 
 call SSLSession.getApplicationBufferSize() and compare that value with the 
 space available in the destination buffer, enlarging the buffer if necessary. 
 Similarly, if unwrap() were to return a 
 SSLEngineResult.Status.BUFFER_UNDERFLOW, the application should call 
 SSLSession.getPacketBufferSize() to ensure that the source buffer has enough 
 room to hold a record (enlarging if necessary), and then obtain more inbound 
 data. 
 I'll add unit test around this.

and in openjdk SSLEngineImpl:

    /*
     * Check for OVERFLOW.
     *
     * To be considered: We could delay enforcing the application buffer
     * free space requirement until after the initial handshaking.
     */
    if ((packetLen - Record.headerSize) > ea.getAppRemaining()) {
        return new SSLEngineResult(Status.BUFFER_OVERFLOW, hsStatus, 0, 0);
    }
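For illustration, the buffer-enlargement step the SSLEngine javadoc describes (grow the destination to `SSLSession.getApplicationBufferSize()` on `BUFFER_OVERFLOW`, preserving bytes already written) might look like the following standalone sketch. The `ensureRemaining` helper and its signature are invented for this example; Kafka's actual transport layer code is more involved.

```java
import java.nio.ByteBuffer;

public class BufferSizing {
    // Enlarge `dst` so it has at least `appBufferSize` bytes of free space,
    // preserving any bytes already written. This mirrors the javadoc's advice
    // for handling SSLEngineResult.Status.BUFFER_OVERFLOW from unwrap().
    // (Helper name and signature are invented for this sketch.)
    static ByteBuffer ensureRemaining(ByteBuffer dst, int appBufferSize) {
        if (dst.remaining() >= appBufferSize)
            return dst;
        ByteBuffer enlarged = ByteBuffer.allocate(dst.position() + appBufferSize);
        dst.flip();          // read mode: position 0..old position
        enlarged.put(dst);   // copy already-unwrapped data into the new buffer
        return enlarged;
    }

    public static void main(String[] args) {
        ByteBuffer small = ByteBuffer.allocate(16);
        small.put(new byte[8]);                        // 8 bytes used, 8 free
        ByteBuffer grown = ensureRemaining(small, 32); // pretend app buffer size is 32
        System.out.println(grown.position() + " " + grown.remaining()); // 8 32
    }
}
```

On the next unwrap() attempt the enlarged buffer has room for a full application record, so the engine can make progress instead of returning `BUFFER_OVERFLOW` again.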


- Sriharsha


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33620/#review95765
---


On Aug. 19, 2015, 12:24 a.m., Sriharsha Chintalapani wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33620/
 ---
 
 (Updated Aug. 19, 2015, 12:24 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1690
 https://issues.apache.org/jira/browse/KAFKA-1690
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. SSLFactory tests.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Added 
 PrincipalBuilder.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Fixed minor 
 issues with the patch.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Fixed minor 
 issues with the patch.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Broker side ssl changes.
 
 
 KAFKA-1684. SSL for socketServer.
 
 
 KAFKA-1690. Added SSLProducerSendTest and fixes to get right port for SSL.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Post merge fixes.
 
 
 KAFKA-1690. Added SSLProducerSendTest.
 
 
 KAFKA-1690. Minor fixes based on patch review comments.
 
 
 Merge commit
 
 
 KAFKA-1690. Added SSL Consumer Test.
 
 
 KAFKA-1690. SSL Support.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Addressing reviews. Removed interestOps from SSLTransportLayer.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 KAFKA-1690. added staged receives to selector.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Addressing reviews.
 
 
 KAFKA-1690. Add SSL support to broker, producer and consumer.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Add SSL support to Kafka Broker, Producer & Client.
 
 
 KAFKA-1690. Add SSL support for Kafka Brokers, Producers and Consumers.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Add SSL support for Kafka Brokers, Producers and Consumers.
 
 
 KAFKA-1690. Add SSL Support Kafka Broker, Producer and Consumer.
 
 
 KAFKA-1690. Add SSL support for Kafka Broker, Producer and Consumer.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 Diffs
 -
 
   

Build failed in Jenkins: Kafka-trunk #590

2015-08-18 Thread Apache Jenkins Server
See https://builds.apache.org/job/Kafka-trunk/590/changes

Changes:

[cshapi] KAFKA-2436; log.retention.hours should be honored by LogManager

--
[...truncated 1501 lines...]
kafka.api.test.ProducerCompressionTest > testCompression[2] PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.test.ProducerCompressionTest > testCompression[3] PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.server.OffsetCommitTest > testUpdateOffsets PASSED

kafka.api.ProducerSendTest > testCloseWithZeroTimeoutFromSenderThread PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures PASSED

kafka.coordinator.PartitionAssignorTest > testRangeAssignorOneConsumerNonexistentTopic PASSED

kafka.coordinator.PartitionAssignorTest > testRangeAssignorOnlyAssignsPartitionsFromSubscribedTopics PASSED

kafka.coordinator.PartitionAssignorTest > testRoundRobinAssignorOneConsumerMultipleTopics PASSED

kafka.coordinator.PartitionAssignorTest > testRoundRobinAssignorTwoConsumersOneTopicOnePartition PASSED

kafka.coordinator.PartitionAssignorTest > testRoundRobinAssignorTwoConsumersOneTopicTwoPartitions PASSED

kafka.coordinator.PartitionAssignorTest > testRoundRobinAssignorMultipleConsumersMixedTopics PASSED

kafka.coordinator.PartitionAssignorTest > testRoundRobinAssignorTwoConsumersTwoTopicsSixPartitions PASSED

kafka.coordinator.PartitionAssignorTest > testRangeAssignorOneConsumerOneTopic PASSED

kafka.coordinator.PartitionAssignorTest > testRangeAssignorTwoConsumersTwoTopicsSixPartitions PASSED

kafka.coordinator.PartitionAssignorTest > testRoundRobinAssignorOneConsumerNoTopic PASSED

kafka.coordinator.PartitionAssignorTest > testRoundRobinAssignorOneConsumerNonexistentTopic PASSED

kafka.coordinator.PartitionAssignorTest > testRangeAssignorOneConsumerMultipleTopics PASSED

kafka.coordinator.PartitionAssignorTest > testRangeAssignorTwoConsumersOneTopicOnePartition PASSED

kafka.coordinator.PartitionAssignorTest > testRangeAssignorTwoConsumersOneTopicTwoPartitions PASSED

kafka.coordinator.PartitionAssignorTest > testRangeAssignorMultipleConsumersMixedTopics PASSED

kafka.coordinator.PartitionAssignorTest > testRangeAssignorOneConsumerNoTopic PASSED

kafka.coordinator.PartitionAssignorTest > testRoundRobinAssignorOneConsumerOneTopic PASSED

kafka.coordinator.PartitionAssignorTest > testRoundRobinAssignorOnlyAssignsPartitionsFromSubscribedTopics PASSED

kafka.api.ProducerSendTest > testClose PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.api.ProducerSendTest > testSendToPartition PASSED

kafka.server.OffsetCommitTest > testNonExistingTopicOffsetCommit PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testPreparingRebalanceToRebalancingTransition PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testStableToRebalancingIllegalTransition PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testPreparingRebalanceToPreparingRebalanceIllegalTransition PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testStableToStableIllegalTransition PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testStableToDeadIllegalTransition PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testStableToPreparingRebalanceTransition PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testCannotRebalanceWhenPreparingRebalance PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testPreparingRebalanceToStableIllegalTransition PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testRebalancingToPreparingRebalanceTransition PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testDeadToPreparingRebalanceIllegalTransition PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testDeadToDeadIllegalTransition PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testCannotRebalanceWhenDead PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testRebalancingToDeadIllegalTransition PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testCanRebalanceWhenStable PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testCannotRebalanceWhenRebalancing PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testDeadToStableIllegalTransition PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testPreparingRebalanceToDeadTransition PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testDeadToRebalancingIllegalTransition PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testRebalancingToStableTransition PASSED

kafka.coordinator.ConsumerGroupMetadataTest  

Re: Kafka KIP meeting Aug. 18

2015-08-18 Thread Jun Rao
Yes, that's related to the zombie consumer. Basically, if a consumer is in
a long GC, it may be taken out of the group. However, when the consumer
wakes up from the GC, it may still be able to consume data that it's no
longer responsible for. We protect this a little bit by not allowing a
zombie consumer to commit offsets. However, there could be duplicate reads.
So, it's related to the single-writer in that if we can fence off the
zombie consumer, we can eliminate or reduce duplicate reads.
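The fencing idea above can be sketched as a simple generation check; the names and signature here are hypothetical, and the real coordinator logic is of course more involved:

```java
public class GenerationFencing {
    // Illustrative check only (hypothetical names): the coordinator accepts
    // an offset commit only if the committing member's generation matches
    // the group's current generation, fencing off zombies from before the
    // last rebalance.
    static boolean commitAllowed(int groupGeneration, int memberGeneration) {
        return groupGeneration == memberGeneration;
    }

    public static void main(String[] args) {
        int group = 5;                                // group rebalanced to generation 5
        System.out.println(commitAllowed(group, 5));  // live member: true
        System.out.println(commitAllowed(group, 4));  // zombie woken from GC: false
    }
}
```

Note that this only prevents a zombie from committing; as the mail says, it can still consume (and thus duplicate) reads until it rejoins.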

Thanks,

Jun

On Tue, Aug 18, 2015 at 1:42 PM, Flavio Junqueira f...@apache.org wrote:

 Thanks for the notes, Jun. There was a mention of single-writer behavior,
 if I understood correctly, in the context of the new consumer, and I was
 wondering if that's specific to the new consumer work or if it's going to
 have any dependency or relation to KIP-27.

 -Flavio

  On 18 Aug 2015, at 20:20, Jun Rao j...@confluent.io wrote:
 
  The the following are the notes for today's KIP discussion.
 
  * client-side assignment strategy: We discussed concerns about
 rebalancing
  time due to metadata inconsistency, especially when lots of topics are
  subscribed. Will discuss a bit more on the mailing list.
 
  * CopyCat data api: The discussions are in KAFKA-2367 for people who are
  interested.
 
  * 0.8.2.2: We want to make this a low risk bug fix release since 0.8.3
 is
  coming. So, will only include a small number of critical and small fixes.
 
  * 0.8.3: The main features will be security and the new consumer. We will
  be cutting a release branch when the major pieces for these new features
  have been committed.
 
  The link to the recording will be added to
 
 https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
  in a day or two.
 
  Thanks,
 
  Jun
 
 
 
 
 
  On Mon, Aug 17, 2015 at 2:54 PM, Jun Rao j...@confluent.io wrote:
 
  Hi, Everyone,
 
  We will have a Kafka KIP meeting tomorrow at 11am PST. If you plan to
  attend but haven't received an invite, please let me know. The
 following is
  the agenda.
 
  Agenda
  1. Discuss the proposal on client-side assignment strategy (
 
 https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal
  )
  2. Discuss comments on CopyCat data api (KAFKA-2367)
  3. Kafka 0.8.2.2 release
  4. Review Backlog
 
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20%3D%20%22Patch%20Available%22%20ORDER%20BY%20updated%20DESC
 
 
  Thanks,
 
  Jun
 




Re: [DISCUSS] Client-side Assignment for New Consumer

2015-08-18 Thread Jason Gustafson
Follow-up from the kip call:

1. Onur brought up the question of whether this protocol provides enough
coordination capabilities to be generally useful in practice (is that
accurate, Onur?). If it doesn't, then each use case would probably need a
dependence on zookeeper anyway, and we haven't really gained anything. The
group membership provided by this protocol is a useful primitive for
coordination, but it's limited in the sense that everything shared among
the group has to be communicated at the time the group is created. If any
shared data changes, then the only way the group can ensure agreement is to
force a rebalance. This is expensive since all members must stall while the
rebalancing takes place. As we have also seen, there is a practical limit
on the amount of metadata that can be sent through this protocol when
groups get a little larger. This protocol is therefore not suitable to
cases which require frequent communication or which require a large amount
of data to be communicated. For the use cases listed on the wiki, neither
of these appear to be an issue, but there may be other limitations which
would limit reuse of the protocol. Perhaps it would be sufficient to sketch
how these cases might work?

2. We talked a little bit about the issue of metadata churn. Becket brought
up the interesting point that not only do we depend on topic metadata
changing relatively infrequently, but we also expect timely agreement among
the brokers on what that metadata is. To resolve this, we can have the
consumers fetch metadata from the coordinator. We still depend on topic
metadata not changing frequently, but this should resolve any disagreement
among the brokers themselves. In fact, since we expect that disagreement is
relatively rare, we can have the consumers fetch from the coordinator only
when a disagreement occurs. The nice thing about this proposal is that
it doesn't affect the join group semantics, so the coordinator would remain
oblivious to the metadata used by the group for agreement. Also, if
metadata churn becomes an issue, it might be possible to have the
coordinator provide a snapshot for the group to ensure that a generation
would be able to reach agreement (this would probably require adding
groupId/generation to the metadata request).

3. We talked briefly about support for multiple protocols in the join group
request in order to allow changing the assignment strategy without
downtime. I think it's a little doubtful that this would get much use in
practice, but I agree it's a nice option to have on the table. An
alternative, for the sake of argument, is to have each member provide only
one version of the protocol, and to let the coordinator choose the protocol
with the largest number of supporters. All members which can't support the
selected protocol would be kicked out of the group. The drawback in a
rolling upgrade is that the total capacity of the group would be
momentarily halved. It would also be a little tricky to handle the case of
retrying when a consumer is kicked out of the group. We wouldn't want it to
be able to effect a rebalance, for example, if it would just be kicked out
again. That would probably complicate the group management logic on the
coordinator.


Thanks,
Jason


On Tue, Aug 18, 2015 at 11:16 AM, Jiangjie Qin j...@linkedin.com.invalid
wrote:

 Jun,

 Yes, I agree. If the metadata can be synced quickly there should not be an
 issue. It just occurred to me that there is a proposal to allow consuming
 from followers in ISR, that could potentially cause more frequent metadata
 change for consumers. Would that be an issue?

 Thanks,

 Jiangjie (Becket) Qin

 On Tue, Aug 18, 2015 at 10:22 AM, Jason Gustafson ja...@confluent.io
 wrote:

  Hi Jun,
 
  Answers below:
 
  1. When there are multiple common protocols in the JoinGroupRequest,
 which
  one would the coordinator pick?
 
  I was intending to use the list to indicate preference. If all group
  members support protocols [A, B] in that order, then we will choose
  A. If some support [B, A], then we would either choose based on
  respective counts or just randomly. The main use case of supporting the
  list is for rolling upgrades when a change is made to the assignment
  strategy. In that case, the new assignment strategy would be listed first
  in the upgraded client. I think it's debatable whether this feature would
  get much use in practice, so we might consider dropping it.
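One way the coordinator's choice among preference lists could work is sketched below. This is only an illustration of the "respective counts"/preference idea (the method name and the rank-sum tie-breaking rule are assumptions, not part of the proposal): among protocols supported by every member, pick the one with the lowest total preference rank.

```java
import java.util.*;

public class ProtocolSelection {
    // Illustrative selection rule (assumed, not from the KIP): among
    // protocols listed by every member, choose the one whose summed
    // position across members' ordered preference lists is smallest.
    static Optional<String> select(List<List<String>> memberPrefs) {
        Map<String, Integer> rankSum = new HashMap<>();
        Map<String, Integer> support = new HashMap<>();
        for (List<String> prefs : memberPrefs) {
            for (int i = 0; i < prefs.size(); i++) {
                rankSum.merge(prefs.get(i), i, Integer::sum);  // lower = preferred
                support.merge(prefs.get(i), 1, Integer::sum);  // members supporting it
            }
        }
        Optional<String> best = Optional.empty();
        int bestRank = Integer.MAX_VALUE;
        for (Map.Entry<String, Integer> e : rankSum.entrySet()) {
            // only protocols supported by all members are eligible
            if (support.get(e.getKey()) == memberPrefs.size() && e.getValue() < bestRank) {
                bestRank = e.getValue();
                best = Optional.of(e.getKey());
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<List<String>> prefs = Arrays.asList(
                Arrays.asList("range", "roundrobin"),   // not-yet-upgraded member
                Arrays.asList("roundrobin", "range"),   // upgraded members prefer roundrobin
                Arrays.asList("roundrobin", "range"));
        // roundrobin wins: total rank 1 vs. 2 for range
        System.out.println(select(prefs).orElse("none"));
    }
}
```

During a rolling upgrade, as more members list the new strategy first, the aggregate preference flips to it without ever kicking anyone out of the group.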
 
  2. If the protocols don't agree, the group construction fails. What
 exactly
  does it mean? Do we send an error in every JoinGroupResponse and remove
 all
  members in the group in the coordinator?
 
  Yes, that is right. It would be handled similarly to inconsistent
  assignment strategies in the current protocol. The coordinator returns an
  error in each join group response, and the client propagates the error to
  the user.
 
  3. Consumer embedded protocol: The proposal has two different formats of
  subscription depending on 

Re: Review Request 33620: Patch for KAFKA-1690

2015-08-18 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33620/#review95798
---

Ship it!


Looks good to me. Could you rebase?

- Jun Rao


On Aug. 18, 2015, 6:24 p.m., Sriharsha Chintalapani wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33620/
 ---
 
 (Updated Aug. 18, 2015, 6:24 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1690
 https://issues.apache.org/jira/browse/KAFKA-1690
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. SSLFactory tests.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Added 
 PrincipalBuilder.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Fixed minor 
 issues with the patch.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Fixed minor 
 issues with the patch.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Broker side ssl changes.
 
 
 KAFKA-1684. SSL for socketServer.
 
 
 KAFKA-1690. Added SSLProducerSendTest and fixes to get right port for SSL.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Post merge fixes.
 
 
 KAFKA-1690. Added SSLProducerSendTest.
 
 
 KAFKA-1690. Minor fixes based on patch review comments.
 
 
 Merge commit
 
 
 KAFKA-1690. Added SSL Consumer Test.
 
 
 KAFKA-1690. SSL Support.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Addressing reviews. Removed interestOps from SSLTransportLayer.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 KAFKA-1690. added staged receives to selector.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Addressing reviews.
 
 
 KAFKA-1690. Add SSL support to broker, producer and consumer.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Add SSL support to Kafka Broker, Producer & Client.
 
 
 KAFKA-1690. Add SSL support for Kafka Brokers, Producers and Consumers.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Add SSL support for Kafka Brokers, Producers and Consumers.
 
 
 KAFKA-1690. Add SSL Support Kafka Broker, Producer and Consumer.
 
 
 KAFKA-1690. Add SSL support for Kafka Broker, Producer and Consumer.
 
 
 Diffs
 -
 
   build.gradle 983587fd0b7604c3a26fcbb6a1d63e5e470d23fe 
   checkstyle/import-control.xml e3f4f84c6becfd9087627f018690e1e2fc2b3bba 
   clients/src/main/java/org/apache/kafka/clients/ClientUtils.java 
 0d68bf1e1e90fe9d5d4397ddf817b9a9af8d9f7a 
   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
 2c421f42ed3fc5d61cf9c87a7eaa7bb23e26f63b 
   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
 0e51d7bd461d253f4396a5b6ca7cd391658807fa 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
 d35b421a515074d964c7fccb73d260b847ea5f00 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 be46b6c213ad8c6c09ad22886a5f36175ab0e13a 
   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
 03b8dd23df63a8d8a117f02eabcce4a2d48c44f7 
   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
 aa264202f2724907924985a5ecbe74afc4c6c04b 
   clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java 
 6c317480a181678747bfb6b77e315b08668653c5 
   clients/src/main/java/org/apache/kafka/common/config/SSLConfigs.java 
 PRE-CREATION 
   clients/src/main/java/org/apache/kafka/common/network/Authenticator.java 
 PRE-CREATION 
   clients/src/main/java/org/apache/kafka/common/network/ByteBufferSend.java 
 df0e6d5105ca97b7e1cb4d334ffb7b443506bd0b 
   clients/src/main/java/org/apache/kafka/common/network/ChannelBuilder.java 
 PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/network/DefaultAuthenticator.java
  PRE-CREATION 
   clients/src/main/java/org/apache/kafka/common/network/KafkaChannel.java 
 PRE-CREATION 
   

Re: Review Request 33049: Patch for KAFKA-2084

2015-08-18 Thread Aditya Auradkar


 On Aug. 18, 2015, 11:33 p.m., Jun Rao wrote:
  core/src/main/scala/kafka/server/ClientQuotaManager.scala, line 147
  https://reviews.apache.org/r/33049/diff/28/?file=1040947#file1040947line147
 
  The window used here is a bit different from that used in 
  Rate.measure() and it doesn't take into account that the current window is 
  partial. Could we expose a windowSize(long now) method in Rate and use it 
  here?

Hey Jun - Makes sense. Filed a ticket.. shall submit a quick patch for it.

https://issues.apache.org/jira/browse/KAFKA-2443


 On Aug. 18, 2015, 11:33 p.m., Jun Rao wrote:
  clients/src/main/java/org/apache/kafka/common/metrics/stats/Rate.java, 
  lines 62-65
  https://reviews.apache.org/r/33049/diff/28/?file=1040945#file1040945line62
 
  Actually, the original computation on the window seems more accurate 
  since it handles the initial case when not all the sample windows have been 
  populated. Is there a particular reason to change it?

This was discussed in a fair bit of detail between Dong, Jay and myself. Here 
is a ticket: https://issues.apache.org/jira/browse/KAFKA-2191
https://reviews.apache.org/r/34170/

Basically, we were having issues with very large metric values when the metric 
was very recently created.


- Aditya


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33049/#review95793
---


On Aug. 15, 2015, 12:43 a.m., Aditya Auradkar wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33049/
 ---
 
 (Updated Aug. 15, 2015, 12:43 a.m.)
 
 
 Review request for kafka, Joel Koshy and Jun Rao.
 
 
 Bugs: KAFKA-2084
 https://issues.apache.org/jira/browse/KAFKA-2084
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Updated patch for quotas. This patch does the following: 
 1. Add per-client metrics for both producer and consumers 
 2. Add configuration for quotas 
 3. Compute delay times in the metrics package and return the delay times in 
 QuotaViolationException 
 4. Add a DelayQueue in KafkaApi's that can be used to throttle any type of 
 request. Implemented request throttling for produce and fetch requests. 
 5. Added unit and integration test cases for both producer and consumer
 6. This doesn't include a system test. There is a separate ticket for that
 7. Fixed KAFKA-2191 - (Included fix from : 
 https://reviews.apache.org/r/34418/ )
 
 Addressed comments from Joel and Jun
 
 
 Diffs
 -
 
   build.gradle 1b67e628c2fca897177c12b6afad9a8700fffd1f 
   clients/src/main/java/org/apache/kafka/common/metrics/Quota.java 
 d82bb0c055e631425bc1ebbc7d387baac76aeeaa 
   
 clients/src/main/java/org/apache/kafka/common/metrics/QuotaViolationException.java
  a451e5385c9eca76b38b425e8ac856b2715fcffe 
   clients/src/main/java/org/apache/kafka/common/metrics/Sensor.java 
 ca823fd4639523018311b814fde69b6177e73b97 
   clients/src/main/java/org/apache/kafka/common/metrics/stats/Rate.java 
 98429da34418f7f1deba1b5e44e2e6025212edb3 
   clients/src/test/java/org/apache/kafka/common/metrics/MetricsTest.java 
 544e120594de78c43581a980b1e4087b4fb98ccb 
   core/src/main/scala/kafka/server/ClientQuotaManager.scala PRE-CREATION 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 7ea509c2c41acc00430c74e025e069a833aac4e7 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 a06f0bd40e2f90972b44b80a106f98f3d50e5e2b 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 84d4730ac634f9a5bf12a656e422fea03ad72da8 
   core/src/main/scala/kafka/server/ReplicaManager.scala 
 2e0bbcd6e4f0e38997ea18202b249ee3553640ec 
   core/src/main/scala/kafka/server/ThrottledResponse.scala PRE-CREATION 
   core/src/main/scala/kafka/utils/ShutdownableThread.scala 
 fc226c863095b7761290292cd8755cd7ad0f155c 
   core/src/test/scala/integration/kafka/api/QuotasTest.scala PRE-CREATION 
   core/src/test/scala/unit/kafka/server/ClientQuotaManagerTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
 e26a7306a6ea3104b3fa3df60006c0a473bfb2cc 
   core/src/test/scala/unit/kafka/server/ThrottledResponseExpirationTest.scala 
 PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/33049/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Aditya Auradkar
 




[jira] [Created] (KAFKA-2443) Expose windowSize on Measurable

2015-08-18 Thread Aditya Auradkar (JIRA)
Aditya Auradkar created KAFKA-2443:
--

 Summary: Expose windowSize on Measurable
 Key: KAFKA-2443
 URL: https://issues.apache.org/jira/browse/KAFKA-2443
 Project: Kafka
  Issue Type: Task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar


Currently, we don't have a means to measure the size of the metric window since 
the final sample can be incomplete.

Expose windowSize(now) on Measurable





[jira] [Commented] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-18 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702223#comment-14702223
 ] 

Sriharsha Chintalapani commented on KAFKA-1690:
---

Updated reviewboard https://reviews.apache.org/r/33620/diff/
 against branch origin/trunk

 Add SSL support to Kafka Broker, Producer and Consumer
 --

 Key: KAFKA-1690
 URL: https://issues.apache.org/jira/browse/KAFKA-1690
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
 KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
 KAFKA-1690_2015-05-11_16:09:36.patch, KAFKA-1690_2015-05-12_16:20:08.patch, 
 KAFKA-1690_2015-05-15_07:18:21.patch, KAFKA-1690_2015-05-20_14:54:35.patch, 
 KAFKA-1690_2015-05-21_10:37:08.patch, KAFKA-1690_2015-06-03_18:52:29.patch, 
 KAFKA-1690_2015-06-23_13:18:20.patch, KAFKA-1690_2015-07-20_06:10:42.patch, 
 KAFKA-1690_2015-07-20_11:59:57.patch, KAFKA-1690_2015-07-25_12:10:55.patch, 
 KAFKA-1690_2015-08-16_20:41:02.patch, KAFKA-1690_2015-08-17_08:12:50.patch, 
 KAFKA-1690_2015-08-17_09:28:52.patch, KAFKA-1690_2015-08-17_12:20:53.patch, 
 KAFKA-1690_2015-08-18_11:24:46.patch, KAFKA-1690_2015-08-18_17:24:48.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33049: Patch for KAFKA-2084

2015-08-18 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33049/#review95793
---



clients/src/main/java/org/apache/kafka/common/metrics/stats/Rate.java (lines 62 
- 65)
https://reviews.apache.org/r/33049/#comment150927

Actually, the original computation on the window seems more accurate since 
it handles the initial case when not all the sample windows have been 
populated. Is there a particular reason to change it?



core/src/main/scala/kafka/server/ClientQuotaManager.scala (line 147)
https://reviews.apache.org/r/33049/#comment150934

The window used here is a bit different from that used in Rate.measure() 
and it doesn't take into account that the current window is partial. Could we 
expose a windowSize(long now) method in Rate and use it here?


- Jun Rao


On Aug. 15, 2015, 12:43 a.m., Aditya Auradkar wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33049/
 ---
 
 (Updated Aug. 15, 2015, 12:43 a.m.)
 
 
 Review request for kafka, Joel Koshy and Jun Rao.
 
 
 Bugs: KAFKA-2084
 https://issues.apache.org/jira/browse/KAFKA-2084
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Updated patch for quotas. This patch does the following: 
 1. Add per-client metrics for both producer and consumers 
 2. Add configuration for quotas 
 3. Compute delay times in the metrics package and return the delay times in 
 QuotaViolationException 
 4. Add a DelayQueue in KafkaApi's that can be used to throttle any type of 
 request. Implemented request throttling for produce and fetch requests. 
 5. Added unit and integration test cases for both producer and consumer
 6. This doesn't include a system test. There is a separate ticket for that
 7. Fixed KAFKA-2191 - (Included fix from : 
 https://reviews.apache.org/r/34418/ )
 
 Addressed comments from Joel and Jun
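 Point 4 above — parking throttled responses on a DelayQueue — can be sketched minimally with java.util.concurrent.DelayQueue. The ThrottledResponse shape below is an illustration of the idea, not the patch's actual Scala code:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

/**
 * Minimal sketch of DelayQueue-based throttling: each throttled response
 * is parked with its quota delay and becomes available to a reaper thread
 * only after the delay elapses. Field and class names are illustrative.
 */
class ThrottledResponse implements Delayed {
    final long dueNanos;
    final Runnable sendResponse;

    ThrottledResponse(long delayMs, Runnable sendResponse) {
        this.dueNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMs);
        this.sendResponse = sendResponse;
    }

    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(dueNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                            other.getDelay(TimeUnit.NANOSECONDS));
    }
}
```

A reaper thread would then loop on DelayQueue.take() and complete each parked response once its quota delay expires.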
 
 
 Diffs
 -
 
   build.gradle 1b67e628c2fca897177c12b6afad9a8700fffd1f 
   clients/src/main/java/org/apache/kafka/common/metrics/Quota.java 
 d82bb0c055e631425bc1ebbc7d387baac76aeeaa 
   
 clients/src/main/java/org/apache/kafka/common/metrics/QuotaViolationException.java
  a451e5385c9eca76b38b425e8ac856b2715fcffe 
   clients/src/main/java/org/apache/kafka/common/metrics/Sensor.java 
 ca823fd4639523018311b814fde69b6177e73b97 
   clients/src/main/java/org/apache/kafka/common/metrics/stats/Rate.java 
 98429da34418f7f1deba1b5e44e2e6025212edb3 
   clients/src/test/java/org/apache/kafka/common/metrics/MetricsTest.java 
 544e120594de78c43581a980b1e4087b4fb98ccb 
   core/src/main/scala/kafka/server/ClientQuotaManager.scala PRE-CREATION 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 7ea509c2c41acc00430c74e025e069a833aac4e7 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 a06f0bd40e2f90972b44b80a106f98f3d50e5e2b 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 84d4730ac634f9a5bf12a656e422fea03ad72da8 
   core/src/main/scala/kafka/server/ReplicaManager.scala 
 2e0bbcd6e4f0e38997ea18202b249ee3553640ec 
   core/src/main/scala/kafka/server/ThrottledResponse.scala PRE-CREATION 
   core/src/main/scala/kafka/utils/ShutdownableThread.scala 
 fc226c863095b7761290292cd8755cd7ad0f155c 
   core/src/test/scala/integration/kafka/api/QuotasTest.scala PRE-CREATION 
   core/src/test/scala/unit/kafka/server/ClientQuotaManagerTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
 e26a7306a6ea3104b3fa3df60006c0a473bfb2cc 
   core/src/test/scala/unit/kafka/server/ThrottledResponseExpirationTest.scala 
 PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/33049/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Aditya Auradkar
 




Re: [DISCUSSION] Kafka 0.8.2.2 release?

2015-08-18 Thread Daniel Nelson
I am strongly in favor of cutting a 0.8.2.2 release, but I don’t think that it 
needs to include anything other than the fix for Snappy that kicked off this 
discussion in the first place. If there are additional critical issues that can 
be included without delaying the process, I see no downside.

 On Aug 18, 2015, at 10:54 AM, Neha Narkhede n...@confluent.io wrote:
 
 How about looking at the scope for the 0.8.3 release first before we cut
 yet another point release off of 0.8.2.2? Each release includes some
 overhead and if there is a larger release in the works, it might be worth
 working on getting that. My take is that the 2 things the community has
 been waiting for is SSL support and the new consumer and we have been
 promising to get 0.8.3 with both those features for several months now.
 
 Looking at the progress on both, it seems we are very close to getting both
 those checked in and it looks like we should get there in another 5-6
 weeks. Furthermore, both of these features are large and I anticipate us
 receiving feedback and bugs that will require a couple of point releases on
 top of 0.8.3 anyway. One possibility is to work on 0.8.3 together now and
 get the community to use the newly released features, gather feedback and
 do point releases incorporating that feedback and iterate on it.
 
 We could absolutely do both 0.8.2.2 and 0.8.3. What I'd ask for is for us
 to look at the 0.8.3 timeline too and make a call whether 0.8.2.2 still
 makes sense.
 
 Thanks,
 Neha
 
 On Tue, Aug 18, 2015 at 10:24 AM, Gwen Shapira g...@confluent.io wrote:
 
 Thanks Jun.
 
 I updated the list with your suggestions.
 If anyone feels we are missing a critical patch for 0.8.2.2, please speak
 up.
 
 Gwen
 
 On Mon, Aug 17, 2015 at 5:40 PM, Jun Rao j...@confluent.io wrote:
 
 Hi, Grant,
 
 I took a look at that list. None of those is really critical as you said.
 So, I'd suggest that we not include those to minimize the scope of the
 release.
 
 Thanks,
 
 Jun
 
 On Mon, Aug 17, 2015 at 5:16 PM, Grant Henke ghe...@cloudera.com
 wrote:
 
 Thanks Gwen.
 
 I updated a few small things on the wiki page.
 
 Below is a list of jiras I think could also be marked as included. All
 of
 these, though not super critical, seem like fairly small and low risk
 changes that help avoid potentially confusing issues or errors for
 users.
 
 KAFKA-2012
 KAFKA-972
 KAFKA-2337 & KAFKA-2393
 KAFKA-1867
 KAFKA-2407
 KAFKA-2234
 KAFKA-1866
 KAFKA-2345 & KAFKA-2355
 
 thoughts?
 
 Thank you,
 Grant
 
 On Mon, Aug 17, 2015 at 4:56 PM, Gwen Shapira g...@confluent.io
 wrote:
 
 Thanks for creating a list, Grant!
 
 I placed it on the wiki with a quick evaluation of the content and
 whether
 it should be in 0.8.2.2:
 
 
 
 
 https://cwiki.apache.org/confluence/display/KAFKA/Proposed+patches+for+0.8.2.2
 
 I'm attempting to only cherry-pick fixes that are both important for a 
 large number of users (or very critical to some users) and very safe 
 (mostly
 judged by the size of the change, but not only)
 
 If your favorite bugfix is missing from the list, or is there but
 marked
 No, please let us know (in this thread) what we are missing and why
 it
 is
 both important and safe.
 Also, if I accidentally included something you consider unsafe, speak
 up!
 
 Gwen
 
 On Mon, Aug 17, 2015 at 8:17 AM, Grant Henke ghe...@cloudera.com
 wrote:
 
 +dev
 
 Adding dev list back in. Somehow it got dropped.
 
 
 On Mon, Aug 17, 2015 at 10:16 AM, Grant Henke ghe...@cloudera.com
 
 wrote:
 
 Below is a list of candidate bug fix jiras marked fixed for
 0.8.3.
 I
 don't
 suspect all of these will (or should) make it into the release
 but
 this
 should be a relatively complete list to work from:
 
   - KAFKA-2114 
 https://issues.apache.org/jira/browse/KAFKA-2114
 :
 Unable
   to change min.insync.replicas default
   - KAFKA-1702 
 https://issues.apache.org/jira/browse/KAFKA-1702
 :
   Messages silently Lost by producer
   - KAFKA-2012 
 https://issues.apache.org/jira/browse/KAFKA-2012
 :
   Broker should automatically handle corrupt index files
   - KAFKA-2406 
 https://issues.apache.org/jira/browse/KAFKA-2406
 :
 ISR
   propagation should be throttled to avoid overwhelming
 controller.
   - KAFKA-2336 
 https://issues.apache.org/jira/browse/KAFKA-2336
 :
   Changing offsets.topic.num.partitions after the offset topic
 is
 created
   breaks consumer group partition assignment
   - KAFKA-2337 
 https://issues.apache.org/jira/browse/KAFKA-2337
 :
 Verify
   that metric names will not collide when creating new topics
   - KAFKA-2393 
 https://issues.apache.org/jira/browse/KAFKA-2393
 :
   Correctly Handle InvalidTopicException in
 KafkaApis.getTopicMetadata()
   - KAFKA-2189 
 https://issues.apache.org/jira/browse/KAFKA-2189
 :
 Snappy
   compression of message batches less efficient in 0.8.2.1
   - KAFKA-2308 
 https://issues.apache.org/jira/browse/KAFKA-2308
 :
 New
   producer + Snappy face un-compression errors after broker
 restart
   - KAFKA-2042 
 

[jira] [Commented] (KAFKA-2041) Add ability to specify a KeyClass for KafkaLog4jAppender

2015-08-18 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702161#comment-14702161
 ] 

Benoy Antony commented on KAFKA-2041:
-

[~junrao], Would you be able to review the latest patch ? 

 Add ability to specify a KeyClass for KafkaLog4jAppender
 

 Key: KAFKA-2041
 URL: https://issues.apache.org/jira/browse/KAFKA-2041
 Project: Kafka
  Issue Type: Improvement
  Components: producer 
Reporter: Benoy Antony
Assignee: Jun Rao
 Attachments: KAFKA-2041.patch, kafka-2041-001.patch, 
 kafka-2041-002.patch, kafka-2041-003.patch, kafka-2041-004.patch


 KafkaLog4jAppender is the Log4j Appender to publish messages to Kafka. 
 Since there is no key or explicit partition number, the messages are sent to 
 random partitions. 
 In some cases, it is possible to derive a key from the message itself. 
 So it may be beneficial to enable KafkaLog4jAppender to accept KeyClass which 
 will provide a key for a given message.
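 A minimal sketch of what such a pluggable key class could look like. The interface name, method signature, and example implementation are assumptions for illustration, not the API proposed in the attached patches:

```java
import java.nio.charset.StandardCharsets;

/**
 * Hypothetical shape of the KeyClass contract discussed in KAFKA-2041:
 * a pluggable hook that derives a partitioning key from the log message.
 * Interface and method names are illustrative, not the committed API.
 */
interface LogKeyProvider {
    byte[] keyFor(String message);
}

/** Example: key on the first whitespace-delimited token (e.g. a request id). */
class FirstTokenKeyProvider implements LogKeyProvider {
    @Override
    public byte[] keyFor(String message) {
        String token = message.trim().split("\\s+", 2)[0];
        return token.getBytes(StandardCharsets.UTF_8);
    }
}
```

The appender would load the configured class reflectively and pass the derived key to the producer so related log lines land in the same partition.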



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-18 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-1690:
--
Attachment: KAFKA-1690_2015-08-18_17:24:48.patch

 Add SSL support to Kafka Broker, Producer and Consumer
 --

 Key: KAFKA-1690
 URL: https://issues.apache.org/jira/browse/KAFKA-1690
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
 KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
 KAFKA-1690_2015-05-11_16:09:36.patch, KAFKA-1690_2015-05-12_16:20:08.patch, 
 KAFKA-1690_2015-05-15_07:18:21.patch, KAFKA-1690_2015-05-20_14:54:35.patch, 
 KAFKA-1690_2015-05-21_10:37:08.patch, KAFKA-1690_2015-06-03_18:52:29.patch, 
 KAFKA-1690_2015-06-23_13:18:20.patch, KAFKA-1690_2015-07-20_06:10:42.patch, 
 KAFKA-1690_2015-07-20_11:59:57.patch, KAFKA-1690_2015-07-25_12:10:55.patch, 
 KAFKA-1690_2015-08-16_20:41:02.patch, KAFKA-1690_2015-08-17_08:12:50.patch, 
 KAFKA-1690_2015-08-17_09:28:52.patch, KAFKA-1690_2015-08-17_12:20:53.patch, 
 KAFKA-1690_2015-08-18_11:24:46.patch, KAFKA-1690_2015-08-18_17:24:48.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33620: Patch for KAFKA-1690

2015-08-18 Thread Sriharsha Chintalapani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33620/
---

(Updated Aug. 19, 2015, 12:24 a.m.)


Review request for kafka.


Bugs: KAFKA-1690
https://issues.apache.org/jira/browse/KAFKA-1690


Repository: kafka


Description (updated)
---

KAFKA-1690. new java producer needs ssl support as a client.


KAFKA-1690. new java producer needs ssl support as a client.


KAFKA-1690. new java producer needs ssl support as a client.


KAFKA-1690. new java producer needs ssl support as a client. SSLFactory tests.


KAFKA-1690. new java producer needs ssl support as a client. Added 
PrincipalBuilder.


KAFKA-1690. new java producer needs ssl support as a client. Addressing reviews.


KAFKA-1690. new java producer needs ssl support as a client. Addressing reviews.


KAFKA-1690. new java producer needs ssl support as a client. Addressing reviews.


KAFKA-1690. new java producer needs ssl support as a client. Fixed minor issues 
with the patch.


KAFKA-1690. new java producer needs ssl support as a client. Fixed minor issues 
with the patch.


KAFKA-1690. new java producer needs ssl support as a client.


KAFKA-1690. new java producer needs ssl support as a client.


Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1


KAFKA-1690. Broker side ssl changes.


KAFKA-1684. SSL for socketServer.


KAFKA-1690. Added SSLProducerSendTest and fixes to get right port for SSL.


Merge branch 'trunk' into KAFKA-1690-V1


KAFKA-1690. Post merge fixes.


KAFKA-1690. Added SSLProducerSendTest.


KAFKA-1690. Minor fixes based on patch review comments.


Merge commit


KAFKA-1690. Added SSL Consumer Test.


KAFKA-1690. SSL Support.


KAFKA-1690. Addressing reviews.


Merge branch 'trunk' into KAFKA-1690-V1


Merge branch 'trunk' into KAFKA-1690-V1


KAFKA-1690. Addressing reviews. Removed interestOps from SSLTransportLayer.


KAFKA-1690. Addressing reviews.


KAFKA-1690. added staged receives to selector.


KAFKA-1690. Addressing reviews.


Merge branch 'trunk' into KAFKA-1690-V1


KAFKA-1690. Addressing reviews.


KAFKA-1690. Add SSL support to broker, producer and consumer.


Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1


KAFKA-1690. Add SSL support to Kafka Broker, Producer & Client.


KAFKA-1690. Add SSL support for Kafka Brokers, Producers and Consumers.


Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1


KAFKA-1690. Add SSL support for Kafka Brokers, Producers and Consumers.


KAFKA-1690. Add SSL Support Kafka Broker, Producer and Consumer.


KAFKA-1690. Add SSL support for Kafka Broker, Producer and Consumer.


Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1


Diffs (updated)
-

  build.gradle 983587fd0b7604c3a26fcbb6a1d63e5e470d23fe 
  checkstyle/import-control.xml e3f4f84c6becfd9087627f018690e1e2fc2b3bba 
  clients/src/main/java/org/apache/kafka/clients/ClientUtils.java 
0d68bf1e1e90fe9d5d4397ddf817b9a9af8d9f7a 
  clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
2c421f42ed3fc5d61cf9c87a7eaa7bb23e26f63b 
  clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
0e51d7bd461d253f4396a5b6ca7cd391658807fa 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
d35b421a515074d964c7fccb73d260b847ea5f00 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
be46b6c213ad8c6c09ad22886a5f36175ab0e13a 
  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
03b8dd23df63a8d8a117f02eabcce4a2d48c44f7 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
aa264202f2724907924985a5ecbe74afc4c6c04b 
  clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java 
6c317480a181678747bfb6b77e315b08668653c5 
  clients/src/main/java/org/apache/kafka/common/config/SSLConfigs.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/Authenticator.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/ByteBufferSend.java 
df0e6d5105ca97b7e1cb4d334ffb7b443506bd0b 
  clients/src/main/java/org/apache/kafka/common/network/ChannelBuilder.java 
PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/network/DefaultAuthenticator.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/KafkaChannel.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/NetworkReceive.java 
3ca0098b8ec8cfdf81158465b2d40afc47eb6f80 
  
clients/src/main/java/org/apache/kafka/common/network/PlaintextChannelBuilder.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/SSLChannelBuilder.java 
PRE-CREATION 
  

Re: Review Request 36652: Patch for KAFKA-2351

2015-08-18 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/#review95824
---


Patch needs a rebase.


core/src/main/scala/kafka/network/SocketServer.scala (line 263)
https://reviews.apache.org/r/36652/#comment150967

Per earlier comment - we don't need this right?



core/src/main/scala/kafka/network/SocketServer.scala (line 264)
https://reviews.apache.org/r/36652/#comment150968

Can you clarify why we need this? i.e., catch and rethrow without any 
logging?



core/src/main/scala/kafka/network/SocketServer.scala (line 265)
https://reviews.apache.org/r/36652/#comment150970

I'm also unclear at this point on what the right thing to do here would be 
- i.e., log and continue or make it fatal as Becket suggested. I'm leaning 
toward the latter but I agree we could revisit this.
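The trade-off being weighed — log-and-continue versus fail-fast on unexpected exceptions in the server loop — can be sketched generically. This is an illustration of the two options, not SocketServer's actual code:

```java
import java.util.Iterator;

/**
 * Sketch of the two options weighed above for a server network loop:
 * swallowing and logging unexpected exceptions keeps the thread alive,
 * while rethrowing makes them fatal. Generic illustration only; truly
 * fatal errors (e.g. OutOfMemoryError) should never be swallowed in
 * either mode.
 */
class AcceptorLoop {
    static int runLoop(Iterator<Runnable> work, boolean failFast) {
        int processed = 0;
        while (work.hasNext()) {
            try {
                work.next().run();
                processed++;
            } catch (RuntimeException e) {
                if (failFast)
                    throw e; // option 2: make the exception fatal
                // option 1: log and continue serving the remaining work
            }
        }
        return processed;
    }
}
```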


- Joel Koshy


On Aug. 13, 2015, 8:10 p.m., Mayuresh Gharat wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36652/
 ---
 
 (Updated Aug. 13, 2015, 8:10 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2351
 https://issues.apache.org/jira/browse/KAFKA-2351
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Added a try-catch to catch any exceptions thrown by the nioSelector
 
 
 Addressed comments on the Jira ticket
 
 
 Addressed Jun's comments
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/network/SocketServer.scala 
 dbe784b63817fd94e1593136926db17fac6fa3d7 
 
 Diff: https://reviews.apache.org/r/36652/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Mayuresh Gharat
 




[jira] [Commented] (KAFKA-2203) Get gradle build to work with Java 8

2015-08-18 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14701393#comment-14701393
 ] 

Gwen Shapira commented on KAFKA-2203:
-

Thanks [~aw]. The report in HADOOP-12129 looks great. 
Is there any documentation on how projects can incorporate the new TLP? 
Anything we can do to help you validate gradle support?

 Get gradle build to work with Java 8
 

 Key: KAFKA-2203
 URL: https://issues.apache.org/jira/browse/KAFKA-2203
 Project: Kafka
  Issue Type: Bug
  Components: build
Affects Versions: 0.8.1.1
Reporter: Gaju Bhat
Priority: Minor
 Fix For: 0.8.1.2

 Attachments: 0001-Special-case-java-8-and-javadoc-handling.patch


 The Gradle build halts because javadoc in Java 8 is a lot stricter about 
 valid HTML.
 It might be worthwhile to special case java 8 as described 
 [here|http://blog.joda.org/2014/02/turning-off-doclint-in-jdk-8-javadoc.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2385) zookeeper-shell does not work

2015-08-18 Thread Prabhjot Singh Bharaj (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14701294#comment-14701294
 ] 

Prabhjot Singh Bharaj commented on KAFKA-2385:
--

I had reported that the shell does not appear if I don't use the jline jar.

The jline jar is not shipped as part of the tarball. So, as a new user, if I want 
to explore Kafka's data structures in ZooKeeper, I cannot do that until I have 
the jline jar with me.

Hope this explains

 zookeeper-shell does not work
 -

 Key: KAFKA-2385
 URL: https://issues.apache.org/jira/browse/KAFKA-2385
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Jiangjie Qin
Assignee: Flavio Junqueira
 Fix For: 0.8.3


 The zookeeper shell shipped with Kafka does not work because jline jar is 
 missing.
 [jqin@jqin-ld1 bin]$ ./zookeeper-shell.sh localhost:2181
 Connecting to localhost:2181
 Welcome to ZooKeeper!
 JLine support is disabled
 WATCHER::
 WatchedEvent state:SyncConnected type:None path:null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (KAFKA-2385) zookeeper-shell does not work

2015-08-18 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira reopened KAFKA-2385:
-

Ok, I'll check. In the meantime, you can just get a copy of the ZooKeeper 
distribution directly and use the CLI it ships with; that should work fine.

 zookeeper-shell does not work
 -

 Key: KAFKA-2385
 URL: https://issues.apache.org/jira/browse/KAFKA-2385
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Jiangjie Qin
Assignee: Flavio Junqueira
 Fix For: 0.8.3


 The zookeeper shell shipped with Kafka does not work because jline jar is 
 missing.
 [jqin@jqin-ld1 bin]$ ./zookeeper-shell.sh localhost:2181
 Connecting to localhost:2181
 Welcome to ZooKeeper!
 JLine support is disabled
 WATCHER::
 WatchedEvent state:SyncConnected type:None path:null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33378: Patch for KAFKA-2136

2015-08-18 Thread Aditya Auradkar


 On Aug. 5, 2015, 4:44 a.m., Jun Rao wrote:
  clients/src/main/java/org/apache/kafka/common/requests/ProduceResponse.java,
   line 55
  https://reviews.apache.org/r/33378/diff/10/?file=1010009#file1010009line55
 
  We actually will need different constructors for different versions. We 
  want to reuse those request/response objects on the broker side. So, the 
  broker will need to construct different version of the response based on 
  the version of the request. You can take a look at OffsetCommitRequest as 
  an example.

Good point.


- Aditya
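The pattern Jun describes — building the response at the same version as the request, so fields added in later versions are only emitted when defined — can be sketched like this. Class and field names are illustrative, not Kafka's actual request/response objects:

```java
/**
 * Sketch of version-dependent response construction: the broker mirrors
 * the request's version, so a field added in a later protocol version
 * (here a hypothetical throttleTimeMs, added in v1) is zeroed out when
 * responding to older requesters. Names are illustrative.
 */
class VersionedResponse {
    final short version;
    final int throttleTimeMs; // defined for version >= 1 only

    VersionedResponse(short version, int throttleTimeMs) {
        this.version = version;
        this.throttleTimeMs = version >= 1 ? throttleTimeMs : 0;
    }
}
```

On the wire, the serializer would likewise pick the schema matching `version`, so both sides agree on which fields are present.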


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33378/#review94167
---


On July 13, 2015, 8:36 p.m., Aditya Auradkar wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33378/
 ---
 
 (Updated July 13, 2015, 8:36 p.m.)
 
 
 Review request for kafka, Joel Koshy and Jun Rao.
 
 
 Bugs: KAFKA-2136
 https://issues.apache.org/jira/browse/KAFKA-2136
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Changes are
 - Addressing Joel's comments
 - protocol changes to the fetch request and response to return the 
 throttle_time_ms to clients
 - New producer/consumer metrics to expose the avg and max delay time for a 
 client
 - Test cases.
 - Addressed Joel's comments
   
 For now the patch will publish a zero delay and return a response
 
 
 Diffs
 -
 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java
  56281ee15cc33dfc96ff64d5b1e596047c7132a4 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
 07e65d4a54ba4eef5b787eba3e71cbe9f6a920bd 
   clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
 3dc8b015afd2347a41c9a9dbc02b8e367da5f75f 
   clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 
 8686d83aa52e435c6adafbe9ff4bd1602281072a 
   clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java 
 eb8951fba48c335095cc43fc3672de1c733e07ff 
   clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java 
 fabeae3083a8ea55cdacbb9568f3847ccd85bab4 
   clients/src/main/java/org/apache/kafka/common/requests/ProduceResponse.java 
 37ec0b79beafcf5735c386b066eb319fb697eff5 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetcherTest.java
  419541011d652becf0cda7a5e62ce813cddb1732 
   
 clients/src/test/java/org/apache/kafka/clients/producer/internals/SenderTest.java
  8b1805d3d2bcb9fe2bacb37d870c3236aa9532c4 
   
 clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
  e3cc1967e407b64cc734548c19e30de700b64ba8 
   core/src/main/scala/kafka/api/FetchRequest.scala 
 5b38f8554898e54800abd65a7415dd0ac41fd958 
   core/src/main/scala/kafka/api/FetchResponse.scala 
 0b6b33ab6f7a732ff1322b6f48abd4c43e0d7215 
   core/src/main/scala/kafka/api/ProducerRequest.scala 
 c866180d3680da03e48d374415f10220f6ca68c4 
   core/src/main/scala/kafka/api/ProducerResponse.scala 
 5d1fac4cb8943f5bfaa487f8e9d9d2856efbd330 
   core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
 c16f7edd322709060e54c77eb505c44cbd77a4ec 
   core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
 83fc47417dd7fe4edf030217fa7fd69d99b170b0 
   core/src/main/scala/kafka/server/DelayedFetch.scala 
 de6cf5bdaa0e70394162febc63b50b55ca0a92db 
   core/src/main/scala/kafka/server/DelayedProduce.scala 
 05078b24ef28f2f4e099afa943e43f1d00359fda 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 d63bc18d795a6f0e6994538ca55a9a46f7fb8ffd 
   core/src/main/scala/kafka/server/OffsetManager.scala 
 5cca85cf727975f6d3acb2223fd186753ad761dc 
   core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
 b31b432a226ba79546dd22ef1d2acbb439c2e9a3 
   core/src/main/scala/kafka/server/ReplicaManager.scala 
 59c9bc3ac3a8afc07a6f8c88c5871304db588d17 
   core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
 5717165f2344823fabe8f7cfafae4bb8af2d949a 
   core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala 
 00d59337a99ac135e8689bd1ecd928f7b1423d79 
 
 Diff: https://reviews.apache.org/r/33378/diff/
 
 
 Testing
 ---
 
 New tests added
 
 
 Thanks,
 
 Aditya Auradkar
 




[jira] [Commented] (KAFKA-1387) Kafka getting stuck creating ephemeral node it has already created when two zookeeper sessions are established in a very short period of time

2015-08-18 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14701571#comment-14701571
 ] 

Flavio Junqueira commented on KAFKA-1387:
-

[~guozhang] it looks like [~jwl...@gmail.com] isn't in the list of 
contributors, could you add him so that we can assign the jira to him?

 Kafka getting stuck creating ephemeral node it has already created when two 
 zookeeper sessions are established in a very short period of time
 -

 Key: KAFKA-1387
 URL: https://issues.apache.org/jira/browse/KAFKA-1387
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Fedor Korotkiy
Priority: Blocker
  Labels: newbie, patch, zkclient-problems
 Attachments: kafka-1387.patch


 Kafka broker re-registers itself in zookeeper every time handleNewSession() 
 callback is invoked.
 https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/server/KafkaHealthcheck.scala
  
 Now imagine the following sequence of events.
 1) Zookeeper session reestablishes. handleNewSession() callback is queued by 
 the zkClient, but not invoked yet.
 2) Zookeeper session reestablishes again, queueing callback second time.
 3) First callback is invoked, creating /broker/[id] ephemeral path.
 4) Second callback is invoked and it tries to create the /broker/[id] path using 
 createEphemeralPathExpectConflictHandleZKBug(). But the path already exists, so 
 createEphemeralPathExpectConflictHandleZKBug() gets stuck in an infinite loop.
 Seems like the controller election code has the same issue.
 I'm able to reproduce this issue on the 0.8.1 branch from GitHub using the 
 following configs.
 # zookeeper
 tickTime=10
 dataDir=/tmp/zk/
 clientPort=2101
 maxClientCnxns=0
 # kafka
 broker.id=1
 log.dir=/tmp/kafka
 zookeeper.connect=localhost:2101
 zookeeper.connection.timeout.ms=100
 zookeeper.sessiontimeout.ms=100
 Just start kafka and zookeeper and then pause zookeeper several times using 
 Ctrl-Z.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2436) log.retention.hours should be honored by LogManager

2015-08-18 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2436:

   Resolution: Fixed
Fix Version/s: 0.8.3
   Status: Resolved  (was: Patch Available)

Pushed to trunk! Thanks for fixing this, [~lindong]

 log.retention.hours should be honored by LogManager
 ---

 Key: KAFKA-2436
 URL: https://issues.apache.org/jira/browse/KAFKA-2436
 Project: Kafka
  Issue Type: Bug
Reporter: Dong Lin
Assignee: Dong Lin
Priority: Critical
 Fix For: 0.8.3


 Currently log.retention.hours is used to calculate 
 KafkaConfig.logRetentionTimeMillis. But it is not used in LogManager to 
 decide when to delete a log. LogManager is only using the log.retention.ms in 
 the broker configuration.
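 The precedence the fix establishes — the most granular configured setting wins when computing the effective retention — can be sketched as follows. The parameter handling is an illustration of the intent, not KafkaConfig's actual code:

```java
/**
 * Sketch of retention-config precedence: log.retention.ms, if set, wins
 * over log.retention.minutes, which wins over log.retention.hours; the
 * resulting millisecond value is what the log manager should consult
 * when deciding whether to delete a segment. Illustrative only.
 */
class RetentionConfig {
    static long retentionMs(Long ms, Long minutes, long hours) {
        if (ms != null) return ms;
        if (minutes != null) return minutes * 60_000L;
        return hours * 3_600_000L;
    }
}
```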



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1387) Kafka getting stuck creating ephemeral node it has already created when two zookeeper sessions are established in a very short period of time

2015-08-18 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14701418#comment-14701418
 ] 

Flavio Junqueira commented on KAFKA-1387:
-

It doesn't look like it'd be a small change to zkclient to fix this. We 
essentially need it to expose zk events as they occur. The way it currently 
works, the events are serialized and the operations are retried transparently, 
so I can't tell whether the znode already exists because of a connection loss or 
because the session actually expired and there is a new one now. 

The simplest way around this seems to be to just re-register the consumer 
directly (delete and create) upon a node exists exception. This should work 
because of the following argument.

There are three possibilities when we get a node exists exception:

# The znode exists from a previous session and hasn't been reclaimed yet
# The znode exists because of a connection loss event while the znode was being 
created, so the second time we get an exception (event)
# The previous session has expired, a new one was created, and the registration 
was occurring around this transition, so when we execute handleNewSession for 
the new session, we get a node exists exception. 

In all these three cases, deleting and recreating seems fine. It is clearly 
conservative and more expensive than necessary, but at least it doesn't require 
changes to zkclient. Does that sound reasonable? Do you see any problems? 

CC [~guozhang] [~jwl...@gmail.com]
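The delete-and-recreate recovery described above can be sketched against a minimal store abstraction, so the logic is testable without a live ZooKeeper. The real code would use org.apache.zookeeper.ZooKeeper's delete/create with CreateMode.EPHEMERAL and catch KeeperException.NodeExistsException; the names below are illustrative:

```java
import java.util.HashSet;
import java.util.Set;

/**
 * Sketch of "delete then recreate on NodeExists": if registration finds
 * the ephemeral node already present, conservatively delete it and
 * recreate it under the current session. Store is a stand-in for the
 * ZooKeeper client; names are illustrative.
 */
class EphemeralRegistrar {
    interface Store {
        /** @return false if the node already exists. */
        boolean createEphemeral(String path);
        void delete(String path);
    }

    static void register(Store store, String path) {
        if (!store.createEphemeral(path)) {
            // The existing node may be left over from a previous session or
            // from a connection-loss retry; drop it and recreate it so the
            // ephemeral is owned by the current session.
            store.delete(path);
            if (!store.createEphemeral(path))
                throw new IllegalStateException("node recreated concurrently: " + path);
        }
    }
}
```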

 Kafka getting stuck creating ephemeral node it has already created when two 
 zookeeper sessions are established in a very short period of time
 -

 Key: KAFKA-1387
 URL: https://issues.apache.org/jira/browse/KAFKA-1387
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Fedor Korotkiy
Priority: Blocker
  Labels: newbie, patch, zkclient-problems
 Attachments: kafka-1387.patch


 Kafka broker re-registers itself in zookeeper every time handleNewSession() 
 callback is invoked.
 https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/server/KafkaHealthcheck.scala
  
 Now imagine the following sequence of events.
 1) Zookeeper session reestablishes. handleNewSession() callback is queued by 
 the zkClient, but not invoked yet.
 2) Zookeeper session reestablishes again, queueing callback second time.
 3) First callback is invoked, creating /broker/[id] ephemeral path.
 4) Second callback is invoked and it tries to create the /broker/[id] path using 
 the createEphemeralPathExpectConflictHandleZKBug() function. But the path 
 already exists, so createEphemeralPathExpectConflictHandleZKBug() gets 
 stuck in an infinite loop.
 It seems like the controller election code has the same issue.
 I'm able to reproduce this issue on the 0.8.1 branch from GitHub using the 
 following configs.
 # zookeeper
 tickTime=10
 dataDir=/tmp/zk/
 clientPort=2101
 maxClientCnxns=0
 # kafka
 broker.id=1
 log.dir=/tmp/kafka
 zookeeper.connect=localhost:2101
 zookeeper.connection.timeout.ms=100
 zookeeper.sessiontimeout.ms=100
 Just start kafka and zookeeper and then pause zookeeper several times using 
 Ctrl-Z.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33620: Patch for KAFKA-1690

2015-08-18 Thread Jun Rao


 On Aug. 3, 2015, 4:50 p.m., Jun Rao wrote:
  clients/src/main/java/org/apache/kafka/common/network/SSLTransportLayer.java,
   lines 368-381
  https://reviews.apache.org/r/33620/diff/13/?file=1021979#file1021979line368
 
  If handshake status is BUFFER_OVERFLOW, we will return to the caller 
  and then to the selector. However, we may have read all incoming bytes into 
  netReadBuffer. So, the key may never be selected again to complete the 
  handshake. It seems that this case can never happen during handshake since 
  we don't expect to use the appReadBuffer. Perhaps we can just assert that 
  state is illegal when handling NEED_UNWRAP in handshake().
 
 Sriharsha Chintalapani wrote:
 Sorry, where do you want to add the assert? And which state do you want to handle?

Basically, I am saying that in handshake(), we should never hit BUFFER_OVERFLOW 
since we never read anything into appReadBuffer. Had we hit BUFFER_OVERFLOW, the 
issue is that all socket bytes may have been read into netReadBuffer, but not 
all bytes in netReadBuffer consumed by SslEngine. Those leftover bytes would 
need to be handled in the next Selector.poll() call. However, this key may never 
be selected again since there are no more bytes from the socket. If this is the case, 
I'd suggest that we either assert that BUFFER_OVERFLOW never happens when 
handling unwrap, or at least add a comment that this should never happen.

case NEED_UNWRAP:
    log.trace("SSLHandshake NEED_UNWRAP channelId {}, appReadBuffer pos {}, netReadBuffer pos {}, netWriteBuffer pos {}",
            channelId, appReadBuffer.position(), netReadBuffer.position(), netWriteBuffer.position());
    handshakeResult = handshakeUnwrap(read);
    if (handshakeResult.getStatus() == Status.BUFFER_UNDERFLOW) {
        int currentPacketBufferSize = packetBufferSize();
        netReadBuffer = Utils.ensureCapacity(netReadBuffer, currentPacketBufferSize);
        if (netReadBuffer.position() >= currentPacketBufferSize) {
            throw new IllegalStateException("Buffer underflow when there is available data");
        }
    } else if (handshakeResult.getStatus() == Status.BUFFER_OVERFLOW) {
        int currentAppBufferSize = applicationBufferSize();
        appReadBuffer = Utils.ensureCapacity(appReadBuffer, currentAppBufferSize);
        if (appReadBuffer.position() > currentAppBufferSize) {
            throw new IllegalStateException("Buffer overflow when available data size (" + appReadBuffer.position() +
                    ") > application buffer size (" + currentAppBufferSize + ")");
        }
    } else if (handshakeResult.getStatus() == Status.CLOSED) {
        throw new EOFException("SSL handshake status CLOSED during handshake UNWRAP");
    }


- Jun


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33620/#review93862
---


On Aug. 17, 2015, 7:21 p.m., Sriharsha Chintalapani wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33620/
 ---
 
 (Updated Aug. 17, 2015, 7:21 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1690
 https://issues.apache.org/jira/browse/KAFKA-1690
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. SSLFactory tests.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Added 
 PrincipalBuilder.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Fixed minor 
 issues with the patch.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Fixed minor 
 issues with the patch.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Broker side ssl changes.
 
 
 KAFKA-1684. SSL for socketServer.
 
 
 KAFKA-1690. Added SSLProducerSendTest and fixes to get right port for SSL.
 
 
 Merge 

[jira] [Commented] (KAFKA-1387) Kafka getting stuck creating ephemeral node it has already created when two zookeeper sessions are established in a very short period of time

2015-08-18 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14701509#comment-14701509
 ] 

Guozhang Wang commented on KAFKA-1387:
--

Thanks [~fpj], that makes sense to me. [~jwlent55] do you want to submit a new 
patch following this approach?

 Kafka getting stuck creating ephemeral node it has already created when two 
 zookeeper sessions are established in a very short period of time
 -

 Key: KAFKA-1387
 URL: https://issues.apache.org/jira/browse/KAFKA-1387
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Fedor Korotkiy
Priority: Blocker
  Labels: newbie, patch, zkclient-problems
 Attachments: kafka-1387.patch




[jira] [Commented] (KAFKA-2203) Get gradle build to work with Java 8

2015-08-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14701486#comment-14701486
 ] 

Allen Wittenauer commented on KAFKA-2203:
-

We've got a ways to go before we do a release and one of those things is 
guidance on how projects should best incorporate it. That said, there is some 
usage documentation already there: 
https://github.com/apache/hadoop/tree/HADOOP-12111/dev-support/docs ... so 
we're not exactly starting from scratch.

I'm doing most of the gradle work in HADOOP-12257 with my dev code currently 
sitting in https://github.com/aw-altiscale/hadoop/tree/h12257 (in the 
dev-support dir).  It's going through a LOT of major changes still (with the 
occasional forced update) but feel free to mess around with it.  As soon as it 
gets more stable, I'll likely ping some folks from the projects I'm currently 
playing with (bigtop, samza, kafka) to have a look over since my knowledge of 
gradle isn't great.

The biggest thing I need right now is patches that are known to break or 
otherwise have bad behavior, to see if the code catches them.  I already did 
one test against a kafka patch that was out there that caused unit test 
failures. Due to junit being used, test-patch picked that up with no code 
changes required.  Same with bash code changes.   It's the scala bits that I 
definitely need help with: "this has a scala compile error!", "this has a scala 
warning we should flag!", etc. Supporting multiple scala versions is a bigger 
issue that maybe we'll tackle in the future.  I'd want to clean up HADOOP-12337 
first though so that we have a better idea of how to do matrix builds.

Thanks!

 Get gradle build to work with Java 8
 

 Key: KAFKA-2203
 URL: https://issues.apache.org/jira/browse/KAFKA-2203
 Project: Kafka
  Issue Type: Bug
  Components: build
Affects Versions: 0.8.1.1
Reporter: Gaju Bhat
Priority: Minor
 Fix For: 0.8.1.2

 Attachments: 0001-Special-case-java-8-and-javadoc-handling.patch


 The gradle build halts because javadoc in java 8 is a lot stricter about 
 valid html.
 It might be worthwhile to special case java 8 as described 
 [here|http://blog.joda.org/2014/02/turning-off-doclint-in-jdk-8-javadoc.html].
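 For reference, the special-casing the linked post describes is roughly the 
 following build.gradle fragment (adapted from that post, not from the attached 
 patch): disable doclint for all Javadoc tasks when building on Java 8.

 ```groovy
 // Java 8's javadoc rejects HTML that earlier JDKs accepted;
 // turn doclint off so the build doesn't halt on such warnings.
 if (JavaVersion.current().isJava8Compatible()) {
     allprojects {
         tasks.withType(Javadoc) {
             options.addStringOption('Xdoclint:none', '-quiet')
         }
     }
 }
 ```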





Re: Review Request 33620: Patch for KAFKA-1690

2015-08-18 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33620/#review95671
---

Ship it!


Thanks for the latest patch. +1. I only have a few minor comments below.


clients/src/main/java/org/apache/kafka/common/network/SSLTransportLayer.java 
(line 184)
https://reviews.apache.org/r/33620/#comment150764

This sentence doesn't read well.



clients/src/main/java/org/apache/kafka/common/network/SSLTransportLayer.java 
(line 438)
https://reviews.apache.org/r/33620/#comment150769

Hmm, do we want to break here? It seems that we want to continue in the 
loop until all bytes in netReadBuffer are read.



clients/src/main/java/org/apache/kafka/common/network/SSLTransportLayer.java 
(line 450)
https://reviews.apache.org/r/33620/#comment150770

The termination condition doesn't seem quite right. If dst has no remaining 
space but there are still bytes in netReadBuffer, we should return, right?
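The return-when-dst-is-full behavior this comment asks for can be sketched with 
plain ByteBuffers. This is an illustration only: the buffer names follow the 
review, not the patch, and a trivial byte copy stands in for SslEngine.unwrap().

```java
import java.nio.ByteBuffer;

public class ReadLoopSketch {
    // Drain netReadBuffer into dst; stop as soon as dst is full, even if
    // bytes remain buffered. Returns the number of bytes copied.
    static int readInto(ByteBuffer netReadBuffer, ByteBuffer dst) {
        int read = 0;
        netReadBuffer.flip();                // switch to read mode
        while (netReadBuffer.hasRemaining()) {
            if (!dst.hasRemaining())
                break;                       // dst full: return to the caller
            dst.put(netReadBuffer.get());
            read++;
        }
        netReadBuffer.compact();             // keep unread bytes for the next poll
        return read;
    }

    public static void main(String[] args) {
        ByteBuffer net = ByteBuffer.allocate(16);
        net.put(new byte[]{1, 2, 3, 4, 5});  // 5 bytes buffered
        ByteBuffer dst = ByteBuffer.allocate(3);
        int n = readInto(net, dst);
        // 3 bytes copied, 2 left buffered for a later poll
        System.out.println(n + " bytes copied, " + net.position() + " left buffered");
    }
}
```

The key point is the break: leftover bytes stay in netReadBuffer, so the next 
read call must check that buffer before selecting on the socket again.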



clients/src/main/java/org/apache/kafka/common/network/SSLTransportLayer.java 
(line 531)
https://reviews.apache.org/r/33620/#comment150771

netReadBuffer should be netWriteBuffer, right?


- Jun Rao


On Aug. 17, 2015, 7:21 p.m., Sriharsha Chintalapani wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33620/
 ---
 
 (Updated Aug. 17, 2015, 7:21 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1690
 https://issues.apache.org/jira/browse/KAFKA-1690
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. SSLFactory tests.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Added 
 PrincipalBuilder.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Addressing 
 reviews.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Fixed minor 
 issues with the patch.
 
 
 KAFKA-1690. new java producer needs ssl support as a client. Fixed minor 
 issues with the patch.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 KAFKA-1690. new java producer needs ssl support as a client.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Broker side ssl changes.
 
 
 KAFKA-1684. SSL for socketServer.
 
 
 KAFKA-1690. Added SSLProducerSendTest and fixes to get right port for SSL.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Post merge fixes.
 
 
 KAFKA-1690. Added SSLProducerSendTest.
 
 
 KAFKA-1690. Minor fixes based on patch review comments.
 
 
 Merge commit
 
 
 KAFKA-1690. Added SSL Consumer Test.
 
 
 KAFKA-1690. SSL Support.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Addressing reviews. Removed interestOps from SSLTransportLayer.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 KAFKA-1690. added staged receives to selector.
 
 
 KAFKA-1690. Addressing reviews.
 
 
 Merge branch 'trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Addressing reviews.
 
 
 KAFKA-1690. Add SSL support to broker, producer and consumer.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Add SSL support to Kafka Broker, Producer  Client.
 
 
 KAFKA-1690. Add SSL support for Kafka Brokers, Producers and Consumers.
 
 
 Merge remote-tracking branch 'refs/remotes/origin/trunk' into KAFKA-1690-V1
 
 
 KAFKA-1690. Add SSL support for Kafka Brokers, Producers and Consumers.
 
 
 KAFKA-1690. Add SSL Support Kafka Broker, Producer and Consumer.
 
 
 Diffs
 -
 
   build.gradle 983587fd0b7604c3a26fcbb6a1d63e5e470d23fe 
   checkstyle/import-control.xml e3f4f84c6becfd9087627f018690e1e2fc2b3bba 
   clients/src/main/java/org/apache/kafka/clients/ClientUtils.java 
 0d68bf1e1e90fe9d5d4397ddf817b9a9af8d9f7a 
   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
 2c421f42ed3fc5d61cf9c87a7eaa7bb23e26f63b 
   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
 0e51d7bd461d253f4396a5b6ca7cd391658807fa 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
 d35b421a515074d964c7fccb73d260b847ea5f00 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 be46b6c213ad8c6c09ad22886a5f36175ab0e13a 
   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
 

[jira] [Commented] (KAFKA-1690) Add SSL support to Kafka Broker, Producer and Consumer

2015-08-18 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14701502#comment-14701502
 ] 

Jun Rao commented on KAFKA-1690:


[~sriharsha], thanks for the latest patch. It looks good and I only have a few 
minor comments. If there are no objections, I will commit the patch as it is 
later today and we can address the minor comments in a follow-up patch. 

 Add SSL support to Kafka Broker, Producer and Consumer
 --

 Key: KAFKA-1690
 URL: https://issues.apache.org/jira/browse/KAFKA-1690
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
 KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
 KAFKA-1690_2015-05-11_16:09:36.patch, KAFKA-1690_2015-05-12_16:20:08.patch, 
 KAFKA-1690_2015-05-15_07:18:21.patch, KAFKA-1690_2015-05-20_14:54:35.patch, 
 KAFKA-1690_2015-05-21_10:37:08.patch, KAFKA-1690_2015-06-03_18:52:29.patch, 
 KAFKA-1690_2015-06-23_13:18:20.patch, KAFKA-1690_2015-07-20_06:10:42.patch, 
 KAFKA-1690_2015-07-20_11:59:57.patch, KAFKA-1690_2015-07-25_12:10:55.patch, 
 KAFKA-1690_2015-08-16_20:41:02.patch, KAFKA-1690_2015-08-17_08:12:50.patch, 
 KAFKA-1690_2015-08-17_09:28:52.patch, KAFKA-1690_2015-08-17_12:20:53.patch





