[jira] [Created] (KAFKA-1914) Count TotalProduceRequestRate and TotalFetchRequestRate in BrokerTopicMetrics

2015-02-02 Thread Aditya A Auradkar (JIRA)
Aditya A Auradkar created KAFKA-1914:


 Summary: Count TotalProduceRequestRate and TotalFetchRequestRate 
in BrokerTopicMetrics
 Key: KAFKA-1914
 URL: https://issues.apache.org/jira/browse/KAFKA-1914
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Aditya A Auradkar


Currently the BrokerTopicMetrics only counts the failedProduceRequestRate and 
the failedFetchRequestRate. We should add 2 metrics to count the overall 
produce/fetch request rates.
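
For context, BrokerTopicMetrics in Kafka core is backed by Yammer meters; the following dependency-free Java sketch (class and method names are hypothetical, plain counters stand in for the meters) shows the shape of the proposed change: count every produce request, not only the failed ones.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of per-topic request counters. The real
// BrokerTopicMetrics uses Yammer Meters (which also track 1/5/15-minute
// rates); LongAdders stand in for them here.
class TopicRequestCounters {
    private final Map<String, LongAdder> totalProduce = new ConcurrentHashMap<>();
    private final Map<String, LongAdder> failedProduce = new ConcurrentHashMap<>();

    void recordProduce(String topic, boolean failed) {
        // Count every produce request, not only the failed ones.
        totalProduce.computeIfAbsent(topic, t -> new LongAdder()).increment();
        if (failed)
            failedProduce.computeIfAbsent(topic, t -> new LongAdder()).increment();
    }

    long total(String topic)  { return count(totalProduce, topic); }
    long failed(String topic) { return count(failedProduce, topic); }

    private static long count(Map<String, LongAdder> m, String topic) {
        LongAdder a = m.get(topic);
        return a == null ? 0 : a.sum();
    }
}
```

The fetch-side counters would follow the same pattern.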







--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1809) Refactor brokers to allow listening on multiple ports and IPs

2015-02-02 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14301791#comment-14301791
 ] 

Gwen Shapira commented on KAFKA-1809:
-

Updated reviewboard https://reviews.apache.org/r/28769/diff/
 against branch trunk

 Refactor brokers to allow listening on multiple ports and IPs 
 --

 Key: KAFKA-1809
 URL: https://issues.apache.org/jira/browse/KAFKA-1809
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-1809.patch, KAFKA-1809_2014-12-25_11:04:17.patch, 
 KAFKA-1809_2014-12-27_12:03:35.patch, KAFKA-1809_2014-12-27_12:22:56.patch, 
 KAFKA-1809_2014-12-29_12:11:36.patch, KAFKA-1809_2014-12-30_11:25:29.patch, 
 KAFKA-1809_2014-12-30_18:48:46.patch, KAFKA-1809_2015-01-05_14:23:57.patch, 
 KAFKA-1809_2015-01-05_20:25:15.patch, KAFKA-1809_2015-01-05_21:40:14.patch, 
 KAFKA-1809_2015-01-06_11:46:22.patch, KAFKA-1809_2015-01-13_18:16:21.patch, 
 KAFKA-1809_2015-01-28_10:26:22.patch, KAFKA-1809_2015-02-02_11:55:16.patch


 The goal is to eventually support different security mechanisms on different 
 ports. 
 Currently brokers are defined as host+port pair, and this definition exists 
 throughout the code-base, therefore some refactoring is needed to support 
 multiple ports for a single broker.
 The detailed design is here: 
 https://cwiki.apache.org/confluence/display/KAFKA/Multiple+Listeners+for+Kafka+Brokers





[jira] [Updated] (KAFKA-1809) Refactor brokers to allow listening on multiple ports and IPs

2015-02-02 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1809:

Attachment: KAFKA-1809_2015-02-02_11:55:16.patch

 Refactor brokers to allow listening on multiple ports and IPs 
 --

 Key: KAFKA-1809
 URL: https://issues.apache.org/jira/browse/KAFKA-1809
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-1809.patch, KAFKA-1809_2014-12-25_11:04:17.patch, 
 KAFKA-1809_2014-12-27_12:03:35.patch, KAFKA-1809_2014-12-27_12:22:56.patch, 
 KAFKA-1809_2014-12-29_12:11:36.patch, KAFKA-1809_2014-12-30_11:25:29.patch, 
 KAFKA-1809_2014-12-30_18:48:46.patch, KAFKA-1809_2015-01-05_14:23:57.patch, 
 KAFKA-1809_2015-01-05_20:25:15.patch, KAFKA-1809_2015-01-05_21:40:14.patch, 
 KAFKA-1809_2015-01-06_11:46:22.patch, KAFKA-1809_2015-01-13_18:16:21.patch, 
 KAFKA-1809_2015-01-28_10:26:22.patch, KAFKA-1809_2015-02-02_11:55:16.patch


 The goal is to eventually support different security mechanisms on different 
 ports. 
 Currently brokers are defined as host+port pair, and this definition exists 
 throughout the code-base, therefore some refactoring is needed to support 
 multiple ports for a single broker.
 The detailed design is here: 
 https://cwiki.apache.org/confluence/display/KAFKA/Multiple+Listeners+for+Kafka+Brokers





[jira] [Commented] (KAFKA-1729) add doc for Kafka-based offset management in 0.8.2

2015-02-02 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14301864#comment-14301864
 ] 

Joel Koshy commented on KAFKA-1729:
---

Yes I will commit it today.

Re: rolling back to ZK - yes, that should work. A potential caveat is that we do 
filtered commits to ZK, i.e., if an offset has not changed then we don't commit it. 
However, on the offset fetch in dual-commit mode we take the maximum between ZK 
and Kafka, and that maximum will get committed on the next offset commit, so I 
think it should be fine.
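
The fetch behavior described above fits in a couple of lines; this is a hypothetical helper, not the actual OffsetManager code:

```java
// Hypothetical sketch of the offset fetch in dual-commit mode: because
// unchanged offsets are not re-committed to ZK, the two stores can lag
// each other, so the fetch returns the larger committed offset.
class DualCommitFetch {
    static final long NO_OFFSET = -1L; // sentinel for "nothing committed"

    static long fetch(long zkOffset, long kafkaOffset) {
        return Math.max(zkOffset, kafkaOffset);
    }
}
```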

 add doc for Kafka-based offset management in 0.8.2
 --

 Key: KAFKA-1729
 URL: https://issues.apache.org/jira/browse/KAFKA-1729
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jun Rao
Assignee: Joel Koshy
 Fix For: 0.8.2

 Attachments: KAFKA-1729.patch, KAFKA-1782-doc-v1.patch, 
 KAFKA-1782-doc-v2.patch, KAFKA-1782-doc-v3.patch








[jira] [Commented] (KAFKA-1886) SimpleConsumer swallowing ClosedByInterruptException

2015-02-02 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14301981#comment-14301981
 ] 

Aditya A Auradkar commented on KAFKA-1886:
--

Created reviewboard https://reviews.apache.org/r/30527/diff/
 against branch origin/trunk

 SimpleConsumer swallowing ClosedByInterruptException
 

 Key: KAFKA-1886
 URL: https://issues.apache.org/jira/browse/KAFKA-1886
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Reporter: Aditya A Auradkar
Assignee: Aditya Auradkar
 Attachments: KAFKA-1886.patch, KAFKA-1886.patch


 This issue was originally reported by a Samza developer. I've included an 
 exchange of mine with Chris Riccomini. I'm trying to reproduce the problem on 
 my dev setup.
 From: criccomi
 Hey all,
 Samza's BrokerProxy [1] threads appear to be wedging randomly when we try to 
 interrupt its fetcher thread. I noticed that SimpleConsumer.scala catches 
 Throwable in its sendRequest method [2]. I'm wondering: if 
 blockingChannel.send/receive throws a ClosedByInterruptException
 when the thread is interrupted, what happens? It looks like sendRequest will 
 catch the exception (which I
 think clears the thread's interrupted flag), and then retries the send. If 
 the send succeeds on the retry, I think that the ClosedByInterruptException 
 exception is effectively swallowed, and the BrokerProxy will continue
 fetching messages as though its thread was never interrupted.
 Am I misunderstanding how things work?
 Cheers,
 Chris
 [1] 
 https://github.com/apache/incubator-samza/blob/master/samza-kafka/src/main/scala/org/apache/samza/system/kafka/BrokerProxy.scala#L126
 [2] 
 https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/consumer/SimpleConsumer.scala#L75
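
The failure mode Chris describes, and the eventual fix (rethrowing rather than retrying), can be demonstrated with a small self-contained Java sketch. `Send`, `swallowing`, and `rethrowing` are hypothetical stand-ins for `blockingChannel.send` and the two variants of `sendRequest`; they are not the real SimpleConsumer code.

```java
import java.nio.channels.ClosedByInterruptException;

// Sketch of the SimpleConsumer.sendRequest hazard: a blanket catch-and-retry
// swallows an interrupt, while rethrowing ClosedByInterruptException (the
// KAFKA-1886 fix) lets the caller observe the interruption.
class RetryDemo {
    interface Send { void run() throws Exception; }

    // Buggy pattern: catch Throwable and retry once; if the retry succeeds,
    // the ClosedByInterruptException is silently lost.
    static boolean swallowing(Send send) {
        try { send.run(); } catch (Throwable t) {
            try { send.run(); return true; }   // retry succeeds, exception lost
            catch (Throwable t2) { return false; }
        }
        return true;
    }

    // Fixed pattern: let ClosedByInterruptException propagate and only
    // retry the remaining failures.
    static boolean rethrowing(Send send) throws ClosedByInterruptException {
        try { send.run(); } catch (ClosedByInterruptException e) {
            throw e;                           // caller sees the interruption
        } catch (Throwable t) {
            try { send.run(); return true; } catch (Throwable t2) { return false; }
        }
        return true;
    }
}
```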





[jira] [Updated] (KAFKA-1886) SimpleConsumer swallowing ClosedByInterruptException

2015-02-02 Thread Aditya A Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya A Auradkar updated KAFKA-1886:
-
Attachment: KAFKA-1886.patch

 SimpleConsumer swallowing ClosedByInterruptException
 

 Key: KAFKA-1886
 URL: https://issues.apache.org/jira/browse/KAFKA-1886
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Reporter: Aditya A Auradkar
Assignee: Aditya Auradkar
 Attachments: KAFKA-1886.patch, KAFKA-1886.patch


 This issue was originally reported by a Samza developer. I've included an 
 exchange of mine with Chris Riccomini. I'm trying to reproduce the problem on 
 my dev setup.
 From: criccomi
 Hey all,
 Samza's BrokerProxy [1] threads appear to be wedging randomly when we try to 
 interrupt its fetcher thread. I noticed that SimpleConsumer.scala catches 
 Throwable in its sendRequest method [2]. I'm wondering: if 
 blockingChannel.send/receive throws a ClosedByInterruptException
 when the thread is interrupted, what happens? It looks like sendRequest will 
 catch the exception (which I
 think clears the thread's interrupted flag), and then retries the send. If 
 the send succeeds on the retry, I think that the ClosedByInterruptException 
 exception is effectively swallowed, and the BrokerProxy will continue
 fetching messages as though its thread was never interrupted.
 Am I misunderstanding how things work?
 Cheers,
 Chris
 [1] 
 https://github.com/apache/incubator-samza/blob/master/samza-kafka/src/main/scala/org/apache/samza/system/kafka/BrokerProxy.scala#L126
 [2] 
 https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/consumer/SimpleConsumer.scala#L75





Review Request 30527: Patch for KAFKA-1886

2015-02-02 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30527/
---

Review request for kafka.


Bugs: KAFKA-1886
https://issues.apache.org/jira/browse/KAFKA-1886


Repository: kafka


Description
---

Fixing KAFKA-1886. Forcing the SimpleConsumer to rethrow a 
ClosedByInterruptException when one is thrown, and not retry


Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into trunk


Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into trunk


Reducing sleep in PrimitiveApiTest


Reducing sleep in PrimitiveApiTest


Diffs
-

  core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
cbef84ac76e62768981f74e71d451f2bda995275 
  core/src/test/scala/unit/kafka/integration/PrimitiveApiTest.scala 
a5386a03b62956bc440b40783247c8cdf7432315 

Diff: https://reviews.apache.org/r/30527/diff/


Testing
---


Thanks,

Aditya Auradkar



Re: Review Request 28769: Patch for KAFKA-1809

2015-02-02 Thread Gwen Shapira
Good idea :)

Fixed this, rebased the patch and uploaded a new diff.

Gwen

On Thu, Jan 29, 2015 at 9:27 AM, Don Bosco Durai bo...@apache.org wrote:
 +1

 I also feel, having security.* would be easy going forward.

 Thanks

 Bosco


 On 1/29/15, 6:08 AM, Jeff Holoman jeff.holo...@gmail.com wrote:



 On Jan. 23, 2015, 1:57 a.m., Jun Rao wrote:
  core/src/main/scala/kafka/server/KafkaConfig.scala, lines 182-183
 
https://reviews.apache.org/r/28769/diff/12/?file=820431#file820431line182
 
  Since this is also used for communication btw the controller and
the brokers, perhaps it's better named as sth like
intra.broker.security.protocol?

Maybe it makes sense to prepend all security-related configs with
security, e.g.: security.intra.broker.protocol,
security.new.param.for.future.jira. With all of the upcoming changes it
would make security-related configs easy to spot.


- Jeff


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28769/#review69281
---


On Jan. 28, 2015, 6:26 p.m., Gwen Shapira wrote:

 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/28769/
 ---

 (Updated Jan. 28, 2015, 6:26 p.m.)


 Review request for kafka.


 Bugs: KAFKA-1809
 https://issues.apache.org/jira/browse/KAFKA-1809


 Repository: kafka


 Description
 ---

 KAFKA-1890 Fix bug preventing Mirror Maker from successful rebalance;
reviewed by Gwen Shapira and Neha Narkhede


 first commit of refactoring.


 changed topicmetadata to include brokerendpoints and fixed few unit
tests


 fixing systest and support for binding to default address


 fixed system tests


 fix default address binding and ipv6 support


 fix some issues regarding endpoint parsing. Also, larger segments for
systest make the validation much faster


 added link to security wiki in doc


 fixing unit test after rename of ProtocolType to SecurityProtocol


 Following Joe's advice, added security protocol enum on client side,
and modified protocol to use ID instead of string.


 validate producer config against enum


 add a second protocol for testing and modify SocketServerTests to check
on multi-ports


 Reverted the metadata request changes and removed the explicit security
protocol argument. Instead the socketserver will determine the protocol
based on the port and add this to the request


 bump version for UpdateMetadataRequest and added support for rolling
upgrades with new config


 following tests - fixed LeaderAndISR protocol and ZK registration for
backward compatibility


 cleaned up some changes that were actually not necessary. hopefully
making this patch slightly easier to review


 undoing some changes that don't belong here


 bring back config lost in cleanup


 fixes necessary for an all non-plaintext cluster to work


 minor modifications following comments by Jun


 added missing license


 formatting


 clean up imports


 cleaned up V2 to not include host+port field. Using use.new.protocol
flag to decide which version to serialize


 change endpoints collection in Broker to Map[protocol, endpoint],
mostly to be clear that we intend to have one endpoint per protocol


 validate that listeners and advertised listeners have unique ports and
protocols


 support legacy configs


 some fixes following rebase


 Reverted to backward compatible zk registration, changed
use.new.protocol to support multiple versions and few minor fixes


 Diffs
 -


  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75
  clients/src/main/java/org/apache/kafka/common/protocol/ApiVersion.java PRE-CREATION
  clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java PRE-CREATION
  clients/src/main/java/org/apache/kafka/common/utils/Utils.java 527dd0f9c47fce7310b7a37a9b95bf87f1b9c292
  clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java a39fab532f73148316a56c0f8e9197f38ea66f79
  config/server.properties 1614260b71a658b405bb24157c8f12b1f1031aa5
  core/src/main/scala/kafka/admin/AdminUtils.scala 28b12c7b89a56c113b665fbde1b95f873f8624a3
  core/src/main/scala/kafka/admin/TopicCommand.scala 285c0333ff43543d3e46444c1cd9374bb883bb59
  core/src/main/scala/kafka/api/ConsumerMetadataResponse.scala 84f60178f6ebae735c8aa3e79ed93fe21ac4aea7
  core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala 4ff7e8f8cc695551dd5d2fe65c74f6b6c571e340
  core/src/main/scala/kafka/api/TopicMetadata.scala 0190076df0adf906ecd332284f222ff974b315fc
  core/src/main/scala/kafka/api/TopicMetadataResponse.scala 92ac4e687be22e4800199c0666bfac5e0059e5bb
  core/src/main/scala/kafka/api/UpdateMetadataRequest.scala 530982e36b17934b8cc5fb668075a5342e142c59

[jira] [Updated] (KAFKA-1914) Count TotalProduceRequestRate and TotalFetchRequestRate in BrokerTopicMetrics

2015-02-02 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-1914:
---
Assignee: Aditya Auradkar

 Count TotalProduceRequestRate and TotalFetchRequestRate in BrokerTopicMetrics
 -

 Key: KAFKA-1914
 URL: https://issues.apache.org/jira/browse/KAFKA-1914
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Aditya A Auradkar
Assignee: Aditya Auradkar

 Currently the BrokerTopicMetrics only counts the failedProduceRequestRate and 
 the failedFetchRequestRate. We should add 2 metrics to count the overall 
 produce/fetch request rates.





[jira] [Commented] (KAFKA-1886) SimpleConsumer swallowing ClosedByInterruptException

2015-02-02 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14302007#comment-14302007
 ] 

Aditya A Auradkar commented on KAFKA-1886:
--

Updated reviewboard https://reviews.apache.org/r/30196/diff/
 against branch origin/trunk

 SimpleConsumer swallowing ClosedByInterruptException
 

 Key: KAFKA-1886
 URL: https://issues.apache.org/jira/browse/KAFKA-1886
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Reporter: Aditya A Auradkar
Assignee: Aditya Auradkar
 Attachments: KAFKA-1886.patch, KAFKA-1886.patch, 
 KAFKA-1886_2015-02-02_13:57:23.patch


 This issue was originally reported by a Samza developer. I've included an 
 exchange of mine with Chris Riccomini. I'm trying to reproduce the problem on 
 my dev setup.
 From: criccomi
 Hey all,
 Samza's BrokerProxy [1] threads appear to be wedging randomly when we try to 
 interrupt its fetcher thread. I noticed that SimpleConsumer.scala catches 
 Throwable in its sendRequest method [2]. I'm wondering: if 
 blockingChannel.send/receive throws a ClosedByInterruptException
 when the thread is interrupted, what happens? It looks like sendRequest will 
 catch the exception (which I
 think clears the thread's interrupted flag), and then retries the send. If 
 the send succeeds on the retry, I think that the ClosedByInterruptException 
 exception is effectively swallowed, and the BrokerProxy will continue
 fetching messages as though its thread was never interrupted.
 Am I misunderstanding how things work?
 Cheers,
 Chris
 [1] 
 https://github.com/apache/incubator-samza/blob/master/samza-kafka/src/main/scala/org/apache/samza/system/kafka/BrokerProxy.scala#L126
 [2] 
 https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/consumer/SimpleConsumer.scala#L75





[jira] [Commented] (KAFKA-1912) Create a simple request re-routing facility

2015-02-02 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14301956#comment-14301956
 ] 

Joe Stein commented on KAFKA-1912:
--

I think this makes sense. 

Do you want to fold re-routing into the ClusterMetaData RQ/RP 
https://reviews.apache.org/r/29301/diff/ and commit both at the same time, or make 
this patch work first, commit it, and then rebase and fold KAFKA-1694 into trunk? 

We could make it pluggable, but if we did then 
https://issues.apache.org/jira/browse/KAFKA-1845 should go in first, before that 
interface gets in.

 Create a simple request re-routing facility
 ---

 Key: KAFKA-1912
 URL: https://issues.apache.org/jira/browse/KAFKA-1912
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps

 We are accumulating a lot of requests that have to be directed to the correct 
 server. This makes sense for high volume produce or fetch requests. But it is 
 silly to put the extra burden on the client for the many miscellaneous 
 requests such as fetching or committing offsets and so on.
 This adds a ton of practical complexity to the clients with little or no 
 payoff in performance.
 We should add a generic request-type agnostic re-routing facility on the 
 server. This would allow any server to accept a request and forward it to the 
 correct destination, proxying the response back to the user. Naturally it 
 needs to do this without blocking the thread.
 The result is that a client implementation can choose to be optimally 
 efficient and manage a local cache of cluster state and attempt to always 
 direct its requests to the proper server OR it can choose simplicity and just 
 send things all to a single host and let that host figure out where to 
 forward it.
 I actually think we should implement this more or less across the board, but 
 some requests such as produce and fetch require more logic to proxy since 
 they have to be scattered out to multiple servers and gathered back to create 
 the response. So these could be done in a second phase.
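
As an illustration of the routing decision described above (all names here are hypothetical; this sketches only the decision, not the non-blocking proxying itself): a broker handles a request locally when it owns the target, and otherwise forwards it to the owner.

```java
import java.util.Map;

// Hypothetical sketch of the server-side re-routing decision from KAFKA-1912:
// if this broker owns the request's target, handle it locally; otherwise
// forward to the owning broker and proxy the response back.
class Router {
    private final int localBrokerId;
    private final Map<String, Integer> ownerByKey; // e.g. consumer group -> coordinator id

    Router(int localBrokerId, Map<String, Integer> ownerByKey) {
        this.localBrokerId = localBrokerId;
        this.ownerByKey = ownerByKey;
    }

    /** Returns the broker id that should handle the request for this key. */
    int route(String key) {
        Integer owner = ownerByKey.get(key);
        // Unknown key: handle locally (the real facility would refresh metadata).
        return owner == null ? localBrokerId : owner;
    }

    boolean mustForward(String key) { return route(key) != localBrokerId; }
}
```

A "simple" client would send everything to one broker and rely on `mustForward`; an "optimal" client would keep its own copy of the owner map and route directly.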





Re: [kafka-clients] Re: [VOTE] 0.8.2.0 Candidate 3

2015-02-02 Thread Joe Stein
Huzzah!

Thanks Jun for preparing the release candidates and getting this out to the
community.

- Joe Stein

On Mon, Feb 2, 2015 at 2:27 PM, Jun Rao j...@confluent.io wrote:

 The following are the results of the votes.

 +1 binding = 3 votes
 +1 non-binding = 1 votes
 -1 = 0 votes
 0 = 0 votes

 The vote passes.

 I will release artifacts to maven central, update the dist svn and download
 site. Will send out an announce after that.

 Thanks everyone that contributed to the work in 0.8.2.0!

 Jun

 On Wed, Jan 28, 2015 at 9:22 PM, Jun Rao j...@confluent.io wrote:

 This is the third candidate for release of Apache Kafka 0.8.2.0.

 Release Notes for the 0.8.2.0 release

 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/RELEASE_NOTES.html

 *** Please download, test and vote by Saturday, Jan 31, 11:30pm PT

 Kafka's KEYS file containing PGP keys we use to sign the release:
 http://kafka.apache.org/KEYS in addition to the md5, sha1 and sha2
 (SHA256) checksum.

 * Release artifacts to be voted upon (source and binary):
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/

 * Maven artifacts to be voted upon prior to release:
 https://repository.apache.org/content/groups/staging/

 * scala-doc
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/scaladoc/

 * java-doc
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/javadoc/

 * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag

 https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=223ac42a7a2a0dab378cc411f4938a9cea1eb7ea
 (commit 7130da90a9ee9e6fb4beb2a2a6ab05c06c9bfac4)

 /***

 Thanks,

 Jun


  --
 You received this message because you are subscribed to the Google Groups
 kafka-clients group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to kafka-clients+unsubscr...@googlegroups.com.
 To post to this group, send email to kafka-clie...@googlegroups.com.
 Visit this group at http://groups.google.com/group/kafka-clients.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/kafka-clients/CAFc58G-XRpw9ik35%2BCmsYm-uc2hjHet6fOpw4bF90ka9Z%3DhH%3Dw%40mail.gmail.com
 https://groups.google.com/d/msgid/kafka-clients/CAFc58G-XRpw9ik35%2BCmsYm-uc2hjHet6fOpw4bF90ka9Z%3DhH%3Dw%40mail.gmail.com?utm_medium=email&utm_source=footer
 .

 For more options, visit https://groups.google.com/d/optout.



[DISCUSS] ConfigDec Broker Changes on Trunk

2015-02-02 Thread Joe Stein
Hey, I wanted to start a quick convo around some changes on trunk. Not sure
this requires a KIP since it is kind of internal and shouldn't affect users,
but we can decide; if it does, we can link this thread to that KIP (and keep
the discussion going on this thread if that makes sense).

Before making any other broker changes I wanted to see what folks thought
about https://issues.apache.org/jira/browse/KAFKA-1845 ConfigDec patch.

I agree it will be nice to standardize on one configuration and validation
library across the board. It also helps with a lot of the changes we have been
discussing for 0.8.3. I think we should make sure it is what we want; if so,
then review, commit, and keep going.

Thoughts?

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
/


Re: Review Request 30196: Patch for KAFKA-1886

2015-02-02 Thread Aditya Auradkar


 On Jan. 26, 2015, 1:28 a.m., Neha Narkhede wrote:
  core/src/test/scala/unit/kafka/integration/PrimitiveApiTest.scala, line 235
  https://reviews.apache.org/r/30196/diff/1/?file=831148#file831148line235
 
  what is the purpose of this sleep?
 
 Aditya Auradkar wrote:
 I wanted to make sure the SimpleConsumer was making a request to the 
 broker when I interrupted. I can reduce the sleep time to 100ms if that helps.

Neha,

Is this the right place to add such a test case? I've added it to 
PrimitiveApiTest, but this isn't really testing an API; it merely tests 
SimpleConsumer behavior.


- Aditya


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30196/#review69579
---


On Feb. 2, 2015, 9:57 p.m., Aditya Auradkar wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/30196/
 ---
 
 (Updated Feb. 2, 2015, 9:57 p.m.)
 
 
 Review request for kafka and Joel Koshy.
 
 
 Bugs: KAFKA-1886
 https://issues.apache.org/jira/browse/KAFKA-1886
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Fixing KAFKA-1886. SimpleConsumer should not swallow 
 ClosedByInterruptException
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
 cbef84ac76e62768981f74e71d451f2bda995275 
   core/src/test/scala/unit/kafka/integration/PrimitiveApiTest.scala 
 aeb7a19acaefabcc161c2ee6144a56d9a8999a81 
 
 Diff: https://reviews.apache.org/r/30196/diff/
 
 
 Testing
 ---
 
 Added an integration test to PrimitiveAPITest.scala.
 
 
 Thanks,
 
 Aditya Auradkar
 




Re: [kafka-clients] Re: [VOTE] 0.8.2.0 Candidate 3

2015-02-02 Thread Neha Narkhede
Great! Thanks Jun for helping with the release and everyone involved for
your contributions.

On Mon, Feb 2, 2015 at 1:32 PM, Joe Stein joe.st...@stealth.ly wrote:

 Huzzah!

 Thanks Jun for preparing the release candidates and getting this out to the
 community.

 - Joe Stein

 On Mon, Feb 2, 2015 at 2:27 PM, Jun Rao j...@confluent.io wrote:

  The following are the results of the votes.
 
  +1 binding = 3 votes
  +1 non-binding = 1 votes
  -1 = 0 votes
  0 = 0 votes
 
  The vote passes.
 
  I will release artifacts to maven central, update the dist svn and
 download
  site. Will send out an announce after that.
 
  Thanks everyone that contributed to the work in 0.8.2.0!
 
  Jun
 
  On Wed, Jan 28, 2015 at 9:22 PM, Jun Rao j...@confluent.io wrote:
 
  This is the third candidate for release of Apache Kafka 0.8.2.0.
 
  Release Notes for the 0.8.2.0 release
 
 
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/RELEASE_NOTES.html
 
  *** Please download, test and vote by Saturday, Jan 31, 11:30pm PT
 
  Kafka's KEYS file containing PGP keys we use to sign the release:
  http://kafka.apache.org/KEYS in addition to the md5, sha1 and sha2
  (SHA256) checksum.
 
  * Release artifacts to be voted upon (source and binary):
  https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/
 
  * Maven artifacts to be voted upon prior to release:
  https://repository.apache.org/content/groups/staging/
 
  * scala-doc
  https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/scaladoc/
 
  * java-doc
  https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/javadoc/
 
  * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag
 
 
 https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=223ac42a7a2a0dab378cc411f4938a9cea1eb7ea
  (commit 7130da90a9ee9e6fb4beb2a2a6ab05c06c9bfac4)
 
  /***
 
  Thanks,
 
  Jun
 
 
 




-- 
Thanks,
Neha


Re: Review Request 30403: Patch for KAFKA-1906

2015-02-02 Thread Neha Narkhede
+1 on including a reasonable default for log.dirs that points to data/ in
the installation directory. This is followed by some other systems and is
probably a little better compared to /tmp.

Also, I actually think we should default to production ready settings for
most configs, at least those related to performance. In most cases, this
works just fine and doesn't hurt the development environment much. The same
goes for GC settings. There is no value in shipping with a smaller heap or
non-standard GC configs.

I ran into a bunch of this while writing docs for Kafka and thinking from
the user's perspective. And I think doing both of these things will improve
user experience.

Thanks,
Neha

On Sun, Feb 1, 2015 at 8:43 AM, Jay Kreps boredandr...@gmail.com wrote:

 Yeah I second Jaikiran's take: I think we should stick to something that
 works out of the box without editing config. I think the experience here
 is important, and adding one manual configuration step quickly cascades
 into a bunch of these steps (since it is always easier to make something
 manual than to think through how to implement a reasonable default).

 Of course when going to production you have to think, and I don't have a
 ton of sympathy for people who are doing production deploys with their data
 in /tmp, but I agree it would be better to make this a little safer even in
 that case if we can come up with a way to give a local data dir.

 Another approach I just thought of would be to just ship two configs: a
 developer.properties and production.properties each of which would have
 reasonable starting point configurations. This might actually be better
 since many of the settings we default to are a little anemic for a
 production install (e.g. default number of partitions, default replication
 factor, thread count, etc). I suspect this might solve a lot of support
 problems, actually, since most people don't think about that stuff.

 -Jay

 On Sat, Jan 31, 2015 at 7:27 AM, Jaikiran Pai jai.forums2...@gmail.com
 wrote:

   Hi Jeff,
 
  I guess you are :)
 
  Personally, whenever I download and try a new project *in development
  environment* I always just want to get it up and running without having
 to
  fiddle with configurations. Of course, I do a bit of reading the docs,
  before trying it out, but I like to have the install and run to be
  straightforward without having to change/add configurations. Having
  sensible defaults helps in development environments and in getting
 started.
  IMO, this param belongs to that category.
 
  -Jaikiran
 
 
  On Thursday 29 January 2015 08:00 PM, Jeff Holoman wrote:
 
  Maybe I'm in the minority here, but I actually don't think there should
 be
  a default for this param and you should be required to explicitly set
 this.
 
  On Thu, Jan 29, 2015 at 5:43 AM, Jay Kreps boredandr...@gmail.com
 wrote:
 
 
 
   On Jan. 29, 2015, 6:50 a.m., Gwen Shapira wrote:
We added --override option to KafkaServer that allows overriding
  default configuration from commandline.
I believe that just changing the shell script to include --override
  log.dir=${KAFKA_HOME}/data
may be enough?
   
overriding configuration from server.properties in code can be very
  unintuitive.
  
   Jaikiran Pai wrote:
   That sounds a good idea. I wasn't aware of the --override option.
  I'll give that a try and if it works then the change will be limited to
  just the scripts.
  
   Jaikiran Pai wrote:
   Hi Gwen,
  
   I had a look at the JIRA
  https://issues.apache.org/jira/browse/KAFKA-1654 which added support
 for
  --override and also the purpose of that option. From what I see it
 won't be
  useful in this case, because in this current task, we don't want to
  override a value that has been explicitly set (via server.properties for
  example). Instead we want to handle a case where no explicit value is
  specified for the data log directory and in such cases default it to a
 path
  which resides under the Kafka install directory.
  
   If we use the --override option in our (startup) scripts to set
  log.dir=${KAFKA_HOME}/data, we will end up forcing this value as the
  log.dir even when the user has intentionally specified a different path
 for
  the data logs.
  
   Let me know if I misunderstood your suggestion.
  
   Jay Kreps wrote:
   I think you are right that --override won't work but maybe this is
  still a good suggestion?
  
    Something seems odd about forcibly overriding the working directory of
   the process just to set the log directory.
  
   Another option might be to add --default. This would work like
  --override but would provide a default value only if none is specified.
 I
  think this might be possible because the java.util.Properties we use for
  config supports a hierarchy of defaults. E.g. you can say new
  Properties(defaultProperties). Not sure if this is better or worse.
  
   Thoughts?
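   The Properties defaults hierarchy Jay mentions can be seen in a few lines of plain Java; a minimal sketch (the property values and paths are made up for illustration):

```java
import java.util.Properties;

public class PropertiesDefaults {
    public static void main(String[] args) {
        // The defaults layer is consulted only when the primary table
        // has no entry for the key.
        Properties defaults = new Properties();
        defaults.setProperty("log.dir", "/opt/kafka/data"); // hypothetical default

        // The "user" layer, e.g. what would be loaded from server.properties.
        Properties config = new Properties(defaults);

        // No explicit value set, so the default shows through.
        System.out.println(config.getProperty("log.dir")); // prints /opt/kafka/data

        // An explicitly set value wins over the default.
        config.setProperty("log.dir", "/tmp/kafka-logs");
        System.out.println(config.getProperty("log.dir")); // prints /tmp/kafka-logs
    }
}
```

   This is what would make a --default flag behave differently from --override: values supplied through the defaults layer never shadow anything the user set explicitly.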
  
   Jaikiran Pai wrote:
   Hi Jay,
  
Another 

Re: Review Request 30403: Patch for KAFKA-1906

2015-02-02 Thread Gwen Shapira
While I appreciate production-grade configs a lot, I prefer to ship
with usable-out-of-the-box configs.

I suspect that a large part of the amazing adoption that projects like
MySQL, MongoDB, Cassandra and Kafka have had is how easy it is to install
them and get going on a local dev box. Working as a DB architect, I could see
how new projects would always start with MySQL first. Getting MySQL on a
local box is trivial, but no developer would willingly install Oracle
on his local box if he can help it.

I suspect that there's no one true production configuration; things
depend on hardware, environment, requirements, etc.

So, I'm voting on the side of dev-ready and not production-ready

Gwen



On Mon, Feb 2, 2015 at 11:45 AM, Neha Narkhede n...@confluent.io wrote:
 +1 on including a reasonable default for log.dirs that points to data/ in
 the installation directory. Some other systems follow this approach, and it is
 probably a little better than /tmp.

 Also, I actually think we should default to production ready settings for
 most configs, at least those related to performance. In most cases, this
 works just fine and doesn't hurt the development environment much. Same goes
 for GC settings. There is no value in shipping with a smaller heap or
 non-standard GC configs.

 I ran into a bunch of this while writing docs for Kafka and thinking from
 the user's perspective. And I think doing both of these things will improve
 user experience.

 Thanks,
 Neha

 On Sun, Feb 1, 2015 at 8:43 AM, Jay Kreps boredandr...@gmail.com wrote:

 Yeah I second Jaikiran's take: I think we should stick to something that
 works out of the box without editing config. I think the experience here
 is important, and adding one manual configuration step quickly cascades
 into a bunch of these steps (since it is always easier to make something
 manual than to think through how to implement a reasonable default).

 Of course when going to production you have to think, and I don't have a
 ton of sympathy for people who are doing production deploys with their
 data
 in /tmp, but I agree it would be better to make this a little safer even
 in
 that case if we can come up with a way to give a local data dir.

 Another approach I just thought of would be to just ship two configs: a
 developer.properties and production.properties each of which would have
 reasonable starting point configurations. This might actually be better
 since many of the settings we default to are a little anemic for a
 production install (e.g. default number of partitions, default replication
 factor, thread count, etc). I suspect this might solve a lot of support
 problems, actually, since most people don't think about that stuff.

 -Jay

 On Sat, Jan 31, 2015 at 7:27 AM, Jaikiran Pai jai.forums2...@gmail.com
 wrote:

   Hi Jeff,
 
  I guess you are :)
 
  Personally, whenever I download and try a new project *in development
  environment* I always just want to get it up and running without having
  to
  fiddle with configurations. Of course, I do a bit of reading the docs,
  before trying it out, but I like to have the install and run to be
  straightforward without having to change/add configurations. Having
  sensible defaults helps in development environments and in getting
  started.
  IMO, this param belongs to that category.
 
  -Jaikiran
 
 
  On Thursday 29 January 2015 08:00 PM, Jeff Holoman wrote:
 
  Maybe I'm in the minority here, but I actually don't think there should
  be
  a default for this param and you should be required to explicitly set
  this.
 
  On Thu, Jan 29, 2015 at 5:43 AM, Jay Kreps boredandr...@gmail.com
  wrote:
 
 
 
   On Jan. 29, 2015, 6:50 a.m., Gwen Shapira wrote:
We added --override option to KafkaServer that allows overriding
  default configuration from commandline.
I believe that just changing the shell script to include --override
  log.dir=${KAFKA_HOME}/data
may be enough?
   
overriding configuration from server.properties in code can be very
  unintuitive.
  
   Jaikiran Pai wrote:
   That sounds like a good idea. I wasn't aware of the --override option.
  I'll give that a try and if it works then the change will be limited to
  just the scripts.
  
   Jaikiran Pai wrote:
   Hi Gwen,
  
   I had a look at the JIRA
  https://issues.apache.org/jira/browse/KAFKA-1654 which added support
  for
  --override and also the purpose of that option. From what I see it
  won't be
  useful in this case, because in this current task, we don't want to
  override a value that has been explicitly set (via server.properties
  for
  example). Instead we want to handle a case where no explicit value is
  specified for the data log directory and in such cases default it to a
  path
  which resides under the Kafka install directory.
  
   If we use the --override option in our (startup) scripts to set
  log.dir=${KAFKA_HOME}/data, we will end up forcing this value as the
  log.dir even when the user has intentionally specified a 

Re: [DISCUSS] ConfigDec Broker Changes on Trunk

2015-02-02 Thread Gwen Shapira
Strong +1 from me (obviously). Lots of good reasons to do it:
consistency, code reuse, better validations, etc, etc.

I had one comment on the patch in RB, but it can also be refactored as a
follow-up JIRA to avoid blocking everyone who is waiting on this.

Gwen

On Mon, Feb 2, 2015 at 1:31 PM, Joe Stein joe.st...@stealth.ly wrote:
 Hey, I wanted to start a quick convo around some changes on trunk. I'm not sure
 this requires a KIP since it is kind of internal and shouldn't affect users,
 but we can decide; if so, we can link this thread to that KIP (and keep
 the discussion going on the thread if that makes sense).

 Before making any other broker changes I wanted to see what folks thought
 about the https://issues.apache.org/jira/browse/KAFKA-1845 ConfigDef patch.

 I agree it will be nice to standardize and use one configuration and
 validation library across the board. It helps with a lot of the different
 changes we have also been discussing for 0.8.3. I think we should make sure it
 is what we want; if so, then: review, commit, and keep going.

 Thoughts?

 /***
  Joe Stein
  Founder, Principal Consultant
  Big Data Open Source Security LLC
  http://www.stealth.ly
  Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
 /


Re: Review Request 28769: Patch for KAFKA-1809

2015-02-02 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28769/#review70642
---


It seems some of the recent trunk changes have crept into this diff. Seems to 
be an issue with the patch review script as you normally shouldn't need to git 
pull before submitting.

- Joel Koshy


On Feb. 2, 2015, 7:55 p.m., Gwen Shapira wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/28769/
 ---
 
 (Updated Feb. 2, 2015, 7:55 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1809
 https://issues.apache.org/jira/browse/KAFKA-1809
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1890 Fix bug preventing Mirror Maker from successful rebalance; 
 reviewed by Gwen Shapira and Neha Narkhede
 
 
 KAFKA-1896; Record size function should check if value is null; reviewed by 
 Guozhang Wang
 
 
 KAFKA-1109 Need to fix GC log configuration code, not able to override 
 KAFKA_GC_LOG_OPTS; reviewed by Neha Narkhede
 
 
 KAFKA-1883 Fix NullPointerException in RequestSendThread; reviewed by Neha 
 Narkhede
 
 
 KAFKA-1885 Upgrade junit dependency in core to 4.6 version to allow running 
 individual test methods via gradle command line; reviewed by Neha Narkhede
 
 
  KAFKA-1818 clean up code to more idiomatic scala usage; reviewed 
 by Neha Narkhede and Gwen Shapira
 
 
 KAFKA-1902; fix MetricName so that Yammer reporter can work correctly; 
 patched by Jun Rao; reviewed by Manikumar Reddy, Manikumar Reddy and Joel 
 Koshy
 
 
 KAFKA-1861; Publishing kafka-client:test in order to utilize the helper utils 
 in TestUtils; patched by Manikumar Reddy; reviewed by Jun Rao
 
 
 KAFKA-1760: New consumer.
 
 
 KAFKA-1760 Follow-up: fix compilation issue with Scala 2.11
 
 
 first commit of refactoring.
 
 
 changed topicmetadata to include brokerendpoints and fixed few unit tests
 
 
 fixing systest and support for binding to default address
 
 
 fixed system tests
 
 
 fix default address binding and ipv6 support
 
 
 fix some issues regarding endpoint parsing. Also, larger segments for systest 
 make the validation much faster
 
 
 added link to security wiki in doc
 
 
 fixing unit test after rename of ProtocolType to SecurityProtocol
 
 
 Following Joe's advice, added security protocol enum on client side, and 
 modified protocol to use ID instead of string.
 
 
 validate producer config against enum
 
 
 add a second protocol for testing and modify SocketServerTests to check on 
 multi-ports
 
 
 Reverted the metadata request changes and removed the explicit security 
 protocol argument. Instead the socketserver will determine the protocol based 
 on the port and add this to the request
 
 
 bump version for UpdateMetadataRequest and added support for rolling upgrades 
 with new config
 
 
 following tests - fixed LeaderAndISR protocol and ZK registration for 
 backward compatibility
 
 
 cleaned up some changes that were actually not necessary. hopefully making 
 this patch slightly easier to review
 
 
 undoing some changes that don't belong here
 
 
 bring back config lost in cleanup
 
 
  fixes necessary for an all non-plaintext cluster to work
 
 
 minor modifications following comments by Jun
 
 
 added missing license
 
 
 formatting
 
 
 clean up imports
 
 
 cleaned up V2 to not include host+port field. Using use.new.protocol flag to 
 decide which version to serialize
 
 
 change endpoints collection in Broker to Map[protocol, endpoint], mostly to 
 be clear that we intend to have one endpoint per protocol
 
 
 validate that listeners and advertised listeners have unique ports and 
 protocols
 
 
 support legacy configs
 
 
 some fixes following rebase
 
 
 Reverted to backward compatible zk registration, changed use.new.protocol to 
 support multiple versions and few minor fixes
 
 
 fixing some issues after rebase
 
 
 modified inter.broker.protocol config to start with security per feedback
 
 
 Diffs
 -
 
   README.md 35e06b1cc6373f24ea6d405cccd4aacf0f458e0d 
   bin/kafka-run-class.sh 22a9865b5939450a9d7f4ea2eee5eba2c1ec758c 
   build.gradle 1cbab29ce83e20dae0561b51eed6fdb86d522f28 
   clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
 d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
   clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
 8aece7e81a804b177a6f2c12e2dc6c89c1613262 
   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
 PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
 ab7e3220f9b76b92ef981d695299656f041ad5ed 
   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
 397695568d3fd8e835d8f923a89b3b00c96d0ead 
   

Re: Review Request 29831: Patch for KAFKA-1476

2015-02-02 Thread Onur Karaman


 On Feb. 2, 2015, 5:42 p.m., Neha Narkhede wrote:
  core/src/main/scala/kafka/utils/ZkUtils.scala, line 758
  https://reviews.apache.org/r/29831/diff/7/?file=841974#file841974line758
 
  Let's add consumerGroupOwners inside ZKGroupDirs so we don't have to 
  hardcode zk paths like this.

I agree that it's better to avoid hardcoding zk paths. There's a bit of a 
naming issue in that ZKGroupTopicDirs will now have methods like:
consumerOffsetDir
consumerGroupOffsetDir // inherited from ZKGroupDirs
consumerOwnerDir
consumerGroupOwnerDir // inherited from ZKGroupDirs


- Onur


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29831/#review70581
---


On Jan. 30, 2015, 7:10 p.m., Onur Karaman wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/29831/
 ---
 
 (Updated Jan. 30, 2015, 7:10 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1476
 https://issues.apache.org/jira/browse/KAFKA-1476
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Merged in work for KAFKA-1476 and sub-task KAFKA-1826
 
 
 Diffs
 -
 
   bin/kafka-consumer-groups.sh PRE-CREATION 
   core/src/main/scala/kafka/admin/AdminUtils.scala 
 28b12c7b89a56c113b665fbde1b95f873f8624a3 
   core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala PRE-CREATION 
   core/src/main/scala/kafka/utils/ZkUtils.scala 
 c14bd455b6642f5e6eb254670bef9f57ae41d6cb 
   core/src/test/scala/unit/kafka/admin/DeleteConsumerGroupTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/utils/TestUtils.scala 
 54755e8dd3f23ced313067566cd4ea867f8a496e 
 
 Diff: https://reviews.apache.org/r/29831/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Onur Karaman
 




Re: Review Request 30482: Add the coordinator to server

2015-02-02 Thread Onur Karaman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30482/#review70535
---



core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala
https://reviews.apache.org/r/30482/#comment115777

Was it intentional to have this as a member variable?



core/src/main/scala/kafka/server/DelayedOperationKey.scala
https://reviews.apache.org/r/30482/#comment115746

This gave me:
java.util.UnknownFormatConversionException: Conversion = 'l'

The docs on formatting longs are a bit misleading, but just 
"%d".format(time) should work.



core/src/main/scala/kafka/server/DelayedOperationKey.scala
https://reviews.apache.org/r/30482/#comment115744

This gave me java.util.IllegalFormatConversionException: d != 
java.lang.String

consumerId is a string, so maybe "%s-%s".format(groupId, consumerId)



core/src/main/scala/kafka/server/DelayedOperationKey.scala
https://reviews.apache.org/r/30482/#comment115745

groupId is already a string, so this can be simplified to:
override def keyLabel = groupId
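The conversions discussed above come from java.util.Formatter, which also backs Scala's String.format, so the same behavior can be checked from plain Java; a small sketch:

```java
public class FormatConversions {
    public static void main(String[] args) {
        long time = 12345L;
        // %d handles longs; there is no standalone %l conversion.
        System.out.println(String.format("%d", time)); // prints 12345

        try {
            String.format("%l", time); // invalid conversion character
        } catch (java.util.UnknownFormatConversionException e) {
            System.out.println(e.getMessage()); // prints Conversion = 'l'
        }

        try {
            String.format("%d", "consumer-1"); // %d rejects strings
        } catch (java.util.IllegalFormatConversionException e) {
            System.out.println(e.getMessage()); // prints d != java.lang.String
        }

        // Strings want %s, so a key label can be built as:
        System.out.println(String.format("%s-%s", "group", "consumer-1")); // prints group-consumer-1
    }
}
```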


- Onur Karaman


On Feb. 1, 2015, 2:45 a.m., Guozhang Wang wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/30482/
 ---
 
 (Updated Feb. 1, 2015, 2:45 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1333
 https://issues.apache.org/jira/browse/KAFKA-1333
 
 
 Repository: kafka
 
 
 Description
 ---
 
 1. Add ConsumerCoordinator with GroupRegistry and ConsumerRegistry metadata, 
 and ZK listeners.
 2. Add a delayed heartbeat purgatory based on HeartbeatBucket to expire 
 heartbeat requests.
 3. Add a delayed rebalance purgatory for preparing rebalance.
 4. Add a join-group purgatory for sending back responses with assigned 
 partitions.
 5. Add TimeMsKey / ConsumerKey and ConsumerGroupKey for delayed heartbeat / 
 join-group / rebalance purgatories.
 6. Refactor KafkaApis for handling JoinGroup / Heartbeat requests with 
 coordinator, and sending reponses via callbacks.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala 
 PRE-CREATION 
   core/src/main/scala/kafka/coordinator/GroupRegistry.scala PRE-CREATION 
   core/src/main/scala/kafka/coordinator/HeartbeatBucket.scala PRE-CREATION 
   core/src/main/scala/kafka/server/DelayedOperationKey.scala 
 fb7e9ed5c16dd15b71e1b1ac12948641185871db 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 f2b027bf944e735fd52cc282690ec1b8395f9290 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 89200da30a04943f0b9befe84ab17e62b747c8c4 
 
 Diff: https://reviews.apache.org/r/30482/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Guozhang Wang
 




Re: Cannot stop Kafka server if zookeeper is shutdown first

2015-02-02 Thread Jaikiran Pai

Thanks for pointing to that repo!

I just had a look at it and it appears that the project isn't very active 
(going by the recent commit history). The latest contribution is from 
Gwen, and that was around 3 months back. I haven't found release plans 
for the project or a place to ask about them (filing an issue doesn't 
seem like the right way to ask this question). So I'll get in touch with the 
repo owner and see what his plans for the project are.


-Jaikiran

On Monday 02 February 2015 11:33 PM, Gwen Shapira wrote:

I did!

Thanks for clarifying :)

The client that is part of Zookeeper itself actually does support timeouts.

On Mon, Feb 2, 2015 at 9:54 AM, Guozhang Wang wangg...@gmail.com wrote:

Hi Jaikiran,

I think Gwen was talking about contributing to ZkClient project:

https://github.com/sgroschupf/zkclient

Guozhang


On Sun, Feb 1, 2015 at 5:30 AM, Jaikiran Pai jai.forums2...@gmail.com
wrote:


Hi Gwen,

Yes, the KafkaZkClient is a wrapper around ZkClient and not a complete
replacement.

As for contributing to Zookeeper, yes, that is indeed on my mind, but I
haven't yet had a chance to really look deeper into Zookeeper or get in
touch with their dev team to try and explain this potential improvement to
them. I have no objection to contributing this or something similar to
Zookeeper directly. I think I should be able to bring this up in the
Zookeeper dev forum, sometime soon in the next few weekends.

-Jaikiran


On Sunday 01 February 2015 11:40 AM, Gwen Shapira wrote:


It looks like the new KafkaZkClient is a wrapper around ZkClient, but
not a replacement. Did I get it right?

I think a wrapper for ZkClient can be useful - for example KAFKA-1664
can also use one.

However, I'm wondering why not contribute the fix directly to ZKClient
project and ask for a release that contains the fix?
This will benefit other users of the project who may also need a
timeout (that's pretty basic...)

As an alternative, if we don't want to collaborate with ZKClient for
some reason, forking the project into Kafka will probably give us more
control than wrappers and without much downside.

Just a thought.

Gwen





On Sat, Jan 31, 2015 at 6:32 AM, Jaikiran Pai jai.forums2...@gmail.com
wrote:


Neha, Ewen (and others), my initial attempt to solve this is uploaded
here
https://reviews.apache.org/r/30477/. It solves the shutdown problem and
now
the server shuts down even when Zookeeper has gone down before the Kafka
server.

I went with the approach of introducing a custom (enhanced) ZkClient
which
for now allows time outs to be optionally specified for certain
operations.
I intentionally haven't forced the use of this new KafkaZkClient all over
the code and instead for now have just used it in the KafkaServer.

Does this patch look like something worth using?

-Jaikiran


On Thursday 29 January 2015 10:41 PM, Neha Narkhede wrote:


Ewen is right. ZkClient APIs are blocking and the right fix for this
seems
to be patching ZkClient. At some point, if we find ourselves fiddling
too
much with ZkClient, it wouldn't hurt to write our own little zookeeper
client wrapper.

On Thu, Jan 29, 2015 at 12:57 AM, Ewen Cheslack-Postava
e...@confluent.io
wrote:

  Looks like a bug to me -- the underlying ZK library wraps a lot of blocking
method implementations with waitUntilConnected() calls without any
timeouts. Ideally we could just add a version of ZkUtils.getController()
with a timeout, but I don't see an easy way to accomplish that with
ZkClient.

There's at least one other call to ZkUtils besides the one in the
stacktrace you gave that would cause the same issue, possibly more that
aren't directly called in that method. One ugly solution would be to use an
extra thread during shutdown to trigger timeouts, but I'd imagine we
probably have other threads that could end up blocking in similar ways.

I filed https://issues.apache.org/jira/browse/KAFKA-1907 to track the
issue.
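The extra-thread workaround mentioned above amounts to running the blocking call on a helper thread and bounding the wait on its result; a rough, generic sketch (blockingZkCall is a simulated stand-in, not real ZkClient code):

```java
import java.util.concurrent.*;

public class TimeoutCall {
    // Stand-in for a ZkClient method that parks in waitUntilConnected()
    // and never returns while ZooKeeper is down.
    static String blockingZkCall() throws InterruptedException {
        Thread.sleep(5_000); // simulates the indefinite block
        return "controller";
    }

    // Run a blocking call on a helper thread and give up after timeoutMs.
    static String callWithTimeout(Callable<String> call, long timeoutMs) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<String> f = pool.submit(call);
            try {
                return f.get(timeoutMs, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                f.cancel(true); // interrupt the parked helper thread
                return null;    // caller decides how to continue shutdown
            }
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // Returns null after 100 ms instead of blocking for 5 s.
        System.out.println(callWithTimeout(TimeoutCall::blockingZkCall, 100)); // prints null
    }
}
```

As the thread notes, this is ugly: every blocking ZkUtils call site would need the same treatment, which is why patching ZkClient itself is the cleaner fix.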


On Mon, Jan 26, 2015 at 6:35 AM, Jaikiran Pai 
jai.forums2...@gmail.com
wrote:

  The main culprit is this thread, which retries forever connecting to a
closed ZooKeeper when I shut down Kafka (via Ctrl + C) after ZooKeeper has
already been shut down. I have attached the complete thread dump, but I don't
know if it will be delivered to the mailing list.

"Thread-2" prio=10 tid=0xb3305000 nid=0x4758 waiting on condition [0x6ad69000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    - parking to wait for <0x70a93368> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.parkUntil(LockSupport.java:267)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUntil(AbstractQueuedSynchronizer.java:2130)
    at org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:636)
    at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:619)
    at

[jira] [Created] (KAFKA-1915) Integrate checkstyle for java code

2015-02-02 Thread Jay Kreps (JIRA)
Jay Kreps created KAFKA-1915:


 Summary: Integrate checkstyle for java code
 Key: KAFKA-1915
 URL: https://issues.apache.org/jira/browse/KAFKA-1915
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps
Assignee: Jay Kreps
Priority: Minor
 Fix For: 0.8.3


There are a lot of little style and layering problems that tend to creep into 
our code, especially with external patches and lax reviewers.

These are the usual style suspects--capitalization, spacing, bracket placement, 
 etc.

My personal pet peeve is a lack of clear thinking about layers. These layering 
problems crept in quite fast, and sad to say a number of them were accidentally 
caused by me. This is things like o.a.k.common depending on o.a.k.clients or 
the consumer depending on the producer.

I have a patch that integrates checkstyle to catch these issues at build time, 
and which corrects the known problems. There are a fair number of very small 
changes in this patch, all trivial.

Checkstyle can be slightly annoying, not least because it has a couple of minor 
bugs around anonymous inner class formatting, but I find 98% of what it flags 
are real style issues, so it is mostly worth it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1729) add doc for Kafka-based offset management in 0.8.2

2015-02-02 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14302676#comment-14302676
 ] 

Joel Koshy commented on KAFKA-1729:
---

I committed the patch to trunk - but then realized an unfortunate mistake in 
the 0.8.2 patch above.

The OffsetFetchRequest also hard-codes versionId to 0, so unfortunately we won't 
be able to fetch offsets from the javaapi in 0.8.2.

I'll fix it in trunk but I will mark this as closed.

 add doc for Kafka-based offset management in 0.8.2
 --

 Key: KAFKA-1729
 URL: https://issues.apache.org/jira/browse/KAFKA-1729
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jun Rao
Assignee: Joel Koshy
 Fix For: 0.8.2

 Attachments: KAFKA-1729.patch, KAFKA-1782-doc-v1.patch, 
 KAFKA-1782-doc-v2.patch, KAFKA-1782-doc-v3.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1870) Cannot commit with simpleConsumer on Zookeeper only with Java API

2015-02-02 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14302679#comment-14302679
 ] 

Joel Koshy commented on KAFKA-1870:
---

[~junrao] (at least on trunk) I think the change to 
kafka.javaapi.OffsetFetchRequest should be reverted since people who are using 
that original constructor should be explicitly setting the versionId to 0. The 
issue is that right now you cannot fetch offsets from Kafka using javaapi.

 Cannot commit with simpleConsumer on Zookeeper only with Java API
 -

 Key: KAFKA-1870
 URL: https://issues.apache.org/jira/browse/KAFKA-1870
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.8.2
Reporter: Thomas Vandevelde
Assignee: Jun Rao
Priority: Blocker
 Fix For: 0.8.2

 Attachments: OffsetCommitRequest.diff, kafka-1870.patch, 
 kafka-1870_2015-01-16_12:08:05.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 From Kafka 0.8.2, we need to pass version 0 in the OffsetCommitRequest if we 
 want to commit only to ZooKeeper.
 However, the Java API doesn't allow passing the version as a parameter.
 Can you please add the version to the constructor of 
 kafka.javaapi.OffsetCommitRequest?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1756) never allow the replica fetch size to be less than the max message size

2015-02-02 Thread Tong Li (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14301373#comment-14301373
 ] 

Tong Li commented on KAFKA-1756:


I wonder if we really do need two parameters. Can't we just use one? If not, in 
the program we can always use the greater of the two. It seems we 
can solve that issue relatively easily.

 never allow the replica fetch size to be less than the max message size
 ---

 Key: KAFKA-1756
 URL: https://issues.apache.org/jira/browse/KAFKA-1756
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1, 0.8.2
Reporter: Joe Stein
Priority: Blocker
 Fix For: 0.8.3


 There exists a very hazardous scenario: if max.message.bytes is 
 greater than replica.fetch.max.bytes, the message will never replicate. 
 This will bring the ISR down to 1 (eventually/quickly once 
 replica.lag.max.messages is reached). If during this window the leader itself 
 goes out of the ISR then the new leader will commit the last offset it 
 replicated. This is also bad for sync producers with -1 ack because they will 
 all block (herd effect caused upstream) in this scenario too.
 The fix here is twofold:
 1) when setting max.message.bytes using kafka-topics we must first check each 
 and every broker (which will need some thought about how to do this because of 
 the topic command zk notification) that max.message.bytes <= 
 replica.fetch.max.bytes, and if it is NOT then DO NOT create the topic
 2) if you change this in server.properties then the broker should not start 
 if max.message.bytes > replica.fetch.max.bytes
 This does raise the question/issue about centralizing certain/some/all 
 configurations so that inconsistencies do not occur (where broker 1 has 
 max.message.bytes > replica.fetch.max.bytes but broker 2 has 
 max.message.bytes <= replica.fetch.max.bytes because of an error in 
 properties). I do not want to 
 conflate this ticket but I think it is worth mentioning/bringing up here as 
 it is a good example where it could make sense. 
 I set this as BLOCKER for 0.8.2-beta because we did so much work to enable 
 consistency vs availability and 0 data loss this corner case should be part 
 of 0.8.2-final
 Also, I could go one step further (though I would not consider this part as a 
 blocker for 0.8.2 but interested to what other folks think) about a consumer 
 replica fetch size so that if the message max is increased messages will no 
 longer be consumed (since the consumer fetch max would be < max.message.bytes).
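The startup check described in fix (2) above is essentially a single comparison at boot; a hedged sketch (class and method names here are illustrative, not actual Kafka code):

```java
public class ReplicaFetchSizeCheck {
    // Refuse to start when a broker's max.message.bytes exceeds
    // replica.fetch.max.bytes, since such messages could never replicate.
    // The parameter names mirror the server properties.
    static void validate(int maxMessageBytes, int replicaFetchMaxBytes) {
        if (maxMessageBytes > replicaFetchMaxBytes) {
            throw new IllegalArgumentException(
                "max.message.bytes (" + maxMessageBytes + ") must not exceed " +
                "replica.fetch.max.bytes (" + replicaFetchMaxBytes + "); " +
                "otherwise oversized messages can never be replicated");
        }
    }

    public static void main(String[] args) {
        validate(1_000_000, 1_048_576); // fine: fetch size covers message size
        try {
            validate(2_000_000, 1_048_576); // misconfigured broker
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```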



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30547: Integrate checkstyle.

2015-02-02 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30547/#review70712
---


Looks like a useful tool.


build.gradle
https://reviews.apache.org/r/30547/#comment116063

Is the consequence of violations going to be a warning, an error, or 
a report? Either way, is there a way to suppress it via an annotation or 
something like that? For example, in some cases unnecessary parentheses may 
actually make some code much easier to read. I'm guessing no, since checkstyle 
is completely outside of compilation.



checkstyle/checkstyle.xml
https://reviews.apache.org/r/30547/#comment116065

Speaking of which... we need an ASF license header on this xml file itself, 
no? Probably after the XML header.



checkstyle/checkstyle.xml
https://reviews.apache.org/r/30547/#comment116061

whitespace



checkstyle/import-control.xml
https://reviews.apache.org/r/30547/#comment116067

ASF header



checkstyle/import-control.xml
https://reviews.apache.org/r/30547/#comment116068

What does exact-match do? I glanced over the checkstyle docs but probably 
missed it.



checkstyle/import-control.xml
https://reviews.apache.org/r/30547/#comment116069

Is there a notion of a catch-all? Say if we were to add a new subpackage or 
package and need to add specific allow/disallow rules to it but forget to do 
that.
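For reference on the two questions above: checkstyle's import-control file is a tree of per-package rules, and, per the checkstyle documentation, exact-match restricts an allow/disallow to the named package itself rather than treating the name as a prefix that also covers subpackages. A sketch of the general shape (package names and rules here are illustrative, not the actual Kafka file):

```xml
<!DOCTYPE import-control PUBLIC
    "-//Puppy Crawl//DTD Import Control 1.1//EN"
    "http://www.puppycrawl.com/dtds/import_control_1_1.dtd">
<import-control pkg="org.apache.kafka">
  <!-- allowed everywhere under the root package -->
  <allow pkg="java"/>
  <subpackage name="common">
    <!-- preserve layering: common must not depend on clients -->
    <disallow pkg="org.apache.kafka.clients"/>
  </subpackage>
  <!-- exact-match="true": this rule applies to the named package only,
       not to its subpackages -->
  <allow pkg="org.apache.kafka.common.utils" exact-match="true"/>
</import-control>
```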


- Joel Koshy


On Feb. 3, 2015, 5:48 a.m., Jay Kreps wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/30547/
 ---
 
 (Updated Feb. 3, 2015, 5:48 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1915
 https://issues.apache.org/jira/browse/KAFKA-1915
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Add checkstyle.
 
 
 Diffs
 -
 
   build.gradle 68443725868c438176e68b04c0642f0e9fa29e23 
   checkstyle/checkstyle.xml PRE-CREATION 
   checkstyle/import-control.xml PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
 574287d77f7d46f49522601ece486721363a88e0 
   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
 5950191b240f3a212ffa71cc341ee927663f0e55 
   clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
 072cc2e6f92dbd35c123cf3bd0e2a36647185bb3 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
 6d4ff7cd2a2835e5bda8ccf1f2884d0fcdf04f39 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
 416d703c3f59adee6ba98a203bd5514b7c9d95e5 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 300c551f3d21a472012f41d3dc08998055654082 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
  d9483ecf6ae4a0b6b40fd0fecbb06bf61c1477c4 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
  7e57a39690d9b8ce0046c2636f3478f837a3fd53 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
  71ce20db955bd101efac7451b0a8216cf08826d6 
   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
 ebc4c5315fb9464ae58a34009358511b61a8432e 
   clients/src/main/java/org/apache/kafka/clients/producer/Producer.java 
 6b2471f878b7d5ad35b7992799a582b81d4ec275 
   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
 9a43d668376295f47cea0b8eb26b15a6c73bb39c 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/BufferPool.java
  8d4156d17e94945aa4029063213a6c0b3f193894 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java
  3aff6242d9d74208af6dcd764bdb153a60d5157a 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/ProduceRequestResult.java
  b70ece71fdbd1da541ea9c4f828d173fca92e8c6 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
  50889e4ce4b6c4f7f9aaddfc316b047883a6dc8b 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
 8726809f8ada69c32da5c086f553f5f6132b7b83 
   
 clients/src/main/java/org/apache/kafka/clients/tools/ProducerPerformance.java 
 689bae9e6ba69419c50bb4255177e0ef929115a1 
   clients/src/main/java/org/apache/kafka/common/Cluster.java 
 d7ccbcd91e6578176773b2ad1e13f6e06c64129a 
   clients/src/main/java/org/apache/kafka/common/MetricName.java 
 7e977e94a8e0b965a3ce92f8e84098a3bed6a6da 
   clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
 28562f9019e1a5a2aec037b40f77c84ed680a595 
   clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
 38ce10b31257311099d617ec0d4f7bccd758a18e 
   
 clients/src/main/java/org/apache/kafka/common/errors/NotEnoughReplicasAfterAppendException.java
  

Re: Review Request 30547: Integrate checkstyle.

2015-02-02 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30547/#review70713
---



build.gradle
https://reviews.apache.org/r/30547/#comment116071

Update README.md for ./gradlew check commands.

Also, it seems ./gradlew check also runs all the tests. Could we just 
style-check the test class files without executing the tests themselves?



clients/src/main/java/org/apache/kafka/common/metrics/JmxReporter.java
https://reviews.apache.org/r/30547/#comment116070

Wondering why LOCK needs to be all-capitalized while Log above does not 
need to be; what is the principle applied here?


- Guozhang Wang


On Feb. 3, 2015, 5:48 a.m., Jay Kreps wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/30547/
 ---
 
 (Updated Feb. 3, 2015, 5:48 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1915
 https://issues.apache.org/jira/browse/KAFKA-1915
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Add checkstyle.
 
 
 Diffs
 -
 
   build.gradle 68443725868c438176e68b04c0642f0e9fa29e23 
   checkstyle/checkstyle.xml PRE-CREATION 
   checkstyle/import-control.xml PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
 574287d77f7d46f49522601ece486721363a88e0 
   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
 5950191b240f3a212ffa71cc341ee927663f0e55 
   clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
 072cc2e6f92dbd35c123cf3bd0e2a36647185bb3 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
 6d4ff7cd2a2835e5bda8ccf1f2884d0fcdf04f39 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
 416d703c3f59adee6ba98a203bd5514b7c9d95e5 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 300c551f3d21a472012f41d3dc08998055654082 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
  d9483ecf6ae4a0b6b40fd0fecbb06bf61c1477c4 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
  7e57a39690d9b8ce0046c2636f3478f837a3fd53 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
  71ce20db955bd101efac7451b0a8216cf08826d6 
   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
 ebc4c5315fb9464ae58a34009358511b61a8432e 
   clients/src/main/java/org/apache/kafka/clients/producer/Producer.java 
 6b2471f878b7d5ad35b7992799a582b81d4ec275 
   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
 9a43d668376295f47cea0b8eb26b15a6c73bb39c 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/BufferPool.java
  8d4156d17e94945aa4029063213a6c0b3f193894 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java
  3aff6242d9d74208af6dcd764bdb153a60d5157a 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/ProduceRequestResult.java
  b70ece71fdbd1da541ea9c4f828d173fca92e8c6 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
  50889e4ce4b6c4f7f9aaddfc316b047883a6dc8b 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
 8726809f8ada69c32da5c086f553f5f6132b7b83 
   
 clients/src/main/java/org/apache/kafka/clients/tools/ProducerPerformance.java 
 689bae9e6ba69419c50bb4255177e0ef929115a1 
   clients/src/main/java/org/apache/kafka/common/Cluster.java 
 d7ccbcd91e6578176773b2ad1e13f6e06c64129a 
   clients/src/main/java/org/apache/kafka/common/MetricName.java 
 7e977e94a8e0b965a3ce92f8e84098a3bed6a6da 
   clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
 28562f9019e1a5a2aec037b40f77c84ed680a595 
   clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
 38ce10b31257311099d617ec0d4f7bccd758a18e 
   
 clients/src/main/java/org/apache/kafka/common/errors/NotEnoughReplicasAfterAppendException.java
  75c80a97e43089cb3f924a38f86d67b5a8dd2b89 
   
 clients/src/main/java/org/apache/kafka/common/errors/NotEnoughReplicasException.java
  486d5155bbb1fa70c2428700535e3d611f9e125b 
   
 clients/src/main/java/org/apache/kafka/common/message/KafkaLZ4BlockInputStream.java
  5be72fef1f976f9b0f61e663aa0a16d3abc1e232 
   
 clients/src/main/java/org/apache/kafka/common/message/KafkaLZ4BlockOutputStream.java
  e5b9e433e14ef95fa68934a7811ce8061faf67f7 
   clients/src/main/java/org/apache/kafka/common/metrics/JmxReporter.java 
 9c205387acc131151dc4fa0774a2cc2506992632 
   clients/src/main/java/org/apache/kafka/common/metrics/Sensor.java 
 e53cfaa69f518feabb00018d0cfe8004ff47c908 
   

[jira] [Resolved] (KAFKA-1871) Kafka Producer constructor hungs in case of wrong 'serializer.class' property

2015-02-02 Thread Igor Khomenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Khomenko resolved KAFKA-1871.
--
   Resolution: Invalid
Fix Version/s: 0.8.1.1

It throws that exception in my demo app, 
but for some reason not in my production app, so closed

 Kafka Producer constructor hungs in case of wrong 'serializer.class' property
 -

 Key: KAFKA-1871
 URL: https://issues.apache.org/jira/browse/KAFKA-1871
 Project: Kafka
  Issue Type: Bug
  Components: clients, producer 
Affects Versions: 0.8.1.1
 Environment: AWS Linux
Reporter: Igor Khomenko
Assignee: Jun Rao
 Fix For: 0.8.1.2, 0.8.1.1


 I have the following code:
 {code}
 Properties props = new Properties();
 props.put("metadata.broker.list",
           Services.getConfigInstance().getKafkaBrokerList());
 props.put("serializer.class",
           "main.java.com.services.kafka.MessageEntityToJsonSerializer");
 props.put("request.required.acks", "0");
 ProducerConfig config = new ProducerConfig(props);
 if (log.isLoggable(Level.INFO)) {
     log.log(Level.INFO, "Connecting to Kafka... props: " + props);
 }
 producer = new Producer<String, MessageEntity>(config);
 if (log.isLoggable(Level.INFO)) {
     log.log(Level.INFO, "Connected to Kafka");
 }
 {code}
 It just hangs on 'new Producer' in case of a wrong 'serializer.class' property
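A plain-JDK pre-flight check can turn this silent hang into an immediate, descriptive failure. The sketch below is illustrative only; `SerializerCheck` and `classExists` are hypothetical helper names, not part of the Kafka API. It simply verifies that the configured serializer class is resolvable on the classpath before the producer is constructed.

```java
// Illustrative fail-fast check; SerializerCheck/classExists are hypothetical
// helpers, not Kafka API.
public class SerializerCheck {

    // Returns true if the named class can be loaded from the current classpath.
    static boolean classExists(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // A misspelled 'serializer.class' value fails here, before any producer is built:
        System.out.println(classExists("java.lang.String"));                  // true
        System.out.println(classExists("com.example.kafka.MissingSerializer")); // false
    }
}
```

Running such a check on the 'serializer.class' value before calling `new Producer(...)` surfaces a typo as a clear error instead of an indefinite hang.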



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (KAFKA-1871) Kafka Producer constructor hungs in case of wrong 'serializer.class' property

2015-02-02 Thread Igor Khomenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Khomenko closed KAFKA-1871.


 Kafka Producer constructor hungs in case of wrong 'serializer.class' property
 -

 Key: KAFKA-1871
 URL: https://issues.apache.org/jira/browse/KAFKA-1871
 Project: Kafka
  Issue Type: Bug
  Components: clients, producer 
Affects Versions: 0.8.1.1
 Environment: AWS Linux
Reporter: Igor Khomenko
Assignee: Jun Rao
 Fix For: 0.8.1.1, 0.8.1.2


 I have the following code:
 {code}
 Properties props = new Properties();
 props.put("metadata.broker.list",
           Services.getConfigInstance().getKafkaBrokerList());
 props.put("serializer.class",
           "main.java.com.services.kafka.MessageEntityToJsonSerializer");
 props.put("request.required.acks", "0");
 ProducerConfig config = new ProducerConfig(props);
 if (log.isLoggable(Level.INFO)) {
     log.log(Level.INFO, "Connecting to Kafka... props: " + props);
 }
 producer = new Producer<String, MessageEntity>(config);
 if (log.isLoggable(Level.INFO)) {
     log.log(Level.INFO, "Connected to Kafka");
 }
 {code}
 It just hangs on 'new Producer' in case of a wrong 'serializer.class' property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1913) App hungs when calls producer.send to wrong IP of Kafka broker

2015-02-02 Thread Igor Khomenko (JIRA)
Igor Khomenko created KAFKA-1913:


 Summary: App hungs when calls producer.send to wrong IP of Kafka 
broker
 Key: KAFKA-1913
 URL: https://issues.apache.org/jira/browse/KAFKA-1913
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.8.1.1
 Environment: OS X 10.10.1, Java 7
Reporter: Igor Khomenko
Assignee: Jun Rao
 Fix For: 0.8.1.2


I have the following test code to check the Kafka functionality:

{code}
package com.company;

import kafka.common.FailedToSendMessageException;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

import java.util.Date;
import java.util.Properties;

public class Main {

    public static void main(String[] args) {

        Properties props = new Properties();
        props.put("metadata.broker.list", "192.168.9.3:9092");
        props.put("serializer.class", "com.company.KafkaMessageSerializer");
        props.put("request.required.acks", "1");

        ProducerConfig config = new ProducerConfig(props);

        // The first is the type of the partition key, the second the type of the message.
        Producer<String, String> messagesProducer = new Producer<String, String>(config);

        // Send
        String topicName = "my_messages";
        String message = "hello world";
        KeyedMessage<String, String> data = new KeyedMessage<String, String>(topicName, message);

        try {
            System.out.println(new Date() + ": sending...");

            messagesProducer.send(data);

            System.out.println(new Date() + " : sent");

        } catch (FailedToSendMessageException e) {
            System.out.println("e: " + e);
            e.printStackTrace();

        } catch (Exception exc) {
            System.out.println("e: " + exc);
            exc.printStackTrace();
        }
    }
}
{code}

{code}
package com.company;

import kafka.serializer.Encoder;
import kafka.utils.VerifiableProperties;

/**
 * Created by igorkhomenko on 2/2/15.
 */
public class KafkaMessageSerializer implements Encoder<String> {

    public KafkaMessageSerializer(VerifiableProperties verifiableProperties) {
        /* This constructor must be present for successful compile. */
    }

    @Override
    public byte[] toBytes(String entity) {
        byte[] serializedMessage = doCustomSerialization(entity);
        return serializedMessage;
    }

    private byte[] doCustomSerialization(String entity) {
        return entity.getBytes();
    }
}
{code}

Here is also the GitHub version: https://github.com/soulfly/Kafka-java-producer

So it just hangs on the next line:
{code}
messagesProducer.send(data)
{code}

When I replaced the broker list with
{code}
props.put("metadata.broker.list", "localhost:9092");
{code}

then I got an exception:
{code}
kafka.common.FailedToSendMessageException: Failed to send messages after 3 
tries.
{code}

so that case is okay.

Why does it hang with a wrong broker list? Any ideas?
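One way to distinguish "wrong IP that silently drops packets" from "reachable host with nothing listening" is a bounded TCP connect attempt before producing. This is a plain-JDK sketch under hypothetical names (`ConnectCheck`/`reachable` are not Kafka APIs); 192.0.2.1 is a reserved documentation address standing in for an unroutable broker.

```java
import java.net.InetSocketAddress;
import java.net.Socket;

// Hypothetical pre-flight connectivity probe; not part of the Kafka producer.
public class ConnectCheck {

    // Attempts a TCP connect with an explicit timeout instead of blocking indefinitely.
    static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Unroutable TEST-NET address: the bounded connect gives up within ~200 ms
        // rather than hanging the way the producer does.
        System.out.println(reachable("192.0.2.1", 9092, 200)); // false
    }
}
```

A check like this run against each entry of metadata.broker.list turns the hang into an actionable "broker unreachable" diagnostic.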





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 28769: Patch for KAFKA-1809

2015-02-02 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28769/
---

(Updated Feb. 2, 2015, 7:55 p.m.)


Review request for kafka.


Bugs: KAFKA-1809
https://issues.apache.org/jira/browse/KAFKA-1809


Repository: kafka


Description (updated)
---

KAFKA-1890 Fix bug preventing Mirror Maker from successful rebalance; reviewed 
by Gwen Shapira and Neha Narkhede


KAFKA-1896; Record size function should check if value is null; reviewed by 
Guozhang Wang


KAFKA-1109 Need to fix GC log configuration code, not able to override 
KAFKA_GC_LOG_OPTS; reviewed by Neha Narkhede


KAFKA-1883 Fix NullPointerException in RequestSendThread; reviewed by Neha 
Narkhede


KAFKA-1885 Upgrade junit dependency in core to 4.6 version to allow running 
individual test methods via gradle command line; reviewed by Neha Narkhede


KAFKA-1818 KAFKA-1818 clean up code to more idiomatic scala usage; reviewed by 
Neha Narkhede and Gwen Shapira


KAFKA-1902; fix MetricName so that Yammer reporter can work correctly; patched 
by Jun Rao; reviewed by Manikumar Reddy, Manikumar Reddy and Joel Koshy


KAFKA-1861; Publishing kafka-client:test in order to utilize the helper utils 
in TestUtils; patched by Manikumar Reddy; reviewed by Jun Rao


KAFKA-1760: New consumer.


KAFKA-1760 Follow-up: fix compilation issue with Scala 2.11


first commit of refactoring.


changed topicmetadata to include brokerendpoints and fixed few unit tests


fixing systest and support for binding to default address


fixed system tests


fix default address binding and ipv6 support


fix some issues regarding endpoint parsing. Also, larger segments for systest 
make the validation much faster


added link to security wiki in doc


fixing unit test after rename of ProtocolType to SecurityProtocol


Following Joe's advice, added security protocol enum on client side, and 
modified protocol to use ID instead of string.


validate producer config against enum


add a second protocol for testing and modify SocketServerTests to check on 
multi-ports


Reverted the metadata request changes and removed the explicit security 
protocol argument. Instead the socketserver will determine the protocol based 
on the port and add this to the request


bump version for UpdateMetadataRequest and added support for rolling upgrades 
with new config


following tests - fixed LeaderAndISR protocol and ZK registration for backward 
compatibility


cleaned up some changes that were actually not necessary. hopefully making this 
patch slightly easier to review


undoing some changes that don't belong here


bring back config lost in cleanup


fixes neccessary for an all non-plaintext cluster to work


minor modifications following comments by Jun


added missing license


formatting


clean up imports


cleaned up V2 to not include host+port field. Using use.new.protocol flag to 
decide which version to serialize


change endpoints collection in Broker to Map[protocol, endpoint], mostly to be 
clear that we intend to have one endpoint per protocol


validate that listeners and advertised listeners have unique ports and protocols


support legacy configs


some fixes following rebase


Reverted to backward compatible zk registration, changed use.new.protocol to 
support multiple versions and few minor fixes


fixing some issues after rebase


modified inter.broker.protocol config to start with security per feedback


Diffs (updated)
-

  README.md 35e06b1cc6373f24ea6d405cccd4aacf0f458e0d 
  bin/kafka-run-class.sh 22a9865b5939450a9d7f4ea2eee5eba2c1ec758c 
  build.gradle 1cbab29ce83e20dae0561b51eed6fdb86d522f28 
  clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
  clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
8aece7e81a804b177a6f2c12e2dc6c89c1613262 
  clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
ab7e3220f9b76b92ef981d695299656f041ad5ed 
  clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
397695568d3fd8e835d8f923a89b3b00c96d0ead 
  clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
  clients/src/main/java/org/apache/kafka/clients/NodeConnectionState.java 
752a979ea0b8bde7ff6d2e1a23bf54052305d841 
  clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
c0c636b3e1ba213033db6d23655032c9bbd5e378 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
57c1807ccba9f264186f83e91f37c34b959c8060 
  
clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java
 e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
  

[jira] [Updated] (KAFKA-1886) SimpleConsumer swallowing ClosedByInterruptException

2015-02-02 Thread Aditya A Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya A Auradkar updated KAFKA-1886:
-
Attachment: KAFKA-1886_2015-02-02_13:57:23.patch

 SimpleConsumer swallowing ClosedByInterruptException
 

 Key: KAFKA-1886
 URL: https://issues.apache.org/jira/browse/KAFKA-1886
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Reporter: Aditya A Auradkar
Assignee: Aditya Auradkar
 Attachments: KAFKA-1886.patch, KAFKA-1886.patch, 
 KAFKA-1886_2015-02-02_13:57:23.patch


 This issue was originally reported by a Samza developer. I've included an 
 exchange of mine with Chris Riccomini. I'm trying to reproduce the problem on 
 my dev setup.
 From: criccomi
 Hey all,
 Samza's BrokerProxy [1] threads appear to be wedging randomly when we try to 
 interrupt its fetcher thread. I noticed that SimpleConsumer.scala catches 
 Throwable in its sendRequest method [2]. I'm wondering: if 
 blockingChannel.send/receive throws a ClosedByInterruptException
 when the thread is interrupted, what happens? It looks like sendRequest will 
 catch the exception (which I
 think clears the thread's interrupted flag), and then retries the send. If 
 the send succeeds on the retry, I think that the ClosedByInterruptException 
 exception is effectively swallowed, and the BrokerProxy will continue
 fetching messages as though its thread was never interrupted.
 Am I misunderstanding how things work?
 Cheers,
 Chris
 [1] 
 https://github.com/apache/incubator-samza/blob/master/samza-kafka/src/main/scala/org/apache/samza/system/kafka/BrokerProxy.scala#L126
 [2] 
 https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/consumer/SimpleConsumer.scala#L75
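For the closely related InterruptedException case, the JDK clears the thread's interrupt status at the moment the exception is thrown, so a catch block that retries without restoring the status hides the interruption from callers, which is exactly the hazard described above. (ClosedByInterruptException itself is specified to leave the interrupt status set; the sketch below uses InterruptedException to show the cleared-flag behavior and the conventional fix, and is JDK-only, not Kafka or Samza code.)

```java
// JDK-only demonstration of interrupt-status swallowing; not Kafka or Samza code.
public class InterruptDemo {

    // Returns {flagAfterCatch, flagAfterRestore} as observed inside the worker thread.
    static boolean[] run() throws InterruptedException {
        final boolean[] flags = new boolean[2];
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(10_000); // stands in for a blocking send/receive
            } catch (InterruptedException e) {
                // The interrupt status was cleared when the exception was thrown:
                flags[0] = Thread.currentThread().isInterrupted();
                // Conventional fix: restore it so outer loops (e.g. a retry loop) can stop:
                Thread.currentThread().interrupt();
                flags[1] = Thread.currentThread().isInterrupted();
            }
        });
        worker.start();
        worker.interrupt();
        worker.join();
        return flags;
    }

    public static void main(String[] args) throws InterruptedException {
        boolean[] f = run();
        System.out.println("after catch: " + f[0] + ", after restore: " + f[1]);
        // prints: after catch: false, after restore: true
    }
}
```

A sendRequest-style `catch (Throwable t) { ...retry... }` that omits the restore step would retry with a clean flag, so the enclosing fetcher loop never observes the interruption.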



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30196: Patch for KAFKA-1886

2015-02-02 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30196/
---

(Updated Feb. 2, 2015, 9:57 p.m.)


Review request for kafka and Joel Koshy.


Bugs: KAFKA-1886
https://issues.apache.org/jira/browse/KAFKA-1886


Repository: kafka


Description (updated)
---

Fixing KAFKA-1886. SimpleConsumer should not swallow ClosedByInterruptException


Diffs (updated)
-

  core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
cbef84ac76e62768981f74e71d451f2bda995275 
  core/src/test/scala/unit/kafka/integration/PrimitiveApiTest.scala 
aeb7a19acaefabcc161c2ee6144a56d9a8999a81 

Diff: https://reviews.apache.org/r/30196/diff/


Testing
---

Added an integration test to PrimitiveAPITest.scala.


Thanks,

Aditya Auradkar



Re: Review Request 29831: Patch for KAFKA-1476

2015-02-02 Thread Neha Narkhede

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29831/#review70581
---


Onur, we should be able to check in after these review comments are addressed. 
Also, how would deleting offsets for a group work when the offset storage is 
Kafka? It's fine to not address it in this patch. Can you please create a JIRA 
to handle that?


core/src/main/scala/kafka/admin/AdminUtils.scala
https://reviews.apache.org/r/29831/#comment115788

This is confusing. I think we decided we will keep the definition of a 
valid delete consistent. In this case, we should just detect the inactive 
consumer groups for this topic and delete those. This check can be removed, I 
think.



core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala
https://reviews.apache.org/r/29831/#comment115798

lines too long



core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala
https://reviews.apache.org/r/29831/#comment115799

same here



core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala
https://reviews.apache.org/r/29831/#comment115797

Suggest you create a new local variable for the option docs. That way the 
lines won't be so long. Try to stick to within a width of 100.



core/src/main/scala/kafka/utils/ZkUtils.scala
https://reviews.apache.org/r/29831/#comment115789

Let's add consumerGroupOwners inside ZKGroupDirs so we don't have to 
hardcode zk paths like this.



core/src/main/scala/kafka/utils/ZkUtils.scala
https://reviews.apache.org/r/29831/#comment115790

same here



core/src/test/scala/unit/kafka/admin/DeleteConsumerGroupTest.scala
https://reviews.apache.org/r/29831/#comment115791

this test probably should have multiple topics and ensure that the API 
deletes only the topic provided?



core/src/test/scala/unit/kafka/admin/DeleteConsumerGroupTest.scala
https://reviews.apache.org/r/29831/#comment115792

Maybe include a topic unrelated to both consumer groups here as well?



core/src/test/scala/unit/kafka/admin/DeleteConsumerGroupTest.scala
https://reviews.apache.org/r/29831/#comment115793

I think these test names are very long. It is definitely useful to have 
test names that convey the purpose but since all tests here include deleting 
consumer groups, I suggest you take that out of all the test names. So 
DeleteConsumerGroupInfo can be dropped.



core/src/test/scala/unit/kafka/admin/DeleteConsumerGroupTest.scala
https://reviews.apache.org/r/29831/#comment115795

This test is odd. As I suggested earlier, valid delete should only have one 
rule - active consumer groups will not be deleted. It is alright to delete 
consumer group information for a topic if the topic exists.



core/src/test/scala/unit/kafka/admin/DeleteConsumerGroupTest.scala
https://reviews.apache.org/r/29831/#comment115796

verifyTopicDeletion is duplicated across 2 tests now. It is useful to 
refactor it into maybe TestUtils?


- Neha Narkhede


On Jan. 30, 2015, 7:10 p.m., Onur Karaman wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/29831/
 ---
 
 (Updated Jan. 30, 2015, 7:10 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1476
 https://issues.apache.org/jira/browse/KAFKA-1476
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Merged in work for KAFKA-1476 and sub-task KAFKA-1826
 
 
 Diffs
 -
 
   bin/kafka-consumer-groups.sh PRE-CREATION 
   core/src/main/scala/kafka/admin/AdminUtils.scala 
 28b12c7b89a56c113b665fbde1b95f873f8624a3 
   core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala PRE-CREATION 
   core/src/main/scala/kafka/utils/ZkUtils.scala 
 c14bd455b6642f5e6eb254670bef9f57ae41d6cb 
   core/src/test/scala/unit/kafka/admin/DeleteConsumerGroupTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/utils/TestUtils.scala 
 54755e8dd3f23ced313067566cd4ea867f8a496e 
 
 Diff: https://reviews.apache.org/r/29831/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Onur Karaman
 




Re: Cannot stop Kafka server if zookeeper is shutdown first

2015-02-02 Thread Gwen Shapira
I did!

Thanks for clarifying :)

The client that is part of Zookeeper itself actually does support timeouts.

On Mon, Feb 2, 2015 at 9:54 AM, Guozhang Wang wangg...@gmail.com wrote:
 Hi Jaikiran,

 I think Gwen was talking about contributing to ZkClient project:

 https://github.com/sgroschupf/zkclient

 Guozhang


 On Sun, Feb 1, 2015 at 5:30 AM, Jaikiran Pai jai.forums2...@gmail.com
 wrote:

 Hi Gwen,

 Yes, the KafkaZkClient is a wrapper around ZkClient and not a complete
 replacement.

 As for contributing to Zookeeper, yes, that is indeed on my mind, but I
 haven't yet had a chance to really look deeper into Zookeeper or get in
 touch with their dev team to try and explain this potential improvement to
 them. I have no objection to contributing this or something similar to
 Zookeeper directly. I think I should be able to bring this up in the
 Zookeeper dev forum, sometime soon in the next few weekends.

 -Jaikiran


 On Sunday 01 February 2015 11:40 AM, Gwen Shapira wrote:

 It looks like the new KafkaZkClient is a wrapper around ZkClient, but
 not a replacement. Did I get it right?

 I think a wrapper for ZkClient can be useful - for example KAFKA-1664
 can also use one.

 However, I'm wondering why not contribute the fix directly to ZKClient
 project and ask for a release that contains the fix?
 This will benefit other users of the project who may also need a
 timeout (that's pretty basic...)

 As an alternative, if we don't want to collaborate with ZKClient for
 some reason, forking the project into Kafka will probably give us more
 control than wrappers and without much downside.

 Just a thought.

 Gwen





 On Sat, Jan 31, 2015 at 6:32 AM, Jaikiran Pai jai.forums2...@gmail.com
 wrote:

 Neha, Ewen (and others), my initial attempt to solve this is uploaded
 here
 https://reviews.apache.org/r/30477/. It solves the shutdown problem and
 now
 the server shuts down even when Zookeeper has gone down before the Kafka
 server.

 I went with the approach of introducing a custom (enhanced) ZkClient
 which
 for now allows time outs to be optionally specified for certain
 operations.
 I intentionally haven't forced the use of this new KafkaZkClient all over
 the code and instead for now have just used it in the KafkaServer.

 Does this patch look like something worth using?

 -Jaikiran


 On Thursday 29 January 2015 10:41 PM, Neha Narkhede wrote:

 Ewen is right. ZkClient APIs are blocking and the right fix for this
 seems
 to be patching ZkClient. At some point, if we find ourselves fiddling
 too
 much with ZkClient, it wouldn't hurt to write our own little zookeeper
 client wrapper.

 On Thu, Jan 29, 2015 at 12:57 AM, Ewen Cheslack-Postava
 e...@confluent.io
 wrote:

  Looks like a bug to me -- the underlying ZK library wraps a lot of
 blocking
 method implementations with waitUntilConnected() calls without any
 timeouts. Ideally we could just add a version of
 ZkUtils.getController()
 with a timeout, but I don't see an easy way to accomplish that with
 ZkClient.

 There's at least one other call to ZkUtils besides the one in the
 stacktrace you gave that would cause the same issue, possibly more that
 aren't directly called in that method. One ugly solution would be to
 use
 an
 extra thread during shutdown to trigger timeouts, but I'd imagine we
 probably have other threads that could end up blocking in similar ways.

 I filed https://issues.apache.org/jira/browse/KAFKA-1907 to track the
 issue.
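A bound on any single blocking call can be imposed from the outside with a Future, which is roughly the "extra thread" workaround mentioned above. The names below (`TimedZkOps`, `withTimeout`) are mine for illustration; this is a sketch of the idea, not the actual KafkaZkClient patch.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Illustrative timeout wrapper for a blocking operation; hypothetical names throughout.
public class TimedZkOps {

    // Daemon thread so a still-blocked call cannot keep the JVM alive at shutdown.
    private static final ExecutorService POOL = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "timed-zk-op");
        t.setDaemon(true);
        return t;
    });

    // Runs op, but gives up (and interrupts it) after the given number of milliseconds.
    static <T> T withTimeout(Callable<T> op, long millis) throws Exception {
        Future<T> f = POOL.submit(op);
        try {
            return f.get(millis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            f.cancel(true);  // interrupt the stuck call instead of waiting forever
            return null;     // caller treats null as "unknown", e.g. controller not found
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulates getController() blocking forever against a dead ZooKeeper:
        String r = withTimeout(() -> { Thread.sleep(60_000); return "controller"; }, 100);
        System.out.println(r); // prints: null -- shutdown can now proceed
    }
}
```

The trade-off Ewen notes still applies: every blocking ZkUtils call site would need this wrapping, which is why patching ZkClient itself is the cleaner fix.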


 On Mon, Jan 26, 2015 at 6:35 AM, Jaikiran Pai 
 jai.forums2...@gmail.com
 wrote:

  The main culprit is this thread which goes into forever retry
 connection
 to a closed zookeeper when I shutdown Kafka (via a Ctrl + C) after
 zookeeper has already been shutdown. I have attached the complete
 thread
 dump, but I don't know if it will be delivered to the mailing list.

 Thread-2 prio=10 tid=0xb3305000 nid=0x4758 waiting on condition
 [0x6ad69000]
  java.lang.Thread.State: TIMED_WAITING (parking)
   at sun.misc.Unsafe.park(Native Method)
   - parking to wait for  0x70a93368 (a
 java.util.concurrent.locks.
 AbstractQueuedSynchronizer$ConditionObject)
   at java.util.concurrent.locks.LockSupport.parkUntil(
 LockSupport.java:267)
   at java.util.concurrent.locks.AbstractQueuedSynchronizer$
 ConditionObject.awaitUntil(AbstractQueuedSynchronizer.java:2130)
   at
 org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:636)
   at
 org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:619)
   at
 org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:615)
   at

 org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:679)

   at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:766)
   at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:761)
   at kafka.utils.ZkUtils$.readDataMaybeNull(ZkUtils.scala:456)
   at kafka.utils.ZkUtils$.getController(ZkUtils.scala:65)
   at kafka.server.KafkaServer.kafka$server$KafkaServer$$
 

Re: Cannot stop Kafka server if zookeeper is shutdown first

2015-02-02 Thread Guozhang Wang
Hi Jaikiran,

I think Gwen was talking about contributing to ZkClient project:

https://github.com/sgroschupf/zkclient

Guozhang


On Sun, Feb 1, 2015 at 5:30 AM, Jaikiran Pai jai.forums2...@gmail.com
wrote:

 Hi Gwen,

 Yes, the KafkaZkClient is a wrapper around ZkClient and not a complete
 replacement.

 As for contributing to Zookeeper, yes, that is indeed on my mind, but I
 haven't yet had a chance to really look deeper into Zookeeper or get in
 touch with their dev team to try and explain this potential improvement to
 them. I have no objection to contributing this or something similar to
 Zookeeper directly. I think I should be able to bring this up in the
 Zookeeper dev forum, sometime soon in the next few weekends.

 -Jaikiran


 On Sunday 01 February 2015 11:40 AM, Gwen Shapira wrote:

 It looks like the new KafkaZkClient is a wrapper around ZkClient, but
 not a replacement. Did I get it right?

 I think a wrapper for ZkClient can be useful - for example KAFKA-1664
 can also use one.

 However, I'm wondering why not contribute the fix directly to ZKClient
 project and ask for a release that contains the fix?
 This will benefit other users of the project who may also need a
 timeout (that's pretty basic...)

 As an alternative, if we don't want to collaborate with ZKClient for
 some reason, forking the project into Kafka will probably give us more
 control than wrappers and without much downside.

 Just a thought.

 Gwen





 On Sat, Jan 31, 2015 at 6:32 AM, Jaikiran Pai jai.forums2...@gmail.com
 wrote:

 Neha, Ewen (and others), my initial attempt to solve this is uploaded
 here
 https://reviews.apache.org/r/30477/. It solves the shutdown problem and
 now
 the server shuts down even when Zookeeper has gone down before the Kafka
 server.

 I went with the approach of introducing a custom (enhanced) ZkClient
 which
 for now allows time outs to be optionally specified for certain
 operations.
 I intentionally haven't forced the use of this new KafkaZkClient all over
 the code and instead for now have just used it in the KafkaServer.

 Does this patch look like something worth using?

 -Jaikiran


 On Thursday 29 January 2015 10:41 PM, Neha Narkhede wrote:

 Ewen is right. ZkClient APIs are blocking and the right fix for this
 seems
 to be patching ZkClient. At some point, if we find ourselves fiddling
 too
 much with ZkClient, it wouldn't hurt to write our own little zookeeper
 client wrapper.

 On Thu, Jan 29, 2015 at 12:57 AM, Ewen Cheslack-Postava
 e...@confluent.io
 wrote:

  Looks like a bug to me -- the underlying ZK library wraps a lot of
 blocking
 method implementations with waitUntilConnected() calls without any
 timeouts. Ideally we could just add a version of
 ZkUtils.getController()
 with a timeout, but I don't see an easy way to accomplish that with
 ZkClient.

 There's at least one other call to ZkUtils besides the one in the
 stacktrace you gave that would cause the same issue, possibly more that
 aren't directly called in that method. One ugly solution would be to
 use
 an
 extra thread during shutdown to trigger timeouts, but I'd imagine we
 probably have other threads that could end up blocking in similar ways.

 I filed https://issues.apache.org/jira/browse/KAFKA-1907 to track the
 issue.


 On Mon, Jan 26, 2015 at 6:35 AM, Jaikiran Pai 
 jai.forums2...@gmail.com
 wrote:

  The main culprit is this thread, which goes into a forever retry of the
 connection to a closed zookeeper when I shutdown Kafka (via a Ctrl + C)
 after zookeeper has already been shutdown. I have attached the complete
 thread dump, but I don't know if it will be delivered to the mailing list.

 Thread-2 prio=10 tid=0xb3305000 nid=0x4758 waiting on condition [0x6ad69000]
    java.lang.Thread.State: TIMED_WAITING (parking)
     at sun.misc.Unsafe.park(Native Method)
     - parking to wait for 0x70a93368 (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
     at java.util.concurrent.locks.LockSupport.parkUntil(LockSupport.java:267)
     at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUntil(AbstractQueuedSynchronizer.java:2130)
     at org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:636)
     at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:619)
     at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:615)
     at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:679)
     at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:766)
     at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:761)
     at kafka.utils.ZkUtils$.readDataMaybeNull(ZkUtils.scala:456)
     at kafka.utils.ZkUtils$.getController(ZkUtils.scala:65)
     at kafka.server.KafkaServer.kafka$server$KafkaServer$$controlledShutdown(KafkaServer.scala:194)
     at kafka.server.KafkaServer$$anonfun$shutdown$1.apply$mcV$sp(KafkaServer.scala:269)
     at kafka.utils.Utils$.swallow(Utils.scala:172)
     ...

Re: [kafka-clients] Re: [VOTE] 0.8.2.0 Candidate 3

2015-02-02 Thread Jay Kreps
Yay!

-Jay

On Mon, Feb 2, 2015 at 2:23 PM, Neha Narkhede n...@confluent.io wrote:

 Great! Thanks Jun for helping with the release and everyone involved for
 your contributions.

 On Mon, Feb 2, 2015 at 1:32 PM, Joe Stein joe.st...@stealth.ly wrote:

  Huzzah!
 
  Thanks Jun for preparing the release candidates and getting this out to
 the
  community.
 
  - Joe Stein
 
  On Mon, Feb 2, 2015 at 2:27 PM, Jun Rao j...@confluent.io wrote:
 
   The following are the results of the votes.
  
   +1 binding = 3 votes
    +1 non-binding = 1 vote
   -1 = 0 votes
   0 = 0 votes
  
   The vote passes.
  
    I will release artifacts to maven central, update the dist svn and
  download site. Will send out an announcement after that.
  
    Thanks to everyone who contributed to the work in 0.8.2.0!
  
   Jun
  
   On Wed, Jan 28, 2015 at 9:22 PM, Jun Rao j...@confluent.io wrote:
  
   This is the third candidate for release of Apache Kafka 0.8.2.0.
  
   Release Notes for the 0.8.2.0 release
  
  
 
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/RELEASE_NOTES.html
  
   *** Please download, test and vote by Saturday, Jan 31, 11:30pm PT
  
   Kafka's KEYS file containing PGP keys we use to sign the release:
   http://kafka.apache.org/KEYS in addition to the md5, sha1 and sha2
   (SHA256) checksum.
  
   * Release artifacts to be voted upon (source and binary):
   https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/
  
   * Maven artifacts to be voted upon prior to release:
   https://repository.apache.org/content/groups/staging/
  
   * scala-doc
   https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/scaladoc/
  
   * java-doc
   https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/javadoc/
  
   * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag
  
  
 
 https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=223ac42a7a2a0dab378cc411f4938a9cea1eb7ea
   (commit 7130da90a9ee9e6fb4beb2a2a6ab05c06c9bfac4)
  
   /***
  
   Thanks,
  
   Jun
  
  
  
 



 --
 Thanks,
 Neha



[jira] [Commented] (KAFKA-1915) Integrate checkstyle for java code

2015-02-02 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14302796#comment-14302796
 ] 

Jay Kreps commented on KAFKA-1915:
--

Created reviewboard https://reviews.apache.org/r/30547/diff/
 against branch trunk

 Integrate checkstyle for java code
 --

 Key: KAFKA-1915
 URL: https://issues.apache.org/jira/browse/KAFKA-1915
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps
Assignee: Jay Kreps
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-1915.patch


 There are a lot of little style and layering problems that tend to creep into 
 our code, especially with external patches and lax reviewers.
 These are the usual style suspects--capitalization, spacing, bracket 
 placement, etc.
 My personal pet peeve is a lack of clear thinking about layers. These 
 layering problems crept in quite fast, and sad to say a number of them were 
 accidentally caused by me. These are things like o.a.k.common depending on 
 o.a.k.clients or the consumer depending on the producer.
 I have a patch that integrates checkstyle to catch these issues at build 
 time, and which corrects the known problems. There are a fair number of very 
 small changes in this patch, all trivial.
 Checkstyle can be slightly annoying, not least because it has a couple of 
 minor bugs around anonymous inner class formatting, but I find it is 98% 
 real style issues, so mostly worth it.
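 For readers unfamiliar with how checkstyle enforces layering: its
 ImportControl check takes an XML rule file describing which packages may
 import which. A hypothetical sketch of rules matching the layering described
 above (an illustration only, not the actual checkstyle/import-control.xml
 from the patch):

```xml
<!DOCTYPE import-control PUBLIC
    "-//Puppy Crawl//DTD Import Control 1.1//EN"
    "http://www.puppycrawl.com/dtds/import_control_1_1.dtd">
<import-control pkg="org.apache.kafka">
  <allow pkg="java"/>
  <!-- o.a.k.common must not depend on the clients layer -->
  <subpackage name="common">
    <disallow pkg="org.apache.kafka.clients"/>
  </subpackage>
  <!-- the consumer must not depend on the producer -->
  <subpackage name="clients">
    <subpackage name="consumer">
      <disallow pkg="org.apache.kafka.clients.producer"/>
    </subpackage>
  </subpackage>
</import-control>
```

 With such a file wired into the build, an import that crosses a forbidden
 layer boundary fails the build instead of relying on reviewer vigilance.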



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 30547: Integrate checkstyle.

2015-02-02 Thread Jay Kreps

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30547/
---

Review request for kafka.


Bugs: KAFKA-1915
https://issues.apache.org/jira/browse/KAFKA-1915


Repository: kafka


Description
---

Add checkstyle.


Diffs
-

  build.gradle 68443725868c438176e68b04c0642f0e9fa29e23 
  checkstyle/checkstyle.xml PRE-CREATION 
  checkstyle/import-control.xml PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
574287d77f7d46f49522601ece486721363a88e0 
  clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
5950191b240f3a212ffa71cc341ee927663f0e55 
  clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
072cc2e6f92dbd35c123cf3bd0e2a36647185bb3 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
6d4ff7cd2a2835e5bda8ccf1f2884d0fcdf04f39 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
416d703c3f59adee6ba98a203bd5514b7c9d95e5 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
300c551f3d21a472012f41d3dc08998055654082 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
 d9483ecf6ae4a0b6b40fd0fecbb06bf61c1477c4 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
 7e57a39690d9b8ce0046c2636f3478f837a3fd53 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
 71ce20db955bd101efac7451b0a8216cf08826d6 
  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
ebc4c5315fb9464ae58a34009358511b61a8432e 
  clients/src/main/java/org/apache/kafka/clients/producer/Producer.java 
6b2471f878b7d5ad35b7992799a582b81d4ec275 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
9a43d668376295f47cea0b8eb26b15a6c73bb39c 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/BufferPool.java
 8d4156d17e94945aa4029063213a6c0b3f193894 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java 
3aff6242d9d74208af6dcd764bdb153a60d5157a 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/ProduceRequestResult.java
 b70ece71fdbd1da541ea9c4f828d173fca92e8c6 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
 50889e4ce4b6c4f7f9aaddfc316b047883a6dc8b 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
8726809f8ada69c32da5c086f553f5f6132b7b83 
  clients/src/main/java/org/apache/kafka/clients/tools/ProducerPerformance.java 
689bae9e6ba69419c50bb4255177e0ef929115a1 
  clients/src/main/java/org/apache/kafka/common/Cluster.java 
d7ccbcd91e6578176773b2ad1e13f6e06c64129a 
  clients/src/main/java/org/apache/kafka/common/MetricName.java 
7e977e94a8e0b965a3ce92f8e84098a3bed6a6da 
  clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
28562f9019e1a5a2aec037b40f77c84ed680a595 
  clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
38ce10b31257311099d617ec0d4f7bccd758a18e 
  
clients/src/main/java/org/apache/kafka/common/errors/NotEnoughReplicasAfterAppendException.java
 75c80a97e43089cb3f924a38f86d67b5a8dd2b89 
  
clients/src/main/java/org/apache/kafka/common/errors/NotEnoughReplicasException.java
 486d5155bbb1fa70c2428700535e3d611f9e125b 
  
clients/src/main/java/org/apache/kafka/common/message/KafkaLZ4BlockInputStream.java
 5be72fef1f976f9b0f61e663aa0a16d3abc1e232 
  
clients/src/main/java/org/apache/kafka/common/message/KafkaLZ4BlockOutputStream.java
 e5b9e433e14ef95fa68934a7811ce8061faf67f7 
  clients/src/main/java/org/apache/kafka/common/metrics/JmxReporter.java 
9c205387acc131151dc4fa0774a2cc2506992632 
  clients/src/main/java/org/apache/kafka/common/metrics/Sensor.java 
e53cfaa69f518feabb00018d0cfe8004ff47c908 
  clients/src/main/java/org/apache/kafka/common/metrics/stats/Rate.java 
a5838b3894906bde0b74515329f370b71d6ad593 
  clients/src/main/java/org/apache/kafka/common/network/NetworkReceive.java 
dcc639a4bb451ba8401af5186c4bf3047513374f 
  clients/src/main/java/org/apache/kafka/common/network/Selector.java 
e18a769a4b3004a0fd2a1d4d45968c9902c2c771 
  clients/src/main/java/org/apache/kafka/common/protocol/ApiKeys.java 
109fc965e09b2ed186a073351bd037ac8af20a4c 
  clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
7517b879866fc5dad5f8d8ad30636da8bbe7784a 
  clients/src/main/java/org/apache/kafka/common/protocol/types/Struct.java 
ee1f78f06c19a1e51f4a35e7aababb39944b7b04 
  
clients/src/main/java/org/apache/kafka/common/record/ByteBufferOutputStream.java
 c7bd2f8852bd9d18315420223aa2e7bbb0e0afad 
  clients/src/main/java/org/apache/kafka/common/record/Compressor.java 
d684e6833bd81c41e2c6c4cf5d1b749da9eb1e1c 
  
clients/src/main/java/org/apache/kafka/common/record/KafkaLZ4BlockInputStream.java

[jira] [Updated] (KAFKA-1915) Integrate checkstyle for java code

2015-02-02 Thread Jay Kreps (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Kreps updated KAFKA-1915:
-
Status: Patch Available  (was: Open)

 Integrate checkstyle for java code
 --

 Key: KAFKA-1915
 URL: https://issues.apache.org/jira/browse/KAFKA-1915
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps
Assignee: Jay Kreps
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-1915.patch





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1915) Integrate checkstyle for java code

2015-02-02 Thread Jay Kreps (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Kreps updated KAFKA-1915:
-
Attachment: KAFKA-1915.patch

 Integrate checkstyle for java code
 --

 Key: KAFKA-1915
 URL: https://issues.apache.org/jira/browse/KAFKA-1915
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps
Assignee: Jay Kreps
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-1915.patch





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1729) add doc for Kafka-based offset management in 0.8.2

2015-02-02 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14301736#comment-14301736
 ] 

Jun Rao commented on KAFKA-1729:


Joel,

Thanks for the doc. Looks good. Just a few minor comments.

"a higher setting (e.g., 100-200) is recommended." => "a higher setting (e.g., 
100-200) is recommended for production."
"The brokers periodically compact the offsets topic since it only needs to 
maintain the most recent offset commit." => "The brokers periodically compact the 
offsets topic since it only needs to maintain the most recent offset commit per 
partition."

Also, when migrating from ZooKeeper-based storage to Kafka-based storage, how 
would a user roll back? Should they set offset.storage to zookeeper and keep 
dual.commit set to true?

Could you commit your changes today?
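To make the rollback question concrete: assuming the 0.8.2 consumer property 
names are offsets.storage and dual.commit.enabled (an assumption worth 
verifying against the final doc), the rollback configuration being suggested 
would look like:

```properties
# Roll back from Kafka-based offset storage to ZooKeeper-based storage.
# Keeping dual commit enabled writes offsets to both stores during the
# transition, so no consumer loses its position either way.
offsets.storage=zookeeper
dual.commit.enabled=true
```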

 add doc for Kafka-based offset management in 0.8.2
 --

 Key: KAFKA-1729
 URL: https://issues.apache.org/jira/browse/KAFKA-1729
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jun Rao
Assignee: Joel Koshy
 Fix For: 0.8.2

 Attachments: KAFKA-1729.patch, KAFKA-1782-doc-v1.patch, 
 KAFKA-1782-doc-v2.patch, KAFKA-1782-doc-v3.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] 0.8.2.0 Candidate 3

2015-02-02 Thread Jun Rao
The following are the results of the votes.

+1 binding = 3 votes
+1 non-binding = 1 vote
-1 = 0 votes
0 = 0 votes

The vote passes.

I will release artifacts to maven central, update the dist svn and download
site. Will send out an announcement after that.

Thanks to everyone who contributed to the work in 0.8.2.0!

Jun

On Wed, Jan 28, 2015 at 9:22 PM, Jun Rao j...@confluent.io wrote:

 This is the third candidate for release of Apache Kafka 0.8.2.0.

 Release Notes for the 0.8.2.0 release

 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/RELEASE_NOTES.html

 *** Please download, test and vote by Saturday, Jan 31, 11:30pm PT

 Kafka's KEYS file containing PGP keys we use to sign the release:
 http://kafka.apache.org/KEYS in addition to the md5, sha1 and sha2
 (SHA256) checksum.

 * Release artifacts to be voted upon (source and binary):
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/

 * Maven artifacts to be voted upon prior to release:
 https://repository.apache.org/content/groups/staging/

 * scala-doc
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/scaladoc/

 * java-doc
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/javadoc/

 * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag

 https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=223ac42a7a2a0dab378cc411f4938a9cea1eb7ea
 (commit 7130da90a9ee9e6fb4beb2a2a6ab05c06c9bfac4)

 /***

 Thanks,

 Jun