[jira] [Commented] (KAFKA-2016) RollingBounceTest takes long

2015-04-02 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392767#comment-14392767
 ] 

Ted Malaska commented on KAFKA-2016:


Thanks Sriharsha and Gwen,

I'm new to Kafka, so this has been very helpful.  I work on the services side, 
and I've found that contributing patches is a great way to learn more about 
the platform.

So my question now is: what do I need to do next to get this jira committed?  
I'm still new to the Kafka process.

Thanks Again
Ted Malaska

> RollingBounceTest takes long
> 
>
> Key: KAFKA-2016
> URL: https://issues.apache.org/jira/browse/KAFKA-2016
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.1
>Reporter: Jun Rao
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-2016-1.patch, KAFKA-2016-2.patch
>
>
> RollingBounceTest.testRollingBounce() currently takes about 48 secs. This is 
> a bit too long.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1961) Looks like its possible to delete _consumer_offsets topic

2015-04-02 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392769#comment-14392769
 ] 

Ted Malaska commented on KAFKA-1961:


Hey there,

This jira and KAFKA-2016 have been hanging around for some time.  Is there 
anything I can do to get them committed, or are there changes I need to make?  
Let me know.

It would mean a lot to me to land my first Kafka patch.  Thanks

Ted Malaska

> Looks like its possible to delete _consumer_offsets topic
> -
>
> Key: KAFKA-1961
> URL: https://issues.apache.org/jira/browse/KAFKA-1961
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>  Labels: newbie
> Attachments: KAFKA-1961.3.patch, KAFKA-1961.4.patch
>
>
> Noticed that kafka-topics.sh --delete can successfully delete internal topics 
> (__consumer_offsets).
> I'm pretty sure we want to prevent that, to avoid users shooting themselves 
> in the foot.
> Topic admin command should check for internal topics, just like 
> ReplicaManager does and not let users delete them.
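
The check proposed above can be sketched as follows. This is an illustrative
Python sketch only (the real fix belongs in the Scala TopicCommand, mirroring
ReplicaManager); the function and set names are hypothetical:

```python
# Hypothetical sketch of the proposed guard: refuse to delete
# Kafka-internal topics from the topic admin command.
INTERNAL_TOPICS = {"__consumer_offsets"}

def delete_topic(topic, existing_topics):
    """Remove `topic` from `existing_topics`, refusing internal topics."""
    if topic in INTERNAL_TOPICS:
        raise ValueError("%s is a Kafka internal topic and may not be deleted" % topic)
    if topic not in existing_topics:
        raise ValueError("topic %s does not exist" % topic)
    existing_topics.remove(topic)
    return existing_topics
```

With this guard, `kafka-topics.sh --delete` on `__consumer_offsets` would fail
fast instead of silently removing the offsets topic.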





[jira] [Commented] (KAFKA-2039) Update Scala to 2.10.5 and 2.11.6

2015-04-02 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392969#comment-14392969
 ] 

Ismael Juma commented on KAFKA-2039:


Thank you!

> Update Scala to 2.10.5 and 2.11.6
> -
>
> Key: KAFKA-2039
> URL: https://issues.apache.org/jira/browse/KAFKA-2039
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.1
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Minor
> Fix For: 0.8.3
>
> Attachments: kafka-2039-v2.patch, kafka-2039.patch
>
>
> Scala 2.10.5 (the last release of the 2.10.x series) is binary compatible 
> with 2.10.4  and Scala 2.11.6 is binary compatible with 2.11.5. For details 
> of the changes in each release, see:
> * http://www.scala-lang.org/news/2.10.5
> * http://www.scala-lang.org/news/2.11.6





[jira] [Created] (KAFKA-2087) TopicConfigManager javadoc references incorrect paths

2015-04-02 Thread Aditya Auradkar (JIRA)
Aditya Auradkar created KAFKA-2087:
--

 Summary: TopicConfigManager javadoc references incorrect paths
 Key: KAFKA-2087
 URL: https://issues.apache.org/jira/browse/KAFKA-2087
 Project: Kafka
  Issue Type: Bug
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
Priority: Minor


The TopicConfigManager docs refer to znodes in 
/brokers/topics//config which is incorrect.

Fix javadoc





Review Request 32778: Patch for KAFKA-2087

2015-04-02 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/32778/
---

Review request for kafka.


Bugs: KAFKA-2087
https://issues.apache.org/jira/browse/KAFKA-2087


Repository: kafka


Description
---

PATCH for KAFKA-1546

Brief summary of changes:
- Added a lagBegin metric inside Replica to track the lag in terms of time 
since the replica last read from the LEO
- Using the lag begin value in the check for ISR expansion and shrinkage
- Removed the max lag messages config since it is no longer necessary
- Returning the initialLogEndOffset in LogReadResult corresponding to the 
LEO before actually reading from the log
- Unit test cases to test ISR shrinkage and expansion
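
The time-based lag check summarized above can be sketched roughly like this.
This is a hedged Python illustration of the idea, not the patch's Scala code;
the names (`lag_begin_ms`, `REPLICA_LAG_TIME_MAX_MS`) are assumptions:

```python
# Sketch of a time-based ISR check: a replica records when it stopped
# reading up to the leader's log end offset (LEO), and is considered
# out of sync once that lag has persisted too long.
REPLICA_LAG_TIME_MAX_MS = 10_000  # illustrative default

class ReplicaLagTracker:
    def __init__(self):
        self.lag_begin_ms = None  # None means caught up to the LEO

    def on_fetch(self, fetch_offset, leader_end_offset, now_ms):
        if fetch_offset >= leader_end_offset:
            self.lag_begin_ms = None       # caught up: reset the clock
        elif self.lag_begin_ms is None:
            self.lag_begin_ms = now_ms     # start timing the lag

    def out_of_sync(self, now_ms):
        return (self.lag_begin_ms is not None and
                now_ms - self.lag_begin_ms > REPLICA_LAG_TIME_MAX_MS)
```

This replaces a max-lag-in-messages threshold with a single time-based one,
which is why the messages config becomes unnecessary.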

Updated KAFKA-1546 patch to reflect Neha and Jun's comments


Addressing Joel's comments


Addressing Jun and Guozhang's comments


Addressing Jun's comments


Cleaning up javadoc for TopicConfigManager


Diffs
-

  core/src/main/scala/kafka/cluster/Partition.scala 
c4bf48a801007ebe7497077d2018d6dffe1677d4 
  core/src/main/scala/kafka/cluster/Replica.scala 
bd13c20338ce3d73113224440e858a12814e5adb 
  core/src/main/scala/kafka/log/Log.scala 
06b8ecc5d11a1acfbaf3c693c42bf3ce5b2cd86d 
  core/src/main/scala/kafka/server/FetchDataInfo.scala 
26f278f9b75b1c9c83a720ca9ebd8ab812d19d39 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
422451aec5ea0442eb2e4c1ae772885b813904a9 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
44f0026e6676d8d764dd59dbcc6bb7bb727a3ba6 
  core/src/main/scala/kafka/server/TopicConfigManager.scala 
47295d40131492aaac786273819b7bc6e22e5486 
  core/src/test/scala/unit/kafka/server/ISRExpirationTest.scala 
92152358c95fa9178d71bd1c079af0a0bd8f1da8 
  core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
191251d1340b5e5b2d649b37af3c6c1896d07e6e 
  core/src/test/scala/unit/kafka/server/LogRecoveryTest.scala 
92d6b2c672f74cdd526f2e98da8f7fb3696a88e3 
  core/src/test/scala/unit/kafka/server/SimpleFetchTest.scala 
efb457334bd87e4c5b0dd2c66ae1995993cd0bc1 

Diff: https://reviews.apache.org/r/32778/diff/


Testing
---


Thanks,

Aditya Auradkar



[jira] [Commented] (KAFKA-2087) TopicConfigManager javadoc references incorrect paths

2015-04-02 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392983#comment-14392983
 ] 

Aditya A Auradkar commented on KAFKA-2087:
--

Created reviewboard https://reviews.apache.org/r/32778/diff/
 against branch origin/trunk

> TopicConfigManager javadoc references incorrect paths
> -
>
> Key: KAFKA-2087
> URL: https://issues.apache.org/jira/browse/KAFKA-2087
> Project: Kafka
>  Issue Type: Bug
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Minor
> Attachments: KAFKA-2087.patch
>
>
> The TopicConfigManager docs refer to znodes in 
> /brokers/topics//config which is incorrect.
> Fix javadoc





[jira] [Updated] (KAFKA-2087) TopicConfigManager javadoc references incorrect paths

2015-04-02 Thread Aditya A Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya A Auradkar updated KAFKA-2087:
-
Attachment: KAFKA-2087.patch

> TopicConfigManager javadoc references incorrect paths
> -
>
> Key: KAFKA-2087
> URL: https://issues.apache.org/jira/browse/KAFKA-2087
> Project: Kafka
>  Issue Type: Bug
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Minor
> Attachments: KAFKA-2087.patch
>
>
> The TopicConfigManager docs refer to znodes in 
> /brokers/topics//config which is incorrect.
> Fix javadoc





Re: Review Request 30809: Patch for KAFKA-1888

2015-04-02 Thread Abhishek Nigam


> On March 31, 2015, 9:20 p.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/tools/ContinuousValidationTest.java, line 1
> > 
> >
> > This should definitely not be in tools - this should probably live 
> > somewhere under clients/test. I don't think those are currently exported 
> > though, so we will need to modify build.gradle. However, per other comments 
> > below I'm not sure this should be part of system tests since it is (by 
> > definition long running).

Will do.


> On March 31, 2015, 9:20 p.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/tools/ContinuousValidationTest.java, line 49
> > 
> >
> > It would help a lot if you could add comments describing what 
> > validation is done. For e.g., I'm unclear on why we need the complicated 
> > file-based signaling mechanism. So a high-level description would help a 
> > lot.
> > 
> > More importantly, I really think we should separate "continuous 
> > validation" from "broker upgrade" which is the focus of KAFKA-1888
> > 
> > In order to do a broker upgrade test, we don't need any additional 
> > code. We just instantiate the producer performance and consumer via system 
> > test utils. Keep those on the old jar. The cluster will start with the old 
> > jar as well and during the test we bounce in the latest jar (the system 
> > test utils will need to be updated to support this). We then do the 
> > standard system test validation - that all messages sent were received.

I wanted to have two (topic, partition) tuples with the leader on each broker. 
I decided to use a single topic with multiple partitions rather than two 
topics, which could also have worked. The reason for picking the first 
approach is that I may want to leverage the continuous validation test outside 
of the system test framework, in a test cluster with other topics. To 
illustrate why the second approach won't work in that scenario: if we have 3 
brokers and I create 3 single-partition topics (T1P1, T2P1, T3P1), then the 
following would be a valid assignment under the existing broker assignment 
algorithm.

B1     B2     B3
T1P1   TXP1   TXP2
T2P1   TYP1   TYP2
T3P1

where TX and TY are other production topics running in that cluster. In this 
case all the leaders have landed on the same broker, whereas the first 
approach precludes this possibility.


The file signalling works around the fact that the most commonly used client 
does not have the capability to consume from a particular partition. The way 
I have set it up, the file signalling acts as a barrier: we make sure all the 
producer/consumer pairs have been instantiated, the hope being that they have 
talked to zookeeper and reserved their partition. Once both consumers have 
been instantiated, and each is expected to have bound itself to a particular 
partition, we can let the producers run in both instances. This way we are 
assured that a consumer never receives data from the other pair's producer.


> On March 31, 2015, 9:20 p.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/tools/ContinuousValidationTest.java, line 52
> > 
> >
> > This appears to be for rate-limiting the producer but can be more 
> > general than that.
> > 
> > It would help to add a comment describing its purpose.
> > 
> > Also, should probably be private

This is a poor man's rate limiter compared to Guava's RateLimiter. I will 
make it private.
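
A "poor man's" rate limiter of the kind mentioned above might look like this.
This is a generic sketch of the technique, not the patch's code; the class
name and injectable clock/sleep hooks are assumptions made for testability:

```python
import time

# Minimal sleep-based rate limiter: before each permit, sleep just long
# enough to keep the average rate at or below the target.
class SimpleRateLimiter:
    def __init__(self, permits_per_sec, clock=time.monotonic, sleep=time.sleep):
        self._interval = 1.0 / permits_per_sec
        self._clock = clock
        self._sleep = sleep
        self._next_free = clock()  # earliest time the next permit is free

    def acquire(self):
        now = self._clock()
        if now < self._next_free:
            self._sleep(self._next_free - now)  # block until a permit frees up
        self._next_free = max(now, self._next_free) + self._interval
```

Unlike Guava's RateLimiter it has no burst smoothing or warm-up, which is
presumably why "poor man's" is the right description.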


> On March 31, 2015, 9:20 p.m., Joel Koshy wrote:
> > system_test/broker_upgrade/bin/test-broker-upgrade.sh, line 1
> > 
> >
> > This appears to be a one-off script to set up the test. This needs to 
> > be done within the system test framework which already has a number of 
> > utilities that do similar things.
> > 
> > One other comment is that the patch is for an upgrade test, but I think 
> > it is a bit confusing to mix this with CVT.

The continuous validation test will be useful outside of the system test 
framework. This was an attempt to leverage CVT in the system test setting.

Since strong objections have been raised against adopting this approach, I 
will leave a comment on this patch accordingly.


- Abhishek


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30809/#review78270
---


On March 23, 2015, 6:54 p.m., Abhishek Nigam wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30809/
> -

Review Request 32781: Patch for KAFKA-2087

2015-04-02 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/32781/
---

Review request for kafka.


Bugs: KAFKA-2087
https://issues.apache.org/jira/browse/KAFKA-2087


Repository: kafka


Description
---

Fixing KAFKA-2087


Diffs
-

  core/src/main/scala/kafka/server/TopicConfigManager.scala 47295d4 

Diff: https://reviews.apache.org/r/32781/diff/


Testing
---


Thanks,

Aditya Auradkar



Re: Review Request 32781: Patch for KAFKA-2087

2015-04-02 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/32781/#review78683
---

Ship it!


Ship It!

- Jiangjie Qin


On April 2, 2015, 5:29 p.m., Aditya Auradkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/32781/
> ---
> 
> (Updated April 2, 2015, 5:29 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2087
> https://issues.apache.org/jira/browse/KAFKA-2087
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Fixing KAFKA-2087
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/TopicConfigManager.scala 47295d4 
> 
> Diff: https://reviews.apache.org/r/32781/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Aditya Auradkar
> 
>



[jira] [Commented] (KAFKA-2078) Getting Selector [WARN] Error in I/O with host java.io.EOFException

2015-04-02 Thread Aravind (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393025#comment-14393025
 ] 

Aravind commented on KAFKA-2078:


And while this is happening, I can send more messages from the client and 
they are produced successfully.

> Getting Selector [WARN] Error in I/O with host java.io.EOFException
> ---
>
> Key: KAFKA-2078
> URL: https://issues.apache.org/jira/browse/KAFKA-2078
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2.0
> Environment: OS Version: 2.6.39-400.209.1.el5uek and Hardware: 8 x 
> Intel(R) Xeon(R) CPU X5660  @ 2.80GHz/44GB
>Reporter: Aravind
>Assignee: Jun Rao
>
> When trying to produce 1000 (10 MB) messages, I get the error below 
> somewhere between the 997th and 1000th message. There is no pattern, but it 
> is reproducible.
> [PDT] 2015-03-31 13:53:50 Selector [WARN] Error in I/O with "our host" 
> java.io.EOFException at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62)
>  at org.apache.kafka.common.network.Selector.poll(Selector.java:248) at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) at 
> java.lang.Thread.run(Thread.java:724)
> I sometimes get this error at the 997th or 999th message. There is no 
> pattern, but it is reproducible.





Re: Review Request 30809: Patch for KAFKA-1888

2015-04-02 Thread Abhishek Nigam


> On April 2, 2015, 1:38 a.m., Jun Rao wrote:
> > core/src/main/scala/kafka/tools/ContinuousValidationTest.java, lines 431-437
> > 
> >
> > Could we add a description of the test (what kind of data is generated, 
> > how does the consumer do the verification, what kind of output is 
> > generated, etc.)?

The data which is generated is very simple: an increasing sequence of longs 
with timestamps. The producer keeps track of the newest sequence number and 
timestamp it has sent. The consumer keeps track of the last sequence number 
and timestamp it has received. The system test will interrupt the CVT and 
compare the sequence numbers between the producer and the consumer. If they 
do not line up, it is an error. (If either the producer or consumer thread 
terminates unexpectedly before being interrupted, that is also flagged as an 
error.) If the test fails, the data logs from the producer and consumer are 
not removed and can be inspected.
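
The final comparison described above amounts to something like the following.
A hedged Python sketch of the check's shape only; the function name and
argument list are invented for illustration:

```python
# Sketch of the end-of-run validation: compare the producer's newest
# sequence number against the consumer's last-seen one, and flag any
# thread that died before being interrupted.
def validate_run(produced_last_seq, consumed_last_seq,
                 producer_alive, consumer_alive):
    """Return a list of error strings; an empty list means the run passed."""
    errors = []
    if not producer_alive:
        errors.append("producer thread terminated unexpectedly")
    if not consumer_alive:
        errors.append("consumer thread terminated unexpectedly")
    if produced_last_seq != consumed_last_seq:
        errors.append("sequence mismatch: produced %d, consumed %d"
                      % (produced_last_seq, consumed_last_seq))
    return errors
```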

The idea behind putting the consumer and producer in the same JVM was 
orthogonal to system test and was in case it is used in a test cluster hosting 
other topics it makes easy to get hands on some things like delta etc. However, 
I think there is very strong objection to adopting this for system tests which 
are short-lived in nature. Unless there is support for the approach I have 
taken so far I plan to revert to the existing approach of spawning multiple 
JVMs for producer and consumer.

I will change the bash script to be in python similar to what other system 
tests do.


> On April 2, 2015, 1:38 a.m., Jun Rao wrote:
> > core/src/main/scala/kafka/tools/ContinuousValidationTest.java, lines 440-454
> > 
> >
> > Could we add a description of each command line option?

I need to add more documentation. I will add this in.


- Abhishek


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30809/#review78630
---


On March 23, 2015, 6:54 p.m., Abhishek Nigam wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30809/
> ---
> 
> (Updated March 23, 2015, 6:54 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1888
> https://issues.apache.org/jira/browse/KAFKA-1888
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Updated the RB with Gwen's comments, Beckett's comments and a subset of 
> Guozhang's comments
> 
> 
> Diffs
> -
> 
>   bin/kafka-run-class.sh 881f578a8f5c796fe23415b978c1ad35869af76e 
>   core/src/main/scala/kafka/tools/ContinuousValidationTest.java PRE-CREATION 
>   core/src/main/scala/kafka/utils/ShutdownableThread.scala 
> fc226c863095b7761290292cd8755cd7ad0f155c 
>   system_test/broker_upgrade/bin/test-broker-upgrade.sh PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/30809/diff/
> 
> 
> Testing
> ---
> 
> Scripted it to run 20 times without any failures.
> Command-line: broker-upgrade/bin/test.sh  
> 
> 
> Thanks,
> 
> Abhishek Nigam
> 
>



Kafka meetup at ApacheCon in Apr

2015-04-02 Thread Jun Rao
Hi,

A few of us will be at ApacheCon in April. We are organizing a Kafka meetup
Tuesday evening (
http://events.linuxfoundation.org/events/apachecon-north-america/extend-the-experience/meetups).
Every attendee of ApacheCon is welcome to join the meetup. We also plan to
have a few short presentations at the meetup. If you have something that
you'd like to share, please let me know.

Thanks,

Jun


Re: Review Request 28769: Patch for KAFKA-1809

2015-04-02 Thread Gwen Shapira


> On March 29, 2015, 7:33 p.m., Jun Rao wrote:
> > clients/src/main/java/org/apache/kafka/common/protocol/ApiVersion.java, 
> > line 28
> > 
> >
> > Since this is for intra-broker communication, should we move this class 
> > to core?
> 
> Gwen Shapira wrote:
> Sure. At first I thought we'll need it when we move all 
> requests/responses to o.a.k.common, but on second look, it doesn't seem like 
> the requests/responses themselves need it.

This was a bigger change than I expected, since Scala's enums are very limited 
compared to Java's.


> On March 29, 2015, 7:33 p.m., Jun Rao wrote:
> > core/src/test/scala/unit/kafka/cluster/BrokerTest.scala, line 115
> > 
> >
> > Is toString() customized on EndPoint or do you intend to use 
> > connectionString()? Also, if host is null, should we standardize the 
> > connection string to be PLAINTEXT://:9092?

Ah, yes, this is confusing - EndPoints have toString() and BrokerEndPoints have 
connectionString(). For BrokerEndPoint, connectionString() makes more sense; a 
"natural" toString() would contain the brokerId too. For consistency, I'd swap 
EndPoint to use connectionString() as well.

Regarding "null", I agree that it's better to be consistent and not print "null" 
anywhere. BrokerEndPoint should never have a null host anyway (since it 
represents an actual network connection, not a listener), but we need to handle 
this in EndPoint.
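
The convention under discussion (render a missing host as an empty field
rather than "null") can be shown in one line. A language-neutral sketch in
Python of the intent of connectionString(), not the Scala implementation:

```python
# Render an endpoint as protocol://host:port, with a missing host
# collapsing to an empty field (e.g. PLAINTEXT://:9092), never "null".
def connection_string(protocol, host, port):
    return "%s://%s:%d" % (protocol, host or "", port)
```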


> On March 29, 2015, 7:33 p.m., Jun Rao wrote:
> > core/src/main/scala/kafka/server/KafkaConfig.scala, lines 822-829
> > 
> >
> > Do we default to PLAINTEXT://null:6667 or PLAINTEXT://:9092? Do we have 
> > to explicitly deal with hostName being null?

Fixed the doc. We don't need to deal with null here - host.name defaults to "", 
which is the right thing.
We actually test for this (KafkaConfigTest.scala line 163)


> On March 29, 2015, 7:33 p.m., Jun Rao wrote:
> > core/src/main/scala/kafka/server/KafkaConfig.scala, line 840
> > 
> >
> > Similar comment as the above: do we need to handle a null 
> > advertisedHostName explicitly?

It will default to hostname.


- Gwen


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28769/#review78160
---


On March 27, 2015, 10:04 p.m., Gwen Shapira wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28769/
> ---
> 
> (Updated March 27, 2015, 10:04 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1809
> https://issues.apache.org/jira/browse/KAFKA-1809
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> forgot rest of patch
> 
> 
> merge with trunk
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> fa9daaef66ff7961e1c46cd0cd8fed18a53bccd8 
>   clients/src/main/java/org/apache/kafka/common/protocol/ApiVersion.java 
> PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
> 920b51a6c3c99639fbc9dc0656373c19fabd 
>   clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java 
> c899813d55b9c4786adde3d840f040d6645d27c8 
>   config/server.properties 1614260b71a658b405bb24157c8f12b1f1031aa5 
>   core/src/main/scala/kafka/admin/AdminUtils.scala 
> b700110f2d7f1ede235af55d8e37e1b5592c6c7d 
>   core/src/main/scala/kafka/admin/TopicCommand.scala 
> f400b71f8444fffd3fc1d8398a283682390eba4e 
>   core/src/main/scala/kafka/api/ConsumerMetadataResponse.scala 
> 24aaf954dc42e2084454fa5fc9e8f388ea95c756 
>   core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala 
> 4ff7e8f8cc695551dd5d2fe65c74f6b6c571e340 
>   core/src/main/scala/kafka/api/TopicMetadata.scala 
> 0190076df0adf906ecd332284f222ff974b315fc 
>   core/src/main/scala/kafka/api/TopicMetadataResponse.scala 
> 92ac4e687be22e4800199c0666bfac5e0059e5bb 
>   core/src/main/scala/kafka/api/UpdateMetadataRequest.scala 
> 530982e36b17934b8cc5fb668075a5342e142c59 
>   core/src/main/scala/kafka/client/ClientUtils.scala 
> ebba87f0566684c796c26cb76c64b4640a5ccfde 
>   core/src/main/scala/kafka/cluster/Broker.scala 
> 0060add008bb3bc4b0092f2173c469fce0120be6 
>   core/src/main/scala/kafka/cluster/BrokerEndPoint.scala PRE-CREATION 
>   core/src/main/scala/kafka/cluster/EndPoint.scala PRE-CREATION 
>   core/src/main/scala/kafka/cluster/SecurityProtocol.scala PRE-CREATION 
>   core/src/main/scala/kafka/common/BrokerEndPointNotAvaila

[jira] [Commented] (KAFKA-2041) Add ability to specify a KeyClass for KafkaLog4jAppender

2015-04-02 Thread Tom DeVoe (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393424#comment-14393424
 ] 

Tom DeVoe commented on KAFKA-2041:
--

One small bug in the most recent patch you uploaded: you changed the Keyer 
trait function to {{getKey}}, but did not update KafkaLog4jAppender, which 
still uses the old {{key}} function. Once that was updated, the patch worked 
great.
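
The pluggable key-class hook being discussed can be sketched as follows. This
is a hypothetical illustration in Python (the real code is a Scala trait with
a {{getKey}} method); the example key class and its parsing rule are invented:

```python
# Hypothetical key class for the appender: derive a partition key from
# the message itself, here by extracting a leading "host=" token.
class HostnameKeyer:
    def get_key(self, message):
        """Return the key for a message, or None to use random partitioning."""
        for token in message.split():
            if token.startswith("host="):
                return token[len("host="):]
        return None
```

The appender would call get_key per message and fall back to random
partitioning when it returns None.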

> Add ability to specify a KeyClass for KafkaLog4jAppender
> 
>
> Key: KAFKA-2041
> URL: https://issues.apache.org/jira/browse/KAFKA-2041
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Reporter: Benoy Antony
>Assignee: Jun Rao
> Attachments: kafka-2041-001.patch, kafka-2041-002.patch
>
>
> KafkaLog4jAppender is the Log4j Appender to publish messages to Kafka. 
> Since there is no key or explicit partition number, the messages are sent to 
> random partitions. 
> In some cases, it is possible to derive a key from the message itself. 
> So it may be beneficial to enable KafkaLog4jAppender to accept KeyClass which 
> will provide a key for a given message.





Re: Kafka meetup at ApacheCon in Apr

2015-04-02 Thread Tong Li

Jun,
 Thanks for the information. I am trying to create a travel request to
be there to meet you all, but I need some information on who and what
companies the participants will represent. I wonder if you could give me a
few names and their company names so that I can put them in my travel
request?

Thanks.

Tong Li
OpenStack & Kafka Community Development
Building 501/B205
liton...@us.ibm.com



From:   Jun Rao 
To: "dev@kafka.apache.org" ,
"us...@kafka.apache.org" ,
"kafka-clie...@googlegroups.com"

Date:   04/02/2015 02:20 PM
Subject:Kafka meetup at ApacheCon in Apr



Hi,

A few of us will be at ApacheCon in April. We are organizing a Kafka meetup
Tuesday evening (
http://events.linuxfoundation.org/events/apachecon-north-america/extend-the-experience/meetups
).
Every attendee of ApacheCon is welcome to join the meetup. We also plan to
have a few short presentations at the meetup. If you have something that
you'd like to share, please let me know.

Thanks,

Jun


[jira] [Commented] (KAFKA-2082) Kafka Replication ends up in a bad state

2015-04-02 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393505#comment-14393505
 ] 

Sriharsha Chintalapani commented on KAFKA-2082:
---

[~eapache] Thanks for the new branch; I am able to reproduce this error. It 
looks to be an issue in ReplicaFetcherManager: currently we don't have a 
leaderFinderThread similar to the one in ConsumerFetcherManager, so the 
replica fetcher will keep going in a loop with the older leader, trying to 
fetch data and failing. I have some work in KAFKA-1461 to delay partition 
fetches if there are any errors; I can add the leaderFinder part and send 
both as part of this JIRA.

> Kafka Replication ends up in a bad state
> 
>
> Key: KAFKA-2082
> URL: https://issues.apache.org/jira/browse/KAFKA-2082
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.8.2.1
>Reporter: Evan Huus
>Assignee: Neha Narkhede
>Priority: Critical
>
> While running integration tests for Sarama (the go client) we came across a 
> pattern of connection losses that reliably puts kafka into a bad state: 
> several of the brokers start spinning, chewing ~30% CPU and spamming the logs 
> with hundreds of thousands of lines like:
> {noformat}
> [2015-04-01 13:08:40,070] WARN [Replica Manager on Broker 9093]: Fetch 
> request with correlation id 111094 from client ReplicaFetcherThread-0-9093 on 
> partition [many_partition,1] failed due to Leader not local for partition 
> [many_partition,1] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,070] WARN [Replica Manager on Broker 9093]: Fetch 
> request with correlation id 111094 from client ReplicaFetcherThread-0-9093 on 
> partition [many_partition,6] failed due to Leader not local for partition 
> [many_partition,6] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,070] WARN [Replica Manager on Broker 9093]: Fetch 
> request with correlation id 111095 from client ReplicaFetcherThread-0-9093 on 
> partition [many_partition,21] failed due to Leader not local for partition 
> [many_partition,21] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,071] WARN [Replica Manager on Broker 9093]: Fetch 
> request with correlation id 111095 from client ReplicaFetcherThread-0-9093 on 
> partition [many_partition,26] failed due to Leader not local for partition 
> [many_partition,26] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,071] WARN [Replica Manager on Broker 9093]: Fetch 
> request with correlation id 111095 from client ReplicaFetcherThread-0-9093 on 
> partition [many_partition,1] failed due to Leader not local for partition 
> [many_partition,1] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,071] WARN [Replica Manager on Broker 9093]: Fetch 
> request with correlation id 111095 from client ReplicaFetcherThread-0-9093 on 
> partition [many_partition,6] failed due to Leader not local for partition 
> [many_partition,6] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,072] WARN [Replica Manager on Broker 9093]: Fetch 
> request with correlation id 111096 from client ReplicaFetcherThread-0-9093 on 
> partition [many_partition,21] failed due to Leader not local for partition 
> [many_partition,21] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,072] WARN [Replica Manager on Broker 9093]: Fetch 
> request with correlation id 111096 from client ReplicaFetcherThread-0-9093 on 
> partition [many_partition,26] failed due to Leader not local for partition 
> [many_partition,26] on broker 9093 (kafka.server.ReplicaManager)
> {noformat}
> This can be easily and reliably reproduced using the {{toxiproxy-final}} 
> branch of https://github.com/Shopify/sarama which includes a vagrant script 
> for provisioning the appropriate cluster: 
> - {{git clone https://github.com/Shopify/sarama.git}}
> - {{git checkout test-jira-kafka-2082}}
> - {{vagrant up}}
> - {{TEST_SEED=1427917826425719059 DEBUG=true go test -v}}
> After the test finishes (it fails because the cluster ends up in a bad 
> state), you can log into the cluster machine with {{vagrant ssh}} and inspect 
> the bad nodes. The vagrant script provisions five zookeepers and five brokers 
> in {{/opt/kafka-9091/}} through {{/opt/kafka-9095/}}.
> Additional context: the test produces continually to the cluster while 
> randomly cutting and restoring zookeeper connections (all connections to 
> zookeeper are run through a simple proxy on the same vm to make this easy). 
> The majority of the time this works very well and does a good job exercising 
> our producer's retry and failover code. However, under certain patterns of 
> connection loss (the {{TEST_SEED}} in the instructions is important), kafka 
> gets con

[jira] [Assigned] (KAFKA-2082) Kafka Replication ends up in a bad state

2015-04-02 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani reassigned KAFKA-2082:
-

Assignee: Sriharsha Chintalapani  (was: Neha Narkhede)

> Kafka Replication ends up in a bad state
> 
>
> Key: KAFKA-2082
> URL: https://issues.apache.org/jira/browse/KAFKA-2082
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.8.2.1
>Reporter: Evan Huus
>Assignee: Sriharsha Chintalapani
>Priority: Critical
>
> While running integration tests for Sarama (the go client) we came across a 
> pattern of connection losses that reliably puts kafka into a bad state: 
> several of the brokers start spinning, chewing ~30% CPU and spamming the logs 
> with hundreds of thousands of lines like:
> {noformat}
> [2015-04-01 13:08:40,070] WARN [Replica Manager on Broker 9093]: Fetch 
> request with correlation id 111094 from client ReplicaFetcherThread-0-9093 on 
> partition [many_partition,1] failed due to Leader not local for partition 
> [many_partition,1] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,070] WARN [Replica Manager on Broker 9093]: Fetch 
> request with correlation id 111094 from client ReplicaFetcherThread-0-9093 on 
> partition [many_partition,6] failed due to Leader not local for partition 
> [many_partition,6] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,070] WARN [Replica Manager on Broker 9093]: Fetch 
> request with correlation id 111095 from client ReplicaFetcherThread-0-9093 on 
> partition [many_partition,21] failed due to Leader not local for partition 
> [many_partition,21] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,071] WARN [Replica Manager on Broker 9093]: Fetch 
> request with correlation id 111095 from client ReplicaFetcherThread-0-9093 on 
> partition [many_partition,26] failed due to Leader not local for partition 
> [many_partition,26] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,071] WARN [Replica Manager on Broker 9093]: Fetch 
> request with correlation id 111095 from client ReplicaFetcherThread-0-9093 on 
> partition [many_partition,1] failed due to Leader not local for partition 
> [many_partition,1] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,071] WARN [Replica Manager on Broker 9093]: Fetch 
> request with correlation id 111095 from client ReplicaFetcherThread-0-9093 on 
> partition [many_partition,6] failed due to Leader not local for partition 
> [many_partition,6] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,072] WARN [Replica Manager on Broker 9093]: Fetch 
> request with correlation id 111096 from client ReplicaFetcherThread-0-9093 on 
> partition [many_partition,21] failed due to Leader not local for partition 
> [many_partition,21] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,072] WARN [Replica Manager on Broker 9093]: Fetch 
> request with correlation id 111096 from client ReplicaFetcherThread-0-9093 on 
> partition [many_partition,26] failed due to Leader not local for partition 
> [many_partition,26] on broker 9093 (kafka.server.ReplicaManager)
> {noformat}
> This can be easily and reliably reproduced using the {{toxiproxy-final}} 
> branch of https://github.com/Shopify/sarama which includes a vagrant script 
> for provisioning the appropriate cluster: 
> - {{git clone https://github.com/Shopify/sarama.git}}
> - {{git checkout test-jira-kafka-2082}}
> - {{vagrant up}}
> - {{TEST_SEED=1427917826425719059 DEBUG=true go test -v}}
> After the test finishes (it fails because the cluster ends up in a bad 
> state), you can log into the cluster machine with {{vagrant ssh}} and inspect 
> the bad nodes. The vagrant script provisions five zookeepers and five brokers 
> in {{/opt/kafka-9091/}} through {{/opt/kafka-9095/}}.
> Additional context: the test produces continually to the cluster while 
> randomly cutting and restoring zookeeper connections (all connections to 
> zookeeper are run through a simple proxy on the same vm to make this easy). 
> The majority of the time this works very well and does a good job exercising 
> our producer's retry and failover code. However, under certain patterns of 
> connection loss (the {{TEST_SEED}} in the instructions is important), kafka 
> gets confused. The test never cuts more than two connections at a time, so 
> zookeeper should always have quorum, and the topic (with three replicas) 
> should always be writable.
> Completely restarting the cluster via {{vagrant reload}} seems to put it back 
> into a sane state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2076) Add an API to new consumer to allow user get high watermark of partitions.

2015-04-02 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393516#comment-14393516
 ] 

Jiangjie Qin commented on KAFKA-2076:
-

Hey [~jkreps], any more thoughts on this?

> Add an API to new consumer to allow user get high watermark of partitions.
> --
>
> Key: KAFKA-2076
> URL: https://issues.apache.org/jira/browse/KAFKA-2076
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>
> We have a use case where a user wants to know, on startup, how far behind a 
> particular partition they are. Each fetch response already carries the high 
> watermark for each partition, but we only keep a global max-lag metric. It 
> would be better to keep a record of the high watermark per partition and 
> update it on each fetch response. We can then add a new API to let the user 
> query the high watermark.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
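The lag question in KAFKA-2076 above reduces to simple arithmetic once the high
watermark is tracked per partition. A minimal Java sketch, with illustrative
names (this is not the actual new-consumer API):

```java
// Hypothetical sketch of the per-partition lag idea from the ticket: keep
// the high watermark returned in each fetch response and expose
// lag = highWatermark - position. Class and method names are illustrative.
public class PartitionProgress {
    public static long lag(long highWatermark, long position) {
        // Never report negative lag if the position races ahead of the
        // last observed high watermark.
        return Math.max(0L, highWatermark - position);
    }

    public static void main(String[] args) {
        // A consumer starting at offset 0 on a partition whose high
        // watermark is 1500 is 1500 messages behind.
        System.out.println(lag(1500L, 0L));
    }
}
```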


[jira] [Assigned] (KAFKA-2076) Add an API to new consumer to allow user get high watermark of partitions.

2015-04-02 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin reassigned KAFKA-2076:
---

Assignee: Jiangjie Qin

> Add an API to new consumer to allow user get high watermark of partitions.
> --
>
> Key: KAFKA-2076
> URL: https://issues.apache.org/jira/browse/KAFKA-2076
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>
> We have a use case where a user wants to know, on startup, how far behind a 
> particular partition they are. Each fetch response already carries the high 
> watermark for each partition, but we only keep a global max-lag metric. It 
> would be better to keep a record of the high watermark per partition and 
> update it on each fetch response. We can then add a new API to let the user 
> query the high watermark.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Kafka meetup at ApacheCon in Apr

2015-04-02 Thread Clark Haskins
Hi Tong,

My entire team of SREs from LinkedIn will be there.

-Clark

Sent from my iPhone

> On Apr 2, 2015, at 2:35 PM, Tong Li  wrote:
> 
> Jun,
> Thanks for the information. I am trying to create a travel request to be 
> there to meet you guys, but I need some information on who and what companies 
> the participants will represent. I wonder if you can give me a few names and 
> their company names so that I can put them in my travel request?
> 
> Thanks.
> 
> Tong Li
> OpenStack & Kafka Community Development
> Building 501/B205
> liton...@us.ibm.com
> 
> Jun Rao ---04/02/2015 02:20:22 PM---Hi, A few of us will be at ApacheCon in 
> April. We are organizing a Kafka meetup
> 
> From: Jun Rao 
> To:   "dev@kafka.apache.org" , "us...@kafka.apache.org" 
> , "kafka-clie...@googlegroups.com" 
> 
> Date: 04/02/2015 02:20 PM
> Subject:  Kafka meetup at ApacheCon in Apr
> 
> 
> 
> Hi,
> 
> A few of us will be at ApacheCon in April. We are organizing a Kafka meetup
> Tuesday evening (
> http://events.linuxfoundation.org/events/apachecon-north-america/extend-the-experience/meetups).
> Every attendee of ApacheCon is welcome to join the meetup. We also plan to
> have a few short presentations in the meetup. If you have something that
> you'd like to share, please let me know.
> 
> Thanks,
> 
> Jun
> 


Re: Review Request 28769: Patch for KAFKA-1809

2015-04-02 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28769/
---

(Updated April 2, 2015, 10:12 p.m.)


Review request for kafka.


Bugs: KAFKA-1809
https://issues.apache.org/jira/browse/KAFKA-1809


Repository: kafka


Description (updated)
---

forgot rest of patch


merge with trunk


moved ApiVersion to core, fixed SocketServer concurrency issue and other minor 
things


Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
broker_ref2

Conflicts:
core/src/main/scala/kafka/server/KafkaApis.scala

core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
fa9daaef66ff7961e1c46cd0cd8fed18a53bccd8 
  clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
920b51a6c3c99639fbc9dc0656373c19fabd 
  clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java 
c899813d55b9c4786adde3d840f040d6645d27c8 
  config/server.properties 1614260b71a658b405bb24157c8f12b1f1031aa5 
  core/src/main/scala/kafka/admin/AdminUtils.scala 
b700110f2d7f1ede235af55d8e37e1b5592c6c7d 
  core/src/main/scala/kafka/admin/TopicCommand.scala 
f400b71f8444fffd3fc1d8398a283682390eba4e 
  core/src/main/scala/kafka/api/ApiVersion.scala PRE-CREATION 
  core/src/main/scala/kafka/api/ConsumerMetadataResponse.scala 
24aaf954dc42e2084454fa5fc9e8f388ea95c756 
  core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala 
4ff7e8f8cc695551dd5d2fe65c74f6b6c571e340 
  core/src/main/scala/kafka/api/TopicMetadata.scala 
0190076df0adf906ecd332284f222ff974b315fc 
  core/src/main/scala/kafka/api/TopicMetadataResponse.scala 
92ac4e687be22e4800199c0666bfac5e0059e5bb 
  core/src/main/scala/kafka/api/UpdateMetadataRequest.scala 
530982e36b17934b8cc5fb668075a5342e142c59 
  core/src/main/scala/kafka/client/ClientUtils.scala 
ebba87f0566684c796c26cb76c64b4640a5ccfde 
  core/src/main/scala/kafka/cluster/Broker.scala 
0060add008bb3bc4b0092f2173c469fce0120be6 
  core/src/main/scala/kafka/cluster/BrokerEndPoint.scala PRE-CREATION 
  core/src/main/scala/kafka/cluster/EndPoint.scala PRE-CREATION 
  core/src/main/scala/kafka/cluster/SecurityProtocol.scala PRE-CREATION 
  core/src/main/scala/kafka/common/BrokerEndPointNotAvailableException.scala 
PRE-CREATION 
  core/src/main/scala/kafka/consumer/ConsumerConfig.scala 
9ebbee6c16dc83767297c729d2d74ebbd063a993 
  core/src/main/scala/kafka/consumer/ConsumerFetcherManager.scala 
b9e2bea7b442a19bcebd1b350d39541a8c9dd068 
  core/src/main/scala/kafka/consumer/ConsumerFetcherThread.scala 
152fda5d1dcdf319399fdeeb8457006090ebe56c 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
b1cf0db8741e384e4e4119751058ea87b2589e57 
  core/src/main/scala/kafka/controller/ControllerChannelManager.scala 
c582191636f6188c25d62a67ff0315b56f163133 
  core/src/main/scala/kafka/controller/KafkaController.scala 
09fc46d759b74bcdad2d2a610d9c5a93ff02423f 
  core/src/main/scala/kafka/javaapi/ConsumerMetadataResponse.scala 
d281bb31a66fd749ecddfbe38479b6903f436831 
  core/src/main/scala/kafka/javaapi/TopicMetadata.scala 
f384e04678df10a5b46a439f475c63371bf8e32b 
  core/src/main/scala/kafka/network/RequestChannel.scala 
bc73540acb1ceb303cb30e58dfa903822f7a8a6c 
  core/src/main/scala/kafka/network/SocketServer.scala 
76ce41aed6e04ac5ba88395c4d5008aca17f9a73 
  core/src/main/scala/kafka/producer/ProducerPool.scala 
43df70bb461dd3e385e6b20396adef3c4016a3fc 
  core/src/main/scala/kafka/server/AbstractFetcherManager.scala 
20c00cb8cc2351950edbc8cb1752905a0c26e79f 
  core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
5d5cf5897cc37b3595f14bfe9d7cde43456bcc4b 
  core/src/main/scala/kafka/server/KafkaApis.scala 
f372af77f58aacee7981caa361e13eefecee2278 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
422451aec5ea0442eb2e4c1ae772885b813904a9 
  core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
7907987e43404487382de7f4cc294f0d01ac15a7 
  core/src/main/scala/kafka/server/KafkaServer.scala 
4db3384545be8c237d6fc9646716ab67d5193ec5 
  core/src/main/scala/kafka/server/MetadataCache.scala 
6aef6e4508ecadbbcc1e12bed2054547b7aa333e 
  core/src/main/scala/kafka/server/ReplicaFetcherManager.scala 
351dbbad3bdb709937943b336a5b0a9e0162a5e2 
  core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
96faa7b4ed7c9ba8a3f6f9f114bd94e19b3a7ac0 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
44f0026e6676d8d764dd59dbcc6bb7bb727a3ba6 
  core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala 
d1e7c434e77859d746b8dc68dd5d5a3740425e79 
  core/src/main/scala/kafka/tools/ReplicaVerificationTool.scala 
ba6ddd7a909df79a0f7d45e8b4a2af94ea0fceb6 
  core/src/main/scala/kafka/tools/SimpleConsumerShell.scala 
b4f903b6

[jira] [Updated] (KAFKA-1809) Refactor brokers to allow listening on multiple ports and IPs

2015-04-02 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1809:

Attachment: KAFKA-1809_2015-04-02_15:12:30.patch

> Refactor brokers to allow listening on multiple ports and IPs 
> --
>
> Key: KAFKA-1809
> URL: https://issues.apache.org/jira/browse/KAFKA-1809
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Attachments: KAFKA-1809.patch, KAFKA-1809.v1.patch, 
> KAFKA-1809.v2.patch, KAFKA-1809.v3.patch, 
> KAFKA-1809_2014-12-25_11:04:17.patch, KAFKA-1809_2014-12-27_12:03:35.patch, 
> KAFKA-1809_2014-12-27_12:22:56.patch, KAFKA-1809_2014-12-29_12:11:36.patch, 
> KAFKA-1809_2014-12-30_11:25:29.patch, KAFKA-1809_2014-12-30_18:48:46.patch, 
> KAFKA-1809_2015-01-05_14:23:57.patch, KAFKA-1809_2015-01-05_20:25:15.patch, 
> KAFKA-1809_2015-01-05_21:40:14.patch, KAFKA-1809_2015-01-06_11:46:22.patch, 
> KAFKA-1809_2015-01-13_18:16:21.patch, KAFKA-1809_2015-01-28_10:26:22.patch, 
> KAFKA-1809_2015-02-02_11:55:16.patch, KAFKA-1809_2015-02-03_10:45:31.patch, 
> KAFKA-1809_2015-02-03_10:46:42.patch, KAFKA-1809_2015-02-03_10:52:36.patch, 
> KAFKA-1809_2015-03-16_09:02:18.patch, KAFKA-1809_2015-03-16_09:40:49.patch, 
> KAFKA-1809_2015-03-27_15:04:10.patch, KAFKA-1809_2015-04-02_15:12:30.patch
>
>
> The goal is to eventually support different security mechanisms on different 
> ports. 
> Currently brokers are defined as host+port pair, and this definition exists 
> throughout the code-base, therefore some refactoring is needed to support 
> multiple ports for a single broker.
> The detailed design is here: 
> https://cwiki.apache.org/confluence/display/KAFKA/Multiple+Listeners+for+Kafka+Brokers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
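As a rough sketch of what the multi-listener support in KAFKA-1809 could look
like in a broker config, using the property names mentioned later in this
thread (listeners, advertised.listeners); the exact syntax is defined in the
design wiki linked above, and the hosts/ports here are placeholders:

```properties
# Illustrative multi-listener broker config (property names from this
# thread; values are placeholders, not taken from the patch itself).
listeners=PLAINTEXT://:9092,TRACE://:9093
advertised.listeners=PLAINTEXT://broker1.example.com:9092
```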


[jira] [Commented] (KAFKA-1809) Refactor brokers to allow listening on multiple ports and IPs

2015-04-02 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393570#comment-14393570
 ] 

Gwen Shapira commented on KAFKA-1809:
-

Updated reviewboard https://reviews.apache.org/r/28769/diff/
 against branch trunk

> Refactor brokers to allow listening on multiple ports and IPs 
> --
>
> Key: KAFKA-1809
> URL: https://issues.apache.org/jira/browse/KAFKA-1809
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Attachments: KAFKA-1809.patch, KAFKA-1809.v1.patch, 
> KAFKA-1809.v2.patch, KAFKA-1809.v3.patch, 
> KAFKA-1809_2014-12-25_11:04:17.patch, KAFKA-1809_2014-12-27_12:03:35.patch, 
> KAFKA-1809_2014-12-27_12:22:56.patch, KAFKA-1809_2014-12-29_12:11:36.patch, 
> KAFKA-1809_2014-12-30_11:25:29.patch, KAFKA-1809_2014-12-30_18:48:46.patch, 
> KAFKA-1809_2015-01-05_14:23:57.patch, KAFKA-1809_2015-01-05_20:25:15.patch, 
> KAFKA-1809_2015-01-05_21:40:14.patch, KAFKA-1809_2015-01-06_11:46:22.patch, 
> KAFKA-1809_2015-01-13_18:16:21.patch, KAFKA-1809_2015-01-28_10:26:22.patch, 
> KAFKA-1809_2015-02-02_11:55:16.patch, KAFKA-1809_2015-02-03_10:45:31.patch, 
> KAFKA-1809_2015-02-03_10:46:42.patch, KAFKA-1809_2015-02-03_10:52:36.patch, 
> KAFKA-1809_2015-03-16_09:02:18.patch, KAFKA-1809_2015-03-16_09:40:49.patch, 
> KAFKA-1809_2015-03-27_15:04:10.patch, KAFKA-1809_2015-04-02_15:12:30.patch
>
>
> The goal is to eventually support different security mechanisms on different 
> ports. 
> Currently brokers are defined as host+port pair, and this definition exists 
> throughout the code-base, therefore some refactoring is needed to support 
> multiple ports for a single broker.
> The detailed design is here: 
> https://cwiki.apache.org/confluence/display/KAFKA/Multiple+Listeners+for+Kafka+Brokers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1809) Refactor brokers to allow listening on multiple ports and IPs

2015-04-02 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1809:

Attachment: KAFKA-1809-DOCs.V1.patch

Added doc patch.

I added the upgrade process and the new intra.broker.protocol.version 
configuration.

I did not add security.intra.broker.protocol, listeners and 
advertised.listeners - I think they wouldn't make sense to users before we add 
security protocol implementations.

> Refactor brokers to allow listening on multiple ports and IPs 
> --
>
> Key: KAFKA-1809
> URL: https://issues.apache.org/jira/browse/KAFKA-1809
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Attachments: KAFKA-1809-DOCs.V1.patch, KAFKA-1809.patch, 
> KAFKA-1809.v1.patch, KAFKA-1809.v2.patch, KAFKA-1809.v3.patch, 
> KAFKA-1809_2014-12-25_11:04:17.patch, KAFKA-1809_2014-12-27_12:03:35.patch, 
> KAFKA-1809_2014-12-27_12:22:56.patch, KAFKA-1809_2014-12-29_12:11:36.patch, 
> KAFKA-1809_2014-12-30_11:25:29.patch, KAFKA-1809_2014-12-30_18:48:46.patch, 
> KAFKA-1809_2015-01-05_14:23:57.patch, KAFKA-1809_2015-01-05_20:25:15.patch, 
> KAFKA-1809_2015-01-05_21:40:14.patch, KAFKA-1809_2015-01-06_11:46:22.patch, 
> KAFKA-1809_2015-01-13_18:16:21.patch, KAFKA-1809_2015-01-28_10:26:22.patch, 
> KAFKA-1809_2015-02-02_11:55:16.patch, KAFKA-1809_2015-02-03_10:45:31.patch, 
> KAFKA-1809_2015-02-03_10:46:42.patch, KAFKA-1809_2015-02-03_10:52:36.patch, 
> KAFKA-1809_2015-03-16_09:02:18.patch, KAFKA-1809_2015-03-16_09:40:49.patch, 
> KAFKA-1809_2015-03-27_15:04:10.patch, KAFKA-1809_2015-04-02_15:12:30.patch
>
>
> The goal is to eventually support different security mechanisms on different 
> ports. 
> Currently brokers are defined as host+port pair, and this definition exists 
> throughout the code-base, therefore some refactoring is needed to support 
> multiple ports for a single broker.
> The detailed design is here: 
> https://cwiki.apache.org/confluence/display/KAFKA/Multiple+Listeners+for+Kafka+Brokers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1888) Add a "rolling upgrade" system test

2015-04-02 Thread Abhishek Nigam (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393806#comment-14393806
 ] 

Abhishek Nigam commented on KAFKA-1888:
---

Hi Gwen/Ashish,
I need to finish up something else and I will only be able to come back to this 
ticket in 2-3 weeks. 

> Add a "rolling upgrade" system test
> ---
>
> Key: KAFKA-1888
> URL: https://issues.apache.org/jira/browse/KAFKA-1888
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: Gwen Shapira
>Assignee: Abhishek Nigam
> Fix For: 0.9.0
>
> Attachments: KAFKA-1888_2015-03-23_11:54:25.patch
>
>
> To help test upgrades and compatibility between versions, it will be cool to 
> add a rolling-upgrade test to system tests:
> Given two versions (just a path to the jars?), check that you can do a
> rolling upgrade of the brokers from one version to another (using clients 
> from the old version) without losing data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 28769: Patch for KAFKA-1809

2015-04-02 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28769/#review78739
---


Thanks for the patch and for putting in a ton of effort in to it.

Overall, looks good to me and I would really like to see this in soon.


clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java


Can we go with TRACE(Short.MAX_VALUE, "TRACE") and start plaintext at 0? 
This may not play well with the scala enum but per comment below I'm not clear 
on why we need that.



clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java


Can probably be final



clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java


This too



clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java


All caps



clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java


A name -> Name



clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java


All caps



core/src/main/scala/kafka/api/ApiVersion.scala


I was in the middle of reviewing your previous revision and had a follow-up 
comment on this (in reply to Jun's). Although this is for intra-cluster 
communication, at some point we may want to expose ApiVersion to clients as 
well via some cluster/broker metadata request - e.g., what is the lowest and 
highest ApiVersion that a broker supports. So it seems appropriate to keep in 
o.a.k.common no?



core/src/main/scala/kafka/api/ConsumerMetadataResponse.scala


NoBrokerEndpointOpt



core/src/main/scala/kafka/cluster/Broker.scala


Can cut down a map by:
.map(listener => {
  val ep = EndPoint.createEndPoint(listener)
  (ep.protocolType, ep)
}).toMap
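Joel's suggestion above, fleshed out as a self-contained Java sketch (the
original snippet is Scala; `EndPoint` and its parser here are stand-ins for
`kafka.cluster.EndPoint`): build the protocol-to-endpoint map in a single pass.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class EndPointMap {
    // Stand-in for kafka.cluster.EndPoint.
    record EndPoint(String protocolType, String host, int port) {
        // Parses "PROTOCOL://host:port" listener strings.
        static EndPoint create(String listener) {
            String[] protoAndRest = listener.split("://", 2);
            int colon = protoAndRest[1].lastIndexOf(':');
            return new EndPoint(protoAndRest[0],
                                protoAndRest[1].substring(0, colon),
                                Integer.parseInt(protoAndRest[1].substring(colon + 1)));
        }
    }

    // The single-pass version of the review suggestion: listener strings
    // straight to a protocol -> endpoint map, no intermediate collection.
    static Map<String, EndPoint> fromListeners(List<String> listeners) {
        return listeners.stream()
                .map(EndPoint::create)
                .collect(Collectors.toMap(EndPoint::protocolType, ep -> ep));
    }

    public static void main(String[] args) {
        System.out.println(fromListeners(
                List.of("PLAINTEXT://localhost:9092", "TRACE://localhost:9093")));
    }
}
```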



core/src/main/scala/kafka/cluster/EndPoint.scala


indentation



core/src/main/scala/kafka/cluster/SecurityProtocol.scala


Note comment above in o.a.k.c



core/src/main/scala/kafka/network/SocketServer.scala


`processors.foreach(_.shutdown)`



core/src/main/scala/kafka/server/KafkaConfig.scala


I think either intraCluster or interBroker would be much better names for 
these two and the associated configs.



core/src/main/scala/kafka/server/KafkaConfig.scala


Can we use the require(constraint, message) variant?



core/src/main/scala/kafka/server/KafkaHealthcheck.scala


can we make this clearer in the code by naming this plaintextEndPoint



core/src/test/scala/unit/kafka/cluster/BrokerTest.scala


Shall we call this BrokerEndpointTest?



core/src/test/scala/unit/kafka/consumer/ConsumerIteratorTest.scala


", "



system_test/replication_testsuite/testcase_0001/testcase_0001_properties.json


were these changes intentional?


- Joel Koshy


On April 2, 2015, 10:12 p.m., Gwen Shapira wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28769/
> ---
> 
> (Updated April 2, 2015, 10:12 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1809
> https://issues.apache.org/jira/browse/KAFKA-1809
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> forgot rest of patch
> 
> 
> merge with trunk
> 
> 
> moved ApiVersion to core, fixed SocketServer concurrency issue and other 
> minor things
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
> broker_ref2
> 
> Conflicts:
>   core/src/main/scala/kafka/server/KafkaApis.scala
>   
> core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> fa9daaef66ff7961e1c46cd0cd8fed18a53bccd8 
>   
> clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
> 920b

Re: Review Request 28769: Patch for KAFKA-1809

2015-04-02 Thread Gwen Shapira


> On April 3, 2015, 12:41 a.m., Joel Koshy wrote:
> > system_test/replication_testsuite/testcase_0001/testcase_0001_properties.json,
> >  line 32
> > 
> >
> > were these changes intentional?

Very intentional. System tests include segment verification - they take forever 
to run with the small segments. I can't understand how anyone runs the tests 
with a different configuration.


> On April 3, 2015, 12:41 a.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/api/ApiVersion.scala, line 28
> > 
> >
> > I was in the middle of reviewing your previous revision and had a 
> > follow-up comment on this (in reply to Jun's). Although this is for 
> > intra-cluster communication, at some point we may want to expose ApiVersion 
> > to clients as well via some cluster/broker metadata request - e.g., what is 
> > the lowest and highest ApiVersion that a broker supports. So it seems 
> > appropriate to keep in o.a.k.common no?

Mmmm... I'm pretty sure we had good reasons not to expose broker versions to 
clients and instead rely on each wire-protocol request having its own version 
IDs (and deprecate versions by detecting this on broker and sending exceptions 
back to clients). 
I can't remember what the reason was, but I think that if we expose actual 
release versions from brokers to clients this can be pretty confusing.
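A toy sketch (not Kafka code) of the per-request versioning described here:
each request type carries its own version id, and the broker checks it against
the range it supports rather than advertising a broker release version to
clients. The range and request type are hypothetical.

```java
public class VersionCheck {
    // Supported version range for one hypothetical request type; a real
    // broker would track a range per request type.
    static final short MIN_VERSION = 0;
    static final short MAX_VERSION = 1;

    // Returns true when the broker can serve this request version;
    // otherwise the broker would answer with an error code the client
    // can use to downgrade or fail cleanly.
    static boolean isSupported(short requestVersion) {
        return requestVersion >= MIN_VERSION && requestVersion <= MAX_VERSION;
    }

    public static void main(String[] args) {
        System.out.println(isSupported((short) 1));
        System.out.println(isSupported((short) 7));
    }
}
```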


> On April 3, 2015, 12:41 a.m., Joel Koshy wrote:
> > clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java,
> >  line 26
> > 
> >
> > Can we go with TRACE(Short.MAX_VALUE, "TRACE") and start plaintext at 
> > 0? This may not play well with the scala enum but per comment below I'm not 
> > clear on why we need that.

sure. any reason for this? I figured this is more-or-less arbitrary.


> On April 3, 2015, 12:41 a.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/server/KafkaConfig.scala, line 109
> > 
> >
> > I think either intraCluster or interBroker would be much better names 
> > for these two and the associated configs.

Took me a bit to figure out why, so for posterity: 
http://grammarist.com/usage/inter-intra/

Let's go with interBroker?


- Gwen


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28769/#review78739
---


On April 2, 2015, 10:12 p.m., Gwen Shapira wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28769/
> ---
> 
> (Updated April 2, 2015, 10:12 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1809
> https://issues.apache.org/jira/browse/KAFKA-1809
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> forgot rest of patch
> 
> 
> merge with trunk
> 
> 
> moved ApiVersion to core, fixed SocketServer concurrency issue and other 
> minor things
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
> broker_ref2
> 
> Conflicts:
>   core/src/main/scala/kafka/server/KafkaApis.scala
>   
> core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> fa9daaef66ff7961e1c46cd0cd8fed18a53bccd8 
>   
> clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
> 920b51a6c3c99639fbc9dc0656373c19fabd 
>   clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java 
> c899813d55b9c4786adde3d840f040d6645d27c8 
>   config/server.properties 1614260b71a658b405bb24157c8f12b1f1031aa5 
>   core/src/main/scala/kafka/admin/AdminUtils.scala 
> b700110f2d7f1ede235af55d8e37e1b5592c6c7d 
>   core/src/main/scala/kafka/admin/TopicCommand.scala 
> f400b71f8444fffd3fc1d8398a283682390eba4e 
>   core/src/main/scala/kafka/api/ApiVersion.scala PRE-CREATION 
>   core/src/main/scala/kafka/api/ConsumerMetadataResponse.scala 
> 24aaf954dc42e2084454fa5fc9e8f388ea95c756 
>   core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala 
> 4ff7e8f8cc695551dd5d2fe65c74f6b6c571e340 
>   core/src/main/scala/kafka/api/TopicMetadata.scala 
> 0190076df0adf906ecd332284f222ff974b315fc 
>   core/src/main/scala/kafka/api/TopicMetadataResponse.scala 
> 92ac4e687be22e4800199c0666bfac5e0059e5bb 
>   core/src/main/scala/kafka/api/UpdateMetadataRequest.scala 
> 530982e36b17934b8cc5fb668075a5342e142c59 
>   core/src/main/scala/kafka/client/ClientUtils.scala 
> ebba87f0566684c796c26cb76c64b4640a5ccfde 

[jira] [Updated] (KAFKA-2016) RollingBounceTest takes long

2015-04-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2016:
---
Resolution: Fixed
  Assignee: Ted Malaska
Status: Resolved  (was: Patch Available)

Thanks for the patch. +1 and committed to trunk.

> RollingBounceTest takes long
> 
>
> Key: KAFKA-2016
> URL: https://issues.apache.org/jira/browse/KAFKA-2016
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.1
>Reporter: Jun Rao
>Assignee: Ted Malaska
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-2016-1.patch, KAFKA-2016-2.patch
>
>
> RollingBounceTest.testRollingBounce() currently takes about 48 secs. This is 
> a bit too long.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1961) Looks like its possible to delete _consumer_offsets topic

2015-04-02 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393886#comment-14393886
 ] 

Jun Rao commented on KAFKA-1961:


[~ted.m], thanks for the patch. Just added a minor comment to the RB. Once it's 
addressed, we can commit it.

> Looks like its possible to delete _consumer_offsets topic
> -
>
> Key: KAFKA-1961
> URL: https://issues.apache.org/jira/browse/KAFKA-1961
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>  Labels: newbie
> Attachments: KAFKA-1961.3.patch, KAFKA-1961.4.patch
>
>
> Noticed that kafka-topics.sh --delete can successfully delete internal topics 
> (__consumer_offsets).
> I'm pretty sure we want to prevent that, to avoid users shooting themselves 
> in the foot.
> Topic admin command should check for internal topics, just like 
> ReplicaManager does and not let users delete them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
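The guard KAFKA-1961 asks for amounts to a membership check before deletion. A
minimal Java sketch, with `INTERNAL_TOPICS` and `canDelete` as stand-ins for
whatever shared definition the real patch uses:

```java
import java.util.Set;

public class TopicDeleteGuard {
    // Stand-in for the internal-topic list; the real code would share one
    // definition with ReplicaManager rather than redeclare it here.
    static final Set<String> INTERNAL_TOPICS = Set.of("__consumer_offsets");

    // The topic admin command would call this before issuing a delete.
    static boolean canDelete(String topic) {
        return !INTERNAL_TOPICS.contains(topic);
    }

    public static void main(String[] args) {
        System.out.println(canDelete("__consumer_offsets")); // internal: refuse
        System.out.println(canDelete("my_topic"));           // user topic: allow
    }
}
```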


Re: Review Request 28769: Patch for KAFKA-1809

2015-04-02 Thread Gwen Shapira


> On April 3, 2015, 12:41 a.m., Joel Koshy wrote:
> > clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java,
> >  line 44
> > 
> >
> > All caps

Apparently "name" and "id" cannot be all caps.
Our coding style (enforced with checkstyle) indicates that allcaps are reserved 
for "final static" members. "name" and "id" are initialized in the constructor, 
so they can't be static - and therefore no allcaps :)


- Gwen


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28769/#review78739
---


On April 2, 2015, 10:12 p.m., Gwen Shapira wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28769/
> ---
> 
> (Updated April 2, 2015, 10:12 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1809
> https://issues.apache.org/jira/browse/KAFKA-1809
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> forgot rest of patch
> 
> 
> merge with trunk
> 
> 
> moved ApiVersion to core, fixed SocketServer concurrency issue and other 
> minor things
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
> broker_ref2
> 
> Conflicts:
>   core/src/main/scala/kafka/server/KafkaApis.scala
>   
> core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> fa9daaef66ff7961e1c46cd0cd8fed18a53bccd8 
>   
> clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
> 920b51a6c3c99639fbc9dc0656373c19fabd 
>   clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java 
> c899813d55b9c4786adde3d840f040d6645d27c8 
>   config/server.properties 1614260b71a658b405bb24157c8f12b1f1031aa5 
>   core/src/main/scala/kafka/admin/AdminUtils.scala 
> b700110f2d7f1ede235af55d8e37e1b5592c6c7d 
>   core/src/main/scala/kafka/admin/TopicCommand.scala 
> f400b71f8444fffd3fc1d8398a283682390eba4e 
>   core/src/main/scala/kafka/api/ApiVersion.scala PRE-CREATION 
>   core/src/main/scala/kafka/api/ConsumerMetadataResponse.scala 
> 24aaf954dc42e2084454fa5fc9e8f388ea95c756 
>   core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala 
> 4ff7e8f8cc695551dd5d2fe65c74f6b6c571e340 
>   core/src/main/scala/kafka/api/TopicMetadata.scala 
> 0190076df0adf906ecd332284f222ff974b315fc 
>   core/src/main/scala/kafka/api/TopicMetadataResponse.scala 
> 92ac4e687be22e4800199c0666bfac5e0059e5bb 
>   core/src/main/scala/kafka/api/UpdateMetadataRequest.scala 
> 530982e36b17934b8cc5fb668075a5342e142c59 
>   core/src/main/scala/kafka/client/ClientUtils.scala 
> ebba87f0566684c796c26cb76c64b4640a5ccfde 
>   core/src/main/scala/kafka/cluster/Broker.scala 
> 0060add008bb3bc4b0092f2173c469fce0120be6 
>   core/src/main/scala/kafka/cluster/BrokerEndPoint.scala PRE-CREATION 
>   core/src/main/scala/kafka/cluster/EndPoint.scala PRE-CREATION 
>   core/src/main/scala/kafka/cluster/SecurityProtocol.scala PRE-CREATION 
>   core/src/main/scala/kafka/common/BrokerEndPointNotAvailableException.scala 
> PRE-CREATION 
>   core/src/main/scala/kafka/consumer/ConsumerConfig.scala 
> 9ebbee6c16dc83767297c729d2d74ebbd063a993 
>   core/src/main/scala/kafka/consumer/ConsumerFetcherManager.scala 
> b9e2bea7b442a19bcebd1b350d39541a8c9dd068 
>   core/src/main/scala/kafka/consumer/ConsumerFetcherThread.scala 
> 152fda5d1dcdf319399fdeeb8457006090ebe56c 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> b1cf0db8741e384e4e4119751058ea87b2589e57 
>   core/src/main/scala/kafka/controller/ControllerChannelManager.scala 
> c582191636f6188c25d62a67ff0315b56f163133 
>   core/src/main/scala/kafka/controller/KafkaController.scala 
> 09fc46d759b74bcdad2d2a610d9c5a93ff02423f 
>   core/src/main/scala/kafka/javaapi/ConsumerMetadataResponse.scala 
> d281bb31a66fd749ecddfbe38479b6903f436831 
>   core/src/main/scala/kafka/javaapi/TopicMetadata.scala 
> f384e04678df10a5b46a439f475c63371bf8e32b 
>   core/src/main/scala/kafka/network/RequestChannel.scala 
> bc73540acb1ceb303cb30e58dfa903822f7a8a6c 
>   core/src/main/scala/kafka/network/SocketServer.scala 
> 76ce41aed6e04ac5ba88395c4d5008aca17f9a73 
>   core/src/main/scala/kafka/producer/ProducerPool.scala 
> 43df70bb461dd3e385e6b20396adef3c4016a3fc 
>   core/src/main/scala/kafka/server/AbstractFetcherManager.scala 
> 20c00cb8cc2351950edbc8cb1752905a0c26e79f 
>   core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
> 5d5cf5897cc37b3595f14bfe9d7cde43456bcc4b 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> f372af77f58aacee7981caa361e13eefecee2

Jenkins build is back to normal : KafkaPreCommit #51

2015-04-02 Thread Apache Jenkins Server
See 



Re: Review Request 28769: Patch for KAFKA-1809

2015-04-02 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28769/
---

(Updated April 3, 2015, 2:04 a.m.)


Review request for kafka.


Bugs: KAFKA-1809
https://issues.apache.org/jira/browse/KAFKA-1809


Repository: kafka


Description (updated)
---

forgot rest of patch


merge with trunk


moved ApiVersion to core, fixed SocketServer concurrency issue and other minor 
things


Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
broker_ref2

Conflicts:
core/src/main/scala/kafka/server/KafkaApis.scala

core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala

addressing Joel's comments and added support for older versions in 
inter.broker.protocol.version to make upgrades from older versions a bit 
clearer.


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
fa9daaef66ff7961e1c46cd0cd8fed18a53bccd8 
  clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
920b51a6c3c99639fbc9dc0656373c19fabd 
  clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java 
c899813d55b9c4786adde3d840f040d6645d27c8 
  config/server.properties 1614260b71a658b405bb24157c8f12b1f1031aa5 
  core/src/main/scala/kafka/admin/AdminUtils.scala 
b700110f2d7f1ede235af55d8e37e1b5592c6c7d 
  core/src/main/scala/kafka/admin/TopicCommand.scala 
f400b71f8444fffd3fc1d8398a283682390eba4e 
  core/src/main/scala/kafka/api/ApiVersion.scala PRE-CREATION 
  core/src/main/scala/kafka/api/ConsumerMetadataResponse.scala 
24aaf954dc42e2084454fa5fc9e8f388ea95c756 
  core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala 
4ff7e8f8cc695551dd5d2fe65c74f6b6c571e340 
  core/src/main/scala/kafka/api/TopicMetadata.scala 
0190076df0adf906ecd332284f222ff974b315fc 
  core/src/main/scala/kafka/api/TopicMetadataResponse.scala 
92ac4e687be22e4800199c0666bfac5e0059e5bb 
  core/src/main/scala/kafka/api/UpdateMetadataRequest.scala 
530982e36b17934b8cc5fb668075a5342e142c59 
  core/src/main/scala/kafka/client/ClientUtils.scala 
ebba87f0566684c796c26cb76c64b4640a5ccfde 
  core/src/main/scala/kafka/cluster/Broker.scala 
0060add008bb3bc4b0092f2173c469fce0120be6 
  core/src/main/scala/kafka/cluster/BrokerEndPoint.scala PRE-CREATION 
  core/src/main/scala/kafka/cluster/EndPoint.scala PRE-CREATION 
  core/src/main/scala/kafka/cluster/SecurityProtocol.scala PRE-CREATION 
  core/src/main/scala/kafka/common/BrokerEndPointNotAvailableException.scala 
PRE-CREATION 
  core/src/main/scala/kafka/consumer/ConsumerConfig.scala 
9ebbee6c16dc83767297c729d2d74ebbd063a993 
  core/src/main/scala/kafka/consumer/ConsumerFetcherManager.scala 
b9e2bea7b442a19bcebd1b350d39541a8c9dd068 
  core/src/main/scala/kafka/consumer/ConsumerFetcherThread.scala 
152fda5d1dcdf319399fdeeb8457006090ebe56c 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
b1cf0db8741e384e4e4119751058ea87b2589e57 
  core/src/main/scala/kafka/controller/ControllerChannelManager.scala 
c582191636f6188c25d62a67ff0315b56f163133 
  core/src/main/scala/kafka/controller/KafkaController.scala 
09fc46d759b74bcdad2d2a610d9c5a93ff02423f 
  core/src/main/scala/kafka/javaapi/ConsumerMetadataResponse.scala 
d281bb31a66fd749ecddfbe38479b6903f436831 
  core/src/main/scala/kafka/javaapi/TopicMetadata.scala 
f384e04678df10a5b46a439f475c63371bf8e32b 
  core/src/main/scala/kafka/network/RequestChannel.scala 
bc73540acb1ceb303cb30e58dfa903822f7a8a6c 
  core/src/main/scala/kafka/network/SocketServer.scala 
76ce41aed6e04ac5ba88395c4d5008aca17f9a73 
  core/src/main/scala/kafka/producer/ProducerPool.scala 
43df70bb461dd3e385e6b20396adef3c4016a3fc 
  core/src/main/scala/kafka/server/AbstractFetcherManager.scala 
20c00cb8cc2351950edbc8cb1752905a0c26e79f 
  core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
5d5cf5897cc37b3595f14bfe9d7cde43456bcc4b 
  core/src/main/scala/kafka/server/KafkaApis.scala 
f372af77f58aacee7981caa361e13eefecee2278 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
422451aec5ea0442eb2e4c1ae772885b813904a9 
  core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
7907987e43404487382de7f4cc294f0d01ac15a7 
  core/src/main/scala/kafka/server/KafkaServer.scala 
4db3384545be8c237d6fc9646716ab67d5193ec5 
  core/src/main/scala/kafka/server/MetadataCache.scala 
6aef6e4508ecadbbcc1e12bed2054547b7aa333e 
  core/src/main/scala/kafka/server/ReplicaFetcherManager.scala 
351dbbad3bdb709937943b336a5b0a9e0162a5e2 
  core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
96faa7b4ed7c9ba8a3f6f9f114bd94e19b3a7ac0 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
44f0026e6676d8d764dd59dbcc6bb7bb727a3ba6 
  core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala 
d1e7c434e77859d746b8dc68dd5d5a3740425e79 
  core/src/main/scala/kaf

[jira] [Commented] (KAFKA-1809) Refactor brokers to allow listening on multiple ports and IPs

2015-04-02 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393941#comment-14393941
 ] 

Gwen Shapira commented on KAFKA-1809:
-

Updated reviewboard https://reviews.apache.org/r/28769/diff/
 against branch trunk

> Refactor brokers to allow listening on multiple ports and IPs 
> --
>
> Key: KAFKA-1809
> URL: https://issues.apache.org/jira/browse/KAFKA-1809
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Attachments: KAFKA-1809-DOCs.V1.patch, KAFKA-1809.patch, 
> KAFKA-1809.v1.patch, KAFKA-1809.v2.patch, KAFKA-1809.v3.patch, 
> KAFKA-1809_2014-12-25_11:04:17.patch, KAFKA-1809_2014-12-27_12:03:35.patch, 
> KAFKA-1809_2014-12-27_12:22:56.patch, KAFKA-1809_2014-12-29_12:11:36.patch, 
> KAFKA-1809_2014-12-30_11:25:29.patch, KAFKA-1809_2014-12-30_18:48:46.patch, 
> KAFKA-1809_2015-01-05_14:23:57.patch, KAFKA-1809_2015-01-05_20:25:15.patch, 
> KAFKA-1809_2015-01-05_21:40:14.patch, KAFKA-1809_2015-01-06_11:46:22.patch, 
> KAFKA-1809_2015-01-13_18:16:21.patch, KAFKA-1809_2015-01-28_10:26:22.patch, 
> KAFKA-1809_2015-02-02_11:55:16.patch, KAFKA-1809_2015-02-03_10:45:31.patch, 
> KAFKA-1809_2015-02-03_10:46:42.patch, KAFKA-1809_2015-02-03_10:52:36.patch, 
> KAFKA-1809_2015-03-16_09:02:18.patch, KAFKA-1809_2015-03-16_09:40:49.patch, 
> KAFKA-1809_2015-03-27_15:04:10.patch, KAFKA-1809_2015-04-02_15:12:30.patch, 
> KAFKA-1809_2015-04-02_19:03:58.patch
>
>
> The goal is to eventually support different security mechanisms on different 
> ports. 
> Currently, brokers are defined as a host+port pair, and this definition appears 
> throughout the code base, so some refactoring is needed to support 
> multiple ports for a single broker.
> The detailed design is here: 
> https://cwiki.apache.org/confluence/display/KAFKA/Multiple+Listeners+for+Kafka+Brokers
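
The direction of the refactoring can be sketched roughly as follows (a hypothetical simplification, not the classes from the attached patches): instead of a single host+port, a broker owns a map from security protocol to endpoint, and clients look up the endpoint for the protocol they speak.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a multi-listener broker: one endpoint per
// security protocol instead of a single host+port pair.
public class BrokerSketch {
    static class EndPoint {
        final String host;
        final int port;
        EndPoint(String host, int port) { this.host = host; this.port = port; }
    }

    final int id;
    final Map<String, EndPoint> endPoints = new HashMap<>();

    BrokerSketch(int id) { this.id = id; }

    // Register one listener per security protocol, e.g. "PLAINTEXT", "SSL".
    void addEndPoint(String protocol, String host, int port) {
        endPoints.put(protocol, new EndPoint(host, port));
    }

    // Clients ask for the endpoint matching the protocol they support.
    EndPoint endPoint(String protocol) {
        EndPoint ep = endPoints.get(protocol);
        if (ep == null)
            throw new IllegalArgumentException("No endpoint for protocol " + protocol);
        return ep;
    }
}
```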



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1809) Refactor brokers to allow listening on multiple ports and IPs

2015-04-02 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1809:

Attachment: KAFKA-1809_2015-04-02_19:03:58.patch

> Refactor brokers to allow listening on multiple ports and IPs 
> --
>
> Key: KAFKA-1809
> URL: https://issues.apache.org/jira/browse/KAFKA-1809
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Attachments: KAFKA-1809-DOCs.V1.patch, KAFKA-1809.patch, 
> KAFKA-1809.v1.patch, KAFKA-1809.v2.patch, KAFKA-1809.v3.patch, 
> KAFKA-1809_2014-12-25_11:04:17.patch, KAFKA-1809_2014-12-27_12:03:35.patch, 
> KAFKA-1809_2014-12-27_12:22:56.patch, KAFKA-1809_2014-12-29_12:11:36.patch, 
> KAFKA-1809_2014-12-30_11:25:29.patch, KAFKA-1809_2014-12-30_18:48:46.patch, 
> KAFKA-1809_2015-01-05_14:23:57.patch, KAFKA-1809_2015-01-05_20:25:15.patch, 
> KAFKA-1809_2015-01-05_21:40:14.patch, KAFKA-1809_2015-01-06_11:46:22.patch, 
> KAFKA-1809_2015-01-13_18:16:21.patch, KAFKA-1809_2015-01-28_10:26:22.patch, 
> KAFKA-1809_2015-02-02_11:55:16.patch, KAFKA-1809_2015-02-03_10:45:31.patch, 
> KAFKA-1809_2015-02-03_10:46:42.patch, KAFKA-1809_2015-02-03_10:52:36.patch, 
> KAFKA-1809_2015-03-16_09:02:18.patch, KAFKA-1809_2015-03-16_09:40:49.patch, 
> KAFKA-1809_2015-03-27_15:04:10.patch, KAFKA-1809_2015-04-02_15:12:30.patch, 
> KAFKA-1809_2015-04-02_19:03:58.patch
>
>
> The goal is to eventually support different security mechanisms on different 
> ports. 
> Currently, brokers are defined as a host+port pair, and this definition appears 
> throughout the code base, so some refactoring is needed to support 
> multiple ports for a single broker.
> The detailed design is here: 
> https://cwiki.apache.org/confluence/display/KAFKA/Multiple+Listeners+for+Kafka+Brokers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Kafka-trunk #439

2015-04-02 Thread Apache Jenkins Server
See 

Changes:

[junrao] kafka-2016; RollingBounceTest takes long; patched by Ted Malaska; 
reviewed by Jun Rao

--
[...truncated 401 lines...]
org.apache.kafka.common.record.RecordTest > testFields[44] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[44] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[44] PASSED

org.apache.kafka.common.record.RecordTest > testFields[45] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[45] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[45] PASSED

org.apache.kafka.common.record.RecordTest > testFields[46] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[46] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[46] PASSED

org.apache.kafka.common.record.RecordTest > testFields[47] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[47] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[47] PASSED

org.apache.kafka.common.record.RecordTest > testFields[48] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[48] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[48] PASSED

org.apache.kafka.common.record.RecordTest > testFields[49] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[49] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[49] PASSED

org.apache.kafka.common.record.RecordTest > testFields[50] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[50] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[50] PASSED

org.apache.kafka.common.record.RecordTest > testFields[51] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[51] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[51] PASSED

org.apache.kafka.common.record.RecordTest > testFields[52] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[52] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[52] PASSED

org.apache.kafka.common.record.RecordTest > testFields[53] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[53] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[53] PASSED

org.apache.kafka.common.record.RecordTest > testFields[54] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[54] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[54] PASSED

org.apache.kafka.common.record.RecordTest > testFields[55] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[55] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[55] PASSED

org.apache.kafka.common.record.RecordTest > testFields[56] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[56] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[56] PASSED

org.apache.kafka.common.record.RecordTest > testFields[57] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[57] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[57] PASSED

org.apache.kafka.common.record.RecordTest > testFields[58] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[58] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[58] PASSED

org.apache.kafka.common.record.RecordTest > testFields[59] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[59] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[59] PASSED

org.apache.kafka.common.record.RecordTest > testFields[60] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[60] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[60] PASSED

org.apache.kafka.common.record.RecordTest > testFields[61] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[61] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[61] PASSED

org.apache.kafka.common.record.RecordTest > testFields[62] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[62] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[62] PASSED

org.apache.kafka.common.record.RecordTest > testFields[63] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[63] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[63] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > testIterator[0] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > testIterator[1] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > testIterator[2] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > testIterator[3] PASSED

org.apache.kafka.clients.ClientUtilsTest > testParseAndValidateAddresses PASSED

org.apache.kafka.clients.ClientUtilsTest > testNoPort PASSED

org.apache.kafka.clients.NetworkClientTest > testReadyAndDisconnect PASSED

org.apache.kafka.clients.NetworkClientTest > testSendToUnreadyNode PASSED

org.apache.kafka.clients.NetworkClientTest > testSimpleRequestResp

[jira] [Commented] (KAFKA-2016) RollingBounceTest takes long

2015-04-02 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393947#comment-14393947
 ] 

Ted Malaska commented on KAFKA-2016:


Wow thank you Jun.  I have learned so much about Kafka in the last couple of 
months and this has been an honor.  

I'll try to find another Jira in the next week or so and try my best to learn 
more.

Thanks Again.

> RollingBounceTest takes long
> 
>
> Key: KAFKA-2016
> URL: https://issues.apache.org/jira/browse/KAFKA-2016
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.1
>Reporter: Jun Rao
>Assignee: Ted Malaska
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-2016-1.patch, KAFKA-2016-2.patch
>
>
> RollingBounceTest.testRollingBounce() currently takes about 48 secs. This is 
> a bit too long.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2088) kafka-console-consumer.sh should not create zookeeper path when no brokers found and chroot was set in zookeeper.connect

2015-04-02 Thread Zhiqiang He (JIRA)
Zhiqiang He created KAFKA-2088:
--

 Summary: kafka-console-consumer.sh should not create zookeeper 
path when no brokers found and chroot was set in zookeeper.connect
 Key: KAFKA-2088
 URL: https://issues.apache.org/jira/browse/KAFKA-2088
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.8.2.1
Reporter: Zhiqiang He
Priority: Minor


server.properties:
zookeeper.connect = 192.168.0.10:2181,192.168.0.10:2181,192.168.0.10:2181/kafka


[root@stream client_0402]# kafka-console-consumer.sh --zookeeper 
192.168.0.10:2181 --topic test --from-beginning
[2015-04-03 18:15:28,599] WARN 
[console-consumer-63060_stream-1428056127990-d35ca648], no brokers found when 
trying to rebalance. (kafka.consumer.ZookeeperConsumerConnector)



default zookeeepr path:
[zk: 192.168.0.10:2181(CONNECTED) 10] ls /
[zookeeper, kafka, storm]



then "/consumer" and "/brokers" path was create in zookeeper.

[zk: 192.168.0.10:2181(CONNECTED) 2] ls /
[zookeeper, consumers, kafka, storm, brokers]

so it is a bug. producer should not create 
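
For reference, the mismatch described above can be summarized as a config/CLI fragment (addresses and paths as reported in this issue; nothing here is from a patch): the brokers register under the /kafka chroot, but a consumer pointed at the ZooKeeper root finds no brokers and still creates paths there.

```shell
# server.properties registers brokers under the /kafka chroot:
#   zookeeper.connect=192.168.0.10:2181/kafka

# Consumer started WITHOUT the chroot: finds no brokers, yet still
# creates /consumers and /brokers at the ZooKeeper root:
kafka-console-consumer.sh --zookeeper 192.168.0.10:2181 --topic test --from-beginning

# Matching the broker's chroot avoids polluting the root path:
kafka-console-consumer.sh --zookeeper 192.168.0.10:2181/kafka --topic test --from-beginning
```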




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2088) kafka-console-consumer.sh should not create zookeeper path when no brokers found and chroot was set in zookeeper.connect

2015-04-02 Thread Zhiqiang He (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhiqiang He updated KAFKA-2088:
---
Description: 
1. set server.properties
server.properties:
zookeeper.connect = 192.168.0.10:2181,192.168.0.10:2181,192.168.0.10:2181/kafka


[root@stream client_0402]# kafka-console-consumer.sh --zookeeper 
192.168.0.10:2181 --topic test --from-beginning
[2015-04-03 18:15:28,599] WARN 
[console-consumer-63060_stream-1428056127990-d35ca648], no brokers found when 
trying to rebalance. (kafka.consumer.ZookeeperConsumerConnector)



default zookeeepr path:
[zk: 192.168.0.10:2181(CONNECTED) 10] ls /
[zookeeper, kafka, storm]



then "/consumer" and "/brokers" path was create in zookeeper.

[zk: 192.168.0.10:2181(CONNECTED) 2] ls /
[zookeeper, consumers, kafka, storm, brokers]

so it is a bug. producer should not create 


  was:
server.properties:
zookeeper.connect = 192.168.0.10:2181,192.168.0.10:2181,192.168.0.10:2181/kafka


[root@stream client_0402]# kafka-console-consumer.sh --zookeeper 
192.168.0.10:2181 --topic test --from-beginning
[2015-04-03 18:15:28,599] WARN 
[console-consumer-63060_stream-1428056127990-d35ca648], no brokers found when 
trying to rebalance. (kafka.consumer.ZookeeperConsumerConnector)



default zookeeepr path:
[zk: 192.168.0.10:2181(CONNECTED) 10] ls /
[zookeeper, kafka, storm]



then "/consumer" and "/brokers" path was create in zookeeper.

[zk: 192.168.0.10:2181(CONNECTED) 2] ls /
[zookeeper, consumers, kafka, storm, brokers]

so it is a bug. producer should not create 



> kafka-console-consumer.sh should not create zookeeper path when no brokers 
> found and chroot was set in zookeeper.connect
> 
>
> Key: KAFKA-2088
> URL: https://issues.apache.org/jira/browse/KAFKA-2088
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1
>Reporter: Zhiqiang He
>Priority: Minor
>
> 1. set server.properties
> server.properties:
> zookeeper.connect = 
> 192.168.0.10:2181,192.168.0.10:2181,192.168.0.10:2181/kafka
> [root@stream client_0402]# kafka-console-consumer.sh --zookeeper 
> 192.168.0.10:2181 --topic test --from-beginning
> [2015-04-03 18:15:28,599] WARN 
> [console-consumer-63060_stream-1428056127990-d35ca648], no brokers found when 
> trying to rebalance. (kafka.consumer.ZookeeperConsumerConnector)
> default zookeeepr path:
> [zk: 192.168.0.10:2181(CONNECTED) 10] ls /
> [zookeeper, kafka, storm]
> then "/consumer" and "/brokers" path was create in zookeeper.
> [zk: 192.168.0.10:2181(CONNECTED) 2] ls /
> [zookeeper, consumers, kafka, storm, brokers]
> so it is a bug. producer should not create 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2088) kafka-console-consumer.sh should not create zookeeper path when no brokers found and chroot was set in zookeeper.connect

2015-04-02 Thread Zhiqiang He (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhiqiang He updated KAFKA-2088:
---
Description: 
1. set server.properties
server.properties:
zookeeper.connect = 192.168.0.10:2181,192.168.0.10:2181,192.168.0.10:2181/kafka

2 default zookeeepr path:
[zk: 192.168.0.10:2181(CONNECTED) 10] ls /
[zookeeper, kafka, storm]

3.start console consumer use a not exist topic and zookeeper address without 
chroot.
[root@stream client_0402]# kafka-console-consumer.sh --zookeeper 
192.168.0.10:2181 --topic test --from-beginning
[2015-04-03 18:15:28,599] WARN 
[console-consumer-63060_stream-1428056127990-d35ca648], no brokers found when 
trying to rebalance. (kafka.consumer.ZookeeperConsumerConnector)


4.then "/consumer" and "/brokers" path was create in zookeeper.

[zk: 192.168.0.10:2181(CONNECTED) 2] ls /
[zookeeper, consumers, kafka, storm, brokers]

so it is a bug. producer should not create 


  was:
1. set server.properties
server.properties:
zookeeper.connect = 192.168.0.10:2181,192.168.0.10:2181,192.168.0.10:2181/kafka


[root@stream client_0402]# kafka-console-consumer.sh --zookeeper 
192.168.0.10:2181 --topic test --from-beginning
[2015-04-03 18:15:28,599] WARN 
[console-consumer-63060_stream-1428056127990-d35ca648], no brokers found when 
trying to rebalance. (kafka.consumer.ZookeeperConsumerConnector)



default zookeeepr path:
[zk: 192.168.0.10:2181(CONNECTED) 10] ls /
[zookeeper, kafka, storm]



then "/consumer" and "/brokers" path was create in zookeeper.

[zk: 192.168.0.10:2181(CONNECTED) 2] ls /
[zookeeper, consumers, kafka, storm, brokers]

so it is a bug. producer should not create 



> kafka-console-consumer.sh should not create zookeeper path when no brokers 
> found and chroot was set in zookeeper.connect
> 
>
> Key: KAFKA-2088
> URL: https://issues.apache.org/jira/browse/KAFKA-2088
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1
>Reporter: Zhiqiang He
>Priority: Minor
>
> 1. set server.properties
> server.properties:
> zookeeper.connect = 
> 192.168.0.10:2181,192.168.0.10:2181,192.168.0.10:2181/kafka
> 2 default zookeeepr path:
> [zk: 192.168.0.10:2181(CONNECTED) 10] ls /
> [zookeeper, kafka, storm]
> 3.start console consumer use a not exist topic and zookeeper address without 
> chroot.
> [root@stream client_0402]# kafka-console-consumer.sh --zookeeper 
> 192.168.0.10:2181 --topic test --from-beginning
> [2015-04-03 18:15:28,599] WARN 
> [console-consumer-63060_stream-1428056127990-d35ca648], no brokers found when 
> trying to rebalance. (kafka.consumer.ZookeeperConsumerConnector)
> 4.then "/consumer" and "/brokers" path was create in zookeeper.
> [zk: 192.168.0.10:2181(CONNECTED) 2] ls /
> [zookeeper, consumers, kafka, storm, brokers]
> so it is a bug. producer should not create 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2089) MetadataTest transient failure

2015-04-02 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-2089:
--

 Summary: MetadataTest transient failure
 Key: KAFKA-2089
 URL: https://issues.apache.org/jira/browse/KAFKA-2089
 Project: Kafka
  Issue Type: Sub-task
Affects Versions: 0.8.3
Reporter: Jun Rao


org.apache.kafka.clients.MetadataTest > testMetadata FAILED
java.lang.AssertionError:
at org.junit.Assert.fail(Assert.java:91)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at org.junit.Assert.assertFalse(Assert.java:79)
at org.apache.kafka.clients.MetadataTest.tearDown(MetadataTest.java:34)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2088) kafka-console-consumer.sh should not create zookeeper path when no brokers found and chroot was set in zookeeper.connect

2015-04-02 Thread Zhiqiang He (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhiqiang He updated KAFKA-2088:
---
Description: 
1. Set server.properties:
zookeeper.connect = 192.168.0.10:2181,192.168.0.10:2181,192.168.0.10:2181/kafka

2. Default zookeeper paths:
[zk: 192.168.0.10:2181(CONNECTED) 10] ls /
[zookeeper, kafka, storm]

3. Start the console consumer with a nonexistent topic and a zookeeper address 
without the chroot:
[root@stream client_0402]# kafka-console-consumer.sh --zookeeper 
192.168.0.10:2181 --topic test --from-beginning
[2015-04-03 18:15:28,599] WARN 
[console-consumer-63060_stream-1428056127990-d35ca648], no brokers found when 
trying to rebalance. (kafka.consumer.ZookeeperConsumerConnector)


4. Then the "/consumers" and "/brokers" paths were created in zookeeper:

[zk: 192.168.0.10:2181(CONNECTED) 2] ls /
[zookeeper, consumers, kafka, storm, brokers]

So this is a bug: the consumer should not create the "/consumers" and "/brokers" paths.


  was:
1. set server.properties
server.properties:
zookeeper.connect = 192.168.0.10:2181,192.168.0.10:2181,192.168.0.10:2181/kafka

2 default zookeeepr path:
[zk: 192.168.0.10:2181(CONNECTED) 10] ls /
[zookeeper, kafka, storm]

3.start console consumer use a not exist topic and zookeeper address without 
chroot.
[root@stream client_0402]# kafka-console-consumer.sh --zookeeper 
192.168.0.10:2181 --topic test --from-beginning
[2015-04-03 18:15:28,599] WARN 
[console-consumer-63060_stream-1428056127990-d35ca648], no brokers found when 
trying to rebalance. (kafka.consumer.ZookeeperConsumerConnector)


4.then "/consumer" and "/brokers" path was create in zookeeper.

[zk: 192.168.0.10:2181(CONNECTED) 2] ls /
[zookeeper, consumers, kafka, storm, brokers]

so it is a bug. producer should not create 



> kafka-console-consumer.sh should not create zookeeper path when no brokers 
> found and chroot was set in zookeeper.connect
> 
>
> Key: KAFKA-2088
> URL: https://issues.apache.org/jira/browse/KAFKA-2088
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1
>Reporter: Zhiqiang He
>Priority: Minor
>
> 1. Set server.properties:
> zookeeper.connect = 
> 192.168.0.10:2181,192.168.0.10:2181,192.168.0.10:2181/kafka
> 2. Default zookeeper paths:
> [zk: 192.168.0.10:2181(CONNECTED) 10] ls /
> [zookeeper, kafka, storm]
> 3. Start the console consumer with a nonexistent topic and a zookeeper address 
> without the chroot:
> [root@stream client_0402]# kafka-console-consumer.sh --zookeeper 
> 192.168.0.10:2181 --topic test --from-beginning
> [2015-04-03 18:15:28,599] WARN 
> [console-consumer-63060_stream-1428056127990-d35ca648], no brokers found when 
> trying to rebalance. (kafka.consumer.ZookeeperConsumerConnector)
> 4. Then the "/consumers" and "/brokers" paths were created in zookeeper:
> [zk: 192.168.0.10:2181(CONNECTED) 2] ls /
> [zookeeper, consumers, kafka, storm, brokers]
> So this is a bug: the consumer should not create the "/consumers" and "/brokers" paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)