[jira] [Commented] (KAFKA-1618) Exception thrown when running console producer with no port number for the broker

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133254#comment-14133254
 ] 

Neha Narkhede commented on KAFKA-1618:
--

[~balaji.sesha...@dish.com] To answer your question - 
{noformat}
nnarkhed-mn1:tools nnarkhed$ pwd
/Users/nnarkhed/Projects/kafka-git-idea/core/src/main/scala/kafka/tools
nnarkhed-mn1:tools nnarkhed$ grep -R "broker-list" *
ConsoleProducer.scala:val brokerListOpt = parser.accepts("broker-list", 
"REQUIRED: The broker list string in the form HOST1:PORT1,HOST2:PORT2.")
ConsoleProducer.scala:  .describedAs("broker-list")
GetOffsetShell.scala:val brokerListOpt = parser.accepts("broker-list", 
"REQUIRED: The list of hostname and port of the server to connect to.")
ProducerPerformance.scala:val brokerListOpt = parser.accepts("broker-list", 
"REQUIRED: broker info (the list of broker host and port for bootstrap.")
ReplayLogProducer.scala:val brokerListOpt = parser.accepts("broker-list", 
"REQUIRED: the broker list must be specified.")
ReplicaVerificationTool.scala:val brokerListOpt = 
parser.accepts("broker-list", "REQUIRED: The list of hostname and port of the 
server to connect to.")
SimpleConsumerShell.scala:val brokerListOpt = parser.accepts("broker-list", 
"REQUIRED: The list of hostname and port of the server to connect to.")
{noformat}

A few review comments -
1. In lines 222-227, options.valueOf(brokerListOpt) is repeated several times. 
It would be great if you could extract the value into a local variable and use 
that.
2. I'm actually a -1 on guessing anything and hardcoding it. Currently, the 
defaults we have for the broker port are inconsistent and live in 2 different 
places (KafkaConfig and server.properties). Since users can start Kafka on a 
different port anyway, the tools would only be more confusing if they attempted 
to guess the port and got it wrong. I'd prefer exiting with a clear error 
message instead.
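
For illustration, a minimal sketch of both points (variable names are 
assumptions, not the attached patch):
{code}
// sketch only: read the option value once and fail fast on a missing port
val brokerList = options.valueOf(brokerListOpt)
brokerList.split(",").foreach { hostPort =>
  if (hostPort.split(":").length != 2) {
    System.err.println("--broker-list entries must be HOST:PORT, got '%s'".format(hostPort))
    System.exit(1)
  }
}
{code}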

> Exception thrown when running console producer with no port number for the 
> broker
> -
>
> Key: KAFKA-1618
> URL: https://issues.apache.org/jira/browse/KAFKA-1618
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.1.1
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>  Labels: newbie
> Fix For: 0.8.2
>
> Attachments: KAFKA-1618.patch
>
>
> When running console producer with just "localhost" as the broker list, I get 
> an ArrayIndexOutOfBoundsException.
> I expect either a clearer error about arguments or for the producer to 
> "guess" a default port.
> [root@shapira-1 bin]# ./kafka-console-producer.sh  --topic rufus1 
> --broker-list localhost
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> kafka.client.ClientUtils$$anonfun$parseBrokerList$1.apply(ClientUtils.scala:102)
>   at 
> kafka.client.ClientUtils$$anonfun$parseBrokerList$1.apply(ClientUtils.scala:97)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>   at kafka.client.ClientUtils$.parseBrokerList(ClientUtils.scala:97)
>   at 
> kafka.producer.BrokerPartitionInfo.<init>(BrokerPartitionInfo.scala:32)
>   at 
> kafka.producer.async.DefaultEventHandler.<init>(DefaultEventHandler.scala:41)
>   at kafka.producer.Producer.<init>(Producer.scala:59)
>   at kafka.producer.ConsoleProducer$.main(ConsoleProducer.scala:158)
>   at kafka.producer.ConsoleProducer.main(ConsoleProducer.scala)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1282) Disconnect idle socket connection in Selector

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1282:
-
Reviewer: Neha Narkhede

> Disconnect idle socket connection in Selector
> -
>
> Key: KAFKA-1282
> URL: https://issues.apache.org/jira/browse/KAFKA-1282
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2
>Reporter: Jun Rao
>Assignee: Neha Narkhede
>  Labels: newbie++
> Fix For: 0.9.0
>
> Attachments: 
> KAFKA-1282_Disconnect_idle_socket_connection_in_Selector.patch, 
> idleDisconnect.patch
>
>
> To reduce # socket connections, it would be useful for the new producer to 
> close socket connections that are idle. We can introduce a new producer 
> config for the idle time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1282) Disconnect idle socket connection in Selector

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133259#comment-14133259
 ] 

Neha Narkhede commented on KAFKA-1282:
--

Thanks for the patch, [~nmarasoi]! Looks good overall. A few review comments -

1. Do we really need connectionsLruTimeout in addition to connectionsMaxIdleMs? 
It seems to me that we are translating the idle connection timeout plugged in 
by the user into 100x what is configured. That's probably why Jun saw the 
behavior he reported earlier. 
2. I don't really share Jun's concern in #2, and we can state that more clearly 
in the comment that describes the new config in KafkaConfig. Connections that 
are idle for more than connections.max.idle.ms *may* get killed. I don't think 
the users particularly care about a hard guarantee of their connections getting 
killed here, so the simplicity of this approach is well justified.
3. I do think that adding a produce and fetch test where the connections get 
killed would be great.
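
For illustration, a minimal sketch of idle tracking driven directly by 
connections.max.idle.ms, with no separate LRU timeout (class and method names 
are assumptions, not the attached patch):
{code}
import java.util.{LinkedHashMap => JLinkedHashMap}
import scala.collection.mutable.ArrayBuffer

class IdleConnectionTracker(connectionsMaxIdleMs: Long) {
  // access-ordered map: the least recently active connection comes first
  private val lastActive = new JLinkedHashMap[String, Long](16, 0.75f, true)

  def recordActivity(connectionId: String) {
    lastActive.put(connectionId, System.currentTimeMillis())
  }

  // connections idle longer than the configured max, oldest first
  def expired(now: Long): Seq[String] = {
    val ids = ArrayBuffer[String]()
    val it = lastActive.entrySet().iterator()
    while (it.hasNext) {
      val e = it.next()
      if (now - e.getValue > connectionsMaxIdleMs) {
        ids += e.getKey
        it.remove()
      } else {
        return ids // access order: every remaining entry is newer
      }
    }
    ids
  }
}
{code}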

> Disconnect idle socket connection in Selector
> -
>
> Key: KAFKA-1282
> URL: https://issues.apache.org/jira/browse/KAFKA-1282
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2
>Reporter: Jun Rao
>Assignee: Neha Narkhede
>  Labels: newbie++
> Fix For: 0.9.0
>
> Attachments: 
> KAFKA-1282_Disconnect_idle_socket_connection_in_Selector.patch, 
> idleDisconnect.patch
>
>
> To reduce # socket connections, it would be useful for the new producer to 
> close socket connections that are idle. We can introduce a new producer 
> config for the idle time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1282) Disconnect idle socket connection in Selector

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1282:
-
Assignee: nicu marasoiu  (was: Neha Narkhede)

> Disconnect idle socket connection in Selector
> -
>
> Key: KAFKA-1282
> URL: https://issues.apache.org/jira/browse/KAFKA-1282
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2
>Reporter: Jun Rao
>Assignee: nicu marasoiu
>  Labels: newbie++
> Fix For: 0.9.0
>
> Attachments: 
> KAFKA-1282_Disconnect_idle_socket_connection_in_Selector.patch, 
> idleDisconnect.patch
>
>
> To reduce # socket connections, it would be useful for the new producer to 
> close socket connections that are idle. We can introduce a new producer 
> config for the idle time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1057) Trim whitespaces from user specified configs

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1057:
-
Assignee: (was: Neha Narkhede)

> Trim whitespaces from user specified configs
> 
>
> Key: KAFKA-1057
> URL: https://issues.apache.org/jira/browse/KAFKA-1057
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Reporter: Neha Narkhede
>  Labels: newbie
> Fix For: 0.9.0
>
>
> Whitespace in configs is a common problem that leads to config errors. It 
> would be nice if Kafka trimmed the whitespace from configs automatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1057) Trim whitespaces from user specified configs

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133261#comment-14133261
 ] 

Neha Narkhede commented on KAFKA-1057:
--

[~sriharsha] The example used by [~maasg] above is reproducible, though I may 
not get a chance to take a stab at this anytime soon.
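
For illustration, a minimal sketch of the trimming the description asks for 
(not an actual patch):
{code}
import java.util.Properties
import scala.collection.JavaConverters._

// sketch: trim surrounding whitespace from every key and value after loading
def trimmed(props: Properties): Properties = {
  val clean = new Properties()
  props.stringPropertyNames().asScala.foreach { k =>
    clean.setProperty(k.trim, props.getProperty(k).trim)
  }
  clean
}
{code}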

> Trim whitespaces from user specified configs
> 
>
> Key: KAFKA-1057
> URL: https://issues.apache.org/jira/browse/KAFKA-1057
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Reporter: Neha Narkhede
>  Labels: newbie
> Fix For: 0.9.0
>
>
> Whitespace in configs is a common problem that leads to config errors. It 
> would be nice if Kafka trimmed the whitespace from configs automatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 16092: Patch for KAFKA-1147

2014-09-14 Thread Neha Narkhede


> On Dec. 10, 2013, 4:14 p.m., Jun Rao wrote:
> > It seems that in the consumer, we have a separate request timeout and 
> > socket timeout. The producer only has a request timeout now. Perhaps we 
> > should add the socket timeout in the producer too?
> 
> Guozhang Wang wrote:
> Currently request.timeout.ms of producer config is used as both socket 
> timeout for the channel and produce ack timeout. Are you suggesting 
> decoupling them by adding another socket timeout parameter?

I suggest we keep the producer changes independent of this JIRA. Jun, please 
file the JIRA and propose your changes.


- Neha


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16092/#review30101
---


On Dec. 10, 2013, 10:31 p.m., Guozhang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/16092/
> ---
> 
> (Updated Dec. 10, 2013, 10:31 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1147
> https://issues.apache.org/jira/browse/KAFKA-1147
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1147.v2.2
> 
> 
> KAFKA-1147.v2.1
> 
> 
> KAFKA-1147.v2
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/consumer/ConsumerConfig.scala 
> c8c4212f62dd52158e6504cba87b300e6830e5fe 
>   core/src/main/scala/kafka/producer/SyncProducer.scala 
> 419156eb143fb04a305c91c964307a89ba5a82fa 
>   core/src/main/scala/kafka/producer/SyncProducerConfig.scala 
> 69b2d0c11bb1412ce76d566f285333c806be301a 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> a7e5b739454aec796070d95b003f20f79c84ef89 
> 
> Diff: https://reviews.apache.org/r/16092/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Guozhang Wang
> 
>



[jira] [Updated] (KAFKA-1147) Consumer socket timeout should be greater than fetch max wait

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1147:
-
Reviewer: Neha Narkhede

> Consumer socket timeout should be greater than fetch max wait
> -
>
> Key: KAFKA-1147
> URL: https://issues.apache.org/jira/browse/KAFKA-1147
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0, 0.8.1
>Reporter: Joel Koshy
>Assignee: Guozhang Wang
> Fix For: 0.8.2, 0.9.0
>
> Attachments: KAFKA-1147.patch, KAFKA-1147_2013-12-07_18:22:18.patch, 
> KAFKA-1147_2013-12-09_09:14:24.patch, KAFKA-1147_2013-12-10_14:31:46.patch
>
>
> From the mailing list:
> The consumer-config documentation states that "The actual timeout set
> will be max.fetch.wait + socket.timeout.ms." - however, that change
> seems to have been lost in the code a while ago - we should either fix the 
> doc or re-introduce the addition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1147) Consumer socket timeout should be greater than fetch max wait

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1147:
-
Status: Patch Available  (was: Open)

> Consumer socket timeout should be greater than fetch max wait
> -
>
> Key: KAFKA-1147
> URL: https://issues.apache.org/jira/browse/KAFKA-1147
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1, 0.8.0
>Reporter: Joel Koshy
>Assignee: Guozhang Wang
> Fix For: 0.8.2, 0.9.0
>
> Attachments: KAFKA-1147.patch, KAFKA-1147_2013-12-07_18:22:18.patch, 
> KAFKA-1147_2013-12-09_09:14:24.patch, KAFKA-1147_2013-12-10_14:31:46.patch
>
>
> From the mailing list:
> The consumer-config documentation states that "The actual timeout set
> will be max.fetch.wait + socket.timeout.ms." - however, that change
> seems to have been lost in the code a while ago - we should either fix the 
> doc or re-introduce the addition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1147) Consumer socket timeout should be greater than fetch max wait

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1147:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Fix checked in for the consumer and broker. Reverted the changes to the old 
producer.
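
For illustration, a minimal sketch of the kind of consumer-side check the fix 
implies (config names follow the docs quoted below; not necessarily the 
committed code):
{code}
def validate(socketTimeoutMs: Int, fetchWaitMaxMs: Int) {
  require(fetchWaitMaxMs <= socketTimeoutMs,
    "socket.timeout.ms should be at least fetch.wait.max.ms to avoid spurious socket timeouts")
}
{code}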

> Consumer socket timeout should be greater than fetch max wait
> -
>
> Key: KAFKA-1147
> URL: https://issues.apache.org/jira/browse/KAFKA-1147
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0, 0.8.1
>Reporter: Joel Koshy
>Assignee: Guozhang Wang
> Fix For: 0.8.2, 0.9.0
>
> Attachments: KAFKA-1147.patch, KAFKA-1147_2013-12-07_18:22:18.patch, 
> KAFKA-1147_2013-12-09_09:14:24.patch, KAFKA-1147_2013-12-10_14:31:46.patch
>
>
> From the mailing list:
> The consumer-config documentation states that "The actual timeout set
> will be max.fetch.wait + socket.timeout.ms." - however, that change
> seems to have been lost in the code a while ago - we should either fix the 
> doc or re-introduce the addition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1108) when controlled shutdown attempt fails, the reason is not always logged

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1108:
-
Labels: newbie  (was: )

> when controlled shutdown attempt fails, the reason is not always logged
> ---
>
> Key: KAFKA-1108
> URL: https://issues.apache.org/jira/browse/KAFKA-1108
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Rosenberg
>  Labels: newbie
> Fix For: 0.9.0
>
>
> In KafkaServer.controlledShutdown(), it initiates a controlled shutdown, and 
> then if there's a failure, it will retry the controlledShutdown.
> Looking at the code, there are 2 ways a retry could fail, one with an error 
> response from the controller, and this messaging code:
> {code}
> info("Remaining partitions to move: 
> %s".format(shutdownResponse.partitionsRemaining.mkString(",")))
> info("Error code from controller: %d".format(shutdownResponse.errorCode))
> {code}
> Alternatively, there could be an IOException, with this code executed:
> {code}
> catch {
>   case ioe: java.io.IOException =>
> channel.disconnect()
> channel = null
> // ignore and try again
> }
> {code}
> And then finally, in either case:
> {code}
>   if (!shutdownSuceeded) {
> Thread.sleep(config.controlledShutdownRetryBackoffMs)
> warn("Retrying controlled shutdown after the previous attempt 
> failed...")
>   }
> {code}
> It would be nice if the nature of the IOException were logged in either case 
> (I'd be happy with an ioe.getMessage() instead of a full stack trace, as 
> kafka in general tends to be too willing to dump IOException stack traces!).
> I suspect, in my case, the actual IOException is a socket timeout (as the 
> time between initial "Starting controlled shutdown" and the first 
> "Retrying..." message is usually about 35 seconds (the socket timeout + the 
> controlled shutdown retry backoff).  So, it would seem that really, the issue 
> in this case is that controlled shutdown is taking too long.  It would seem 
> sensible instead to have the controller report back to the server (before the 
> socket timeout) that more time is needed, etc.
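
For illustration, a minimal sketch of the requested logging, adapted from the 
snippets above (warn() assumed to come from the surrounding Logging trait):
{code}
catch {
  case ioe: java.io.IOException =>
    // surface the reason for the failed attempt before retrying
    warn("Controlled shutdown attempt failed: %s".format(ioe.getMessage))
    channel.disconnect()
    channel = null
    // ignore and try again
}
{code}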



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1043) Time-consuming FetchRequest could block other request in the response queue

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1043:
-
Labels: newbie++  (was: )

> Time-consuming FetchRequest could block other request in the response queue
> ---
>
> Key: KAFKA-1043
> URL: https://issues.apache.org/jira/browse/KAFKA-1043
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: newbie++
> Fix For: 0.9.0
>
>
> Since in SocketServer the processor that takes a request is also responsible 
> for writing the response for that request, we make each processor own its own 
> response queue. If a FetchRequest takes an irregularly long time to write to 
> the channel buffer, it blocks all other responses in that queue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1545) java.net.InetAddress.getLocalHost in KafkaHealthcheck.register may fail on some irregular hostnames

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1545:
-
Labels: newbie  (was: )

> java.net.InetAddress.getLocalHost in KafkaHealthcheck.register may fail on 
> some irregular hostnames
> ---
>
> Key: KAFKA-1545
> URL: https://issues.apache.org/jira/browse/KAFKA-1545
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: newbie
> Fix For: 0.9.0
>
>
> For example:
> kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic FAILED
> java.net.UnknownHostException: guwang-mn2: guwang-mn2: nodename nor 
> servname provided, or not known
> at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
> at kafka.server.KafkaHealthcheck.register(KafkaHealthcheck.scala:59)
> at kafka.server.KafkaHealthcheck.startup(KafkaHealthcheck.scala:45)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:121)
> at kafka.utils.TestUtils$.createServer(TestUtils.scala:130)
> at kafka.server.LogOffsetTest.setUp(LogOffsetTest.scala:53)
> Caused by:
> java.net.UnknownHostException: guwang-mn2: nodename nor servname 
> provided, or not known
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
> at 
> java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
> at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
> ... 5 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1034) Improve partition reassignment to optimize writes to zookeeper

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133269#comment-14133269
 ] 

Neha Narkhede commented on KAFKA-1034:
--

[~guozhang], I don't quite remember. Probably [~sriramsub] can help us 
understand this one.

> Improve partition reassignment to optimize writes to zookeeper
> --
>
> Key: KAFKA-1034
> URL: https://issues.apache.org/jira/browse/KAFKA-1034
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0, 0.8.1
>Reporter: Sriram Subramanian
>Assignee: Sriram Subramanian
>  Labels: newbie++
> Fix For: 0.8.2
>
>
> For ReassignPartition tool, check if optimizing the writes to ZK after every 
> replica reassignment is possible



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1034) Improve partition reassignment to optimize writes to zookeeper

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1034:
-
Labels: newbie++  (was: )

> Improve partition reassignment to optimize writes to zookeeper
> --
>
> Key: KAFKA-1034
> URL: https://issues.apache.org/jira/browse/KAFKA-1034
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0, 0.8.1
>Reporter: Sriram Subramanian
>Assignee: Sriram Subramanian
>  Labels: newbie++
> Fix For: 0.8.2
>
>
> For ReassignPartition tool, check if optimizing the writes to ZK after every 
> replica reassignment is possible



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-1002) Delete aliveLeaders field from LeaderAndIsrRequest

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede resolved KAFKA-1002.
--
Resolution: Fixed

> Delete aliveLeaders field from LeaderAndIsrRequest
> --
>
> Key: KAFKA-1002
> URL: https://issues.apache.org/jira/browse/KAFKA-1002
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1
>Reporter: Swapnil Ghike
> Fix For: 0.8.2
>
>
> After KAFKA-999 is committed, we don't need aliveLeaders in 
> LeaderAndIsrRequest.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1002) Delete aliveLeaders field from LeaderAndIsrRequest

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133271#comment-14133271
 ] 

Neha Narkhede commented on KAFKA-1002:
--

[~guozhang] We have renamed it appropriately and adjusted its usage correctly 
now, so we can close this JIRA.

> Delete aliveLeaders field from LeaderAndIsrRequest
> --
>
> Key: KAFKA-1002
> URL: https://issues.apache.org/jira/browse/KAFKA-1002
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1
>Reporter: Swapnil Ghike
> Fix For: 0.8.2
>
>
> After KAFKA-999 is committed, we don't need aliveLeaders in 
> LeaderAndIsrRequest.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (KAFKA-1002) Delete aliveLeaders field from LeaderAndIsrRequest

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede closed KAFKA-1002.


> Delete aliveLeaders field from LeaderAndIsrRequest
> --
>
> Key: KAFKA-1002
> URL: https://issues.apache.org/jira/browse/KAFKA-1002
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1
>Reporter: Swapnil Ghike
> Fix For: 0.8.2
>
>
> After KAFKA-999 is committed, we don't need aliveLeaders in 
> LeaderAndIsrRequest.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1618) Exception thrown when running console producer with no port number for the broker

2014-09-14 Thread BalajiSeshadri (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133272#comment-14133272
 ] 

BalajiSeshadri commented on KAFKA-1618:
---

[~nehanarkhede] Thanks a lot for the review comments. I agree 100% that 
defaulting to a port is not a good idea, since people pick whatever port they 
like based on availability (some architects like prime numbers :)). I will go 
ahead and print an error message if the port is not mentioned in the input.

> Exception thrown when running console producer with no port number for the 
> broker
> -
>
> Key: KAFKA-1618
> URL: https://issues.apache.org/jira/browse/KAFKA-1618
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.1.1
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>  Labels: newbie
> Fix For: 0.8.2
>
> Attachments: KAFKA-1618.patch
>
>
> When running console producer with just "localhost" as the broker list, I get 
> an ArrayIndexOutOfBoundsException.
> I expect either a clearer error about arguments or for the producer to 
> "guess" a default port.
> [root@shapira-1 bin]# ./kafka-console-producer.sh  --topic rufus1 
> --broker-list localhost
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> kafka.client.ClientUtils$$anonfun$parseBrokerList$1.apply(ClientUtils.scala:102)
>   at 
> kafka.client.ClientUtils$$anonfun$parseBrokerList$1.apply(ClientUtils.scala:97)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>   at kafka.client.ClientUtils$.parseBrokerList(ClientUtils.scala:97)
>   at 
> kafka.producer.BrokerPartitionInfo.<init>(BrokerPartitionInfo.scala:32)
>   at 
> kafka.producer.async.DefaultEventHandler.<init>(DefaultEventHandler.scala:41)
>   at kafka.producer.Producer.<init>(Producer.scala:59)
>   at kafka.producer.ConsoleProducer$.main(ConsoleProducer.scala:158)
>   at kafka.producer.ConsoleProducer.main(ConsoleProducer.scala)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1591) Clean-up Unnecessary stack trace in error/warn logs

2014-09-14 Thread Abhishek Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Sharma updated KAFKA-1591:
---
Attachment: Jira-1591- Complete Changes Patch.patch

Complete changes patch, as per Guozhang's latest comment. Changed files are - 

ClientUtils.scala
ConsumerFetcherManager.scala
TopicCount.scala
ZookeeperConsumerConnector.scala
BoundedByteBufferReceive.scala
SocketServer.scala
SyncProducer.scala
DefaultEventHandler.scala
AbstractFetcherThread.scala
KafkaApis.scala
MetadataCache.scala
ReplicaManager.scala

> Clean-up Unnecessary stack trace in error/warn logs
> ---
>
> Key: KAFKA-1591
> URL: https://issues.apache.org/jira/browse/KAFKA-1591
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Abhishek Sharma
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: Jira-1591- Complete Changes Patch.patch, 
> Jira-1591-SocketConnection-Warning.patch, 
> Jira1591-SendProducerRequest-Warning.patch
>
>
> Some of the unnecessary stack traces in error / warning log entries can 
> easily pollute the log files. Examples include KAFKA-1066, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1324) Debian packaging

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1324:
-
Labels: deb debian fpm newbie packaging  (was: deb debian fpm packaging)

> Debian packaging
> 
>
> Key: KAFKA-1324
> URL: https://issues.apache.org/jira/browse/KAFKA-1324
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
> Environment: linux
>Reporter: David Stendardi
>Priority: Minor
>  Labels: deb, debian, fpm, newbie, packaging
> Fix For: 0.9.0
>
> Attachments: packaging.patch
>
>
> The following patch adds a task releaseDeb to the gradle build :
> ./gradlew releaseDeb
> This task should create a debian package in core/build/distributions using 
> fpm :
> https://github.com/jordansissel/fpm.
> We decided to use fpm so other package types would be easy to provide in 
> further iterations (eg : rpm).
> *Some implementations details* :
> - We split the releaseTarGz task into two tasks : distDir, releaseTarGz.
> - We tried to use gradle builtin variables (project.name etc...)
> - By default the service will not start automatically so the user is free to 
> setup the service with custom configuration.
> Notes : 
>  * FPM is required and should be in the path.
> * FPM does not yet allow declaring /etc/default/kafka as a conffile (see : 
> https://github.com/jordansissel/fpm/issues/668)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-313) Add JSON output and looping options to ConsumerOffsetChecker

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-313:

Labels: newbie patch  (was: patch)

> Add JSON output and looping options to ConsumerOffsetChecker
> 
>
> Key: KAFKA-313
> URL: https://issues.apache.org/jira/browse/KAFKA-313
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dave DeMaagd
>Priority: Minor
>  Labels: newbie, patch
> Fix For: 0.8.2
>
> Attachments: KAFKA-313-2012032200.diff
>
>
> Adds:
> * '--loop N' - causes the program to loop forever, sleeping for up to N 
> seconds between loops (loop time minus collection time, unless that's less 
> than 0, at which point it will just run again immediately)
> * '--asjson' - display as a JSON string instead of the more human readable 
> output format.
> Neither of the above depends on the other (you can loop with the human 
> readable output, or do a single-shot execution with JSON output). Existing 
> behavior/output is maintained if neither of the above is used. Diff attached.
> Impacted files:
> core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala
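
For reference, usage with the patch applied could look like this (--loop and 
--asjson come from the attached patch, not a shipped release):
{code}
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
  --zkconnect localhost:2181 --group my-group --loop 60 --asjson
{code}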



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1313) Support adding replicas to existing topic partitions

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1313:
-
Labels: newbie++  (was: )

> Support adding replicas to existing topic partitions
> 
>
> Key: KAFKA-1313
> URL: https://issues.apache.org/jira/browse/KAFKA-1313
> Project: Kafka
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 0.8.0
>Reporter: Marc Labbe
>Priority: Critical
>  Labels: newbie++
> Fix For: 0.9.0
>
>
> There is currently no easy way to add replicas to existing topic 
> partitions.
> For example, topic create-test has been created with ReplicationFactor=1: 
> Topic:create-test  PartitionCount:3ReplicationFactor:1 Configs:
> Topic: create-test Partition: 0Leader: 1   Replicas: 1 Isr: 1
> Topic: create-test Partition: 1Leader: 2   Replicas: 2 Isr: 2
> Topic: create-test Partition: 2Leader: 3   Replicas: 3 Isr: 3
> I would like to increase the ReplicationFactor=2 (or more) so it shows up 
> like this instead.
> Topic:create-test  PartitionCount:3ReplicationFactor:2 Configs:
> Topic: create-test Partition: 0Leader: 1   Replicas: 1,2 Isr: 1,2
> Topic: create-test Partition: 1Leader: 2   Replicas: 2,3 Isr: 2,3
> Topic: create-test Partition: 2Leader: 3   Replicas: 3,1 Isr: 3,1
> Use cases for this:
> - adding brokers and thus increasing fault tolerance
> - fixing human errors for topics created with wrong values



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1313) Support adding replicas to existing topic partitions

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133275#comment-14133275
 ] 

Neha Narkhede commented on KAFKA-1313:
--

[~guozhang], [~charmalloc], Doing this through the partition reassignment tool 
is just a hack. It is more convenient to have a tool that increases the 
replication factor of a topic.
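
For reference, the reassignment-tool workaround referred to above looks roughly 
like this (replica lists taken from the example in the description):
{code}
# increase-rf.json:
# {"version":1,"partitions":[
#   {"topic":"create-test","partition":0,"replicas":[1,2]},
#   {"topic":"create-test","partition":1,"replicas":[2,3]},
#   {"topic":"create-test","partition":2,"replicas":[3,1]}]}
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file increase-rf.json --execute
{code}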

> Support adding replicas to existing topic partitions
> 
>
> Key: KAFKA-1313
> URL: https://issues.apache.org/jira/browse/KAFKA-1313
> Project: Kafka
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 0.8.0
>Reporter: Marc Labbe
>Priority: Critical
>  Labels: newbie++
> Fix For: 0.9.0
>
>
> There is currently no easy way to add replicas to existing topic 
> partitions.
> For example, topic create-test has been created with ReplicationFactor=1: 
> Topic:create-test  PartitionCount:3ReplicationFactor:1 Configs:
> Topic: create-test Partition: 0Leader: 1   Replicas: 1 Isr: 1
> Topic: create-test Partition: 1Leader: 2   Replicas: 2 Isr: 2
> Topic: create-test Partition: 2Leader: 3   Replicas: 3 Isr: 3
> I would like to increase the ReplicationFactor=2 (or more) so it shows up 
> like this instead.
> Topic:create-test  PartitionCount:3ReplicationFactor:2 Configs:
> Topic: create-test Partition: 0Leader: 1   Replicas: 1,2 Isr: 1,2
> Topic: create-test Partition: 1Leader: 2   Replicas: 2,3 Isr: 2,3
> Topic: create-test Partition: 2Leader: 3   Replicas: 3,1 Isr: 3,1
> Use cases for this:
> - adding brokers and thus increasing fault tolerance
> - fixing human errors for topics created with wrong values



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1313) Support adding replicas to existing topic partitions

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1313:
-
Fix Version/s: (was: 0.8.2)
   0.9.0

> Support adding replicas to existing topic partitions
> 
>
> Key: KAFKA-1313
> URL: https://issues.apache.org/jira/browse/KAFKA-1313
> Project: Kafka
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 0.8.0
>Reporter: Marc Labbe
>Priority: Critical
>  Labels: newbie++
> Fix For: 0.9.0
>
>
> There is currently no easy way to add replicas to existing topic 
> partitions.
> For example, topic create-test has been created with ReplicationFactor=1: 
> Topic:create-test  PartitionCount:3ReplicationFactor:1 Configs:
> Topic: create-test Partition: 0Leader: 1   Replicas: 1 Isr: 1
> Topic: create-test Partition: 1Leader: 2   Replicas: 2 Isr: 2
> Topic: create-test Partition: 2Leader: 3   Replicas: 3 Isr: 3
> I would like to increase the ReplicationFactor=2 (or more) so it shows up 
> like this instead.
> Topic:create-test  PartitionCount:3ReplicationFactor:2 Configs:
> Topic: create-test Partition: 0Leader: 1   Replicas: 1,2 Isr: 1,2
> Topic: create-test Partition: 1Leader: 2   Replicas: 2,3 Isr: 2,3
> Topic: create-test Partition: 2Leader: 3   Replicas: 3,1 Isr: 3,1
> Use cases for this:
> - adding brokers and thus increasing fault tolerance
> - fixing human errors for topics created with wrong values



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (KAFKA-1313) Support adding replicas to existing topic partitions

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede reopened KAFKA-1313:
--

> Support adding replicas to existing topic partitions
> 
>
> Key: KAFKA-1313
> URL: https://issues.apache.org/jira/browse/KAFKA-1313
> Project: Kafka
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 0.8.0
>Reporter: Marc Labbe
>Priority: Critical
>  Labels: newbie++
> Fix For: 0.9.0
>
>
> There is currently no easy way to add replicas to existing topic 
> partitions.
> For example, topic create-test has been created with ReplicationFactor=1: 
> Topic:create-test  PartitionCount:3ReplicationFactor:1 Configs:
> Topic: create-test Partition: 0Leader: 1   Replicas: 1 Isr: 1
> Topic: create-test Partition: 1Leader: 2   Replicas: 2 Isr: 2
> Topic: create-test Partition: 2Leader: 3   Replicas: 3 Isr: 3
> I would like to increase the ReplicationFactor=2 (or more) so it shows up 
> like this instead.
> Topic:create-test  PartitionCount:3ReplicationFactor:2 Configs:
> Topic: create-test Partition: 0Leader: 1   Replicas: 1,2 Isr: 1,2
> Topic: create-test Partition: 1Leader: 2   Replicas: 2,3 Isr: 2,3
> Topic: create-test Partition: 2Leader: 3   Replicas: 3,1 Isr: 3,1
> Use cases for this:
> - adding brokers and thus increasing fault tolerance
> - fixing human errors for topics created with wrong values



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1633) Data loss if broker is killed

2014-09-14 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133278#comment-14133278
 ] 

Jun Rao commented on KAFKA-1633:


What's the producer ack you used in your test? In general, we guarantee no data 
loss if (1) producer uses ack=-1 and (2) there are fewer failures than # 
replicas.
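
For reference, an illustrative old-producer configuration for that no-data-loss 
case (broker addresses are placeholders):
{code}
import java.util.Properties
import kafka.producer.ProducerConfig

val props = new Properties()
props.put("metadata.broker.list", "broker1:9092,broker2:9092") // placeholder hosts
props.put("request.required.acks", "-1") // wait for all in-sync replicas to ack
val config = new ProducerConfig(props)
{code}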

> Data loss if broker is killed
> -
>
> Key: KAFKA-1633
> URL: https://issues.apache.org/jira/browse/KAFKA-1633
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1.1
> Environment: centos 6.3, open jdk 7
>Reporter: gautham varada
>Assignee: Jun Rao
>
> We have a 2-node kafka cluster, and we experienced data loss when we did a 
> kill -9 on the brokers. We also found a workaround to prevent this loss.
> Replication factor :2, 4 partitions
> Steps to reproduce
> 1. Create a 2 node cluster with replication factor 2, num partitions 4
> 2. We used Jmeter to pump events
> 3. We used kafka web console to inspect the log size after the test
> During the test, we simultaneously killed the brokers using kill -9, then 
> tallied the metrics reported by jmeter against the size we observed in the 
> web console; we lost tons of messages.
> We went back and set the Producer retry to 1 instead of the default 3 and 
> repeated the above tests, and we did not lose a single message.
> We repeated the above tests with the Producer retry set to 3 and 1 with a 
> single broker and we observed data loss when the retry was 3 and no loss when 
> the retry was 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-328) Write unit test for kafka server startup and shutdown API

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133279#comment-14133279
 ] 

Neha Narkhede commented on KAFKA-328:
-

[~balaji.sesha...@dish.com] Thanks for looking into this. The tests mentioned 
in the description of the JIRA can be added to the ServerShutdownTest test 
suite. However, I would avoid adding any sleeps, as that defeats the point of 
the test. I'm guessing the fix for the right behavior already exists, but your 
tests would verify it. I will also take a look at KAFKA-1476 today.
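
For illustration, a minimal sketch of such a test (helper names like 
TestUtils.createBrokerConfig are assumptions):
{code}
import kafka.server.{KafkaConfig, KafkaServer}
import kafka.utils.TestUtils
import org.scalatest.junit.JUnit3Suite

class ServerStartupShutdownTest extends JUnit3Suite {
  def testRepeatedStartupShutdown() {
    val server = new KafkaServer(new KafkaConfig(TestUtils.createBrokerConfig(0)))
    server.startup()
    server.shutdown()
    server.startup() // must come back cleanly after a completed shutdown
    server.shutdown()
  }
}
{code}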

> Write unit test for kafka server startup and shutdown API 
> --
>
> Key: KAFKA-328
> URL: https://issues.apache.org/jira/browse/KAFKA-328
> Project: Kafka
>  Issue Type: Bug
>Reporter: Neha Narkhede
>Assignee: BalajiSeshadri
>  Labels: newbie
>
> Background discussion in KAFKA-320
> People often try to embed KafkaServer in an application that ends up calling 
> startup() and shutdown() repeatedly and sometimes in odd ways. To ensure this 
> works correctly we have to be very careful about cleaning up resources. This 
> is a good practice for making unit tests reliable anyway.
> A good first step would be to add some unit tests on startup and shutdown to 
> cover various cases:
> 1. A Kafka server can startup if it is not already starting up, if it is not 
> currently being shutdown, or if it hasn't been already started
> 2. A Kafka server can shutdown if it is not already shutting down, if it is 
> not currently starting up, or if it hasn't been already shutdown. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-328) Write unit test for kafka server startup and shutdown API

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-328:

Reviewer: Neha Narkhede

> Write unit test for kafka server startup and shutdown API 
> --
>
> Key: KAFKA-328
> URL: https://issues.apache.org/jira/browse/KAFKA-328
> Project: Kafka
>  Issue Type: Bug
>Reporter: Neha Narkhede
>Assignee: BalajiSeshadri
>  Labels: newbie
>
> Background discussion in KAFKA-320
> People often try to embed KafkaServer in an application that ends up calling 
> startup() and shutdown() repeatedly and sometimes in odd ways. To ensure this 
> works correctly we have to be very careful about cleaning up resources. This 
> is a good practice for making unit tests reliable anyway.
> A good first step would be to add some unit tests on startup and shutdown to 
> cover various cases:
> 1. A Kafka server can startup if it is not already starting up, if it is not 
> currently being shutdown, or if it hasn't been already started
> 2. A Kafka server can shutdown if it is not already shutting down, if it is 
> not currently starting up, or if it hasn't been already shutdown. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1476) Get a list of consumer groups

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1476:
-
Reviewer: Neha Narkhede

> Get a list of consumer groups
> -
>
> Key: KAFKA-1476
> URL: https://issues.apache.org/jira/browse/KAFKA-1476
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 0.8.1.1
>Reporter: Ryan Williams
>Assignee: BalajiSeshadri
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: KAFKA-1476-RENAME.patch, KAFKA-1476.patch
>
>
> It would be useful to have a way to get a list of consumer groups currently 
> active via some tool/script that ships with kafka. This would be helpful so 
> that the system tools can be explored more easily.
> For example, when running the ConsumerOffsetChecker, it requires a group 
> option
> bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --topic test --group 
> ?
> But, when just getting started with kafka, using the console producer and 
> consumer, it is not clear what value to use for the group option. If the 
> consumer groups could be listed, it would be clear what value to use.
> Background:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201405.mbox/%3cCAOq_b1w=slze5jrnakxvak0gu9ctdkpazak1g4dygvqzbsg...@mail.gmail.com%3e
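
For illustration, a minimal sketch of how such a tool could list groups for 
ZooKeeper-based consumers, which register under /consumers (the connection 
string is a placeholder):
{code}
import org.I0Itec.zkclient.ZkClient
import scala.collection.JavaConverters._

val zkClient = new ZkClient("localhost:2181", 30000, 30000)
zkClient.getChildren("/consumers").asScala.foreach(println)
{code}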



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1420) Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with TestUtils.createTopic in unit tests

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1420:
-
Reviewer: Jun Rao
Assignee: Jonathan Natkins  (was: Jun Rao)

> Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with 
> TestUtils.createTopic in unit tests
> --
>
> Key: KAFKA-1420
> URL: https://issues.apache.org/jira/browse/KAFKA-1420
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Jonathan Natkins
>  Labels: newbie
> Fix For: 0.8.2
>
> Attachments: KAFKA-1420.patch, KAFKA-1420_2014-07-30_11:18:26.patch, 
> KAFKA-1420_2014-07-30_11:24:55.patch, KAFKA-1420_2014-08-02_11:04:15.patch, 
> KAFKA-1420_2014-08-10_14:12:05.patch, KAFKA-1420_2014-08-10_23:03:46.patch
>
>
> This is a follow-up JIRA from KAFKA-1389.
> There are a bunch of places in the unit tests where we misuse 
> AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK to create topics, 
> where TestUtils.createTopic needs to be used instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1499) Broker-side compression configuration

2014-09-14 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1499:
---
Reviewer: Joel Koshy

> Broker-side compression configuration
> -
>
> Key: KAFKA-1499
> URL: https://issues.apache.org/jira/browse/KAFKA-1499
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Joel Koshy
>Assignee: Manikumar Reddy
>  Labels: newbie++
> Fix For: 0.8.2
>
> Attachments: KAFKA-1499.patch, KAFKA-1499.patch, 
> KAFKA-1499_2014-08-15_14:20:27.patch, KAFKA-1499_2014-08-21_21:44:27.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> A given topic can have messages in mixed compression codecs. i.e., it can
> also have a mix of uncompressed/compressed messages.
> It will be useful to support a broker-side configuration to recompress
> messages to a specific compression codec. i.e., all messages (for all
> topics) on the broker will be compressed to this codec. We could have
> per-topic overrides as well.
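
For illustration, a hypothetical sketch of what such a setting could look like 
(property name and values are assumptions, not a committed config):
{code}
# server.properties: recompress all incoming messages with gzip on the broker
compression.type=gzip
# or keep whatever codec the producer used:
# compression.type=producer
{code}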



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1420) Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with TestUtils.createTopic in unit tests

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133282#comment-14133282
 ] 

Neha Narkhede commented on KAFKA-1420:
--

[~junrao], please feel free to reassign for review.

> Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with 
> TestUtils.createTopic in unit tests
> --
>
> Key: KAFKA-1420
> URL: https://issues.apache.org/jira/browse/KAFKA-1420
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Jonathan Natkins
>  Labels: newbie
> Fix For: 0.8.2
>
> Attachments: KAFKA-1420.patch, KAFKA-1420_2014-07-30_11:18:26.patch, 
> KAFKA-1420_2014-07-30_11:24:55.patch, KAFKA-1420_2014-08-02_11:04:15.patch, 
> KAFKA-1420_2014-08-10_14:12:05.patch, KAFKA-1420_2014-08-10_23:03:46.patch
>
>
> This is a follow-up JIRA from KAFKA-1389.
> There are a bunch of places in the unit tests where we misuse 
> AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK to create topics, 
> where TestUtils.createTopic needs to be used instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-404) When using chroot path, create chroot on startup if it doesn't exist

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-404:

Reviewer: Neha Narkhede

> When using chroot path, create chroot on startup if it doesn't exist
> 
>
> Key: KAFKA-404
> URL: https://issues.apache.org/jira/browse/KAFKA-404
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.1
> Environment: CentOS 5.5, Linux 2.6.18-194.32.1.el5 x86_64 GNU/Linux
>Reporter: Jonathan Creasy
>  Labels: newbie, patch
> Fix For: 0.8.2
>
> Attachments: KAFKA-404-0.7.1.patch, KAFKA-404-0.8.patch, 
> KAFKA-404-auto-create-zookeeper-chroot-on-start-up-i.patch, 
> KAFKA-404-auto-create-zookeeper-chroot-on-start-up-v2.patch, 
> KAFKA-404-auto-create-zookeeper-chroot-on-start-up-v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 25236: Patch for KAFKA-1619

2014-09-14 Thread Jun Rao


> On Sept. 1, 2014, 7:56 p.m., Guozhang Wang wrote:
> > Shall we remove the whole perf directory as well?

Perf is not part of the git repo. It's probably created by the gradle script, 
which is fixed now.


- Jun


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25236/#review52001
---


On Sept. 1, 2014, 5:55 p.m., Jun Rao wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/25236/
> ---
> 
> (Updated Sept. 1, 2014, 5:55 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1619
> https://issues.apache.org/jira/browse/KAFKA-1619
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> fix readme
> 
> 
> fix typo in readme
> 
> 
> Diffs
> -
> 
>   README.md 8cd5cfd1e04dbef3c0878ff477eebea9ac749233 
>   build.gradle 6d6f1a4349d71bd1b56b7cc5c450046a75a83d6e 
>   settings.gradle 6041784d6f84c66bb1e9df2bc112e1b06c0bb000 
> 
> Diff: https://reviews.apache.org/r/25236/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jun Rao
> 
>



[jira] [Updated] (KAFKA-1619) perf dir can be removed

2014-09-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1619:
---
   Resolution: Fixed
Fix Version/s: 0.8.2
   Status: Resolved  (was: Patch Available)

Thanks for the reviews. Committed to trunk.

> perf dir can be removed
> ---
>
> Key: KAFKA-1619
> URL: https://issues.apache.org/jira/browse/KAFKA-1619
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2
>Reporter: Jun Rao
>Assignee: Jun Rao
>  Labels: newbie
> Fix For: 0.8.2
>
> Attachments: KAFKA-1619_2014-09-01_10:55:38.patch
>
>
> There is no code in perf/ any more. We can also remove the perf target in 
> build.gradle.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1624) building on JDK 8 fails

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1624:
-
Labels: newbie  (was: )

> building on JDK 8 fails
> ---
>
> Key: KAFKA-1624
> URL: https://issues.apache.org/jira/browse/KAFKA-1624
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: newbie
> Fix For: 0.9.0
>
>
> {code}
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
> support was removed in 8.0
> error: error while loading CharSequence, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/CharSequence.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 10)
> error: error while loading Comparator, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Comparator.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 20)
> error: error while loading AnnotatedElement, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)'
>  is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 76)
> error: error while loading Arrays, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Arrays.class)' is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 765)
> /tmp/sbt_53783b12/xsbt/ExtractAPI.scala:395: error: java.util.Comparator does 
> not take type parameters
>   private[this] val sortClasses = new Comparator[Symbol] {
> ^
> 5 errors found
> :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':core:compileScala'.
> > org.gradle.messaging.remote.internal.PlaceholderException (no error message)
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> Total time: 1 mins 48.298 secs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1482) Transient test failures for kafka.admin.DeleteTopicTest

2014-09-14 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-1482:
--
Attachment: kafka_tests.log
kafka_delete_topic_test.log

> Transient test failures for kafka.admin.DeleteTopicTest
> ---
>
> Key: KAFKA-1482
> URL: https://issues.apache.org/jira/browse/KAFKA-1482
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Sriharsha Chintalapani
>  Labels: newbie
> Fix For: 0.8.2
>
> Attachments: kafka_delete_topic_test.log, kafka_tests.log
>
>
> A couple of test cases have timing related transient test failures:
> kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic 
> FAILED
> junit.framework.AssertionFailedError: Admin path /admin/delete_topic/test 
> path not deleted even after a replica is restarted
> at junit.framework.Assert.fail(Assert.java:47)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:578)
> at 
> kafka.admin.DeleteTopicTest.verifyTopicDeletion(DeleteTopicTest.scala:333)
> at 
> kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic(DeleteTopicTest.scala:197)
> kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition FAILED
> junit.framework.AssertionFailedError: Replica logs not deleted after 
> delete topic is complete
> at junit.framework.Assert.fail(Assert.java:47)
> at junit.framework.Assert.assertTrue(Assert.java:20)
> at 
> kafka.admin.DeleteTopicTest.verifyTopicDeletion(DeleteTopicTest.scala:338)
> at 
> kafka.admin.DeleteTopicTest.testDeleteTopicDuringAddPartition(DeleteTopicTest.scala:216)
> kafka.admin.DeleteTopicTest > testRequestHandlingDuringDeleteTopic FAILED
> org.scalatest.junit.JUnitTestFailedError: fails with exception
> at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:102)
> at 
> org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:142)
> at org.scalatest.Assertions$class.fail(Assertions.scala:664)
> at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:142)
> at 
> kafka.admin.DeleteTopicTest.testRequestHandlingDuringDeleteTopic(DeleteTopicTest.scala:123)
> Caused by:
> org.scalatest.junit.JUnitTestFailedError: Test should fail because 
> the topic is being deleted
> at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:101)
> at 
> org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:142)
> at org.scalatest.Assertions$class.fail(Assertions.scala:644)
> at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:142)
> at 
> kafka.admin.DeleteTopicTest.testRequestHandlingDuringDeleteTopic(DeleteTopicTest.scala:120)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-1482) Transient test failures for kafka.admin.DeleteTopicTest

2014-09-14 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani resolved KAFKA-1482.
---
Resolution: Cannot Reproduce

Ran the full test suite in a loop of 100, and also the single DeleteTopicTest 
in a loop, but was unable to reproduce the test failures. Attached the test 
output. Please re-open if you see this test fail again.

> Transient test failures for kafka.admin.DeleteTopicTest
> ---
>
> Key: KAFKA-1482
> URL: https://issues.apache.org/jira/browse/KAFKA-1482
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Sriharsha Chintalapani
>  Labels: newbie
> Fix For: 0.8.2
>
> Attachments: kafka_delete_topic_test.log, kafka_tests.log
>
>
> A couple of test cases have timing related transient test failures:
> kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic 
> FAILED
> junit.framework.AssertionFailedError: Admin path /admin/delete_topic/test 
> path not deleted even after a replica is restarted
> at junit.framework.Assert.fail(Assert.java:47)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:578)
> at 
> kafka.admin.DeleteTopicTest.verifyTopicDeletion(DeleteTopicTest.scala:333)
> at 
> kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic(DeleteTopicTest.scala:197)
> kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition FAILED
> junit.framework.AssertionFailedError: Replica logs not deleted after 
> delete topic is complete
> at junit.framework.Assert.fail(Assert.java:47)
> at junit.framework.Assert.assertTrue(Assert.java:20)
> at 
> kafka.admin.DeleteTopicTest.verifyTopicDeletion(DeleteTopicTest.scala:338)
> at 
> kafka.admin.DeleteTopicTest.testDeleteTopicDuringAddPartition(DeleteTopicTest.scala:216)
> kafka.admin.DeleteTopicTest > testRequestHandlingDuringDeleteTopic FAILED
> org.scalatest.junit.JUnitTestFailedError: fails with exception
> at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:102)
> at 
> org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:142)
> at org.scalatest.Assertions$class.fail(Assertions.scala:664)
> at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:142)
> at 
> kafka.admin.DeleteTopicTest.testRequestHandlingDuringDeleteTopic(DeleteTopicTest.scala:123)
> Caused by:
> org.scalatest.junit.JUnitTestFailedError: Test should fail because 
> the topic is being deleted
> at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:101)
> at 
> org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:142)
> at org.scalatest.Assertions$class.fail(Assertions.scala:644)
> at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:142)
> at 
> kafka.admin.DeleteTopicTest.testRequestHandlingDuringDeleteTopic(DeleteTopicTest.scala:120)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1419) cross build for scala 2.11

2014-09-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1419:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the delta patch. Committed to trunk.
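
For anyone who wants to build against 2.11 locally, the invocation should look 
roughly like this (a sketch, assuming the scalaVersion gradle property described 
in the README and a current 2.11 release):
{noformat}
./gradlew -PscalaVersion=2.11.1 jar
{noformat}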

> cross build for scala 2.11
> --
>
> Key: KAFKA-1419
> URL: https://issues.apache.org/jira/browse/KAFKA-1419
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.8.1
>Reporter: Scott Clasen
>Assignee: Ivan Lyutov
>Priority: Blocker
> Fix For: 0.8.2, 0.8.1.2
>
> Attachments: KAFKA-1419-scalaBinaryVersion.patch, 
> KAFKA-1419-scalaBinaryVersion.patch, KAFKA-1419.patch, KAFKA-1419.patch, 
> KAFKA-1419_2014-07-28_15:05:16.patch, KAFKA-1419_2014-07-29_15:13:43.patch, 
> KAFKA-1419_2014-08-04_14:43:26.patch, KAFKA-1419_2014-08-05_12:51:16.patch, 
> KAFKA-1419_2014-08-07_10:17:34.patch, KAFKA-1419_2014-08-07_10:52:18.patch
>
>
> Please publish builds for scala 2.11, hopefully just needs a small tweak to 
> the gradle conf?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1438) Migrate kafka client tools

2014-09-14 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133296#comment-14133296
 ] 

Jun Rao commented on KAFKA-1438:


The follow-up patch has been committed as part of KAFKA-1419. Thanks for the 
help.

> Migrate kafka client tools
> --
>
> Key: KAFKA-1438
> URL: https://issues.apache.org/jira/browse/KAFKA-1438
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Sriharsha Chintalapani
>  Labels: newbie, tools, usability
> Fix For: 0.8.2
>
> Attachments: KAFKA-1438-windows_bat.patch, KAFKA-1438.patch, 
> KAFKA-1438.patch, KAFKA-1438_2014-05-27_11:45:29.patch, 
> KAFKA-1438_2014-05-27_12:16:00.patch, KAFKA-1438_2014-05-27_17:08:59.patch, 
> KAFKA-1438_2014-05-28_08:32:46.patch, KAFKA-1438_2014-05-28_08:36:28.patch, 
> KAFKA-1438_2014-05-28_08:40:22.patch, KAFKA-1438_2014-05-30_11:36:01.patch, 
> KAFKA-1438_2014-05-30_11:38:46.patch, KAFKA-1438_2014-05-30_11:42:32.patch
>
>
> Currently the console/perf client tools are scattered across different packages; 
> we should:
> 1. Move Consumer/ProducerPerformance and SimpleConsumerPerformance to tools 
> and remove the perf sub-project.
> 2. Move ConsoleConsumer from kafka.consumer to kafka.tools.
> 3. Move other consumer-related tools from kafka.consumer to kafka.tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: (info)how kafka delete the specific topic data

2014-09-14 Thread Jun Rao
We are working on an admin tool to delete an existing topic in trunk.
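For reference, on trunk that looks roughly like the following (a sketch, assuming 
the delete.topic.enable broker setting is enabled; the zookeeper host and topic 
name are placeholders):

  bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic your-topic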

Thanks,

Jun

On Sat, Sep 13, 2014 at 1:22 AM, Jacky.J.Wang (mis.cnxa01.Newegg) 43048 <
jacky.j.w...@newegg.com.invalid> wrote:

> Hello kafka
>
> I truncate Kafka data as follows:
>
> 1: stop the Kafka service
> 2: delete the zookeeper paths /broker/topics/topic and /consumers/group
> 3: delete the Kafka log dir
> 4: restart the Kafka service
> 5: recreate the topic info
> but this approach requires stopping the service, so how can I truncate Kafka
> topic data without stopping the Kafka service?
>
> Eagerly awaiting your reply, thanks
>
>
> Best regards,
> Jacky.J.Wang
> Eng Software Engineer,NESC-XA
> Newegg Tech (Xian) Co., Ltd.
> 15th to 16th floor, 01 Plaza, Xi’an Software Park, No. 72 Keji 2nd Road,
> Xi’an P.R.China(710075)
> Once you know, you Newegg.
>


[jira] [Commented] (KAFKA-1623) kafka is sometimes slow to accept connections

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133305#comment-14133305
 ] 

Neha Narkhede commented on KAFKA-1623:
--

[~hzshlomi], [~guozhang], One thing that might be worth trying is setting the 
backlog explicitly on line 236 of SocketServer.scala. Today we rely on the 
default, which I'm not sure is ideal. Agreed that this shouldn't be 
configurable, so it will require some performance testing to arrive at the 
right backlog value. 
Suggested change -
{noformat}
serverChannel.socket.bind(socketAddress, backlog)
{noformat}
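
In context, the change would look something like this (a sketch only; 
socketAddress is the existing local value, and the backlog below is a 
placeholder to be determined by performance testing):
{noformat}
// Acceptor setup in SocketServer.scala (sketch)
val serverChannel = ServerSocketChannel.open()
serverChannel.configureBlocking(false)
val backlog = 200  // placeholder value; validate with performance testing
serverChannel.socket.bind(socketAddress, backlog)
{noformat}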



> kafka is sometimes slow to accept connections
> -
>
> Key: KAFKA-1623
> URL: https://issues.apache.org/jira/browse/KAFKA-1623
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0
>Reporter: Shlomi Hazan
>  Labels: performance
>
> from SocketServer.scala:144
> the Acceptor can wait up to 500 ms before processing the accumulated FDs. 
> Also, the backlog of the acceptor socket seems not to be defined, which may 
> be problematic if the full 500 ms elapse before the thread wakes. 
> Setting the backlog is doable using the proper ServerSocket constructor, and it 
> may be better to provision it via configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] 0.8.2 release branch, "unofficial" release candidates(s), 0.8.1.2 release

2014-09-14 Thread Neha Narkhede
Thanks a lot for going through the unresolved tickets, Guozhang!

On Thu, Sep 4, 2014 at 8:19 AM, Guozhang Wang  wrote:

> Just made a pass over the unresolved tickets tagged for 0.8.2; I think many
> of them can be pushed to 0.8.3 / 0.9.
>
>
> On Wed, Sep 3, 2014 at 8:05 PM, Jonathan Weeks 
> wrote:
>
> >
> > +1 on a 0.8.1.2 release as described.
> >
> > I manually applied patches to cobble together a working gradle build for
> > kafka for scala 2.11, but would really appreciate an official release —
> > i.e. 0.8.1.2, as we also have other dependent libraries we use as well
> > (e.g. akka-kafka) that would be much easier to migrate and support if the
> > build was public and official.
> >
> > There were at least several others on the “users” list that expressed
> > interest in scala 2.11 support, who knows how many more “lurkers” are out
> > there.
> >
> > Best Regards,
> >
> > -Jonathan
> >
> > > Hey, I wanted to take a quick pulse to see if we are getting closer to
> a
> > > branch for 0.8.2.
> > >
> > > 1) There still seems to be a lot of open issues
> > >
> >
> https://issues.apache.org/jira/browse/KAFKA/fixforversion/12326167/?selectedTab=com.atlassian.jira.jira-projects-plugin:version-issues-panel
> > > and our 30 day summary is showing issues: 51 created and *34* resolved
> > and not
> > > sure how much of that we could really just decide to push off to 0.8.3
> or
> > > 0.9.0 vs working on 0.8.2 as stable for release.  There is already so
> > much
> > > goodness on trunk.  I appreciate the double commit pain especially as
> > trunk
> > > and branch drift (ugh).
> > >
> > > 2) Also, I wanted to float the idea of after making the 0.8.2 branch
> > that I
> > > would do some unofficial release candidates for folks to test prior to
> > > release and vote.  What I was thinking was I would build, upload and
> > stage
> > > like I was preparing artifacts for vote but let the community know to
> go
> > in
> > > and "have at it" well prior to the vote release.  We don't get a lot of
> > > community votes during a release but issues after (which is natural
> > because
> > > of how things are done).  I have seen four Apache projects doing this
> > very
> > > successfully not only have they had less iterations of RC votes
> > (sensitive
> > > to that myself) but the community kicked back issues they saw by giving
> > > them some "pre release" time to go through their own test and staging
> > > environments as the release are coming about.
> > >
> > > 3) Checking again on "should we have a 0.8.1.2" release if folks in the
> > > community find important features (this might be best asked on the user
> > > list maybe not sure) they don't want/can't wait for which wouldn't be
> too
> > > much pain/dangerous to back port. Two things that spring to the top of
> my
> > > head are 2.11 Scala support and fixing the source jars.  Both of these
> > are
> > > easy to patch personally I don't mind but want to gauge more from the
> > > community on this too.  I have heard gripes ad hoc from folks in direct
> > > communication but no complains really in the public forum and wanted to
> > > open the floor if folks had a need.
> > >
> > > 4) 0.9 work I feel is being held up some (or at least resourcing it
> from
> > my
> > > perspective).  We decided to hold up including SSL (even though we
> have a
> > > path for it). Jay did a nice update recently to the Security wiki
> which I
> > > think we should move forward with.  I have some more to
> add/change/update
> > > and want to start getting down to more details and getting specific
> > people
> > > working on specific tasks but without knowing what we are doing when it
> > is
> > > hard to manage.
> > >
> > > 5) I just updated https://issues.apache.org/jira/browse/KAFKA-1555 I
> > think
> > > it is a really important feature update doesn't have to be in 0.8.2 but
> > we
> > > need consensus (no pun intended). It fundamentally allows for data in
> min
> > > two rack requirement which A LOT of data requires for successful save
> to
> > > occur.
> > >
> > > /***
> > >  Joe Stein
> > >  Founder, Principal Consultant
> > >  Big Data Open Source Security LLC
> > >  http://www.stealth.ly
> > >  Twitter: @allthingshadoop 
> > > /
> >
>
>
>
> --
> -- Guozhang
>


[jira] [Commented] (KAFKA-1632) No such method error on KafkaStream.head

2014-09-14 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133314#comment-14133314
 ] 

Jun Rao commented on KAFKA-1632:


If there is no new message, the iterator on a KafkaStream blocks by default. If 
you set "consumer.timeout.ms", you will get a ConsumerTimeoutException once no 
message has arrived for that amount of time.
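
For example (a sketch against the Scala high-level consumer API; topic name, 
timeout, and the process function are placeholders):
{code}
val props = new java.util.Properties()
props.put("zookeeper.connect", "localhost:2181")
props.put("group.id", "testgroup")
props.put("consumer.timeout.ms", "5000") // fail iteration after 5s with no messages
val connector = kafka.consumer.Consumer.create(new kafka.consumer.ConsumerConfig(props))
val stream = connector.createMessageStreams(Map("mytopic" -> 1))("mytopic").head
val iter = stream.iterator()
try {
  while (iter.hasNext())
    process(iter.next().message()) // process() is a placeholder
} catch {
  case _: kafka.consumer.ConsumerTimeoutException =>
    // no message arrived within consumer.timeout.ms
}
{code}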

> No such method error on KafkaStream.head
> 
>
> Key: KAFKA-1632
> URL: https://issues.apache.org/jira/browse/KAFKA-1632
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: aarti gupta
>
> The following error is thrown (when I call KafkaStream.head(), as shown in 
> the code snippet below)
>  WARN -  java.lang.NoSuchMethodError: 
> kafka.consumer.KafkaStream.head()Lkafka/message/MessageAndMetadata;
> My use case, is that I want to block on the receive() method, and when a 
> message is published on the topic, I 'return the head' of the queue to the 
> calling method, that processes it.
> I do not use partitioning and have a single stream.
> import com.google.common.collect.Maps;
> import x.x.x.Task;
> import kafka.consumer.ConsumerConfig;
> import kafka.consumer.KafkaStream;
> import kafka.javaapi.consumer.ConsumerConnector;
> import kafka.javaapi.consumer.ZookeeperConsumerConnector;
> import kafka.message.MessageAndMetadata;
> import org.codehaus.jettison.json.JSONException;
> import org.codehaus.jettison.json.JSONObject;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> import java.io.IOException;
> import java.util.List;
> import java.util.Map;
> import java.util.Properties;
> import java.util.concurrent.Callable;
> import java.util.concurrent.Executors;
> /**
>  * @author agupta
>  */
> public class KafkaConsumerDelegate implements ConsumerDelegate {
>     private ConsumerConnector consumerConnector;
>     private String topicName;
>     private static Logger LOG = LoggerFactory.getLogger(KafkaProducerDelegate.class.getName());
>     private final Map<String, Integer> topicCount = Maps.newHashMap();
>     private Map<String, List<KafkaStream<byte[], byte[]>>> messageStreams;
>     private List<KafkaStream<byte[], byte[]>> kafkaStreams;
>     @Override
>     public Task receive(final boolean consumerConfirms) {
>         try {
>             LOG.info("Kafka consumer delegate listening on topic " + getTopicName());
>             kafkaStreams = messageStreams.get(getTopicName());
>             final KafkaStream<byte[], byte[]> kafkaStream = kafkaStreams.get(0);
>             return Executors.newSingleThreadExecutor().submit(new Callable<Task>() {
>                 @Override
>                 public Task call() throws Exception {
>                     final MessageAndMetadata<byte[], byte[]> messageAndMetadata = kafkaStream.head();
>                     final Task message = new Task() {
>                         @Override
>                         public byte[] getBytes() {
>                             return messageAndMetadata.message();
>                         }
>                     };
>                     return message;
>                 }
>             }).get();
>         } catch (Exception e) {
>             LOG.warn("Error in consumer " + e.getMessage());
>         }
>         return null;
>     }
>     @Override
>     public void initialize(JSONObject configData, boolean publisherAckMode) throws IOException {
>         try {
>             this.topicName = configData.getString("topicName");
>             LOG.info("Topic name is " + topicName);
>         } catch (JSONException e) {
>             e.printStackTrace();
>             LOG.error("Error parsing configuration", e);
>         }
>         Properties properties = new Properties();
>         properties.put("zookeeper.connect", "localhost:2181");
>         properties.put("group.id", "testgroup");
>         ConsumerConfig consumerConfig = new ConsumerConfig(properties);
>         // only one stream, and one topic (since we are not supporting partitioning)
>         topicCount.put(getTopicName(), 1);
>         consumerConnector = new ZookeeperConsumerConnector(consumerConfig);
>         messageStreams = consumerConnector.createMessageStreams(topicCount);
>     }
>     @Override
>     public void stop() throws IOException {
>         //TODO
>         throw new UnsupportedOperationException("Method Not Implemented");
>     }
>     public String getTopicName() {
>         return this.topicName;
>     }
> }
> Lastly, I am using the following binary 
> kafka_2.8.0-0.8.1.1  
> and the following maven dependency
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka_2.10</artifactId>
>     <version>0.8.1.1</version>
> </dependency>
> Any suggestions?
> Thanks
> aarti



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1369) snappy version update 1.1.x

2014-09-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1369:
---
   Resolution: Fixed
Fix Version/s: 0.8.2
 Assignee: Roger Hoover
   Status: Resolved  (was: Patch Available)

Thanks for the patch. +1 and committed to trunk.

> snappy version update 1.1.x
> ---
>
> Key: KAFKA-1369
> URL: https://issues.apache.org/jira/browse/KAFKA-1369
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.0, 0.8.1.1
> Environment: Red Hat Enterprise Linux Server release 5.8 (Tikanga)
> - x64 
>Reporter: thinker0
>Assignee: Roger Hoover
>Priority: Minor
> Fix For: 0.8.2
>
> Attachments: patch.diff
>
>
> https://github.com/xerial/snappy-java/issues/38 issue
> snappy version 1.1.x
> {code}
> org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:239)
> at org.xerial.snappy.Snappy.(Snappy.java:48)
> at 
> org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:351)
> at org.xerial.snappy.SnappyInputStream.rawRead(SnappyInputStream.java:159)
> at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:142)
> at java.io.InputStream.read(InputStream.java:101)
> at 
> kafka.message.ByteBufferMessageSet$$anonfun$decompress$1.apply$mcI$sp(ByteBufferMessageSet.scala:68)
> at 
> kafka.message.ByteBufferMessageSet$$anonfun$decompress$1.apply(ByteBufferMessageSet.scala:68)
> at 
> kafka.message.ByteBufferMessageSet$$anonfun$decompress$1.apply(ByteBufferMessageSet.scala:68)
> at scala.collection.immutable.Stream$.continually(Stream.scala:1129)
> at 
> kafka.message.ByteBufferMessageSet$.decompress(ByteBufferMessageSet.scala:68)
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.makeNextOuter(ByteBufferMessageSet.scala:178)
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:191)
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:145)
> at 
> kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
> at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
> {code}
> {code}
> /tmp] ldd snappy-1.0.5-libsnappyjava.so
> ./snappy-1.0.5-libsnappyjava.so: /usr/lib64/libstdc++.so.6: version 
> `GLIBCXX_3.4.9' not found (required by ./snappy-1.0.5-libsnappyjava.so)
> ./snappy-1.0.5-libsnappyjava.so: /usr/lib64/libstdc++.so.6: version 
> `GLIBCXX_3.4.11' not found (required by ./snappy-1.0.5-libsnappyjava.so)
>   linux-vdso.so.1 =>  (0x7fff81dfc000)
>   libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x2b554b43)
>   libm.so.6 => /lib64/libm.so.6 (0x2b554b731000)
>   libc.so.6 => /lib64/libc.so.6 (0x2b554b9b4000)
>   libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x2b554bd0c000)
>   /lib64/ld-linux-x86-64.so.2 (0x0033e2a0)
> {code}
> {code}
> /tmp] ldd snappy-1.1.1M1-be6ba593-9ac7-488e-953e-ba5fd9530ee1-libsnappyjava.so
> ldd: warning: you do not have execution permission for 
> `./snappy-1.1.1M1-be6ba593-9ac7-488e-953e-ba5fd9530ee1-libsnappyjava.so'
>   linux-vdso.so.1 =>  (0x7fff1c132000)
>   libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x2b9548319000)
>   libm.so.6 => /lib64/libm.so.6 (0x2b954861a000)
>   libc.so.6 => /lib64/libc.so.6 (0x2b954889d000)
>   libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x2b9548bf5000)
>   /lib64/ld-linux-x86-64.so.2 (0x0033e2a0)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1610) Local modifications to collections generated from mapValues will be lost

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1610:
-
Reviewer: Neha Narkhede

> Local modifications to collections generated from mapValues will be lost
> 
>
> Key: KAFKA-1610
> URL: https://issues.apache.org/jira/browse/KAFKA-1610
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Mayuresh Gharat
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: KAFKA-1610.patch, KAFKA-1610_2014-08-29_09:51:51.patch, 
> KAFKA-1610_2014-08-29_10:03:55.patch, KAFKA-1610_2014-09-03_11:27:50.patch
>
>
> In our current Scala code base we have 40+ usages of mapValues; however, it 
> has an important semantic difference from map: "map" creates a 
> new map collection instance, while "mapValues" just creates a view of the 
> original map, so any further value changes to the view will be 
> effectively lost.
> Example code:
> {code}
> scala> case class Test(i: Int, var j: Int) {}
> defined class Test
> scala> val a = collection.mutable.Map(1 -> 1)
> a: scala.collection.mutable.Map[Int,Int] = Map(1 -> 1)
> scala> val b = a.mapValues(v => Test(v, v))
> b: scala.collection.Map[Int,Test] = Map(1 -> Test(1,1))
> scala> val c = a.map(v => v._1 -> Test(v._2, v._2))
> c: scala.collection.mutable.Map[Int,Test] = Map(1 -> Test(1,1))
> scala> b.foreach(kv => kv._2.j = kv._2.j + 1)
> scala> b
> res1: scala.collection.Map[Int,Test] = Map(1 -> Test(1,1))
> scala> c.foreach(kv => kv._2.j = kv._2.j + 1)
> scala> c
> res3: scala.collection.mutable.Map[Int,Test] = Map(1 -> Test(1,2))
> scala> a.put(1,3)
> res4: Option[Int] = Some(1)
> scala> b
> res5: scala.collection.Map[Int,Test] = Map(1 -> Test(3,3))
> scala> c
> res6: scala.collection.mutable.Map[Int,Test] = Map(1 -> Test(1,2))
> {code}
> We need to go through all these mapValues usages to see if they should be 
> changed to map.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1558) AdminUtils.deleteTopic does not work

2014-09-14 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133329#comment-14133329
 ] 

Jun Rao commented on KAFKA-1558:


Sriharsha,

It would be useful to test if "delete topic" works under the following cases.
1. after the controller is restarted
2. after a soft failure of the controller (can be simulated by pausing the jvm 
for longer than the zk session timeout; see the sketch below)
3. after a topic's partitions have been reassigned to some other brokers
4. after running a preferred leader command
5. after a topic's partition has been increased
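
For case 2, something along these lines should do it (a sketch; the pid variable 
and sleep duration are placeholders, and the sleep must exceed 
zookeeper.session.timeout.ms):
{noformat}
kill -STOP $CONTROLLER_PID   # pause the controller JVM
sleep 30                     # longer than the zk session timeout
kill -CONT $CONTROLLER_PID   # resume; the zk session should have expired
{noformat}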

Thanks,

> AdminUtils.deleteTopic does not work
> 
>
> Key: KAFKA-1558
> URL: https://issues.apache.org/jira/browse/KAFKA-1558
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Henning Schmiedehausen
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.8.2
>
>
> the AdminUtils.deleteTopic method is implemented as
> {code}
> def deleteTopic(zkClient: ZkClient, topic: String) {
> ZkUtils.createPersistentPath(zkClient, 
> ZkUtils.getDeleteTopicPath(topic))
> }
> {code}
> but the DeleteTopicCommand actually does
> {code}
> zkClient = new ZkClient(zkConnect, 3, 3, ZKStringSerializer)
> zkClient.deleteRecursive(ZkUtils.getTopicPath(topic))
> {code}
> so I guess that the 'createPersistentPath' above should actually be 
> {code}
> def deleteTopic(zkClient: ZkClient, topic: String) {
> ZkUtils.deletePathRecursive(zkClient, ZkUtils.getTopicPath(topic))
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1558) AdminUtils.deleteTopic does not work

2014-09-14 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1413#comment-1413
 ] 

Sriharsha Chintalapani commented on KAFKA-1558:
---

Thanks [~junrao], will test those cases.

> AdminUtils.deleteTopic does not work
> 
>
> Key: KAFKA-1558
> URL: https://issues.apache.org/jira/browse/KAFKA-1558
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Henning Schmiedehausen
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.8.2
>
>
> the AdminUtils.deleteTopic method is implemented as
> {code}
> def deleteTopic(zkClient: ZkClient, topic: String) {
> ZkUtils.createPersistentPath(zkClient, 
> ZkUtils.getDeleteTopicPath(topic))
> }
> {code}
> but the DeleteTopicCommand actually does
> {code}
> zkClient = new ZkClient(zkConnect, 3, 3, ZKStringSerializer)
> zkClient.deleteRecursive(ZkUtils.getTopicPath(topic))
> {code}
> so I guess that the 'createPersistentPath' above should actually be 
> {code}
> def deleteTopic(zkClient: ZkClient, topic: String) {
> ZkUtils.deletePathRecursive(zkClient, ZkUtils.getTopicPath(topic))
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1610) Local modifications to collections generated from mapValues will be lost

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1414#comment-1414
 ] 

Neha Narkhede commented on KAFKA-1610:
--

Great catch, [~guozhang]. Thanks for submitting a patch, [~mgharat]. Reviewed 
and have one comment -
Would it be worth taking a close look at the following usage of mapValues as 
well?
./core/src/main/scala/kafka/log/LogManager.scala:  
this.recoveryPointCheckpoints(dir).write(recoveryPoints.get.mapValues(_.recoveryPoint))
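
If it does need to change, the materializing equivalent would be along these 
lines (a sketch only; I haven't verified whether the view semantics actually 
matter at this call site):
{code}
this.recoveryPointCheckpoints(dir).write(
  recoveryPoints.get.map { case (tp, log) => tp -> log.recoveryPoint })
{code}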

> Local modifications to collections generated from mapValues will be lost
> 
>
> Key: KAFKA-1610
> URL: https://issues.apache.org/jira/browse/KAFKA-1610
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Mayuresh Gharat
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: KAFKA-1610.patch, KAFKA-1610_2014-08-29_09:51:51.patch, 
> KAFKA-1610_2014-08-29_10:03:55.patch, KAFKA-1610_2014-09-03_11:27:50.patch
>
>
> In our current Scala code base we have 40+ usages of mapValues; however, it 
> has an important semantic difference from map: "map" creates a 
> new map collection instance, while "mapValues" just creates a view of the 
> original map, so any further value changes to the view will be 
> effectively lost.
> Example code:
> {code}
> scala> case class Test(i: Int, var j: Int) {}
> defined class Test
> scala> val a = collection.mutable.Map(1 -> 1)
> a: scala.collection.mutable.Map[Int,Int] = Map(1 -> 1)
> scala> val b = a.mapValues(v => Test(v, v))
> b: scala.collection.Map[Int,Test] = Map(1 -> Test(1,1))
> scala> val c = a.map(v => v._1 -> Test(v._2, v._2))
> c: scala.collection.mutable.Map[Int,Test] = Map(1 -> Test(1,1))
> scala> b.foreach(kv => kv._2.j = kv._2.j + 1)
> scala> b
> res1: scala.collection.Map[Int,Test] = Map(1 -> Test(1,1))
> scala> c.foreach(kv => kv._2.j = kv._2.j + 1)
> scala> c
> res3: scala.collection.mutable.Map[Int,Test] = Map(1 -> Test(1,2))
> scala> a.put(1,3)
> res4: Option[Int] = Some(1)
> scala> b
> res5: scala.collection.Map[Int,Test] = Map(1 -> Test(3,3))
> scala> c
> res6: scala.collection.mutable.Map[Int,Test] = Map(1 -> Test(1,2))
> {code}
> We need to go through all these mapValues usages to see if they should be 
> changed to map.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1632) No such method error on KafkaStream.head

2014-09-14 Thread aarti gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133345#comment-14133345
 ] 

aarti gupta commented on KAFKA-1632:


Agreed. So to process a single message, one would call head() on the stream, 
which would block until a message is available for consumption and then return 
it?
Am I missing some 'Runtime' dependency that could explain the runtime 
exception thrown on calling head()?



> No such method error on KafkaStream.head
> 
>
> Key: KAFKA-1632
> URL: https://issues.apache.org/jira/browse/KAFKA-1632
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: aarti gupta
>
> The following error is thrown (when I call KafkaStream.head(), as shown in 
> the code snippet below)
>  WARN -  java.lang.NoSuchMethodError: 
> kafka.consumer.KafkaStream.head()Lkafka/message/MessageAndMetadata;
> My use case, is that I want to block on the receive() method, and when a 
> message is published on the topic, I 'return the head' of the queue to the 
> calling method, that processes it.
> I do not use partitioning and have a single stream.
> import com.google.common.collect.Maps;
> import x.x.x.Task;
> import kafka.consumer.ConsumerConfig;
> import kafka.consumer.KafkaStream;
> import kafka.javaapi.consumer.ConsumerConnector;
> import kafka.javaapi.consumer.ZookeeperConsumerConnector;
> import kafka.message.MessageAndMetadata;
> import org.codehaus.jettison.json.JSONException;
> import org.codehaus.jettison.json.JSONObject;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> import java.io.IOException;
> import java.util.List;
> import java.util.Map;
> import java.util.Properties;
> import java.util.concurrent.Callable;
> import java.util.concurrent.Executors;
> /**
>  * @author agupta
>  */
> public class KafkaConsumerDelegate implements ConsumerDelegate {
>     private ConsumerConnector consumerConnector;
>     private String topicName;
>     private static Logger LOG = LoggerFactory.getLogger(KafkaProducerDelegate.class.getName());
>     private final Map<String, Integer> topicCount = Maps.newHashMap();
>     private Map<String, List<KafkaStream<byte[], byte[]>>> messageStreams;
>     private List<KafkaStream<byte[], byte[]>> kafkaStreams;
>     @Override
>     public Task receive(final boolean consumerConfirms) {
>         try {
>             LOG.info("Kafka consumer delegate listening on topic " + getTopicName());
>             kafkaStreams = messageStreams.get(getTopicName());
>             final KafkaStream<byte[], byte[]> kafkaStream = kafkaStreams.get(0);
>             return Executors.newSingleThreadExecutor().submit(new Callable<Task>() {
>                 @Override
>                 public Task call() throws Exception {
>                     final MessageAndMetadata<byte[], byte[]> messageAndMetadata = kafkaStream.head();
>                     final Task message = new Task() {
>                         @Override
>                         public byte[] getBytes() {
>                             return messageAndMetadata.message();
>                         }
>                     };
>                     return message;
>                 }
>             }).get();
>         } catch (Exception e) {
>             LOG.warn("Error in consumer " + e.getMessage());
>         }
>         return null;
>     }
>     @Override
>     public void initialize(JSONObject configData, boolean publisherAckMode) throws IOException {
>         try {
>             this.topicName = configData.getString("topicName");
>             LOG.info("Topic name is " + topicName);
>         } catch (JSONException e) {
>             e.printStackTrace();
>             LOG.error("Error parsing configuration", e);
>         }
>         Properties properties = new Properties();
>         properties.put("zookeeper.connect", "localhost:2181");
>         properties.put("group.id", "testgroup");
>         ConsumerConfig consumerConfig = new ConsumerConfig(properties);
>         // only one stream, and one topic (since we are not supporting partitioning)
>         topicCount.put(getTopicName(), 1);
>         consumerConnector = new ZookeeperConsumerConnector(consumerConfig);
>         messageStreams = consumerConnector.createMessageStreams(topicCount);
>     }
>     @Override
>     public void stop() throws IOException {
>         //TODO
>         throw new UnsupportedOperationException("Method Not Implemented");
>     }
>     public String getTopicName() {
>         return this.topicName;
>     }
> }
> Lastly, I am using the following binary 
> kafka_2.8.0-0.8.1.1  
> and the following maven dependency
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka_2.10</artifactId>
>     <version>0.8.1.1</version>
> </dependency>
> Any suggestions?
> Thanks
> aarti



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1382) Update zkVersion on partition state update failures

2014-09-14 Thread Jagbir (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133349#comment-14133349
 ] 

Jagbir commented on KAFKA-1382:
---

Not sure if this is related. We applied the final patch on 0.8.1.1 and noticed 
a NullPointerException that we hadn't seen earlier:

--8<
[2014-09-13 15:26:36,254] ERROR Error processing config change: 
(kafka.server.TopicConfigManager)
java.lang.NullPointerException
at 
scala.collection.convert.Wrappers$JListWrapper.length(Wrappers.scala:85)
at scala.collection.SeqLike$class.size(SeqLike.scala:106)
at scala.collection.AbstractSeq.size(Seq.scala:40)
at 
kafka.server.TopicConfigManager.kafka$server$TopicConfigManager$$processConfigChanges(TopicConfigManager.scala:89)
at 
kafka.server.TopicConfigManager$ConfigChangeListener$.handleChildChange(TopicConfigManager.scala:144)
at org.I0Itec.zkclient.ZkClient$7.run(ZkClient.java:570)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
[2014-09-13 15:26:36,273] INFO New leader is 1 
(kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
--8<

Our configuration is 3 brokers 3 zookeepers and topic replication set to 3.

Thanks,
jagbir

> Update zkVersion on partition state update failures
> ---
>
> Key: KAFKA-1382
> URL: https://issues.apache.org/jira/browse/KAFKA-1382
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.2, 0.8.1.2
>
> Attachments: KAFKA-1382.patch, KAFKA-1382_2014-05-30_21:19:21.patch, 
> KAFKA-1382_2014-05-31_15:50:25.patch, KAFKA-1382_2014-06-04_12:30:40.patch, 
> KAFKA-1382_2014-06-07_09:00:56.patch, KAFKA-1382_2014-06-09_18:23:42.patch, 
> KAFKA-1382_2014-06-11_09:37:22.patch, KAFKA-1382_2014-06-16_13:50:16.patch, 
> KAFKA-1382_2014-06-16_14:19:27.patch
>
>
> Our updateIsr code is currently:
>   private def updateIsr(newIsr: Set[Replica]) {
> debug("Updated ISR for partition [%s,%d] to %s".format(topic, 
> partitionId, newIsr.mkString(",")))
> val newLeaderAndIsr = new LeaderAndIsr(localBrokerId, leaderEpoch, 
> newIsr.map(r => r.brokerId).toList, zkVersion)
> // use the epoch of the controller that made the leadership decision, 
> instead of the current controller epoch
> val (updateSucceeded, newVersion) = 
> ZkUtils.conditionalUpdatePersistentPath(zkClient,
>   ZkUtils.getTopicPartitionLeaderAndIsrPath(topic, partitionId),
>   ZkUtils.leaderAndIsrZkData(newLeaderAndIsr, controllerEpoch), zkVersion)
> if (updateSucceeded){
>   inSyncReplicas = newIsr
>   zkVersion = newVersion
>   trace("ISR updated to [%s] and zkVersion updated to 
> [%d]".format(newIsr.mkString(","), zkVersion))
> } else {
>   info("Cached zkVersion [%d] not equal to that in zookeeper, skip 
> updating ISR".format(zkVersion))
> }
> We encountered an interesting scenario recently when a large producer fully
> saturated the broker's NIC for over an hour. The large volume of data led to
> a number of ISR shrinks (and subsequent expands). The NIC saturation
> affected the zookeeper client heartbeats and led to a session timeout. The
> timeline was roughly as follows:
> - Attempt to expand ISR
> - Expansion written to zookeeper (confirmed in zookeeper transaction logs)
> - Session timeout after around 13 seconds (the configured timeout is 20
>   seconds) so that lines up.
> - zkclient reconnects to zookeeper (with the same session ID) and retries
>   the write - but uses the old zkVersion. This fails because the zkVersion
>   has already been updated (above).
> - The ISR expand keeps failing after that and the only way to get out of it
>   is to bounce the broker.
> In the above code, if the zkVersion is different we should probably update
> the cached version and even retry the expansion until it succeeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1382) Update zkVersion on partition state update failures

2014-09-14 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133383#comment-14133383
 ] 

Sriharsha Chintalapani edited comment on KAFKA-1382 at 9/14/14 9:11 PM:


[~jhooda]
 can you share how you applied this patch against 0.8.1.1? I tried the following 
on the 0.8.1 branch:
git apply --check  ../KAFKA-1382_2014-06-16_14:19:27.patch 
I get:
error: patch failed: core/src/main/scala/kafka/cluster/Partition.scala:18
error: core/src/main/scala/kafka/cluster/Partition.scala: patch does not apply

I also tried it on the kafka-0.8.1.1-src.tgz from 
http://kafka.apache.org/downloads.html; it doesn't apply cleanly.



was (Author: sriharsha):
[~jhooda]
 can you share how did you apply this patch against 0.8.1.1. I tried doing the 
following 0.8.1 branch
git apply --check  ../KAFKA-1382_2014-06-16_14:19:27.patch 
I get 
error: patch failed: core/src/main/scala/kafka/cluster/Partition.scala:18
error: core/src/main/scala/kafka/cluster/Partition.scala: patch does not apply

I tried it on http://kafka.apache.org/downloads.html kafka-0.8.1.1-src.tgz it 
doesn't apply cleanly.


> Update zkVersion on partition state update failures
> ---
>
> Key: KAFKA-1382
> URL: https://issues.apache.org/jira/browse/KAFKA-1382
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.2, 0.8.1.2
>
> Attachments: KAFKA-1382.patch, KAFKA-1382_2014-05-30_21:19:21.patch, 
> KAFKA-1382_2014-05-31_15:50:25.patch, KAFKA-1382_2014-06-04_12:30:40.patch, 
> KAFKA-1382_2014-06-07_09:00:56.patch, KAFKA-1382_2014-06-09_18:23:42.patch, 
> KAFKA-1382_2014-06-11_09:37:22.patch, KAFKA-1382_2014-06-16_13:50:16.patch, 
> KAFKA-1382_2014-06-16_14:19:27.patch
>
>
> Our updateIsr code is currently:
>   private def updateIsr(newIsr: Set[Replica]) {
> debug("Updated ISR for partition [%s,%d] to %s".format(topic, 
> partitionId, newIsr.mkString(",")))
> val newLeaderAndIsr = new LeaderAndIsr(localBrokerId, leaderEpoch, 
> newIsr.map(r => r.brokerId).toList, zkVersion)
> // use the epoch of the controller that made the leadership decision, 
> instead of the current controller epoch
> val (updateSucceeded, newVersion) = 
> ZkUtils.conditionalUpdatePersistentPath(zkClient,
>   ZkUtils.getTopicPartitionLeaderAndIsrPath(topic, partitionId),
>   ZkUtils.leaderAndIsrZkData(newLeaderAndIsr, controllerEpoch), zkVersion)
> if (updateSucceeded){
>   inSyncReplicas = newIsr
>   zkVersion = newVersion
>   trace("ISR updated to [%s] and zkVersion updated to 
> [%d]".format(newIsr.mkString(","), zkVersion))
> } else {
>   info("Cached zkVersion [%d] not equal to that in zookeeper, skip 
> updating ISR".format(zkVersion))
> }
> We encountered an interesting scenario recently when a large producer fully
> saturated the broker's NIC for over an hour. The large volume of data led to
> a number of ISR shrinks (and subsequent expands). The NIC saturation
> affected the zookeeper client heartbeats and led to a session timeout. The
> timeline was roughly as follows:
> - Attempt to expand ISR
> - Expansion written to zookeeper (confirmed in zookeeper transaction logs)
> - Session timeout after around 13 seconds (the configured timeout is 20
>   seconds) so that lines up.
> - zkclient reconnects to zookeeper (with the same session ID) and retries
>   the write - but uses the old zkVersion. This fails because the zkVersion
>   has already been updated (above).
> - The ISR expand keeps failing after that and the only way to get out of it
>   is to bounce the broker.
> In the above code, if the zkVersion is different we should probably update
> the cached version and even retry the expansion until it succeeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1382) Update zkVersion on partition state update failures

2014-09-14 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133383#comment-14133383
 ] 

Sriharsha Chintalapani commented on KAFKA-1382:
---

[~jhooda]
 can you share how you applied this patch against 0.8.1.1? I tried the following 
on the 0.8.1 branch:
git apply --check  ../KAFKA-1382_2014-06-16_14:19:27.patch 
I get:
error: patch failed: core/src/main/scala/kafka/cluster/Partition.scala:18
error: core/src/main/scala/kafka/cluster/Partition.scala: patch does not apply

I also tried it on the kafka-0.8.1.1-src.tgz from 
http://kafka.apache.org/downloads.html; it doesn't apply cleanly.


> Update zkVersion on partition state update failures
> ---
>
> Key: KAFKA-1382
> URL: https://issues.apache.org/jira/browse/KAFKA-1382
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.2, 0.8.1.2
>
> Attachments: KAFKA-1382.patch, KAFKA-1382_2014-05-30_21:19:21.patch, 
> KAFKA-1382_2014-05-31_15:50:25.patch, KAFKA-1382_2014-06-04_12:30:40.patch, 
> KAFKA-1382_2014-06-07_09:00:56.patch, KAFKA-1382_2014-06-09_18:23:42.patch, 
> KAFKA-1382_2014-06-11_09:37:22.patch, KAFKA-1382_2014-06-16_13:50:16.patch, 
> KAFKA-1382_2014-06-16_14:19:27.patch
>
>
> Our updateIsr code is currently:
>   private def updateIsr(newIsr: Set[Replica]) {
> debug("Updated ISR for partition [%s,%d] to %s".format(topic, 
> partitionId, newIsr.mkString(",")))
> val newLeaderAndIsr = new LeaderAndIsr(localBrokerId, leaderEpoch, 
> newIsr.map(r => r.brokerId).toList, zkVersion)
> // use the epoch of the controller that made the leadership decision, 
> instead of the current controller epoch
> val (updateSucceeded, newVersion) = 
> ZkUtils.conditionalUpdatePersistentPath(zkClient,
>   ZkUtils.getTopicPartitionLeaderAndIsrPath(topic, partitionId),
>   ZkUtils.leaderAndIsrZkData(newLeaderAndIsr, controllerEpoch), zkVersion)
> if (updateSucceeded){
>   inSyncReplicas = newIsr
>   zkVersion = newVersion
>   trace("ISR updated to [%s] and zkVersion updated to 
> [%d]".format(newIsr.mkString(","), zkVersion))
> } else {
>   info("Cached zkVersion [%d] not equal to that in zookeeper, skip 
> updating ISR".format(zkVersion))
> }
> We encountered an interesting scenario recently when a large producer fully
> saturated the broker's NIC for over an hour. The large volume of data led to
> a number of ISR shrinks (and subsequent expands). The NIC saturation
> affected the zookeeper client heartbeats and led to a session timeout. The
> timeline was roughly as follows:
> - Attempt to expand ISR
> - Expansion written to zookeeper (confirmed in zookeeper transaction logs)
> - Session timeout after around 13 seconds (the configured timeout is 20
>   seconds) so that lines up.
> - zkclient reconnects to zookeeper (with the same session ID) and retries
>   the write - but uses the old zkVersion. This fails because the zkVersion
>   has already been updated (above).
> - The ISR expand keeps failing after that and the only way to get out of it
>   is to bounce the broker.
> In the above code, if the zkVersion is different we should probably update
> the cached version and even retry the expansion until it succeeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1382) Update zkVersion on partition state update failures

2014-09-14 Thread Jagbir (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133446#comment-14133446
 ] 

Jagbir commented on KAFKA-1382:
---

Hi Sriharsha,

This is what I did

prompt> git clone http://git-wip-us.apache.org/repos/asf/kafka.git
prompt> git checkout refs/tags/0.8.1.1 -b kafka-1382
prompt> wget 
https://issues.apache.org/jira/secure/attachment/12646741/KAFKA-1382.patch
prompt> patch -p1 < KAFKA-1382.patch

Thanks,
Jagbir

> Update zkVersion on partition state update failures
> ---
>
> Key: KAFKA-1382
> URL: https://issues.apache.org/jira/browse/KAFKA-1382
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.2, 0.8.1.2
>
> Attachments: KAFKA-1382.patch, KAFKA-1382_2014-05-30_21:19:21.patch, 
> KAFKA-1382_2014-05-31_15:50:25.patch, KAFKA-1382_2014-06-04_12:30:40.patch, 
> KAFKA-1382_2014-06-07_09:00:56.patch, KAFKA-1382_2014-06-09_18:23:42.patch, 
> KAFKA-1382_2014-06-11_09:37:22.patch, KAFKA-1382_2014-06-16_13:50:16.patch, 
> KAFKA-1382_2014-06-16_14:19:27.patch
>
>
> Our updateIsr code is currently:
>   private def updateIsr(newIsr: Set[Replica]) {
> debug("Updated ISR for partition [%s,%d] to %s".format(topic, 
> partitionId, newIsr.mkString(",")))
> val newLeaderAndIsr = new LeaderAndIsr(localBrokerId, leaderEpoch, 
> newIsr.map(r => r.brokerId).toList, zkVersion)
> // use the epoch of the controller that made the leadership decision, 
> instead of the current controller epoch
> val (updateSucceeded, newVersion) = 
> ZkUtils.conditionalUpdatePersistentPath(zkClient,
>   ZkUtils.getTopicPartitionLeaderAndIsrPath(topic, partitionId),
>   ZkUtils.leaderAndIsrZkData(newLeaderAndIsr, controllerEpoch), zkVersion)
> if (updateSucceeded){
>   inSyncReplicas = newIsr
>   zkVersion = newVersion
>   trace("ISR updated to [%s] and zkVersion updated to 
> [%d]".format(newIsr.mkString(","), zkVersion))
> } else {
>   info("Cached zkVersion [%d] not equal to that in zookeeper, skip 
> updating ISR".format(zkVersion))
> }
> We encountered an interesting scenario recently when a large producer fully
> saturated the broker's NIC for over an hour. The large volume of data led to
> a number of ISR shrinks (and subsequent expands). The NIC saturation
> affected the zookeeper client heartbeats and led to a session timeout. The
> timeline was roughly as follows:
> - Attempt to expand ISR
> - Expansion written to zookeeper (confirmed in zookeeper transaction logs)
> - Session timeout after around 13 seconds (the configured timeout is 20
>   seconds) so that lines up.
> - zkclient reconnects to zookeeper (with the same session ID) and retries
>   the write - but uses the old zkVersion. This fails because the zkVersion
>   has already been updated (above).
> - The ISR expand keeps failing after that and the only way to get out of it
>   is to bounce the broker.
> In the above code, if the zkVersion is different we should probably update
> the cached version and even retry the expansion until it succeeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1558) AdminUtils.deleteTopic does not work

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133468#comment-14133468
 ] 

Neha Narkhede commented on KAFKA-1558:
--

I'll add one more :)
After the controller broker is killed (kill -9)

> AdminUtils.deleteTopic does not work
> 
>
> Key: KAFKA-1558
> URL: https://issues.apache.org/jira/browse/KAFKA-1558
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Henning Schmiedehausen
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.8.2
>
>
> the AdminUtils.deleteTopic method is implemented as
> {code}
> def deleteTopic(zkClient: ZkClient, topic: String) {
> ZkUtils.createPersistentPath(zkClient, 
> ZkUtils.getDeleteTopicPath(topic))
> }
> {code}
> but the DeleteTopicCommand actually does
> {code}
> zkClient = new ZkClient(zkConnect, 3, 3, ZKStringSerializer)
> zkClient.deleteRecursive(ZkUtils.getTopicPath(topic))
> {code}
> so I guess that the 'createPersistentPath' above should actually be 
> {code}
> def deleteTopic(zkClient: ZkClient, topic: String) {
> ZkUtils.deletePathRecursive(zkClient, ZkUtils.getTopicPath(topic))
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Offset Request with timestamp

2014-09-14 Thread Neha Narkhede
I added this question to the FAQ as it frequently comes up -
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-HowdoIaccuratelygetoffsetsofmessagesforacertaintimestampusingOffsetFetchRequest
?
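
For completeness, the lookup itself looks roughly like this with the
SimpleConsumer API (a sketch; host, port, topic, partition, and clientId are
placeholders):

import kafka.api.{OffsetRequest, PartitionOffsetRequestInfo}
import kafka.common.TopicAndPartition
import kafka.consumer.SimpleConsumer

val consumer = new SimpleConsumer("localhost", 9092, 100000, 64 * 1024, "offset-lookup")
val tp = TopicAndPartition("mytopic", 0)
val target = System.currentTimeMillis() - 60 * 60 * 1000L // one hour ago
val request = OffsetRequest(Map(tp -> PartitionOffsetRequestInfo(target, 1)))
val offsets = consumer.getOffsetsBefore(request).partitionErrorAndOffsets(tp).offsets
// offsets.headOption, if present, is the start of the newest segment created
// no later than 'target'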


On Tue, Sep 2, 2014 at 1:48 PM, Guozhang Wang  wrote:

> The semantic of the offset API is to "return the latest possible offset of
> the message that is appended no later than the given timestamp". For
> implementation, it will get the starting offset of the log segment that is
> created no later than the given timestamp, and hence if your log segment
> contains data for a long period of time, then the offset API may return you
> just the starting offset of the current log segment.
>
> If your traffic is small and you still want a finer grained offset
> response, you can try to reduce the log segment size (default to 1 GB);
> however doing so will increase the number of file handlers with more
> frequent log segment rolling.
>
> Guozhang
>
>
> On Tue, Sep 2, 2014 at 10:21 AM, Manjunath Shivakumar <
> manjunath.shivaku...@betfair.com> wrote:
>
> > Hi,
> >
> > My usecase is to fetch the offsets for a given topic from X milliseconds
> > ago.
> > If I use the offset api
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetAPI
> >
> > to do this and pass in a timestamp of (now() - X), I get the earliest
> > offset in the current log segment and not the offset from X milliseconds
> > ago.
> >
> > Is this the correct usage or behaviour?
> >
> > Thanks,
> > Manju
> >
> > 
> > In order to protect our email recipients, Betfair Group use SkyScan from
> > MessageLabs to scan all Incoming and Outgoing mail for viruses.
> >
> > 
>
>
>
>
> --
> -- Guozhang
>


[jira] [Updated] (KAFKA-1622) project shouldn't require signing to build

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1622:
-
Labels: build newbie packaging  (was: newbie)

> project shouldn't require signing to build
> --
>
> Key: KAFKA-1622
> URL: https://issues.apache.org/jira/browse/KAFKA-1622
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>Priority: Blocker
>  Labels: build, newbie, packaging
> Fix For: 0.8.2
>
>
> we only need signing for uploadArchives; that is it.
> The project trunk failed to build due to some signing/license checks (the 
> diff I used to get things to build is here: 
> https://gist.github.com/dehora/7e3c0bd75bb2b5d87557)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1583) Kafka API Refactoring

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1583:
-
Reviewer: Jun Rao

> Kafka API Refactoring
> -
>
> Key: KAFKA-1583
> URL: https://issues.apache.org/jira/browse/KAFKA-1583
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.0
>
> Attachments: KAFKA-1583.patch, KAFKA-1583_2014-08-20_13:54:38.patch, 
> KAFKA-1583_2014-08-21_11:30:34.patch, KAFKA-1583_2014-08-27_09:44:50.patch, 
> KAFKA-1583_2014-09-01_18:07:42.patch, KAFKA-1583_2014-09-02_13:37:47.patch, 
> KAFKA-1583_2014-09-05_14:08:36.patch, KAFKA-1583_2014-09-05_14:55:38.patch
>
>
> This is the next step of KAFKA-1430. Details can be found at this page:
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+API+Refactoring



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1634) Update protocol wiki to reflect the new offset management feature

2014-09-14 Thread Neha Narkhede (JIRA)
Neha Narkhede created KAFKA-1634:


 Summary: Update protocol wiki to reflect the new offset management 
feature
 Key: KAFKA-1634
 URL: https://issues.apache.org/jira/browse/KAFKA-1634
 Project: Kafka
  Issue Type: Bug
Reporter: Neha Narkhede
Assignee: Joel Koshy
Priority: Blocker
 Fix For: 0.8.2


From the mailing list -

following up on this -- I think the online API docs for OffsetCommitRequest
still incorrectly refer to client-side timestamps:

https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetCommitRequest

Wasn't that removed, and isn't it now always handled server-side? Would one of
the devs mind updating the API spec wiki?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1554) Corrupt index found on clean startup

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1554:
-
Fix Version/s: 0.9.0

> Corrupt index found on clean startup
> 
>
> Key: KAFKA-1554
> URL: https://issues.apache.org/jira/browse/KAFKA-1554
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1
> Environment: ubuntu 12.04, oracle jdk 1.7
>Reporter: Alexis Midon
>Priority: Critical
> Fix For: 0.9.0
>
>
> On a clean start up, corrupted index files are found.
> After investigations, it appears that some pre-allocated index files are not 
> "compacted" correctly and the end of the file is full of zeroes.
> As a result, on start up, the last relative offset is zero which yields an 
> offset equal to the base offset.
> The workaround is to delete all index files of size 10MB (the size of the 
> pre-allocated files), and restart. Index files will be re-created.
> {code}
> find $your_data_directory -size 10485760c -name '*.index' # -delete
> {code}
> This issue might be related/similar to 
> https://issues.apache.org/jira/browse/KAFKA-1112
> {code}
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,696 
> INFO main kafka.server.KafkaServer.info - [Kafka Server 847605514], starting
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,698 
> INFO main kafka.server.KafkaServer.info - [Kafka Server 847605514], 
> Connecting to zookeeper on 
> zk-main0.XXX:2181,zk-main1.XXX:2181,zk-main2.:2181/production/kafka/main
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,708 
> INFO 
> ZkClient-EventThread-14-zk-main0.XXX.com:2181,zk-main1.XXX.com:2181,zk-main2.XXX.com:2181,zk-main3.XXX.com:2181,zk-main4.XXX.com:2181/production/kafka/main
>  org.I0Itec.zkclient.ZkEventThread.run - Starting ZkClient event thread.
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,714 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:zookeeper.version=3.3.3-1203054, built on 11/17/2011 05:47 GMT
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,714 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:host.name=i-6b948138.inst.aws.airbnb.com
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,714 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:java.version=1.7.0_55
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,715 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:java.vendor=Oracle Corporation
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,715 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:java.home=/usr/lib/jvm/jre-7-oracle-x64/jre
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,715 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:java.class.path=libs/snappy-java-1.0.5.jar:libs/scala-library-2.10.1.jar:libs/slf4j-api-1.7.2.jar:libs/jopt-simple-3.2.jar:libs/metrics-annotation-2.2.0.jar:libs/log4j-1.2.15.jar:libs/kafka_2.10-0.8.1.jar:libs/zkclient-0.3.jar:libs/zookeeper-3.3.4.jar:libs/metrics-core-2.2.0.jar
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,715 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,716 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:java.io.tmpdir=/tmp
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,716 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:java.compiler=
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,716 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:os.name=Linux
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,716 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:os.arch=amd64
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,717 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:os.version=3.2.0-61-virtual
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,717 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:user.name=kafka
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,717 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:user.home=/srv/kafka
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,717 
> INFO main org.apache.zookeeper.ZooKeeper.lo

[jira] [Commented] (KAFKA-1596) Exception in KafkaScheduler while shutting down

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133493#comment-14133493
 ] 

Neha Narkhede commented on KAFKA-1596:
--

[~jjkoshy], [~pachalko] Bump. Same question as [~guozhang].

> Exception in KafkaScheduler while shutting down
> ---
>
> Key: KAFKA-1596
> URL: https://issues.apache.org/jira/browse/KAFKA-1596
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>  Labels: newbie
> Attachments: kafka-1596.patch
>
>
> Saw this while trying to reproduce KAFKA-1577. It is very minor and won't 
> happen in practice but annoying nonetheless.
> {code}
> [2014-08-14 18:03:56,686] INFO zookeeper state changed (SyncConnected) 
> (org.I0Itec.zkclient.ZkClient)
> [2014-08-14 18:03:56,776] INFO Loading logs. (kafka.log.LogManager)
> [2014-08-14 18:03:56,783] INFO Logs loading complete. (kafka.log.LogManager)
> [2014-08-14 18:03:57,120] INFO Starting log cleanup with a period of 30 
> ms. (kafka.log.LogManager)
> [2014-08-14 18:03:57,124] INFO Starting log flusher with a default period of 
> 9223372036854775807 ms. (kafka.log.LogManager)
> [2014-08-14 18:03:57,158] INFO Awaiting socket connections on 0.0.0.0:9092. 
> (kafka.network.Acceptor)
> [2014-08-14 18:03:57,160] INFO [Socket Server on Broker 0], Started 
> (kafka.network.SocketServer)
> ^C[2014-08-14 18:03:57,203] INFO [Kafka Server 0], shutting down 
> (kafka.server.KafkaServer)
> [2014-08-14 18:03:57,211] INFO [Socket Server on Broker 0], Shutting down 
> (kafka.network.SocketServer)
> [2014-08-14 18:03:57,222] INFO [Socket Server on Broker 0], Shutdown 
> completed (kafka.network.SocketServer)
> [2014-08-14 18:03:57,226] INFO [Replica Manager on Broker 0]: Shut down 
> (kafka.server.ReplicaManager)
> [2014-08-14 18:03:57,228] INFO [ReplicaFetcherManager on broker 0] shutting 
> down (kafka.server.ReplicaFetcherManager)
> [2014-08-14 18:03:57,233] INFO [ReplicaFetcherManager on broker 0] shutdown 
> completed (kafka.server.ReplicaFetcherManager)
> [2014-08-14 18:03:57,274] INFO [Replica Manager on Broker 0]: Shut down 
> completely (kafka.server.ReplicaManager)
> [2014-08-14 18:03:57,276] INFO Shutting down. (kafka.log.LogManager)
> [2014-08-14 18:03:57,296] INFO Will not load MX4J, mx4j-tools.jar is not in 
> the classpath (kafka.utils.Mx4jLoader$)
> [2014-08-14 18:03:57,297] INFO Shutdown complete. (kafka.log.LogManager)
> [2014-08-14 18:03:57,301] FATAL Fatal error during KafkaServerStable startup. 
> Prepare to shutdown (kafka.server.KafkaServerStartable)
> java.lang.IllegalStateException: Kafka scheduler has not been started
> at kafka.utils.KafkaScheduler.ensureStarted(KafkaScheduler.scala:114)
> at kafka.utils.KafkaScheduler.schedule(KafkaScheduler.scala:95)
> at kafka.server.ReplicaManager.startup(ReplicaManager.scala:138)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:112)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:28)
> at kafka.Kafka$.main(Kafka.scala:46)
> at kafka.Kafka.main(Kafka.scala)
> [2014-08-14 18:03:57,324] INFO [Kafka Server 0], shutting down 
> (kafka.server.KafkaServer)
> [2014-08-14 18:03:57,326] INFO Terminate ZkClient event thread. 
> (org.I0Itec.zkclient.ZkEventThread)
> [2014-08-14 18:03:57,329] INFO Session: 0x147d5b0a51a closed 
> (org.apache.zookeeper.ZooKeeper)
> [2014-08-14 18:03:57,329] INFO EventThread shut down 
> (org.apache.zookeeper.ClientCnxn)
> [2014-08-14 18:03:57,329] INFO [Kafka Server 0], shut down completed 
> (kafka.server.KafkaServer)
> {code}
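To make the race concrete: the trace shows the startup thread calling schedule() after a ^C-triggered shutdown hook has already stopped the scheduler. A minimal sketch of that interaction, with illustrative names rather than the actual KafkaScheduler internals:
{code}
import java.util.concurrent.atomic.AtomicBoolean

// Illustrative sketch only, not Kafka's code: shows why schedule() throws
// IllegalStateException when shutdown races with an in-flight startup.
class SketchScheduler {
  private val started = new AtomicBoolean(false)

  def startup(): Unit = started.set(true)
  def shutdown(): Unit = started.set(false) // a ^C shutdown hook can run this early

  def schedule(name: String)(task: => Unit): Unit = {
    if (!started.get) // the ensureStarted check in the trace above
      throw new IllegalStateException("Kafka scheduler has not been started")
    // ... hand the task to a thread pool here ...
  }
}
{code}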



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1447) Controlled shutdown deadlock when trying to send state updates

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1447:
-
Priority: Critical  (was: Major)

> Controlled shutdown deadlock when trying to send state updates
> --
>
> Key: KAFKA-1447
> URL: https://issues.apache.org/jira/browse/KAFKA-1447
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.0
>Reporter: Sam Meder
>Assignee: Neha Narkhede
>Priority: Critical
>  Labels: newbie++
>
> We're seeing controlled shutdown indefinitely stuck on trying to send out 
> state change messages to the other brokers:
> [2014-05-03 04:01:30,580] INFO [Socket Server on Broker 4], Shutdown 
> completed (kafka.network.SocketServer)
> [2014-05-03 04:01:30,581] INFO [Kafka Request Handler on Broker 4], shutting 
> down (kafka.server.KafkaRequestHandlerPool)
> and stuck on:
> "kafka-request-handler-12" daemon prio=10 tid=0x7f1f04a66800 nid=0x6e79 
> waiting on condition [0x7f1ad5767000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> parking to wait for <0x00078e91dc20> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
> at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:349)
> at 
> kafka.controller.ControllerChannelManager.sendRequest(ControllerChannelManager.scala:57)
> locked <0x00078e91dc38> (a java.lang.Object)
> at kafka.controller.KafkaController.sendRequest(KafkaController.scala:655)
> at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$sendRequestsToBrokers$2.apply(ControllerChannelManager.scala:298)
> at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$sendRequestsToBrokers$2.apply(ControllerChannelManager.scala:290)
> at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
> at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
> at scala.collection.Iterator$class.foreach(Iterator.scala:772)
> at scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:157)
> at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:190)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:45)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:95)
> at 
> kafka.controller.ControllerBrokerRequestBatch.sendRequestsToBrokers(ControllerChannelManager.scala:290)
> at 
> kafka.controller.ReplicaStateMachine.handleStateChanges(ReplicaStateMachine.scala:97)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1$$anonfun$apply$mcV$sp$3.apply(KafkaController.scala:269)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1$$anonfun$apply$mcV$sp$3.apply(KafkaController.scala:253)
> at scala.Option.foreach(Option.scala:197)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1.apply$mcV$sp(KafkaController.scala:253)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1.apply(KafkaController.scala:253)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1.apply(KafkaController.scala:253)
> at kafka.utils.Utils$.inLock(Utils.scala:538)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3.apply(KafkaController.scala:252)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3.apply(KafkaController.scala:249)
> at scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:130)
> at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:275)
> at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:275)
> at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:275)
> at kafka.controller.KafkaController.shutdownBroker(KafkaController.scala:249)
> locked <0x00078b495af0> (a java.lang.Object)
> at kafka.server.KafkaApis.handleControlledShutdownRequest(KafkaApis.scala:264)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:192)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:42)
> at java.lang.Thread.run(Thread.java:722)
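The hang is easy to see in miniature: sendRequest() does a blocking put() into a bounded per-broker queue while the caller holds the controller lock, so once the sender thread that drains the queue is stuck or gone, the put() parks forever. A minimal sketch of that shape (assumed, not the controller's actual code):
{code}
import java.util.concurrent.{LinkedBlockingQueue, TimeUnit}

// Illustrative: a bounded queue whose draining thread has died or stalled.
val queue = new LinkedBlockingQueue[String](10)

// Blocking variant, matching the stack trace: parks indefinitely once full.
def sendRequestBlocking(request: String): Unit = queue.put(request)

// One illustrative alternative: fail loudly instead of deadlocking.
def sendRequestBounded(request: String): Unit = {
  if (!queue.offer(request, 30, TimeUnit.SECONDS))
    throw new IllegalStateException("request queue full; sender thread may be stuck")
}
{code}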



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1447) Controlled shutdown deadlock when trying to send state updates

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1447:
-
Labels: newbie++  (was: )

> Controlled shutdown deadlock when trying to send state updates
> --
>
> Key: KAFKA-1447
> URL: https://issues.apache.org/jira/browse/KAFKA-1447
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.0
>Reporter: Sam Meder
>Assignee: Neha Narkhede
>Priority: Critical
>  Labels: newbie++
>
> We're seeing controlled shutdown indefinitely stuck on trying to send out 
> state change messages to the other brokers:
> [2014-05-03 04:01:30,580] INFO [Socket Server on Broker 4], Shutdown 
> completed (kafka.network.SocketServer)
> [2014-05-03 04:01:30,581] INFO [Kafka Request Handler on Broker 4], shutting 
> down (kafka.server.KafkaRequestHandlerPool)
> and stuck on:
> "kafka-request-handler-12" daemon prio=10 tid=0x7f1f04a66800 nid=0x6e79 
> waiting on condition [0x7f1ad5767000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> parking to wait for <0x00078e91dc20> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
> at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:349)
> at 
> kafka.controller.ControllerChannelManager.sendRequest(ControllerChannelManager.scala:57)
> locked <0x00078e91dc38> (a java.lang.Object)
> at kafka.controller.KafkaController.sendRequest(KafkaController.scala:655)
> at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$sendRequestsToBrokers$2.apply(ControllerChannelManager.scala:298)
> at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$sendRequestsToBrokers$2.apply(ControllerChannelManager.scala:290)
> at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
> at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
> at scala.collection.Iterator$class.foreach(Iterator.scala:772)
> at scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:157)
> at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:190)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:45)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:95)
> at 
> kafka.controller.ControllerBrokerRequestBatch.sendRequestsToBrokers(ControllerChannelManager.scala:290)
> at 
> kafka.controller.ReplicaStateMachine.handleStateChanges(ReplicaStateMachine.scala:97)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1$$anonfun$apply$mcV$sp$3.apply(KafkaController.scala:269)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1$$anonfun$apply$mcV$sp$3.apply(KafkaController.scala:253)
> at scala.Option.foreach(Option.scala:197)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1.apply$mcV$sp(KafkaController.scala:253)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1.apply(KafkaController.scala:253)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1.apply(KafkaController.scala:253)
> at kafka.utils.Utils$.inLock(Utils.scala:538)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3.apply(KafkaController.scala:252)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3.apply(KafkaController.scala:249)
> at scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:130)
> at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:275)
> at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:275)
> at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:275)
> at kafka.controller.KafkaController.shutdownBroker(KafkaController.scala:249)
> locked <0x00078b495af0> (a java.lang.Object)
> at kafka.server.KafkaApis.handleControlledShutdownRequest(KafkaApis.scala:264)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:192)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:42)
> at java.lang.Thread.run(Thread.java:722)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1447) Controlled shutdown deadlock when trying to send state updates

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1447:
-
Assignee: (was: Neha Narkhede)

> Controlled shutdown deadlock when trying to send state updates
> --
>
> Key: KAFKA-1447
> URL: https://issues.apache.org/jira/browse/KAFKA-1447
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.0
>Reporter: Sam Meder
>Priority: Critical
>  Labels: newbie++
>
> We're seeing controlled shutdown indefinitely stuck on trying to send out 
> state change messages to the other brokers:
> [2014-05-03 04:01:30,580] INFO [Socket Server on Broker 4], Shutdown 
> completed (kafka.network.SocketServer)
> [2014-05-03 04:01:30,581] INFO [Kafka Request Handler on Broker 4], shutting 
> down (kafka.server.KafkaRequestHandlerPool)
> and stuck on:
> "kafka-request-handler-12" daemon prio=10 tid=0x7f1f04a66800 nid=0x6e79 
> waiting on condition [0x7f1ad5767000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> parking to wait for <0x00078e91dc20> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
> at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:349)
> at 
> kafka.controller.ControllerChannelManager.sendRequest(ControllerChannelManager.scala:57)
> locked <0x00078e91dc38> (a java.lang.Object)
> at kafka.controller.KafkaController.sendRequest(KafkaController.scala:655)
> at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$sendRequestsToBrokers$2.apply(ControllerChannelManager.scala:298)
> at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$sendRequestsToBrokers$2.apply(ControllerChannelManager.scala:290)
> at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
> at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
> at scala.collection.Iterator$class.foreach(Iterator.scala:772)
> at scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:157)
> at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:190)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:45)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:95)
> at 
> kafka.controller.ControllerBrokerRequestBatch.sendRequestsToBrokers(ControllerChannelManager.scala:290)
> at 
> kafka.controller.ReplicaStateMachine.handleStateChanges(ReplicaStateMachine.scala:97)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1$$anonfun$apply$mcV$sp$3.apply(KafkaController.scala:269)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1$$anonfun$apply$mcV$sp$3.apply(KafkaController.scala:253)
> at scala.Option.foreach(Option.scala:197)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1.apply$mcV$sp(KafkaController.scala:253)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1.apply(KafkaController.scala:253)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1.apply(KafkaController.scala:253)
> at kafka.utils.Utils$.inLock(Utils.scala:538)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3.apply(KafkaController.scala:252)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3.apply(KafkaController.scala:249)
> at scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:130)
> at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:275)
> at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:275)
> at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:275)
> at kafka.controller.KafkaController.shutdownBroker(KafkaController.scala:249)
> locked <0x00078b495af0> (a java.lang.Object)
> at kafka.server.KafkaApis.handleControlledShutdownRequest(KafkaApis.scala:264)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:192)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:42)
> at java.lang.Thread.run(Thread.java:722)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 25136: Patch for KAFKA-1610

2014-09-14 Thread Neha Narkhede

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25136/#review53292
---



core/src/main/scala/kafka/server/KafkaApis.scala


Too long a line. Could you please chop it up and indent properly?



core/src/main/scala/kafka/server/KafkaApis.scala


Same here


- Neha Narkhede


On Sept. 3, 2014, 6:27 p.m., Mayuresh Gharat wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/25136/
> ---
> 
> (Updated Sept. 3, 2014, 6:27 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1610
> https://issues.apache.org/jira/browse/KAFKA-1610
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Added comments explaining the changes and reverted some changes as per the 
> comments on the reviewboard
> 
> 
> Removed the unnecessary import
> 
> 
> Made changes to comments as per the suggestions on the reviewboard
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/admin/ReassignPartitionsCommand.scala 
> 691d69a49a240f38883d2025afaec26fd61281b5 
>   core/src/main/scala/kafka/controller/KafkaController.scala 
> 8ab4a1b8072c9dd187a9a6e94138b725d1f1b153 
>   core/src/main/scala/kafka/server/DelayedFetch.scala 
> e0f14e25af03e6d4344386dcabc1457ee784d345 
>   core/src/main/scala/kafka/server/DelayedProduce.scala 
> 9481508fc2d6140b36829840c337e557f3d090da 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> c584b559416b3ee4bcbec5966be4891e0a03eefb 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 28711182aaa70eaa623de858bc063cb2613b2a4d 
>   core/src/main/scala/kafka/tools/ReplicaVerificationTool.scala 
> af4783646803e58714770c21f8c3352370f26854 
>   core/src/test/scala/unit/kafka/server/LeaderElectionTest.scala 
> c2ba07c5fdbaf0e65ca033b2e4d88f45a8a15b2e 
> 
> Diff: https://reviews.apache.org/r/25136/diff/
> 
> 
> Testing
> ---
> 
> Ran the unit tests and everything passed and the build succeeded
> 
> 
> Thanks,
> 
> Mayuresh Gharat
> 
>



[jira] [Commented] (KAFKA-1447) Controlled shutdown deadlock when trying to send state updates

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133495#comment-14133495
 ] 

Neha Narkhede commented on KAFKA-1447:
--

[~rudolf.sima], it would help immensely if you could share the controller logs 
and the entire thread dump when you observe this issue. 

> Controlled shutdown deadlock when trying to send state updates
> --
>
> Key: KAFKA-1447
> URL: https://issues.apache.org/jira/browse/KAFKA-1447
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.0
>Reporter: Sam Meder
>Priority: Critical
>  Labels: newbie++
>
> We're seeing controlled shutdown indefinitely stuck on trying to send out 
> state change messages to the other brokers:
> [2014-05-03 04:01:30,580] INFO [Socket Server on Broker 4], Shutdown 
> completed (kafka.network.SocketServer)
> [2014-05-03 04:01:30,581] INFO [Kafka Request Handler on Broker 4], shutting 
> down (kafka.server.KafkaRequestHandlerPool)
> and stuck on:
> "kafka-request-handler-12" daemon prio=10 tid=0x7f1f04a66800 nid=0x6e79 
> waiting on condition [0x7f1ad5767000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> parking to wait for <0x00078e91dc20> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
> at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:349)
> at 
> kafka.controller.ControllerChannelManager.sendRequest(ControllerChannelManager.scala:57)
> locked <0x00078e91dc38> (a java.lang.Object)
> at kafka.controller.KafkaController.sendRequest(KafkaController.scala:655)
> at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$sendRequestsToBrokers$2.apply(ControllerChannelManager.scala:298)
> at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$sendRequestsToBrokers$2.apply(ControllerChannelManager.scala:290)
> at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
> at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
> at scala.collection.Iterator$class.foreach(Iterator.scala:772)
> at scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:157)
> at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:190)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:45)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:95)
> at 
> kafka.controller.ControllerBrokerRequestBatch.sendRequestsToBrokers(ControllerChannelManager.scala:290)
> at 
> kafka.controller.ReplicaStateMachine.handleStateChanges(ReplicaStateMachine.scala:97)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1$$anonfun$apply$mcV$sp$3.apply(KafkaController.scala:269)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1$$anonfun$apply$mcV$sp$3.apply(KafkaController.scala:253)
> at scala.Option.foreach(Option.scala:197)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1.apply$mcV$sp(KafkaController.scala:253)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1.apply(KafkaController.scala:253)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1.apply(KafkaController.scala:253)
> at kafka.utils.Utils$.inLock(Utils.scala:538)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3.apply(KafkaController.scala:252)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3.apply(KafkaController.scala:249)
> at scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:130)
> at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:275)
> at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:275)
> at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:275)
> at kafka.controller.KafkaController.shutdownBroker(KafkaController.scala:249)
> locked <0x00078b495af0> (a java.lang.Object)
> at kafka.server.KafkaApis.handleControlledShutdownRequest(KafkaApis.scala:264)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:192)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:42)
> at java.lang.Thread.run(Thread.java:722)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: (New) reviewer field in Kafka jiras

2014-09-14 Thread Neha Narkhede
Thanks Joel! This is super helpful for distributing the outstanding reviews
amongst all committers and also useful for keeping track of the review todo
list per committer. I'd encourage all active committers to sign up for
patch review as that is one of the most important responsibilities of a
committer and also helps in scaling and building the community.

Currently, we have Apache send out an email for all JIRAs that have patches
outstanding. In addition to that, it would be ideal to send each committer a
list of unresolved JIRAs for which they are listed as a reviewer. I think
this will help provide a better experience to new contributors, who
currently spend significant time pinging committers on individual JIRAs
that can get lost in the list of open source emails.

Thanks,
Neha

On Fri, Aug 29, 2014 at 6:28 AM, Joe Stein  wrote:

> Thanks Joel, good stuff
>
> On Fri, Aug 29, 2014 at 2:13 AM, Joel Koshy  wrote:
>
> > I had requested infra to add a reviewer field in our jiras -
> > https://issues.apache.org/jira/browse/INFRA-8189 Hopefully it will
> > make it easier to formally keep track of a review owner for each jira.
> >
> > It goes without saying that it should not be interpreted as sole
> > reviewer - i.e., more than one person can and should review, but I
> > think this is slightly better than assigning a jira back to a person
> > to indicate a review is required.
> >
> > Thanks,
> >
> > --
> > Joel
> >
>


[jira] [Commented] (KAFKA-1374) LogCleaner (compaction) does not support compressed topics

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133498#comment-14133498
 ] 

Neha Narkhede commented on KAFKA-1374:
--

Bump. This is marked for 0.8.2. Feel free to reassign for review.

> LogCleaner (compaction) does not support compressed topics
> --
>
> Key: KAFKA-1374
> URL: https://issues.apache.org/jira/browse/KAFKA-1374
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Manikumar Reddy
>  Labels: newbie++
> Fix For: 0.8.2
>
> Attachments: KAFKA-1374.patch, KAFKA-1374_2014-08-09_16:18:55.patch, 
> KAFKA-1374_2014-08-12_22:23:06.patch
>
>
> This is a known issue, but opening a ticket to track.
> If you try to compact a topic that has compressed messages you will run into
> various exceptions - typically because during iteration we advance the
> position based on the decompressed size of the message. I have a bunch of
> stack traces, but it should be straightforward to reproduce.
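A sketch of the failure mode, assuming the 0.8-era kafka.message classes: iteration over a message set is deep, so for a compressed wrapper it yields the decompressed inner messages, and advancing a position by their sizes no longer corresponds to on-disk bytes.
{code}
import kafka.message.{ByteBufferMessageSet, MessageSet}

// Illustrative only: the kind of position arithmetic that breaks for
// compressed topics, not the LogCleaner's actual code.
def brokenScan(messages: ByteBufferMessageSet): Int = {
  var position = 0
  for (entry <- messages) {
    // For inner messages of a compressed set, entry.message.size is the
    // decompressed size, so position drifts away from any real file offset.
    position += MessageSet.entrySize(entry.message)
  }
  position
}
{code}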



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1612) Consumer offsets auto-commit before processing finishes

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133506#comment-14133506
 ] 

Neha Narkhede commented on KAFKA-1612:
--

[~gian] The current API was designed to assume that a message is "delivered" 
once it has been returned by the iterator. Over time, we have realized the 
deficiency of that API in cases where custom commit logic is required to suit 
message processing. We are fixing that in the 0.9 consumer API. So it is best to wait 
for that API instead of significantly changing the behavior of the existing 
one. If you agree, I'd close this JIRA since the 0.9 API already takes care of 
this problem.

> Consumer offsets auto-commit before processing finishes
> ---
>
> Key: KAFKA-1612
> URL: https://issues.apache.org/jira/browse/KAFKA-1612
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Gian Merlino
>Assignee: Neha Narkhede
>
> In a loop like this,
>   for (message <- kafkaStream) {
>     process(message)
>   }
> The consumer can commit offsets for the next message while "process" is 
> running. If the program crashes during "process", the next run will pick up 
> from the *next* message. The message in flight at the time of the crash will 
> never actually finish processing. Instead, I would have expected the high 
> level consumer to deliver messages at least once.
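Until the 0.9 consumer API lands, the usual workaround is to disable auto-commit and commit only after processing succeeds. A hedged sketch with the 0.8 high-level consumer (connection values and topic name are illustrative; process() stands in for the user's handler):
{code}
import java.util.Properties
import kafka.consumer.{Consumer, ConsumerConfig, KafkaStream}
import kafka.message.MessageAndMetadata

val props = new Properties()
props.put("zookeeper.connect", "localhost:2181")
props.put("group.id", "example-group")
props.put("auto.commit.enable", "false") // no commits behind our back

val connector = Consumer.create(new ConsumerConfig(props))
val stream: KafkaStream[Array[Byte], Array[Byte]] =
  connector.createMessageStreams(Map("example-topic" -> 1))("example-topic").head

def process(m: MessageAndMetadata[Array[Byte], Array[Byte]]): Unit = () // user logic

for (message <- stream) {
  process(message)          // a crash here means the offset was not committed
  connector.commitOffsets   // commit only after processing finishes
}
{code}
This gives at-least-once delivery: a message in flight during a crash is redelivered on restart.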



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1611) Improve system test configuration

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1611:
-
Reviewer: Neha Narkhede

> Improve system test configuration
> -
>
> Key: KAFKA-1611
> URL: https://issues.apache.org/jira/browse/KAFKA-1611
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Fix For: 0.9.0
>
> Attachments: KAFKA-1611.0.patch
>
>
> I'd like to make the config a bit more "out of the box" for the common case 
> of a local cluster. This will include:
> 1. Fix cluster_config.json of the migration testsuite; it has a hardcoded path 
> that prevents it from working out of the box at all. 
> 2. Use JAVA_HOME environment variable if "default" is specified and if 
> JAVA_HOME is defined. The current guessing method is a bit broken and using 
> JAVA_HOME will allow devs to configure their default java dir without editing 
> multiple cluster_config.json files in multiple places. 
> 3. (if feasible without too much headache): Configure remote hosts only for 
> test packages that will not be skipped. This will reduce some overhead in the 
> common use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1611) Improve system test configuration

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133510#comment-14133510
 ] 

Neha Narkhede commented on KAFKA-1611:
--

[~gwenshap] Thanks for the patch, and apologies for the late review. We are 
trying to improve the management of the review backlog (see Reviewer thread on 
the mailing list) and will try our best to improve the review turnaround time 
and tracking. 

> Improve system test configuration
> -
>
> Key: KAFKA-1611
> URL: https://issues.apache.org/jira/browse/KAFKA-1611
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Fix For: 0.9.0
>
> Attachments: KAFKA-1611.0.patch
>
>
> I'd like to make the config a bit more "out of the box" for the common case 
> of a local cluster. This will include:
> 1. Fix cluster_config.json of the migration testsuite; it has a hardcoded path 
> that prevents it from working out of the box at all. 
> 2. Use JAVA_HOME environment variable if "default" is specified and if 
> JAVA_HOME is defined. The current guessing method is a bit broken and using 
> JAVA_HOME will allow devs to configure their default java dir without editing 
> multiple cluster_config.json files in multiple places. 
> 3. (if feasible without too much headache): Configure remote hosts only for 
> test packages that will not be skipped. This will reduce some overhead in the 
> common use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 25044: KAFKA-1611 - Improve system test configuration

2014-09-14 Thread Neha Narkhede

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25044/#review53301
---

Ship it!


Ship It!

- Neha Narkhede


On Aug. 26, 2014, 12:04 a.m., Gwen Shapira wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/25044/
> ---
> 
> (Updated Aug. 26, 2014, 12:04 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> make the config a bit more "out of the box" for the common case of a local 
> cluster. This includes:
> 1. Fix cluster_config.json of the migration testsuite; it has a hardcoded path 
> that prevents it from working out of the box at all.
> 2. Use JAVA_HOME environment variable if "default" is specified and if 
> JAVA_HOME is defined. The current guessing method is a bit broken and using 
> JAVA_HOME will allow devs to configure their default java dir without editing 
> multiple cluster_config.json files in multiple places.
> 
> 
> Diffs
> -
> 
>   system_test/migration_tool_testsuite/cluster_config.json 8353e56 
>   system_test/utils/system_test_utils.py 50340f0 
> 
> Diff: https://reviews.apache.org/r/25044/diff/
> 
> 
> Testing
> ---
> 
> Running system tests bunch of times :)
> 
> 
> Thanks,
> 
> Gwen Shapira
> 
>



[jira] [Commented] (KAFKA-1611) Improve system test configuration

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133520#comment-14133520
 ] 

Neha Narkhede commented on KAFKA-1611:
--

+1 on your patch. Pushed to trunk

> Improve system test configuration
> -
>
> Key: KAFKA-1611
> URL: https://issues.apache.org/jira/browse/KAFKA-1611
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Fix For: 0.9.0
>
> Attachments: KAFKA-1611.0.patch
>
>
> I'd like to make the config a bit more "out of the box" for the common case 
> of a local cluster. This will include:
> 1. Fix cluster_config.json of the migration testsuite; it has a hardcoded path 
> that prevents it from working out of the box at all. 
> 2. Use JAVA_HOME environment variable if "default" is specified and if 
> JAVA_HOME is defined. The current guessing method is a bit broken and using 
> JAVA_HOME will allow devs to configure their default java dir without editing 
> multiple cluster_config.json files in multiple places. 
> 3. (if feasible without too much headache): Configure remote hosts only for 
> test packages that will not be skipped. This will reduce some overhead in the 
> common use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1615) Generating group ID in ZookeeperConsumerConnector shouldn't require local hostname to resolve

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1615:
-
Labels: newbie usability  (was: )

> Generating group ID in ZookeeperConsumerConnector shouldn't require local 
> hostname to resolve
> -
>
> Key: KAFKA-1615
> URL: https://issues.apache.org/jira/browse/KAFKA-1615
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>Priority: Minor
>  Labels: newbie, usability
>
> ZookeeperConsumerConnector generates group ID by taking the local hostname:
> consumerUuid = "%s-%d-%s".format(
> InetAddress.getLocalHost.getHostName, System.currentTimeMillis,
> uuid.getMostSignificantBits().toHexString.substring(0,8))
> If localhost doesn't resolve (something that happens occasionally), this will 
> fail with the following error:
> Exception in thread "main" java.net.UnknownHostException: Billc-cent70x64: 
> Billc-cent70x64: Name or service not known
>   at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.(ZookeeperConsumerConnector.scala:119)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.(ZookeeperConsumerConnector.scala:142)
>   at kafka.consumer.Consumer$.create(ConsumerConnector.scala:89)
>   at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:149)
>   at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> Caused by: java.net.UnknownHostException: Billc-cent70x64: Name or service 
> not known
>   at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
>   at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
>   at 
> java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
>   at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
>   ... 5 more
> Normally requiring a resolving localhost is not a problem, but here it seems 
> a bit frivolous - it's just for generating an ID, nothing network-related.
> I think we can catch the exception and generate an ID without the hostname.
> This is low priority since the issue can be easily worked around (add the 
> hostname to /etc/hosts) and since this API is going away anyway with the new 
> consumer API.  
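A minimal sketch of the suggested catch-and-fallback, using only JDK classes; since the hostname is cosmetic here, any stable placeholder keeps the ID unique via the UUID:
{code}
import java.net.{InetAddress, UnknownHostException}
import java.util.UUID

val uuid = UUID.randomUUID
val hostPart =
  try InetAddress.getLocalHost.getHostName
  catch { case _: UnknownHostException => "unresolvable-host" } // fallback

val consumerUuid = "%s-%d-%s".format(
  hostPart,
  System.currentTimeMillis,
  uuid.getMostSignificantBits.toHexString.substring(0, 8))
{code}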



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1611) Improve system test configuration

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1611:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Improve system test configuration
> -
>
> Key: KAFKA-1611
> URL: https://issues.apache.org/jira/browse/KAFKA-1611
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Fix For: 0.9.0
>
> Attachments: KAFKA-1611.0.patch
>
>
> I'd like to make the config a bit more "out of the box" for the common case 
> of a local cluster. This will include:
> 1. Fix cluster_config.json of the migration testsuite; it has a hardcoded path 
> that prevents it from working out of the box at all. 
> 2. Use JAVA_HOME environment variable if "default" is specified and if 
> JAVA_HOME is defined. The current guessing method is a bit broken and using 
> JAVA_HOME will allow devs to configure their default java dir without editing 
> multiple cluster_config.json files in multiple places. 
> 3. (if feasible without too much headache): Configure remote hosts only for 
> test packages that will not be skipped. This will reduce some overhead in the 
> common use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1614) Partition log directory name and segments information exposed via JMX

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1614:
-
Assignee: Alexander Demidko  (was: Jay Kreps)

> Partition log directory name and segments information exposed via JMX
> -
>
> Key: KAFKA-1614
> URL: https://issues.apache.org/jira/browse/KAFKA-1614
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Affects Versions: 0.8.1.1
>Reporter: Alexander Demidko
>Assignee: Alexander Demidko
> Attachments: log_segments_dir_jmx_info.patch
>
>
> Exposes the partition log directory and per-segment information via 
> JMX. This is useful to:
> - monitor disk usage in a cluster and on a single broker
> - calculate disk space taken by different topics
> - estimate space to be freed when segments are expired
> Patch attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1614) Partition log directory name and segments information exposed via JMX

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1614:
-
Reviewer: Jay Kreps

> Partition log directory name and segments information exposed via JMX
> -
>
> Key: KAFKA-1614
> URL: https://issues.apache.org/jira/browse/KAFKA-1614
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Affects Versions: 0.8.1.1
>Reporter: Alexander Demidko
>Assignee: Alexander Demidko
> Attachments: log_segments_dir_jmx_info.patch
>
>
> Exposes the partition log directory and per-segment information via 
> JMX. This is useful to:
> - monitor disk usage in a cluster and on a single broker
> - calculate disk space taken by different topics
> - estimate space to be freed when segments are expired
> Patch attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-847) kafka appender layout does not work for kafka 0.7.1

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-847:

Reviewer: Neha Narkhede

> kafka appender layout does not work for kafka 0.7.1
> ---
>
> Key: KAFKA-847
> URL: https://issues.apache.org/jira/browse/KAFKA-847
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.7.1, 0.8.0
> Environment: Win7 64 bit
>Reporter: Sining Ma
>Assignee: Jun Rao
>  Labels: easyfix, newbie
> Attachments: KAFKA-847-v1.patch, KAFKA-847-v2.patch
>
>
> I am using kafka 0.7.1 right now.
> I am using the following log4j properties file and trying to send some log 
> information to the kafka server.
> log4j.rootLogger=INFO,file,stdout
> log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> log4j.appender.stdout.layout.ConversionPattern=[%d] %p %t %m (%c)%n
> log4j.appender.file=org.apache.log4j.RollingFileAppender
> #log4j.appender.file.FileNamePattern=c:\\development\\producer-agent_%d{-MM-dd}.log
> log4j.appender.file.File=${AC_DATA_HOME}\\lmservice\\tailer-aggregator.log
> log4j.appender.file.MaxFileSize=100MB
> log4j.appender.file.MaxBackupIndex=1
> log4j.appender.file.layout=org.apache.log4j.PatternLayout
> #log4j.appender.file.layout.ConversionPattern= %-4r [%t] %-5p %c %x - %m%n
> log4j.appender.file.layout.ConversionPattern=[%d] %p %t %m (%c)%n
> log4j.appender.KAFKA=kafka.producer.KafkaLog4jAppender
> log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
> log4j.appender.KAFKA.layout.ConversionPattern=[%d] %p %t %m (%c)%n
> log4j.appender.KAFKA.BrokerList=0:localhost:9092
> log4j.appender.KAFKA.SerializerClass=kafka.serializer.StringEncoder
> log4j.appender.KAFKA.Topic=test.topic
> # Turn on all our debugging info
> log4j.logger.kafka=INFO, KAFKA
> log4j.logger.org=INFO, KAFKA
> log4j.logger.com=INFO, KAFKA
> However, I find that the messages sent to the kafka server are not formatted 
> with my defined layout. 
> KafkaLog4jAppender just sends the message (%m); no other conversion 
> patterns are applied.
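The gist of the fix, sketched rather than quoted from the committed patch: run each event through the configured layout when one is set, instead of forwarding the bare message.
{code}
import org.apache.log4j.AppenderSkeleton
import org.apache.log4j.spi.LoggingEvent

// Illustrative appender showing only the layout handling.
class LayoutAwareAppender extends AppenderSkeleton {
  override def append(event: LoggingEvent): Unit = {
    val text =
      if (layout == null) event.getRenderedMessage // old behaviour: bare %m
      else layout.format(event)                    // honours the ConversionPattern
    // ... hand text to the Kafka producer here ...
  }
  override def requiresLayout: Boolean = true
  override def close(): Unit = ()
}
{code}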



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-847) kafka appender layout does not work for kafka 0.7.1

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-847:

Resolution: Fixed
  Assignee: Peter Pham  (was: Jun Rao)
Status: Resolved  (was: Patch Available)

Pushed v2 to trunk

> kafka appender layout does not work for kafka 0.7.1
> ---
>
> Key: KAFKA-847
> URL: https://issues.apache.org/jira/browse/KAFKA-847
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.7.1, 0.8.0
> Environment: Win7 64 bit
>Reporter: Sining Ma
>Assignee: Peter Pham
>  Labels: easyfix, newbie
> Attachments: KAFKA-847-v1.patch, KAFKA-847-v2.patch
>
>
> I am using kafka 0.7.1 right now.
> I am using the following log4j properties file and trying to send some log 
> information to the kafka server.
> log4j.rootLogger=INFO,file,stdout
> log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> log4j.appender.stdout.layout.ConversionPattern=[%d] %p %t %m (%c)%n
> log4j.appender.file=org.apache.log4j.RollingFileAppender
> #log4j.appender.file.FileNamePattern=c:\\development\\producer-agent_%d{-MM-dd}.log
> log4j.appender.file.File=${AC_DATA_HOME}\\lmservice\\tailer-aggregator.log
> log4j.appender.file.MaxFileSize=100MB
> log4j.appender.file.MaxBackupIndex=1
> log4j.appender.file.layout=org.apache.log4j.PatternLayout
> #log4j.appender.file.layout.ConversionPattern= %-4r [%t] %-5p %c %x - %m%n
> log4j.appender.file.layout.ConversionPattern=[%d] %p %t %m (%c)%n
> log4j.appender.KAFKA=kafka.producer.KafkaLog4jAppender
> log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
> log4j.appender.KAFKA.layout.ConversionPattern=[%d] %p %t %m (%c)%n
> log4j.appender.KAFKA.BrokerList=0:localhost:9092
> log4j.appender.KAFKA.SerializerClass=kafka.serializer.StringEncoder
> log4j.appender.KAFKA.Topic=test.topic
> # Turn on all our debugging info
> log4j.logger.kafka=INFO, KAFKA
> log4j.logger.org=INFO, KAFKA
> log4j.logger.com=INFO, KAFKA
> However, I find that the messages sent to the kafka server are not formatted 
> with my defined layout. 
> KafkaLog4jAppender just sends the message (%m); no other conversion 
> patterns are applied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1025) Producer.send should provide recoverability info on failiure

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133526#comment-14133526
 ] 

Neha Narkhede commented on KAFKA-1025:
--

[~charmalloc], [~jbrosenb...@gmail.com], The new producer does provide the 
error code in the callback and using that the user can re-send if it makes 
sense. Do we still need this JIRA?
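For reference, a sketch of the callback flow described above, written from Scala against the new (0.8.2-era) Java producer; the retry policy itself is illustrative:
{code}
import java.util.Properties
import org.apache.kafka.clients.producer.{Callback, KafkaProducer, ProducerRecord, RecordMetadata}
import org.apache.kafka.common.errors.RetriableException

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092") // illustrative address
props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer")

val producer = new KafkaProducer[Array[Byte], Array[Byte]](props)
val record = new ProducerRecord[Array[Byte], Array[Byte]]("example-topic", "hello".getBytes)

producer.send(record, new Callback {
  override def onCompletion(metadata: RecordMetadata, exception: Exception): Unit =
    exception match {
      case null                  => () // success
      case _: RetriableException => producer.send(record, this) // worth re-sending
      case fatal                 => fatal.printStackTrace()     // unrecoverable
    }
})
{code}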

> Producer.send should provide recoverability info on failiure
> 
>
> Key: KAFKA-1025
> URL: https://issues.apache.org/jira/browse/KAFKA-1025
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Jason Rosenberg
>  Labels: newbie
> Fix For: 0.8.2
>
> Attachments: KAFKA-1025.patch
>
>
> Currently, in 0.8, the Producer.send() method either succeeds, or fails by 
> throwing an Exception.
> There are several exceptions that can be thrown, including:
> FailedToSendException
> QueueFullException
> ClassCastException
> These are all sub-classes of RuntimeException.
> Under the covers, the producer will retry sending messages up to a maximum 
> number of times (according to the message.send.max.retries property).  
> Internally, the producer may decide which sorts of failures are recoverable, 
> and will retry those.  Alternatively (via an upcoming change, see KAFKA-998), 
> it may decide to not retry at all, if the error is not recoverable.
> The problem is, if FailedToSendException is returned, the caller to 
> Producer.send doesn't have a way to decide if a send failed due to an 
> unrecoverable error, or failed after exhausting a maximum number of retries.
> A caller may want to decide to retry more times, perhaps after waiting a 
> while.  But it should know first whether it's even likely that the failure is 
> retryable.
> An example of this might be if the message size is too large (represented 
> internally as a MessageSizeTooLargeException).  In this case, it is not 
> recoverable, but it is still wrapped as a FailedToSendException, and should 
> not be retried.
> So the suggestion is to make clear in the api javadoc (or scaladoc) for 
> Producer.send, the set of exception types that can be thrown (so that we 
> don't have to search through source code to find them).  And add exception 
> types, or perhaps fields within FailedToSendException, so that it's possible 
> to reason about whether retrying might make sense.
> Currently, in addition, I've found that Producer.send can throw a 
> QueueFullException in async mode (this should be a retryable exception, after 
> time has elapsed, etc.), and also a ClassCastException, if there's a 
> misconfiguration between the configured Encoder and the message data type.  I 
> suspect there are other RuntimeExceptions that can also be thrown (e.g. 
> NullPointerException if the message/topic are null).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1613) Improve system test documentation

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1613:
-
Labels: newbie  (was: )

> Improve system test documentation
> -
>
> Key: KAFKA-1613
> URL: https://issues.apache.org/jira/browse/KAFKA-1613
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>Priority: Minor
>  Labels: newbie
> Fix For: 0.9.0
>
>
> A few things to improve the docs:
> 1. Include python prerequisites (matplotlib, for example) and instructions 
> on how to install them.
> 2. The README needs some cleanup and a reference to the wiki. For example, I'm 
> pretty sure the system tests work on OS X.
> 3. Extra documentation about metrics and charts - they seem missing in action
> 4. Extra documentation about the cluster config - when to touch it, when not 
> to, why we have different levels, etc. Maybe example of cluster_conf.json 
> configured to run with remote hosts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1496) Using batch message in sync producer only sends the first message if we use a Scala Stream as the argument

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1496:
-
Assignee: Guan Liao  (was: Jun Rao)

> Using batch message in sync producer only sends the first message if we use a 
> Scala Stream as the argument 
> ---
>
> Key: KAFKA-1496
> URL: https://issues.apache.org/jira/browse/KAFKA-1496
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1
> Environment: Scala 2.10
>Reporter: Guan Liao
>Assignee: Guan Liao
>Priority: Minor
> Attachments: kafka_1496v2.patch
>
>
> I am passing a Scala Stream to the producer as follows:
> producer.send(events.toSeq: _*)
> The events value was a Scala Stream of KeyedMessages. In sync mode, it would only 
> send one message to Kafka, while in async mode it was fine. I tracked down the 
> issue and I'm attaching a fix for it.
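To make the report concrete, a minimal sketch of the call shape and a caller-side workaround; the bug itself was inside the producer's handling of the lazy Seq, so this snippet illustrates rather than reproduces it:
{code}
// Stand-in for producer.send(msgs: KeyedMessage*): prints how many it sees.
def send(messages: String*): Unit =
  println(s"sending ${messages.size} message(s)")

val events: Stream[String] = Stream("a", "b", "c") // lazy: only the head is forced

send(events.toSeq: _*)  // the failing call shape from the report
send(events.toList: _*) // workaround: toList forces every element up front
{code}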



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1496) Using batch message in sync producer only sends the first message if we use a Scala Stream as the argument

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1496:
-
Reviewer: Neha Narkhede

> Using batch message in sync producer only sends the first message if we use a 
> Scala Stream as the argument 
> ---
>
> Key: KAFKA-1496
> URL: https://issues.apache.org/jira/browse/KAFKA-1496
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1
> Environment: Scala 2.10
>Reporter: Guan Liao
>Assignee: Guan Liao
>Priority: Minor
> Attachments: kafka_1496v2.patch
>
>
> I am passing a Scala Stream to the producer as follows:
> producer.send(events.toSeq: _*)
> The events value was a Scala Stream of KeyedMessages. In sync mode, it would only 
> send one message to Kafka, while in async mode it was fine. I tracked down the 
> issue and I'm attaching a fix for it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1496) Using batch message in sync producer only sends the first message if we use a Scala Stream as the argument

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1496:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Using batch message in sync producer only sends the first message if we use a 
> Scala Stream as the argument 
> ---
>
> Key: KAFKA-1496
> URL: https://issues.apache.org/jira/browse/KAFKA-1496
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1
> Environment: Scala 2.10
>Reporter: Guan Liao
>Assignee: Guan Liao
>Priority: Minor
> Attachments: kafka_1496v2.patch
>
>
> I am passing a Scala Stream to the producer as follows:
> producer.send(events.toSeq: _*)
> The events value was a Scala Stream of KeyedMessages. In sync mode, it would only 
> send one message to Kafka, while in async mode it was fine. I tracked down the 
> issue and I'm attaching a fix for it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1496) Using batch message in sync producer only sends the first message if we use a Scala Stream as the argument

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133542#comment-14133542
 ] 

Neha Narkhede commented on KAFKA-1496:
--

+1. Pushed to trunk

> Using batch message in sync producer only sends the first message if we use a 
> Scala Stream as the argument 
> ---
>
> Key: KAFKA-1496
> URL: https://issues.apache.org/jira/browse/KAFKA-1496
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1
> Environment: Scala 2.10
>Reporter: Guan Liao
>Assignee: Guan Liao
>Priority: Minor
> Attachments: kafka_1496v2.patch
>
>
> I am passing a Scala Stream to the producer as follows:
> producer.send(events.toSeq: _*)
> The events value was a Scala Stream of KeyedMessages. In sync mode, it would only 
> send one message to Kafka, while in async mode it was fine. I tracked down the 
> issue and I'm attaching a fix for it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-934) kafka hadoop consumer and producer use older 0.19.2 hadoop api's

2014-09-14 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133547#comment-14133547
 ] 

Jun Rao commented on KAFKA-934:
---

Gwen,

Yes, we probably need to make a decision on whether to keep maintaining the 
Kafka hadoop package here. Could you provide your list of issues here in any 
case so that we don't forget about them?

> kafka hadoop consumer and producer use older 0.19.2 hadoop api's
> 
>
> Key: KAFKA-934
> URL: https://issues.apache.org/jira/browse/KAFKA-934
> Project: Kafka
>  Issue Type: Bug
>  Components: contrib
>Affects Versions: 0.8.0
> Environment: [amilkowski@localhost impl]$ uname -a
> Linux localhost.localdomain 3.9.4-200.fc18.x86_64 #1 SMP Fri May 24 20:10:49 
> UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Andrew Milkowski
>Assignee: Sriharsha Chintalapani
>  Labels: hadoop, hadoop-2.0, newbie
> Fix For: 0.8.2
>
>
> The new hadoop api present in 0.20.1, especially the package 
> org.apache.hadoop.mapreduce.lib, is not used. 
> The affected code is both the consumer and the producer in the kafka contrib package.
> [amilkowski@localhost contrib]$ pwd
> /opt/local/git/kafka/contrib
> [amilkowski@localhost contrib]$ ls -lt
> total 12
> drwxrwxr-x 8 amilkowski amilkowski 4096 May 30 11:14 hadoop-consumer
> drwxrwxr-x 6 amilkowski amilkowski 4096 May 29 19:31 hadoop-producer
> drwxrwxr-x 6 amilkowski amilkowski 4096 May 29 16:43 target
> [amilkowski@localhost contrib]$ 
> in example
> import org.apache.hadoop.mapred.JobClient;
> import org.apache.hadoop.mapred.JobConf;
> import org.apache.hadoop.mapred.RunningJob;
> import org.apache.hadoop.mapred.TextOutputFormat;
> These use the 0.19.2 hadoop api format, which prevents merging the hadoop 
> feature into a more modern hadoop implementation, 
> instead of drawing from the 0.20.1 api set in org.apache.hadoop.mapreduce



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1362) Publish sources and javadoc jars

2014-09-14 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133550#comment-14133550
 ] 

Jun Rao commented on KAFKA-1362:


The maven issue should be fixed in KAFKA-1502.

> Publish sources and javadoc jars
> 
>
> Key: KAFKA-1362
> URL: https://issues.apache.org/jira/browse/KAFKA-1362
> Project: Kafka
>  Issue Type: Bug
>  Components: packaging
>Affects Versions: 0.8.1
>Reporter: Stevo Slavic
>Assignee: Joel Koshy
>  Labels: build
> Fix For: 0.8.1.1
>
> Attachments: KAFKA-1362.patch
>
>
> Currently just binaries jars get published on Maven Central (see 
> http://repo1.maven.org/maven2/org/apache/kafka/kafka_2.10/0.8.1/ ). Please 
> also publish sources and javadoc jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1600) Controller failover not working correctly.

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1600:
-
Fix Version/s: 0.8.2

> Controller failover not working correctly.
> --
>
> Key: KAFKA-1600
> URL: https://issues.apache.org/jira/browse/KAFKA-1600
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.1
> Environment: Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 
> GNU/Linux
> java version "1.7.0_03"
>Reporter: Ding Haifeng
>Assignee: Neha Narkhede
> Fix For: 0.8.2
>
> Attachments: kafka_failure_logs.tar.gz
>
>
> We are running a 10-node Kafka 0.8.1 cluster and experienced the following 
> failure.
> At some point, broker A stopped acting as the controller. We observed this when 
> the kafka.controller - KafkaController - ActiveControllerCount JMX metric 
> dropped from 1 to 0.
> Meanwhile, broker A was still running and kept itself registered in the 
> ZooKeeper /kafka/controller node, so no other broker could be elected as the 
> new controller.
> From then on the cluster was running without a controller. Producers and 
> consumers still worked, but functions requiring a controller, such as new 
> topic leader election and topic leader failover, no longer worked.
> A forced restart of broker A triggered a controller election and brought the 
> cluster back to a correct state.
> These are our brief observations. I can provide more information if needed.
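
As context for the report above, here is a minimal sketch (illustrative, not Kafka's own code; it assumes a local ZooKeeper and the /kafka chroot mentioned in the description) of how one might inspect the controller registration that blocks re-election:

{noformat}
// Hedged sketch: read the ephemeral controller znode. As the description notes,
// while broker A's ZooKeeper session stays alive this node remains populated even
// though its controller logic has died, so no other broker can win the election.
import org.apache.zookeeper.ZooKeeper

object ControllerZnodeCheck {
  def main(args: Array[String]): Unit = {
    val zk = new ZooKeeper("localhost:2181", 10000, null)
    val data = zk.getData("/kafka/controller", false, null) // throws if no controller is registered
    println(new String(data, "UTF-8")) // the registered controller's broker id
    zk.close()
  }
}
{noformat}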



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1600) Controller failover not working correctly.

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1600:
-
Assignee: (was: Neha Narkhede)

> Controller failover not working correctly.
> --
>
> Key: KAFKA-1600
> URL: https://issues.apache.org/jira/browse/KAFKA-1600
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.1
> Environment: Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 
> GNU/Linux
> java version "1.7.0_03"
>Reporter: Ding Haifeng
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: kafka_failure_logs.tar.gz
>
>
> We are running a 10-node Kafka 0.8.1 cluster and experienced the following 
> failure.
> At some point, broker A stopped acting as the controller. We observed this when 
> the kafka.controller - KafkaController - ActiveControllerCount JMX metric 
> dropped from 1 to 0.
> Meanwhile, broker A was still running and kept itself registered in the 
> ZooKeeper /kafka/controller node, so no other broker could be elected as the 
> new controller.
> From then on the cluster was running without a controller. Producers and 
> consumers still worked, but functions requiring a controller, such as new 
> topic leader election and topic leader failover, no longer worked.
> A forced restart of broker A triggered a controller election and brought the 
> cluster back to a correct state.
> These are our brief observations. I can provide more information if needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1600) Controller failover not working correctly.

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133551#comment-14133551
 ] 

Neha Narkhede commented on KAFKA-1600:
--

Marking this as a blocker for 0.8.2 since it is related to the delete topic feature.

> Controller failover not working correctly.
> --
>
> Key: KAFKA-1600
> URL: https://issues.apache.org/jira/browse/KAFKA-1600
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.1
> Environment: Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 
> GNU/Linux
> java version "1.7.0_03"
>Reporter: Ding Haifeng
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: kafka_failure_logs.tar.gz
>
>
> We are running a 10-node Kafka 0.8.1 cluster and experienced the following 
> failure.
> At some point, broker A stopped acting as the controller. We observed this when 
> the kafka.controller - KafkaController - ActiveControllerCount JMX metric 
> dropped from 1 to 0.
> Meanwhile, broker A was still running and kept itself registered in the 
> ZooKeeper /kafka/controller node, so no other broker could be elected as the 
> new controller.
> From then on the cluster was running without a controller. Producers and 
> consumers still worked, but functions requiring a controller, such as new 
> topic leader election and topic leader failover, no longer worked.
> A forced restart of broker A triggered a controller election and brought the 
> cluster back to a correct state.
> These are our brief observations. I can provide more information if needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1600) Controller failover not working correctly.

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1600:
-
Priority: Blocker  (was: Major)

> Controller failover not working correctly.
> --
>
> Key: KAFKA-1600
> URL: https://issues.apache.org/jira/browse/KAFKA-1600
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.1
> Environment: Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 
> GNU/Linux
> java version "1.7.0_03"
>Reporter: Ding Haifeng
>Assignee: Neha Narkhede
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: kafka_failure_logs.tar.gz
>
>
> We are running a 10-node Kafka 0.8.1 cluster and experienced the following 
> failure.
> At some point, broker A stopped acting as the controller. We observed this when 
> the kafka.controller - KafkaController - ActiveControllerCount JMX metric 
> dropped from 1 to 0.
> Meanwhile, broker A was still running and kept itself registered in the 
> ZooKeeper /kafka/controller node, so no other broker could be elected as the 
> new controller.
> From then on the cluster was running without a controller. Producers and 
> consumers still worked, but functions requiring a controller, such as new 
> topic leader election and topic leader failover, no longer worked.
> A forced restart of broker A triggered a controller election and brought the 
> cluster back to a correct state.
> These are our brief observations. I can provide more information if needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1548) Refactor the "replica_id" in requests

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1548:
-
Reviewer: Neha Narkhede

> Refactor the "replica_id" in requests
> -
>
> Key: KAFKA-1548
> URL: https://issues.apache.org/jira/browse/KAFKA-1548
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Gwen Shapira
>  Labels: newbie
> Fix For: 0.9.0
>
>
> Today, many requests such as fetch and offset have an integer replica_id 
> field. If the request is from a follower replica, it is the broker id of 
> that follower; if it is from a regular consumer, it is one of 
> two values: "-1" for an ordinary consumer or "-2" for a debugging consumer. 
> Hence this replica_id field is used for two purposes:
> 1) Logging for troubleshooting in request logs, which is helpful only 
> when the request comes from a follower replica.
> 2) Deciding whether the request is from a consumer or a replica so it can be 
> handled differently. For this purpose we do not really care about the 
> actual id value.
> We would probably like to make the following improvements:
> 1) Rename "replica_id" to something less confusing.
> 2) Change the request.toString() function based on the replica_id: whether it 
> is a non-negative integer (meaning from a broker replica fetcher) or -1/-2 
> (meaning from a regular consumer).
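
To make the semantics above concrete, a minimal sketch (the object and constant names are illustrative assumptions, not Kafka's actual code):

{noformat}
// Hedged sketch: interpreting the replica_id field of a fetch/offset request.
object ReplicaIdSemantics {
  val OrdinaryConsumerId  = -1 // regular consumer
  val DebuggingConsumerId = -2 // debugging consumer

  // Purpose 2 above: decide whether to handle the request as a replica fetch.
  def isFromFollower(replicaId: Int): Boolean = replicaId >= 0

  // Purpose 1 above: produce a readable request-log entry.
  def describe(replicaId: Int): String = replicaId match {
    case OrdinaryConsumerId  => "ordinary consumer"
    case DebuggingConsumerId => "debugging consumer"
    case id if id >= 0       => "follower replica on broker " + id
    case other               => "unknown replica id " + other
  }
}
{noformat}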



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1070) Auto-assign node id

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1070:
-
Reviewer: Neha Narkhede

> Auto-assign node id
> ---
>
> Key: KAFKA-1070
> URL: https://issues.apache.org/jira/browse/KAFKA-1070
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>  Labels: usability
> Attachments: KAFKA-1070.patch, KAFKA-1070_2014-07-19_16:06:13.patch, 
> KAFKA-1070_2014-07-22_11:34:18.patch, KAFKA-1070_2014-07-24_20:58:17.patch, 
> KAFKA-1070_2014-07-24_21:05:33.patch, KAFKA-1070_2014-08-21_10:26:20.patch
>
>
> It would be nice to have Kafka brokers auto-assign node ids rather than 
> having them be a configuration. Having this configuration is irritating because 
> (1) you have to generate a custom config for each broker and (2) even though 
> the node id is in configuration, changing it can cause all kinds of bad 
> things to happen.
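
One common approach, shown here as a hedged sketch (the znode path and padding width are assumptions, not necessarily what the attached patches do), is to hand out ids from a ZooKeeper persistent-sequential node:

{noformat}
// Hedged sketch: auto-assign a broker id from a ZooKeeper sequence node.
// Assumes the parent path already exists; a broker would persist the id it
// receives (e.g. in a local metadata file) so restarts keep the same id.
import org.apache.zookeeper.{CreateMode, ZooDefs, ZooKeeper}

object BrokerIdGenerator {
  def nextBrokerId(zk: ZooKeeper): Int = {
    val path = zk.create("/brokers/seqid/id-", Array.empty[Byte],
      ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL)
    // ZooKeeper appends a zero-padded 10-digit counter, e.g. ".../id-0000000042".
    path.substring(path.length - 10).toInt
  }
}
{noformat}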



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1070) Auto-assign node id

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1070:
-
Assignee: Sriharsha Chintalapani  (was: Jay Kreps)

> Auto-assign node id
> ---
>
> Key: KAFKA-1070
> URL: https://issues.apache.org/jira/browse/KAFKA-1070
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>  Labels: usability
> Attachments: KAFKA-1070.patch, KAFKA-1070_2014-07-19_16:06:13.patch, 
> KAFKA-1070_2014-07-22_11:34:18.patch, KAFKA-1070_2014-07-24_20:58:17.patch, 
> KAFKA-1070_2014-07-24_21:05:33.patch, KAFKA-1070_2014-08-21_10:26:20.patch
>
>
> It would be nice to have Kafka brokers auto-assign node ids rather than 
> having them be a configuration. Having this configuration is irritating because 
> (1) you have to generate a custom config for each broker and (2) even though 
> the node id is in configuration, changing it can cause all kinds of bad 
> things to happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1499) Broker-side compression configuration

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133555#comment-14133555
 ] 

Neha Narkhede commented on KAFKA-1499:
--

[~jjkoshy] Any chance you can take a quick look at the updated patch to see if 
your review comments are addressed? If not, please feel free to reassign for 
review.

> Broker-side compression configuration
> -
>
> Key: KAFKA-1499
> URL: https://issues.apache.org/jira/browse/KAFKA-1499
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Joel Koshy
>Assignee: Manikumar Reddy
>  Labels: newbie++
> Fix For: 0.8.2
>
> Attachments: KAFKA-1499.patch, KAFKA-1499.patch, 
> KAFKA-1499_2014-08-15_14:20:27.patch, KAFKA-1499_2014-08-21_21:44:27.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> A given topic can have messages in mixed compression codecs, i.e., it can 
> also have a mix of uncompressed and compressed messages.
> It would be useful to support a broker-side configuration to recompress 
> messages to a specific compression codec, i.e., all messages (for all 
> topics) on the broker would be compressed to this codec. We could have 
> per-topic overrides as well.
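
A minimal sketch of the proposed policy (the codec names and config sources are illustrative assumptions, not the patch's actual implementation):

{noformat}
// Hedged sketch: a broker-wide target codec with optional per-topic overrides.
object BrokerCompressionPolicy {
  val brokerDefaultCodec = "gzip"                  // e.g. from a broker-level config
  val topicOverrides = Map("metrics" -> "snappy")  // e.g. from topic-level configs

  // Every message set appended for `topic` would be recompressed to this codec.
  def targetCodec(topic: String): String =
    topicOverrides.getOrElse(topic, brokerDefaultCodec)
}
{noformat}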



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1123) Broker IPv6 addresses parsed incorrectly

2014-09-14 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133558#comment-14133558
 ] 

Neha Narkhede commented on KAFKA-1123:
--

Bump [~kszafran]. Are you going to get a chance to update your patch?

> Broker IPv6 addresses parsed incorrectly
> 
>
> Key: KAFKA-1123
> URL: https://issues.apache.org/jira/browse/KAFKA-1123
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2
>Reporter: Andrew Otto
>  Labels: newbie
> Attachments: KAFKA-1123.patch
>
>
> It seems that broker addresses are parsed incorrectly when IPv6 addresses are 
> supplied.  IPv6 addresses have colons in them, and Kafka seems to be 
> interpreting the first : as the address:port separator.
> I have only tried this with the console-producer --broker-list option, so I 
> don't know if this affects anything deeper than the CLI.
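
For illustration, a hedged sketch (not Kafka's actual parser) of a host:port split that tolerates IPv6 literals by splitting on the last colon and accepting the bracketed form:

{noformat}
// Hedged sketch: split "host:port" so IPv6 addresses survive. Splitting on the
// first ':' breaks addresses like "[::1]:9092"; splitting on the last colon and
// stripping optional brackets keeps the host intact.
object HostPort {
  def parse(address: String): (String, Int) = {
    val idx = address.lastIndexOf(':')
    require(idx > 0 && idx < address.length - 1, "missing port in " + address)
    val host = address.substring(0, idx).stripPrefix("[").stripSuffix("]")
    (host, address.substring(idx + 1).toInt)
  }
}
{noformat}

With this rule, "localhost:9092" and "[::1]:9092" both yield the intended (host, port) pair; the bracketed form is the unambiguous way to write an IPv6 literal with a port.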



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1123) Broker IPv6 addresses parsed incorrectly

2014-09-14 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1123:
-
Reviewer: Jun Rao

> Broker IPv6 addresses parsed incorrectly
> 
>
> Key: KAFKA-1123
> URL: https://issues.apache.org/jira/browse/KAFKA-1123
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2
>Reporter: Andrew Otto
>  Labels: newbie
> Attachments: KAFKA-1123.patch
>
>
> It seems that broker addresses are parsed incorrectly when IPv6 addresses are 
> supplied.  IPv6 addresses have colons in them, and Kafka seems to be 
> interpreting the first : as the address:port separator.
> I have only tried this with the console-producer --broker-list option, so I 
> don't know if this affects anything deeper than the CLI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-1606) Reference to old zk.connect vs broker.list found in docs under Produce APIs

2014-09-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-1606.

Resolution: Fixed
  Assignee: Jun Rao

Thanks for pointing this out. Fixed the website.

> Reference to old zk.connect vs broker.list found in docs under Produce APIs
> ---
>
> Key: KAFKA-1606
> URL: https://issues.apache.org/jira/browse/KAFKA-1606
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.8.1
> Environment: any
>Reporter: Paul Otto
>Assignee: Jun Rao
>Priority: Trivial
>  Labels: documentation, producer
>
> The current documentation for Kafka 0.8.1 has the following under 5.1 API Design 
> / Producer APIs:
> * provides ZooKeeper based automatic broker discovery -
> The ZooKeeper based broker discovery and load balancing can be used by 
> specifying the ZooKeeper connection url through the zk.connect config 
> parameter. For some applications, however, the dependence on ZooKeeper is 
> inappropriate. In that case, the producer can take in a static list of 
> brokers through the broker.list config parameter. Each produce request gets 
> routed to a random broker partition in this case. If that broker is down, the 
> produce request fails.
> This no longer seems correct, as 1) zk.connect is no longer a valid 
> parameter, and 2) the way the broker list is handled has changed from the 
> above description. 
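
For context, a hedged sketch of configuring the 0.8-era Scala producer with a static broker list (the broker addresses and topic are placeholders):

{noformat}
// Hedged sketch: the 0.8 producer bootstraps from a static broker list via
// metadata.broker.list rather than a ZooKeeper connection url.
import java.util.Properties
import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

object StaticBrokerListProducer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("metadata.broker.list", "broker1:9092,broker2:9092")
    props.put("serializer.class", "kafka.serializer.StringEncoder")
    val producer = new Producer[String, String](new ProducerConfig(props))
    producer.send(new KeyedMessage[String, String]("test-topic", "hello"))
    producer.close()
  }
}
{noformat}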



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)