kafka-consumer-groups.sh fails with SASL enabled 0.9.0.1

2016-08-08 Thread Prabhu V
I am using a Kafka consumer where the partitions are assigned manually
instead of through automatic group assignment, using code similar to
"consumer.assign();".

In this case bin/kafka-consumer-groups.sh fails with the message "Consumer
group `my_group1` does not exist or is rebalancing".


On debugging I found that AdminClient.scala returns an empty group summary,
with status "GroupSummary(Dead,,,List())".

The command works when I use a consumer group with automatic partition
assignment.

Can someone from kafka-dev confirm whether this is the expected behaviour?

Thanks,
Prabhu


keeping a high value for zookeeper.connection.timeout.ms

2016-08-08 Thread Digumarthi, Prabhakar Venkata Surya
Hi All,


Right now we are running Kafka on AWS EC2 servers, and ZooKeeper is also running 
on separate EC2 instances.

We have created a service (systemd units) for Kafka and ZooKeeper to make sure 
that they are started in case the server gets rebooted.

The problem is that sometimes the ZooKeeper servers are a little late in starting, 
and by that time the Kafka brokers have already terminated.


So to deal with this issue we are planning to increase 
zookeeper.connection.timeout.ms to some high number, like 10 minutes, on the broker 
side. Is this a good approach?
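
For reference, the proposed change would be a one-line override in the broker's
server.properties (a sketch; 10 minutes expressed in milliseconds):

# broker-side server.properties; 600000 ms = 10 minutes
zookeeper.connection.timeout.ms=600000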


Thanks,
Prabhakar




Re: Kafka ACLs CLI Auth Error

2016-08-08 Thread BigData dev
Hi,
I think the JAAS config file needs to be changed:

Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   keyTab="/etc/security/keytabs/kafka.service.keytab"
   storeKey=true
   useTicketCache=false
   serviceName="zookeeper"
   principal="kafka/hostname.abc@abc.com";
};
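
For completeness, a sketch of how this JAAS file gets picked up when running the
CLI (the file path and ZooKeeper host below are placeholders):

export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_jaas.conf"
./bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zk-host:2181 \
  --add --allow-principal User:Bob --producer --topic ssl-topic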


You can follow this blog, which provides complete steps for Kafka ACLs:

https://developer.ibm.com/hadoop/2016/07/20/kafka-acls/



Thanks,

Bharat




On Mon, Aug 8, 2016 at 2:08 PM, Derar Alassi  wrote:

> Hi all,
>
> I have  3-node ZK and Kafka clusters. I have secured ZK with SASL. I got
> the keytabs done for my brokers and they can connect to the ZK ensemble
> just fine with no issues. All gravy!
>
> Now, I am trying to set ACLs using the kafka-acls.sh CLI. Before that, I
> did export the KAFKA_OPTS using the following command:
>
>
>  export  KAFKA_OPTS="-Djava.security.auth.login.config=/
> kafka_server_jaas.conf
> -Djavax.net.debug=all -Dsun.security.krb5.debug=true -Djavax.net.debug=all
> -Dsun.security.krb5.debug=true -Djava.security.krb5.conf= conf>/krb5.conf"
>
> I enabled extra debugging too. The JAAS file has the following info:
>
> KafkaServer {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/etc/+kafka.keytab"
> principal="kafka/@MY_DOMAIN";
> };
> Client {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> useTicketCache=true
> storeKey=true
> keyTab="/etc/+kafka.keytab"
> principal="kafka/@MY_DOMAIN";
> };
>
> Note that I enabled useTicketCache in the client section.
>
> I know that my krb5.conf file is good since the brokers are healthy and
> consumer/producers are able to do their work.
>
> Two scenarios:
>
> 1. When I enable useTicketCache=true, I get the following error:
>
> Aug 08, 2016 8:42:46 PM org.apache.zookeeper.ClientCnxn$SendThread startConnect
> WARNING: SASL configuration failed:
> javax.security.auth.login.LoginException: No key to store. Will continue
> connection to Zookeeper server without SASL authentication, if Zookeeper
> server allows it.
>
> I execute "kinit kafka/@ -k -t
> /etc/+kafka.keytab " on the same shell where I run the .sh CLI
> tool.
> 2. When I remove useTicketCache, I get the following error:
>
> Aug 08, 2016 9:03:38 PM org.apache.zookeeper.ZooKeeper close
> INFO: Session: 0x356621f18f70009 closed
> Error while executing ACL command:
> org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode =
> NoAuth for /kafka-acl/Topic
> Aug 08, 2016 9:03:38 PM org.apache.zookeeper.ClientCnxn$EventThread run
> INFO: EventThread shut down
> org.I0Itec.zkclient.exception.ZkException:
> org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode =
> NoAuth for /kafka-acl/Topic
> 	at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:68)
>
>
> Here is the command I run to set the ACLs in all cases:
> ./bin/kafka-acls.sh --authorizer-properties zookeeper.connect=:2181
> --add --allow-principal User:Bob --producer --topic ssl-topic
>
>
> I use Kafka 0.9.0.1. Note that I am using the same keytabs that my Brokers
> (Kafka services) are using.
>
>
> Any ideas what I am doing wrong or what I should do differently to get ACLs
> set?
>
> Thanks,
> Derar
>


RE: How to Identify Consumers of a Topic?

2016-08-08 Thread Jillian Cocklin
Thanks Derar,

I'll check that out and see if it gives enough information about the consumer 
to track it.

Thanks!
Jillian

-Original Message-
From: Derar Alassi [mailto:derar.ala...@gmail.com] 
Sent: Monday, August 08, 2016 3:35 PM
To: users@kafka.apache.org
Subject: Re: How to Identify Consumers of a Topic?

I use kafka-consumer-offset-checker.sh to check the offsets of consumers, and along 
with that you can see which consumer is attached to each partition.

On Mon, Aug 8, 2016 at 3:12 PM, Jillian Cocklin < jillian.cock...@danalinc.com> 
wrote:

> Hello,
>
> Our team is using Kafka for the first time and are in the testing 
> phase of getting a new product ready, which uses Kafka as the 
> communications backbone.  Basically, a processing unit will consume a 
> message from a topic, do the processing, then produce the output to another 
> topic.
> Messages get passed back and forth between processors until done.
>
> We had an issue last week where an outdated processor was "stealing"
> messages from a topic, doing incorrect (outdated) processing, and 
> putting it in the next topic.  We could not find the rogue processor 
> (aka consumer).  We shut down all known consumers of that topic, and 
> it was still happening.  We finally gave up and renamed the topic to 
> get around the issue.
>
> Is there a Kafka tool we could have used to find the connected 
> consumer in that consumer group?  Maybe by name or by IP?
>
> Thanks,
> Jillian
>
>


Re: How to Identify Consumers of a Topic?

2016-08-08 Thread Derar Alassi
I use kafka-consumer-offset-checker.sh to check the offsets of consumers, and
along with that you can see which consumer is attached to each partition.
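
For reference, a typical invocation looks like the following (the ZooKeeper
address, group, and topic are placeholders); the Owner column in the output
should identify the consumer instance holding each partition:

./bin/kafka-consumer-offset-checker.sh --zookeeper localhost:2181 \
  --group my-group --topic my-topic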

On Mon, Aug 8, 2016 at 3:12 PM, Jillian Cocklin <
jillian.cock...@danalinc.com> wrote:

> Hello,
>
> Our team is using Kafka for the first time and are in the testing phase of
> getting a new product ready, which uses Kafka as the communications
> backbone.  Basically, a processing unit will consume a message from a
> topic, do the processing, then produce the output to another topic.
> Messages get passed back and forth between processors until done.
>
> We had an issue last week where an outdated processor was "stealing"
> messages from a topic, doing incorrect (outdated) processing, and putting
> it in the next topic.  We could not find the rogue processor (aka
> consumer).  We shut down all known consumers of that topic, and it was
> still happening.  We finally gave up and renamed the topic to get around
> the issue.
>
> Is there a Kafka tool we could have used to find the connected consumer in
> that consumer group?  Maybe by name or by IP?
>
> Thanks,
> Jillian
>
>


How to Identify Consumers of a Topic?

2016-08-08 Thread Jillian Cocklin
Hello,

Our team is using Kafka for the first time and are in the testing phase of 
getting a new product ready, which uses Kafka as the communications backbone.  
Basically, a processing unit will consume a message from a topic, do the 
processing, then produce the output to another topic.  Messages get passed back 
and forth between processors until done.

We had an issue last week where an outdated processor was "stealing" messages 
from a topic, doing incorrect (outdated) processing, and putting them in the next 
topic.  We could not find the rogue processor (aka consumer).  We shut down all 
known consumers of that topic, and it was still happening.  We finally gave up 
and renamed the topic to get around the issue.

Is there a Kafka tool we could have used to find the connected consumer in that 
consumer group?  Maybe by name or by IP?

Thanks,
Jillian



Re: Kafka cluster with a different version that the java API

2016-08-08 Thread Sergio Gonzalez
Perfect,  thank you so much Alex

On Fri, Aug 5, 2016 at 6:03 PM, Alex Loddengaard  wrote:

> Hi Sergio, clients have to be the same version or older than the brokers. A
> newer client won't work with an older broker.
>
> Alex
>
> On Fri, Aug 5, 2016 at 7:37 AM, Sergio Gonzalez <
> sgonza...@cecropiasolutions.com> wrote:
>
> > Hi users,
> >
> > Is there some issue if I create the kafka cluster using the
> > kafka_2.10-0.8.2.0 version  and I have my java producers and consumers
> with
> > the 0.10.0.0 version?
> >
> > <dependency>
> >   <groupId>org.apache.kafka</groupId>
> >   <artifactId>kafka-clients</artifactId>
> >   <version>0.10.0.0</version>
> > </dependency>
> > <dependency>
> >   <groupId>org.apache.kafka</groupId>
> >   <artifactId>kafka-streams</artifactId>
> >   <version>0.10.0.0</version>
> > </dependency>
> >
> > What are the repercussions that I could have if I use this environment?
> >
> > Thanks,
> > Sergio GQ
> >
>



-- 
Regards,

Sergio A. González Quirós
Computer Engineering, QA Engineer

Cartago, Costa Rica
Telephone: (+506) 8965-3890
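
To make Alex's point concrete, pinning the client artifacts to the broker
version would look like this in the POM (a sketch for an 0.8.2.0 cluster):

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.8.2.0</version>
</dependency>

Note that kafka-streams only exists from 0.10.0.0 onwards, so it cannot be used
against an 0.8.2 cluster at all.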


Kafka ACLs CLI Auth Error

2016-08-08 Thread Derar Alassi
Hi all,

I have  3-node ZK and Kafka clusters. I have secured ZK with SASL. I got
the keytabs done for my brokers and they can connect to the ZK ensemble
just fine with no issues. All gravy!

Now, I am trying to set ACLs using the kafka-acls.sh CLI. Before that, I
did export the KAFKA_OPTS using the following command:


 export  
KAFKA_OPTS="-Djava.security.auth.login.config=/kafka_server_jaas.conf
-Djavax.net.debug=all -Dsun.security.krb5.debug=true -Djavax.net.debug=all
-Dsun.security.krb5.debug=true -Djava.security.krb5.conf=/krb5.conf"

I enabled extra debugging too. The JAAS file has the following info:

KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/+kafka.keytab"
principal="kafka/@MY_DOMAIN";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
useTicketCache=true
storeKey=true
keyTab="/etc/+kafka.keytab"
principal="kafka/@MY_DOMAIN";
};

Note that I enabled useTicketCache in the client section.

I know that my krb5.conf file is good since the brokers are healthy and
consumer/producers are able to do their work.

Two scenarios:

1. When I enable useTicketCache=true, I get the following error:

Aug 08, 2016 8:42:46 PM org.apache.zookeeper.ClientCnxn$SendThread startConnect
WARNING: SASL configuration failed:
javax.security.auth.login.LoginException: No key to store. Will continue
connection to Zookeeper server without SASL authentication, if Zookeeper
server allows it.

I execute "kinit kafka/@ -k -t
/etc/+kafka.keytab " on the same shell where I run the .sh CLI
tool.
2. When I remove useTicketCache, I get the following error:

Aug 08, 2016 9:03:38 PM org.apache.zookeeper.ZooKeeper close
INFO: Session: 0x356621f18f70009 closed
Error while executing ACL command:
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode =
NoAuth for /kafka-acl/Topic
Aug 08, 2016 9:03:38 PM org.apache.zookeeper.ClientCnxn$EventThread run
INFO: EventThread shut down
org.I0Itec.zkclient.exception.ZkException:
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode =
NoAuth for /kafka-acl/Topic
	at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:68)


Here is the command I run to set the ACLs in all cases:
./bin/kafka-acls.sh --authorizer-properties zookeeper.connect=:2181
--add --allow-principal User:Bob --producer --topic ssl-topic


I use Kafka 0.9.0.1. Note that I am using the same keytabs that my Brokers
(Kafka services) are using.


Any ideas what I am doing wrong or what I should do differently to get ACLs
set?

Thanks,
Derar


Re: Large # of Topics/Partitions

2016-08-08 Thread Daniel Fagnan
Thanks Tom! This was very helpful and I’ll explore having a more static set of 
partitions as that seems to fit Kafka a lot better.

Cheers,
Daniel

> On Aug 8, 2016, at 12:27 PM, Tom Crayford  wrote:
> 
> Hi Daniel,
> 
> Kafka doesn't provide this kind of isolation or scalability for many many
> streams. The usual design is to use a consistent hash of some "key" to
> attribute your data to a particular partition. That, of course, doesn't
> isolate things fully, but it does make everything in a partition dependent
> on each other.
> 
> We've found that over a few thousand to a few tens of thousands of
> partitions clusters hit a lot of issues (it depends on the write pattern,
> how much memory you give brokers and zookeeper, and if you plan on ever
> deleting topics).
> 
> Another option is to manage multiple clusters, and keep under a certain
> limit of partitions in each cluster. That is of course additional
> operational overhead and complexity.
> 
> I'm not sure I 100% understand your mechanism for tracking pending offsets,
> but it seems like that might be your best option.
> 
> Thanks
> 
> Tom Crayford
> Heroku Kafka
> 
> On Mon, Aug 8, 2016 at 8:12 PM, Daniel Fagnan  wrote:
> 
>> Hey all,
>> 
>> I’m currently in the process of designing a system around Kafka and I’m
>> wondering the recommended way to manage topics. Each event stream we have
>> needs to be isolated from each other. A failure from one should not affect
>> another event stream from processing (by failure, we mean a downstream
>> failure that would require us to replay the messages).
>> 
>> So my first thought was to create a topic per event stream. This allows a
>> larger event stream to be partitioned for added parallelism but keep the
>> default # of partitions down as much as possible. This would solve the
>> isolation requirement in that a topic can keep failing and we’ll continue
>> replaying the messages without affecting all the other topics.
>> 
>> We read it’s not recommended to have your data model dictate the # of
>> partitions or topics in Kafka and we’re unsure about this approach if we
>> need to triple our event stream.
>> 
>> We’re currently looking at 10,000 event streams (or topics) but we don’t
>> want to be spinning up additional brokers just so we can add more event
>> streams, especially if the load for each is reasonable.
>> 
>> Another option we were looking into was to not isolate at the
>> topic/partition level but to keep a set of pending offsets persisted
>> somewhere (seemingly what Twitter Heron or Storm does but they don’t seem
>> to persist the pending offsets).
>> 
>> Thoughts?
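
To illustrate the consistent-hash approach Tom describes, keyed partitioning
boils down to hashing a stream key onto a fixed set of partitions, along these
lines (a sketch; String.hashCode stands in for the murmur2 hash the Java
producer uses internally):

public class KeyPartitioner {
    // Map a stream key to a stable partition; the mask keeps the hash non-negative.
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // Every record keyed by "stream-42" lands on the same partition.
        System.out.println(partitionFor("stream-42", 10));
    }
}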



Large # of Topics/Partitions

2016-08-08 Thread Daniel Fagnan
Hey all,

I’m currently in the process of designing a system around Kafka and I’m 
wondering the recommended way to manage topics. Each event stream we have needs 
to be isolated from each other. A failure from one should not affect another 
event stream from processing (by failure, we mean a downstream failure that 
would require us to replay the messages).

So my first thought was to create a topic per event stream. This allows a 
larger event stream to be partitioned for added parallelism but keep the 
default # of partitions down as much as possible. This would solve the 
isolation requirement in that a topic can keep failing and we’ll continue 
replaying the messages without affecting all the other topics. 

We read it’s not recommended to have your data model dictate the # of 
partitions or topics in Kafka and we’re unsure about this approach if we need 
to triple our event stream.

We’re currently looking at 10,000 event streams (or topics) but we don’t want 
to be spinning up additional brokers just so we can add more event streams, 
especially if the load for each is reasonable.

Another option we were looking into was to not isolate at the topic/partition 
level but to keep a set of pending offsets persisted somewhere (seemingly what 
Twitter Heron or Storm does but they don’t seem to persist the pending offsets).

Thoughts?

Re: Testing broker failover

2016-08-08 Thread Alper Akture
Got it, thanks; that's exactly what I had just noticed and sent a reply about
as you were replying... thanks Alex.

On Mon, Aug 8, 2016 at 11:35 AM, Alex Loddengaard  wrote:

> Hi Alper,
>
> Thanks for sharing. I was particularly interested in seeing what acks was
> set to. Since you haven't set it, its value is the default, 1.
>
> To handle errors, you need to use the send() method that takes a callback,
> and build an appropriate callback to handle errors. Take a look here for an
> example:
>
> http://kafka.apache.org/0100/javadoc/org/apache/kafka/clients/producer/
> KafkaProducer.html#send(org.apache.kafka.clients.producer.
> ProducerRecord,%20org.apache.kafka.clients.producer.Callback)
>
> Let us know if you have follow-up questions.
>
> Alex
>
> On Mon, Aug 8, 2016 at 11:24 AM, Alper Akture 
> wrote:
>
> > Thanks Alex... using producer props:
> >
> > {timeout.ms=500, max.block.ms=500, request.timeout.ms=500,
> > bootstrap.servers=localhost:9092,
> > serializer.class=kafka.serializer.StringEncoder,
> > value.serializer=org.apache.kafka.common.serialization.StringSerializer,
> > metadata.fetch.timeout.ms=500,
> > key.serializer=org.apache.kafka.common.serialization.StringSerializer}
> >
> >
> >
> >
> > On Mon, Aug 8, 2016 at 9:21 AM, Alex Loddengaard 
> > wrote:
> >
> > > Hi Alper, can you share your producer config -- the Properties object?
> We
> > > need to learn more to help you understand the behavior you're
> observing.
> > >
> > > Thanks,
> > >
> > > Alex
> > >
> > > On Fri, Aug 5, 2016 at 7:45 PM, Alper Akture <
> al...@goldenratstudios.com
> > >
> > > wrote:
> > >
> > > > I'm using 0.10.0.0 and testing some failover scenarios. For dev, i
> have
> > > > single kafka node and a zookeeper instance. While sending events to a
> > > > topic, I shutdown the broker to see if my failover handling works.
> > > However,
> > > > I don't see any indication that the send failed, but I do see the
> > > > connection refused errors logged at debug. What is the standard way
> to
> > > > detect a message send failure, and handle it for offline processing
> > > later?
> > > >
> > > > Here's the debug output I see:
> > > >
> > > > 19:20:00.906 [kafka-producer-network-thread | producer-1] DEBUG
> > > > org.apache.kafka.clients.NetworkClient - Initialize connection to
> node
> > > -1
> > > > for sending metadata request
> > > > 19:20:00.906 [kafka-producer-network-thread | producer-1] DEBUG
> > > > org.apache.kafka.clients.NetworkClient - Initiating connection to
> node
> > > -1
> > > > at localhost:9092.
> > > > 19:20:00.907 [kafka-producer-network-thread | producer-1] DEBUG
> > > > org.apache.kafka.common.network.Selector - Connection with
> localhost/
> > > > 127.0.0.1 disconnected
> > > > java.net.ConnectException: Connection refused
> > > > at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> > > ~[?:1.8.0_66]
> > > > at sun.nio.ch.SocketChannelImpl.finishConnect(
> > > SocketChannelImpl.java:717)
> > > > ~[?:1.8.0_66]
> > > > at
> > > > org.apache.kafka.common.network.PlaintextTransportLayer.
> finishConnect(
> > > > PlaintextTransportLayer.java:51)
> > > > ~[kafka-clients-0.10.0.0.jar:?]
> > > > at
> > > > org.apache.kafka.common.network.KafkaChannel.
> > finishConnect(KafkaChannel.
> > > > java:73)
> > > > ~[kafka-clients-0.10.0.0.jar:?]
> > > > at
> > > > org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.
> > > > java:309)
> > > > [kafka-clients-0.10.0.0.jar:?]
> > > > at org.apache.kafka.common.network.Selector.poll(Selector.java:283)
> > > > [kafka-clients-0.10.0.0.jar:?]
> > > > at org.apache.kafka.clients.NetworkClient.poll(
> NetworkClient.java:260)
> > > > [kafka-clients-0.10.0.0.jar:?]
> > > > at org.apache.kafka.clients.producer.internals.Sender.run(
> > > Sender.java:229)
> > > > [kafka-clients-0.10.0.0.jar:?]
> > > > at org.apache.kafka.clients.producer.internals.Sender.run(
> > > Sender.java:134)
> > > > [kafka-clients-0.10.0.0.jar:?]
> > > > at java.lang.Thread.run(Thread.java:745) [?:1.8.0_66]
> > > > 19:20:00.907 [kafka-producer-network-thread | producer-1] DEBUG
> > > > org.apache.kafka.clients.NetworkClient - Node -1 disconnected.
> > > > 19:20:00.907 [kafka-producer-network-thread | producer-1] DEBUG
> > > > org.apache.kafka.clients.NetworkClient - Give up sending metadata
> > > request
> > > > since no node is available
> > > >
> > >
> >
>


Re: Testing broker failover

2016-08-08 Thread Alex Loddengaard
Hi Alper,

Thanks for sharing. I was particularly interested in seeing what acks was
set to. Since you haven't set it, its value is the default, 1.

To handle errors, you need to use the send() method that takes a callback,
and build an appropriate callback to handle errors. Take a look here for an
example:

http://kafka.apache.org/0100/javadoc/org/apache/kafka/clients/producer/KafkaProducer.html#send(org.apache.kafka.clients.producer.ProducerRecord,%20org.apache.kafka.clients.producer.Callback)
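
For example, a minimal sketch of the callback form (broker address, topic, and
record contents are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SendWithCallback {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("my-topic", "key", "value"),
            new Callback() {
                @Override
                public void onCompletion(RecordMetadata metadata, Exception exception) {
                    if (exception != null) {
                        // The send failed (e.g. TimeoutException while the broker
                        // is down); queue the record for offline processing here.
                        exception.printStackTrace();
                    }
                }
            });
        producer.close();
    }
}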

Let us know if you have follow-up questions.

Alex

On Mon, Aug 8, 2016 at 11:24 AM, Alper Akture 
wrote:

> Thanks Alex... using producer props:
>
> {timeout.ms=500, max.block.ms=500, request.timeout.ms=500,
> bootstrap.servers=localhost:9092,
> serializer.class=kafka.serializer.StringEncoder,
> value.serializer=org.apache.kafka.common.serialization.StringSerializer,
> metadata.fetch.timeout.ms=500,
> key.serializer=org.apache.kafka.common.serialization.StringSerializer}
>
>
>
>
> On Mon, Aug 8, 2016 at 9:21 AM, Alex Loddengaard 
> wrote:
>
> > Hi Alper, can you share your producer config -- the Properties object? We
> > need to learn more to help you understand the behavior you're observing.
> >
> > Thanks,
> >
> > Alex
> >
> > On Fri, Aug 5, 2016 at 7:45 PM, Alper Akture  >
> > wrote:
> >
> > > I'm using 0.10.0.0 and testing some failover scenarios. For dev, i have
> > > single kafka node and a zookeeper instance. While sending events to a
> > > topic, I shutdown the broker to see if my failover handling works.
> > However,
> > > I don't see any indication that the send failed, but I do see the
> > > connection refused errors logged at debug. What is the standard way to
> > > detect a message send failure, and handle it for offline processing
> > later?
> > >
> > > Here's the debug output I see:
> > >
> > > 19:20:00.906 [kafka-producer-network-thread | producer-1] DEBUG
> > > org.apache.kafka.clients.NetworkClient - Initialize connection to node
> > -1
> > > for sending metadata request
> > > 19:20:00.906 [kafka-producer-network-thread | producer-1] DEBUG
> > > org.apache.kafka.clients.NetworkClient - Initiating connection to node
> > -1
> > > at localhost:9092.
> > > 19:20:00.907 [kafka-producer-network-thread | producer-1] DEBUG
> > > org.apache.kafka.common.network.Selector - Connection with localhost/
> > > 127.0.0.1 disconnected
> > > java.net.ConnectException: Connection refused
> > > at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> > ~[?:1.8.0_66]
> > > at sun.nio.ch.SocketChannelImpl.finishConnect(
> > SocketChannelImpl.java:717)
> > > ~[?:1.8.0_66]
> > > at
> > > org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(
> > > PlaintextTransportLayer.java:51)
> > > ~[kafka-clients-0.10.0.0.jar:?]
> > > at
> > > org.apache.kafka.common.network.KafkaChannel.
> finishConnect(KafkaChannel.
> > > java:73)
> > > ~[kafka-clients-0.10.0.0.jar:?]
> > > at
> > > org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.
> > > java:309)
> > > [kafka-clients-0.10.0.0.jar:?]
> > > at org.apache.kafka.common.network.Selector.poll(Selector.java:283)
> > > [kafka-clients-0.10.0.0.jar:?]
> > > at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
> > > [kafka-clients-0.10.0.0.jar:?]
> > > at org.apache.kafka.clients.producer.internals.Sender.run(
> > Sender.java:229)
> > > [kafka-clients-0.10.0.0.jar:?]
> > > at org.apache.kafka.clients.producer.internals.Sender.run(
> > Sender.java:134)
> > > [kafka-clients-0.10.0.0.jar:?]
> > > at java.lang.Thread.run(Thread.java:745) [?:1.8.0_66]
> > > 19:20:00.907 [kafka-producer-network-thread | producer-1] DEBUG
> > > org.apache.kafka.clients.NetworkClient - Node -1 disconnected.
> > > 19:20:00.907 [kafka-producer-network-thread | producer-1] DEBUG
> > > org.apache.kafka.clients.NetworkClient - Give up sending metadata
> > request
> > > since no node is available
> > >
> >
>


Re: Testing broker failover

2016-08-08 Thread Alper Akture
(continued from previous email)

Hit send too soon... but I also notice that if I use the producer send
method that takes a callback, an exception is passed to the callback's
onCompletion method:

org.apache.kafka.common.errors.TimeoutException: Batch containing 2
record(s) expired due to timeout while requesting metadata from brokers for
BACCARAT_ROUND_END-0

On Mon, Aug 8, 2016 at 11:34 AM, Alper Akture 
wrote:

> I also notice that if I use the send method that takes a callback: public
> Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback
> callback) {
> call
> that in the onCompletion... also all
>
> On Mon, Aug 8, 2016 at 11:24 AM, Alper Akture 
> wrote:
>
>> Thanks Alex... using producer props:
>>
>> {timeout.ms=500, max.block.ms=500, request.timeout.ms=500,
>> bootstrap.servers=localhost:9092, 
>> serializer.class=kafka.serializer.StringEncoder,
>> value.serializer=org.apache.kafka.common.serialization.StringSerializer,
>> metadata.fetch.timeout.ms=500, key.serializer=org.apache.kafk
>> a.common.serialization.StringSerializer}
>>
>>
>>
>>
>> On Mon, Aug 8, 2016 at 9:21 AM, Alex Loddengaard 
>> wrote:
>>
>>> Hi Alper, can you share your producer config -- the Properties object? We
>>> need to learn more to help you understand the behavior you're observing.
>>>
>>> Thanks,
>>>
>>> Alex
>>>
>>> On Fri, Aug 5, 2016 at 7:45 PM, Alper Akture >> >
>>> wrote:
>>>
>>> > I'm using 0.10.0.0 and testing some failover scenarios. For dev, i have
>>> > single kafka node and a zookeeper instance. While sending events to a
>>> > topic, I shutdown the broker to see if my failover handling works.
>>> However,
>>> > I don't see any indication that the send failed, but I do see the
>>> > connection refused errors logged at debug. What is the standard way to
>>> > detect a message send failure, and handle it for offline processing
>>> later?
>>> >
>>> > Here's the debug output I see:
>>> >
>>> > 19:20:00.906 [kafka-producer-network-thread | producer-1] DEBUG
>>> > org.apache.kafka.clients.NetworkClient - Initialize connection to
>>> node -1
>>> > for sending metadata request
>>> > 19:20:00.906 [kafka-producer-network-thread | producer-1] DEBUG
>>> > org.apache.kafka.clients.NetworkClient - Initiating connection to
>>> node -1
>>> > at localhost:9092.
>>> > 19:20:00.907 [kafka-producer-network-thread | producer-1] DEBUG
>>> > org.apache.kafka.common.network.Selector - Connection with localhost/
>>> > 127.0.0.1 disconnected
>>> > java.net.ConnectException: Connection refused
>>> > at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>> ~[?:1.8.0_66]
>>> > at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl
>>> .java:717)
>>> > ~[?:1.8.0_66]
>>> > at
>>> > org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(
>>> > PlaintextTransportLayer.java:51)
>>> > ~[kafka-clients-0.10.0.0.jar:?]
>>> > at
>>> > org.apache.kafka.common.network.KafkaChannel.finishConnect(K
>>> afkaChannel.
>>> > java:73)
>>> > ~[kafka-clients-0.10.0.0.jar:?]
>>> > at
>>> > org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.
>>> > java:309)
>>> > [kafka-clients-0.10.0.0.jar:?]
>>> > at org.apache.kafka.common.network.Selector.poll(Selector.java:283)
>>> > [kafka-clients-0.10.0.0.jar:?]
>>> > at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
>>> > [kafka-clients-0.10.0.0.jar:?]
>>> > at org.apache.kafka.clients.producer.internals.Sender.run(Sende
>>> r.java:229)
>>> > [kafka-clients-0.10.0.0.jar:?]
>>> > at org.apache.kafka.clients.producer.internals.Sender.run(Sende
>>> r.java:134)
>>> > [kafka-clients-0.10.0.0.jar:?]
>>> > at java.lang.Thread.run(Thread.java:745) [?:1.8.0_66]
>>> > 19:20:00.907 [kafka-producer-network-thread | producer-1] DEBUG
>>> > org.apache.kafka.clients.NetworkClient - Node -1 disconnected.
>>> > 19:20:00.907 [kafka-producer-network-thread | producer-1] DEBUG
>>> > org.apache.kafka.clients.NetworkClient - Give up sending metadata
>>> request
>>> > since no node is available
>>> >
>>>
>>
>>
>


Strange behavior when turning the system clock back.

2016-08-08 Thread Gabriel Ibarra
I am dealing with an issue when turning the system clock back (either due to
NTP or administrator action). I'm using kafka_2.11-0.10.0.0.

I follow these steps:
- Start a consumer for TOPIC_NAME with group id GROUP_NAME. It will be the
owner of all the partitions.
- Turn the system clock back, for instance 1 hour.
- Start a new consumer for TOPIC_NAME using the same group id; it will
force a rebalance.

After these actions the Kafka server constantly logs the messages below, and
after a while both consumers stop receiving messages. I saw that this
condition lasts at least as long as the clock was turned back (for this
example, 1 hour), and finally after this time Kafka comes back to work.

[2016-08-08 11:30:23,023] INFO [GroupCoordinator 0]: Preparing to
restabilize group GROUP_NAME with old generation 2
(kafka.coordinator.GroupCoordinator)
[2016-08-08 11:30:23,025] INFO [GroupCoordinator 0]: Stabilized group
GROUP_NAME generation 3 (kafka.coordinator.GroupCoordinator)
[2016-08-08 11:30:23,027] INFO [GroupCoordinator 0]: Preparing to
restabilize group GROUP_NAME with old generation 3
(kafka.coordinator.GroupCoordinator)
[2016-08-08 11:30:23,029] INFO [GroupCoordinator 0]: Group GROUP_NAME
generation 3 is dead and removed (kafka.coordinator.GroupCoordinator)
[2016-08-08 11:30:23,032] INFO [GroupCoordinator 0]: Preparing to
restabilize group GROUP_NAME with old generation 0
(kafka.coordinator.GroupCoordinator)
[2016-08-08 11:30:23,032] INFO [GroupCoordinator 0]: Stabilized group
GROUP_NAME generation 1 (kafka.coordinator.GroupCoordinator)
[2016-08-08 11:30:23,033] INFO [GroupCoordinator 0]: Preparing to
restabilize group GROUP_NAME with old generation 1
(kafka.coordinator.GroupCoordinator)
[2016-08-08 11:30:23,034] INFO [GroupCoordinator 0]: Group GROUP generation
1 is dead and removed (kafka.coordinator.GroupCoordinator)
[2016-08-08 11:30:23,043] INFO [GroupCoordinator 0]: Preparing to
restabilize group GROUP_NAME with old generation 0
(kafka.coordinator.GroupCoordinator)
[2016-08-08 11:30:23,044] INFO [GroupCoordinator 0]: Stabilized group
GROUP_NAME generation 1 (kafka.coordinator.GroupCoordinator)
[2016-08-08 11:30:23,044] INFO [GroupCoordinator 0]: Preparing to
restabilize group GROUP_NAME with old generation 1
(kafka.coordinator.GroupCoordinator)
[2016-08-08 11:30:23,045] INFO [GroupCoordinator 0]: Group GROUP_NAME
generation 1 is dead and removed (kafka.coordinator.GroupCoordinator)

IMHO, Kafka's consumers should work fine after any change of the system clock,
but maybe this behavior has fundamentals that I don't know.

I'm sorry if this was discussed previously; I was researching but didn't
find a similar issue.

Thanks,


-- 



Gabriel Alejandro Ibarra

Software Engineer

San Lorenzo 47, 3rd Floor, Office 5

Córdoba, Argentina

Phone: +54 351 4217888


Re: [kafka-clients] [VOTE] 0.10.0.1 RC2

2016-08-08 Thread Ismael Juma
+1 (non-binding), verified checksums and signature, ran tests on source
package and verified quickstart on source and binary packages.
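
For anyone following along, verifying the signature typically amounts to
something like the following (the artifact name is illustrative, here the
Scala 2.11 binary tarball):

gpg --import KEYS     # KEYS file from http://kafka.apache.org/KEYS
gpg --verify kafka_2.11-0.10.0.1.tgz.asc kafka_2.11-0.10.0.1.tgz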

On Mon, Aug 8, 2016 at 2:17 AM, Harsha Chintalapani  wrote:

> +1 (binding)
> 1. Ran a 3-node cluster
> 2. Ran a few tests creating, producing, and consuming from secure &
> non-secure clients.
>
> Thanks,
> Harsha
>
> On Fri, Aug 5, 2016 at 8:50 PM Manikumar Reddy 
> wrote:
>
> > +1 (non-binding).
> > verified quick start and artifacts.
> >
> > On Sat, Aug 6, 2016 at 5:45 AM, Joel Koshy  wrote:
> >
> > > +1 (binding)
> > >
> > > Thanks Ismael!
> > >
> > > On Thu, Aug 4, 2016 at 6:54 AM, Ismael Juma  wrote:
> > >
> > >> Hello Kafka users, developers and client-developers,
> > >>
> > >> This is the third candidate for the release of Apache Kafka 0.10.0.1.
> > >> This is a bug fix release and it includes fixes and improvements from
> 53
> > >> JIRAs (including a few critical bugs). See the release notes for more
> > >> details:
> > >>
> > >> http://home.apache.org/~ijuma/kafka-0.10.0.1-rc2/RELEASE_NOTES.html
> > >>
> > >> When compared to RC1, RC2 contains a fix for a regression where an
> older
> > >> version of slf4j-log4j12 was also being included in the libs folder of
> > the
> > >> binary tarball (KAFKA-4008). Thanks to Manikumar Reddy for reporting
> the
> > >> issue.
> > >>
> > >> *** Please download, test and vote by Monday, 8 August, 8am PT ***
> > >>
> > >> Kafka's KEYS file containing PGP keys we use to sign the release:
> > >> http://kafka.apache.org/KEYS
> > >>
> > >> * Release artifacts to be voted upon (source and binary):
> > >> http://home.apache.org/~ijuma/kafka-0.10.0.1-rc2/
> > >>
> > >> * Maven artifacts to be voted upon:
> > >> https://repository.apache.org/content/groups/staging
> > >>
> > >> * Javadoc:
> > >> http://home.apache.org/~ijuma/kafka-0.10.0.1-rc2/javadoc/
> > >>
> > >> * Tag to be voted upon (off 0.10.0 branch) is the 0.10.0.1-rc2 tag:
> > >> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=f8f56751744ba8e55f90f5c4f3aed8c3459447b2
> > >>
> > >> * Documentation:
> > >> http://kafka.apache.org/0100/documentation.html
> > >>
> > >> * Protocol:
> > >> http://kafka.apache.org/0100/protocol.html
> > >>
> > >> * Successful Jenkins builds for the 0.10.0 branch:
> > >> Unit/integration tests:
> > >> https://builds.apache.org/job/kafka-0.10.0-jdk7/182/
> > >> System tests:
> > >> https://jenkins.confluent.io/job/system-test-kafka-0.10.0/138/
> > >>
> > >> Thanks,
> > >> Ismael
> > >>


[RESULTS] [VOTE] Release Kafka 0.10.0.1

2016-08-08 Thread Ismael Juma
The vote for the Kafka 0.10.0.1 release passed with 14 +1 votes (4 binding)
and no 0 or -1 votes.

+1 votes
PMC Members:
* Neha Narkhede
* Guozhang Wang
* Jun Rao
* Joel Koshy

Committers:
* Gwen Shapira
* Harsha Chintalapani
* Ismael Juma

Community:
* Jim Jagielski
* Tom Crayford
* Dana Powers
* Grant Henke
* Jason Gustafson
* Vahid Hashemian
* Manikumar Reddy

0 votes
* No votes

-1 votes
* No votes

Vote thread:
http://mail-archives.apache.org/mod_mbox/kafka-dev/201608.mbox/%3ccad5tkzymmxdejg_2jt4x-mvzzhgj6ec6hksf4hn+i59dbtd...@mail.gmail.com%3E

I'll continue with the release process and the release announcement will
follow in the next few days.

Ismael


RE: Kafka consumer getting duplicate message

2016-08-08 Thread Ghosh, Achintya (Contractor)
Thank you, Ewen, for your response.
Actually we are using the Spring Kafka 1.0.0.M2 release, which uses the Kafka 0.9 
release.
Yes, we see a lot of duplicates, and here are our producer and consumer settings 
in the application. We don't see any duplication at the producer end: if we send 
1000 messages to a particular topic we receive exactly (sometimes fewer than) 1000 
messages.

But when we consume the messages at the consumer level we see a lot of messages with 
the same offset value and same partition, so please let us know what tweaking is 
needed to avoid the duplication.

We have three types of topics, and each topic has a replication factor of 3 and 
10 partitions.

Producer Configuration:

bootstrap.producer.servers=provisioningservices-aq-dev.g.comcast.net:80
acks=1
retries=3
batch.size=16384
linger.ms=5
buffer.memory=33554432
request.timeout.ms=6
timeout.ms=6
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=com.comcast.provisioning.provisioning_.kafka.CustomMessageSer

Consumer Configuration:

bootstrap.consumer.servers=provisioningservices-aqr-dev.g.comcast.net:80
group.id=ps-consumer-group
enable.auto.commit=false
auto.commit.interval.ms=100
session.timeout.ms=15000
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=com.comcast.provisioning.provisioning_.kafka.CustomMessageDeSer

factory.getContainerProperties().setSyncCommits(true);
factory.setConcurrency(5);

Thanks
Achintya


-Original Message-
From: Ewen Cheslack-Postava [mailto:e...@confluent.io] 
Sent: Saturday, August 06, 2016 1:45 AM
To: users@kafka.apache.org
Cc: d...@kafka.apache.org
Subject: Re: Kafka consumer getting duplicate message

Achintya,

1.0.0.M2 is not an official release, so this version number is not particularly 
meaningful to people on this list. What platform/distribution are you using and 
how does this map to actual Apache Kafka releases?

In general, it is not possible for any system to guarantee exactly once 
semantics because those semantics rely on the source and destination systems 
coordinating -- the source provides some sort of retry semantics, and the 
destination system needs to do some sort of deduplication or similar to only 
"deliver" the data one time.

That said, duplicates should usually only be generated in the face of failures. 
If you're seeing a lot of duplicates, that probably means shutdown/failover is 
not being handled correctly. If you can provide more info about your setup, we 
might be able to suggest tweaks that will avoid these situations.

-Ewen

On Fri, Aug 5, 2016 at 8:15 AM, Ghosh, Achintya (Contractor) < 
achintya_gh...@comcast.com> wrote:

> Hi there,
>
> We are using Kafka 1.0.0.M2 with Spring and we see a lot of duplicate 
> message is getting received by the Listener onMessage() method .
> We configured :
>
> enable.auto.commit=false
> session.timeout.ms=15000
> factory.getContainerProperties().setSyncCommits(true);
> factory.setConcurrency(5);
>
> So what could be the reason to get the duplicate messages?
>
> Thanks
> Achintya
>



--
Thanks,
Ewen
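
To make the failure mode concrete: with enable.auto.commit=false, records that
were polled but whose offsets were not yet committed are redelivered after a
rebalance, crash, or restart, and they reappear with the same partition and
offset. A minimal non-Spring sketch of the commit-after-process discipline
(broker address and topic are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CommitAfterProcess {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "ps-consumer-group");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                process(record); // should be idempotent: uncommitted records
                                 // are redelivered after a failure
            }
            consumer.commitSync(); // commit only after processing succeeded
        }
    }

    static void process(ConsumerRecord<String, String> r) { /* ... */ }
}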


Re: Log compaction leaves empty files?

2016-08-08 Thread Harald Kirsch

Hi Dustin,

Thanks for the reply. To be honest, I am trying to work around 
https://issues.apache.org/jira/browse/KAFKA-1194.


I implemented a small main() to call the LogCleaner for compaction and 
it seemed to work, but it left the zero-byte files. The idea is to stop the 
broker, then run the compaction as a separate process, then restart the 
broker, in order to get around this problem where the whole process trips 
over its own shoelaces due to Windows' file locking.


Since I haven't seen the compaction working on Windows yet, I was 
wondering whether this is to be expected. Should the compaction remove 
the zero-byte files, or would the broker do this? I'm not yet far enough 
into the cleanup code to understand this.


Regards,
Harald

On 05.08.2016 16:23, Dustin Cote wrote:

Harald,

I note that your last modified times are all the same.  Are you maybe using
Java 7?  There are some details here about a JDK bug in Java 7 that causes the last
modified time to get updated on broker restart:
https://issues.apache.org/jira/browse/KAFKA-3802



On Fri, Aug 5, 2016 at 6:12 AM, Harald Kirsch 
wrote:


Hi,

experimenting with log compaction, I see Kafka go through all the steps,
in particular I see positive messages in log-cleaner.log and *.deleted
files. Yet once the *.deleted segment files have disappeared, the segment
and index files with size 0 are still kept.

I stopped and restarted Kafka and saw several rounds of the log cleaner go
by, but the empty files stay:

-rw-rw-r-- 1 0 Aug  5 11:58 .index
-rw-rw-r-- 1 0 Aug  5 11:58 .log
-rw-rw-r-- 1 0 Aug  5 11:58 0051.index
-rw-rw-r-- 1 0 Aug  5 11:58 0051.log
-rw-rw-r-- 1 0 Aug  5 11:58 0096.index
-rw-rw-r-- 1 0 Aug  5 11:58 0096.log
[... snip ...]
-rw-rw-r-- 1 92041 Aug  5 11:58 0680.log
-rw-rw-r-- 1 0 Aug  5 11:58 0727.index
-rw-rw-r-- 1199822 Aug  5 11:58 0727.log
-rw-rw-r-- 1  10485760 Aug  5 11:58 0781.index
-rw-rw-r-- 1 95972 Aug  5 11:58 0781.log

Is this expected behavior or is there yet another configuration option
that defines when these get purged?

Harald.







Re: Re: Failed to parse the broker info from zookeeper

2016-08-08 Thread Sven Ott
Ah, I got it. An underscore is not allowed and the host address contained one (I 
didn't post the full name here).
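
For illustration, a simplified stand-in for the endpoint parsing (the real
pattern is in EndPoint.scala, linked in Rad's reply quoted below): the host
character class allows letters, digits, dots, and hyphens, so a hostname
containing an underscore never matches:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EndpointParseDemo {
    // Simplified, illustrative version of the endpoint regex; the real one
    // lives in kafka.cluster.EndPoint (see the links quoted below).
    static final Pattern ENDPOINT =
        Pattern.compile("^(.*)://([0-9a-zA-Z\\-.]*):(-?[0-9]+)");

    public static void main(String[] args) {
        for (String uri : new String[] {
                "PLAINTEXT://sven:9092",        // parses
                "PLAINTEXT://my_host:9092" }) { // underscore: no match
            Matcher m = ENDPOINT.matcher(uri);
            System.out.println(uri + " -> " + (m.matches() ? "ok" : "unparseable"));
        }
    }
}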
 
Thanks for the quick help.

Best,
Sven

Sent: Monday, 08 August 2016 at 12:46
From: "Radoslaw Gruchalski" 
To: users@kafka.apache.org, users@kafka.apache.org
Cc: users@kafka.apache.org
Subject: Re: Failed to parse the broker info from zookeeper
The exception is:
Caused by: kafka.common.KafkaException: Unable to parse PLAINTEXT://sven:9092 
to a broker endpoint
And it happens here:
https://github.com/apache/kafka/blob/0.10.0/core/src/main/scala/kafka/cluster/EndPoint.scala#L47
Do you have any non-ASCII characters in your URI? Something in your host string 
causes the regex to fail:
https://github.com/apache/kafka/blob/0.10.0/core/src/main/scala/kafka/cluster/EndPoint.scala#L29

--
Best regards,
Rad




On Mon, Aug 8, 2016 at 12:22 PM +0200, "Sven Ott"  wrote:











It fails in step 3: 
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 
--partitions 1 --topic test

Error while executing topic command : Failed to parse the broker info from 
zookeeper: 
{"jmx_port":,"timestamp":"1470650706921","endpoints":["PLAINTEXT://sven:9092"],"host":"sven","version":3,"port":9092}
[2016-08-08 12:05:12,266] ERROR kafka.common.KafkaException: Failed to parse 
the broker info from zookeeper: 
{"jmx_port":,"timestamp":"1470650706921","endpoints":["PLAINTEXT://sven:9092"],"host":"sven","version":3,"port":9092}
at kafka.cluster.Broker$.createBroker(Broker.scala:101)
at kafka.utils.ZkUtils.getBrokerInfo(ZkUtils.scala:787)
at 
kafka.utils.ZkUtils$$anonfun$getAllBrokersInCluster$2.apply(ZkUtils.scala:162)
at 
kafka.utils.ZkUtils$$anonfun$getAllBrokersInCluster$2.apply(ZkUtils.scala:162)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.utils.ZkUtils.getAllBrokersInCluster(ZkUtils.scala:162)
at kafka.admin.AdminUtils$.getBrokerMetadatas(AdminUtils.scala:380)
at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:402)
at kafka.admin.TopicCommand$.createTopic(TopicCommand.scala:110)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:61)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
Caused by: kafka.common.KafkaException: Unable to parse PLAINTEXT://sven:9092 
to a broker endpoint
at kafka.cluster.EndPoint$.createEndPoint(EndPoint.scala:50)
at kafka.cluster.Broker$$anonfun$1.apply(Broker.scala:90)
at kafka.cluster.Broker$$anonfun$1.apply(Broker.scala:89)
at scala.collection.immutable.List.map(List.scala:273)
at kafka.cluster.Broker$.createBroker(Broker.scala:89)
... 15 more
(kafka.admin.TopicCommand$)

___
The kafka-server process does not show any output after starting the topic 
process.

__
Whereas the zookeeper throws an KeeperException already before trying to 
generate a topic.

[2016-08-08 12:05:06,828] INFO Got user-level KeeperException when processing 
sessionid:0x156699ccfaa type:delete cxid:0x20 zxid:0xf3 txntype:-1 
reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode 
= NoNode for /admin/preferred_replica_election 
(org.apache.zookeeper.server.PrepRequestProcessor)
[2016-08-08 12:05:06,923] INFO Got user-level KeeperException when processing 
sessionid:0x156699ccfaa type:create cxid:0x27 zxid:0xf4 txntype:-1 
reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers 
(org.apache.zookeeper.server.PrepRequestProcessor)
[2016-08-08 12:05:06,924] INFO Got user-level KeeperException when processing 
sessionid:0x156699ccfaa type:create cxid:0x28 zxid:0xf5 txntype:-1 
reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for 
/brokers/ids (org.apache.zookeeper.server.PrepRequestProcessor)

And gives that output after trying to start the topic.

[2016-08-08 12:05:12,102] INFO Accepted socket connection from /127.0.0.1:41983 
(org.apache.zookeeper.server.NIOServerCnxnFactory)
[2016-08-08 12:05:12,104] INFO Client attempting to establish new session at 
/127.0.0.1:41983 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-08-08 12:05:12,169] INFO Established session 0x156699ccfaa0001 with 
negotiated timeout 3 for client /127.0.0.1:41983 
(org.apache.zookeeper.server.ZooKeeperServer)
[2016-08-08 12:05:12,267] INFO Processed session termination for sessionid: 
0x156699ccfaa0001 (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-08-08 12:05:12,332] INFO Closed socket connection for client 
/127.0.0.1:41983 which had 

Re: Failed to parse the broker info from zookeeper

2016-08-08 Thread Radoslaw Gruchalski
The exception is:
Caused by: kafka.common.KafkaException: Unable to parse PLAINTEXT://sven:9092 
to a broker endpoint
And it happens here:
https://github.com/apache/kafka/blob/0.10.0/core/src/main/scala/kafka/cluster/EndPoint.scala#L47
Do you have any non-ASCII characters in your URI? Something in your host string 
causes the regex to fail:
https://github.com/apache/kafka/blob/0.10.0/core/src/main/scala/kafka/cluster/EndPoint.scala#L29

-- 
Best regards,
Rad




On Mon, Aug 8, 2016 at 12:22 PM +0200, "Sven Ott"  wrote:











It fails in step 3: 
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 
--partitions 1 --topic test

Error while executing topic command : Failed to parse the broker info from 
zookeeper: 
{"jmx_port":,"timestamp":"1470650706921","endpoints":["PLAINTEXT://sven:9092"],"host":"sven","version":3,"port":9092}
[2016-08-08 12:05:12,266] ERROR kafka.common.KafkaException: Failed to parse 
the broker info from zookeeper: 
{"jmx_port":,"timestamp":"1470650706921","endpoints":["PLAINTEXT://sven:9092"],"host":"sven","version":3,"port":9092}
at kafka.cluster.Broker$.createBroker(Broker.scala:101)
at kafka.utils.ZkUtils.getBrokerInfo(ZkUtils.scala:787)
at 
kafka.utils.ZkUtils$$anonfun$getAllBrokersInCluster$2.apply(ZkUtils.scala:162)
at 
kafka.utils.ZkUtils$$anonfun$getAllBrokersInCluster$2.apply(ZkUtils.scala:162)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.utils.ZkUtils.getAllBrokersInCluster(ZkUtils.scala:162)
at kafka.admin.AdminUtils$.getBrokerMetadatas(AdminUtils.scala:380)
at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:402)
at kafka.admin.TopicCommand$.createTopic(TopicCommand.scala:110)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:61)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
Caused by: kafka.common.KafkaException: Unable to parse PLAINTEXT://sven:9092 
to a broker endpoint
at kafka.cluster.EndPoint$.createEndPoint(EndPoint.scala:50)
at kafka.cluster.Broker$$anonfun$1.apply(Broker.scala:90)
at kafka.cluster.Broker$$anonfun$1.apply(Broker.scala:89)
at scala.collection.immutable.List.map(List.scala:273)
at kafka.cluster.Broker$.createBroker(Broker.scala:89)
... 15 more
 (kafka.admin.TopicCommand$)

___
The kafka-server process does not show any output after starting the topic 
process.

__
Whereas the zookeeper throws an KeeperException already before trying to 
generate a topic.

[2016-08-08 12:05:06,828] INFO Got user-level KeeperException when processing 
sessionid:0x156699ccfaa type:delete cxid:0x20 zxid:0xf3 txntype:-1 
reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode 
= NoNode for /admin/preferred_replica_election 
(org.apache.zookeeper.server.PrepRequestProcessor)
[2016-08-08 12:05:06,923] INFO Got user-level KeeperException when processing 
sessionid:0x156699ccfaa type:create cxid:0x27 zxid:0xf4 txntype:-1 
reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers 
(org.apache.zookeeper.server.PrepRequestProcessor)
[2016-08-08 12:05:06,924] INFO Got user-level KeeperException when processing 
sessionid:0x156699ccfaa type:create cxid:0x28 zxid:0xf5 txntype:-1 
reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for 
/brokers/ids (org.apache.zookeeper.server.PrepRequestProcessor)

And gives that output after trying to start the topic.

[2016-08-08 12:05:12,102] INFO Accepted socket connection from /127.0.0.1:41983 
(org.apache.zookeeper.server.NIOServerCnxnFactory)
[2016-08-08 12:05:12,104] INFO Client attempting to establish new session at 
/127.0.0.1:41983 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-08-08 12:05:12,169] INFO Established session 0x156699ccfaa0001 with 
negotiated timeout 3 for client /127.0.0.1:41983 
(org.apache.zookeeper.server.ZooKeeperServer)
[2016-08-08 12:05:12,267] INFO Processed session termination for sessionid: 
0x156699ccfaa0001 (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-08-08 12:05:12,332] INFO Closed socket connection for client 
/127.0.0.1:41983 which had sessionid 0x156699ccfaa0001 
(org.apache.zookeeper.server.NIOServerCnxn)

Thanks for helping me!

Best

Sven
 

Sent: Monday, 08 August 2016 at 10:04
From: "Jaikiran Pai" 
To: users@kafka.apache.org
Subject: Re: Failed to parse the broker info from zookeeper
The quickstart step 2 has a couple of 

Aw: Re: Failed to parse the broker info from zookeeper

2016-08-08 Thread Sven Ott

It fails in step 3: 
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 
--partitions 1 --topic test

Error while executing topic command : Failed to parse the broker info from 
zookeeper: 
{"jmx_port":,"timestamp":"1470650706921","endpoints":["PLAINTEXT://sven:9092"],"host":"sven","version":3,"port":9092}
[2016-08-08 12:05:12,266] ERROR kafka.common.KafkaException: Failed to parse 
the broker info from zookeeper: 
{"jmx_port":,"timestamp":"1470650706921","endpoints":["PLAINTEXT://sven:9092"],"host":"sven","version":3,"port":9092}
at kafka.cluster.Broker$.createBroker(Broker.scala:101)
at kafka.utils.ZkUtils.getBrokerInfo(ZkUtils.scala:787)
at 
kafka.utils.ZkUtils$$anonfun$getAllBrokersInCluster$2.apply(ZkUtils.scala:162)
at 
kafka.utils.ZkUtils$$anonfun$getAllBrokersInCluster$2.apply(ZkUtils.scala:162)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.utils.ZkUtils.getAllBrokersInCluster(ZkUtils.scala:162)
at kafka.admin.AdminUtils$.getBrokerMetadatas(AdminUtils.scala:380)
at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:402)
at kafka.admin.TopicCommand$.createTopic(TopicCommand.scala:110)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:61)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
Caused by: kafka.common.KafkaException: Unable to parse PLAINTEXT://sven:9092 
to a broker endpoint
at kafka.cluster.EndPoint$.createEndPoint(EndPoint.scala:50)
at kafka.cluster.Broker$$anonfun$1.apply(Broker.scala:90)
at kafka.cluster.Broker$$anonfun$1.apply(Broker.scala:89)
at scala.collection.immutable.List.map(List.scala:273)
at kafka.cluster.Broker$.createBroker(Broker.scala:89)
... 15 more
 (kafka.admin.TopicCommand$)

___
The kafka-server process does not show any output after starting the topic 
process.

__
Whereas the zookeeper throws an KeeperException already before trying to 
generate a topic.

[2016-08-08 12:05:06,828] INFO Got user-level KeeperException when processing 
sessionid:0x156699ccfaa type:delete cxid:0x20 zxid:0xf3 txntype:-1 
reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode 
= NoNode for /admin/preferred_replica_election 
(org.apache.zookeeper.server.PrepRequestProcessor)
[2016-08-08 12:05:06,923] INFO Got user-level KeeperException when processing 
sessionid:0x156699ccfaa type:create cxid:0x27 zxid:0xf4 txntype:-1 
reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers 
(org.apache.zookeeper.server.PrepRequestProcessor)
[2016-08-08 12:05:06,924] INFO Got user-level KeeperException when processing 
sessionid:0x156699ccfaa type:create cxid:0x28 zxid:0xf5 txntype:-1 
reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for 
/brokers/ids (org.apache.zookeeper.server.PrepRequestProcessor)

And gives that output after trying to start the topic.

[2016-08-08 12:05:12,102] INFO Accepted socket connection from /127.0.0.1:41983 
(org.apache.zookeeper.server.NIOServerCnxnFactory)
[2016-08-08 12:05:12,104] INFO Client attempting to establish new session at 
/127.0.0.1:41983 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-08-08 12:05:12,169] INFO Established session 0x156699ccfaa0001 with 
negotiated timeout 3 for client /127.0.0.1:41983 
(org.apache.zookeeper.server.ZooKeeperServer)
[2016-08-08 12:05:12,267] INFO Processed session termination for sessionid: 
0x156699ccfaa0001 (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-08-08 12:05:12,332] INFO Closed socket connection for client 
/127.0.0.1:41983 which had sessionid 0x156699ccfaa0001 
(org.apache.zookeeper.server.NIOServerCnxn)

Thanks for helping me!

Best

Sven
 

Sent: Monday, 08 August 2016 at 10:04
From: "Jaikiran Pai" 
To: users@kafka.apache.org
Subject: Re: Failed to parse the broker info from zookeeper
The quickstart step 2 has a couple of commands, which exact command
shows this exception and is there more in the exception, like an
exception stacktrace? Can you post that somehow?

-Jaikiran
On Monday 08 August 2016 12:46 PM, Sven Ott wrote:
> Hello everyone,
>
> I downloaded the latest Kafka version and tried to run the quick start. 
> Unfortunately, I cannot open a topic after starting the ZooKeeper and Kafka 
> servers as described in step 2. The error
>
> Failed to parse the broker info from zookeeper: {"jmx_port":-1, 
> "timestamp":"...", 
> 

Failed to parse the broker info from zookeeper

2016-08-08 Thread Sven Ott
Hello everyone,
 
I downloaded the latest Kafka version and tried to run the quick start. 
Unfortunately, I cannot open a topic after starting the ZooKeeper and Kafka 
servers as described in step 2. The error 

Failed to parse the broker info from zookeeper: {"jmx_port":-1, 
"timestamp":"...", 
"endpoints":["PLAINTEXT://sven:9092"],"host":"sven","version":3,"port":9092}
 
appears.
 
I analysed the source code and tried to find a solution with Google, but I 
could not find any helpful hint. It looks like I am the first one with this kind 
of error, which I cannot believe. What am I doing wrong? Is there another tool 
missing?
 
Best,
Sven