Topic deletion issues

2015-12-30 Thread Brenden Cobb
Hello-

We have a use case where we're trying to create a topic, delete it, then
recreate it with the same topic name.

Running into inconsistent results.

Creating the topic:
/opt/kafka/bin/kafka-topics.sh --create --partitions 3 --replication-factor 3 \
  --topic test-01 --zookeeper zoo01:2181,zoo02:2181,zoo03:2181

Delete:
/opt/kafka/bin/kafka-topics.sh --delete --topic test-01 \
  --zookeeper zoo01:2181,zoo02:2181,zoo03:2181

Repeat creation.

The results are inconsistent. Executing the above several times can be 
successful, then sporadically we get caught in "topic marked for deletion" and 
it does not clear.

This appears to be a Zookeeper issue of sorts, as the logs show:
[2015-12-30 22:32:32,946] WARN Conditional update of path 
/brokers/topics/test-01/partitions/0/state with data 
{"controller_epoch":21,"leader":2,"version":1,"leader_epoch":1,"isr":[2,0,1]} 
and expected version 1 failed due to 
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /brokers/topics/test-01/partitions/0/state (kafka.utils.ZkUtils$)

In this instance no subdirectories exist beyond /brokers/topics/test-01

I'd like to know if this is a common occurrence and why the Zookeeper node
isn't "fully" created, as Kafka deletion seems stuck without the expected node
path.
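
For now we are considering guarding the recreate step with a wait loop,
roughly like the sketch below (plain ZooKeeper java client; the znode paths
are the ones Kafka uses for deletion, the hosts match our commands above,
and the session timeout / sleep interval are arbitrary):

import org.apache.zookeeper.ZooKeeper;

// Sketch only: poll until the controller has finished the delete, i.e. the
// deletion marker and the topic znode are both gone, before recreating.
public class WaitForTopicDeletion {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zoo01:2181,zoo02:2181,zoo03:2181", 30000, event -> { });
        String topic = "test-01";
        while (zk.exists("/admin/delete_topics/" + topic, false) != null
                || zk.exists("/brokers/topics/" + topic, false) != null) {
            Thread.sleep(500); // deletion is asynchronous; wait for it to complete
        }
        zk.close();
    }
}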

We are using Kafka 0.8.2.

Appreciate any info/guidance.

Thanks,
BC


Re: MD5 checksum on release

2015-12-30 Thread Xavier Stevens
Jun,

I'm not saying what you're doing is wrong, it just wasn't what I expected.
It looks like all of Apache's release process pages are using GPG from what
I can tell, which is fine.

To answer your question about sha1 and sha2, though: the GNU coreutils
tools are of the form <algorithm>sum (examples: md5sum, sha1sum, sha256sum,
sha512sum).

Don't feel like you have to do this on my account. One thing that would be
nice though: if you generate sha2 hashes, can you put the link on the
Kafka downloads page?

Cheers,


Xavier

On Wed, Dec 30, 2015 at 4:00 PM, Jun Rao  wrote:

> Xavier,
>
> We also generate sha1 and sha2. Do we have to use different tools to
> generate those too?
>
> Thanks,
>
> Jun
>
> On Wed, Dec 30, 2015 at 2:29 PM, Xavier Stevens  wrote:
>
> > Hey Jun,
> >
> > I was expecting that you just used md5sum (GNU version).
> >
> > The nice part of using it is that when scripting a check it has a -c
> > option:
> >
> > md5sum -c kafka_2.11-0.9.0.0.tgz.md5
> >
> > The difficult bit with what is currently there is that it has a whole
> > bunch of newlines and spacing in it. So I had to do some janky shell
> > scripting to parse out what I wanted. Here's the basic gist of it if
> anyone
> > finds this useful:
> >
> > SOURCE_CHECKSUM=`md5sum kafka_2.11-0.9.0.0.tgz | awk '{print $1}'`
> > TARGET_CHECKSUM=`cat kafka_2.11-0.9.0.0.tgz.md5 | tr -d '\n' | awk '{
> line
> > = sprintf("%s", $0); gsub(/[[:space:]]/, "", line); split(line, parts,
> > ":"); print tolower(parts[2]) }'`
> > if [ "$SOURCE_CHECKSUM" == "$TARGET_CHECKSUM" ]
> > ...
> > fi
> >
> > Not a huge deal; I just think it would make it easier for folks to use the
> > former rather than GPG.
> >
> > Cheers,
> >
> >
> > Xavier
> >
> >
> > On Wed, Dec 30, 2015 at 2:00 PM, Jun Rao  wrote:
> >
> > > Xavier,
> > >
> > > The md5 checksum is generated by running "gpg --print-md MD5". Is
> there a
> > > command that generates the output that you wanted?
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Tue, Dec 29, 2015 at 5:13 PM, Xavier Stevens 
> > wrote:
> > >
> > > > The current md5 checksums of the release downloads all seem to be
> > > returning
> > > > in an atypical format. Anyone know what's going on there?
> > > >
> > > > Example:
> > > >
> > > >
> > >
> >
> https://dist.apache.org/repos/dist/release/kafka/0.9.0.0/kafka_2.11-0.9.0.0.tgz.md5
> > > >
> > > > I see:
> > > >
> > > > kafka_2.11-0.9.0.0.tgz: 08 4F B8
> > > > 0C DC 8C
> > > > 72 DC  75
> > > > BC 35 19
> > > > A5 D2 CC
> > > > 5C
> > > >
> > > > I would expect to see something more like:
> > > >
> > > > 084fb80cdc8c72dc75bc3519a5d2cc5c kafka_2.11-0.9.0.0.tgz
> > > >
> > >
> >
>


Re: MD5 checksum on release

2015-12-30 Thread Jun Rao
Xavier,

We also generate sha1 and sha2. Do we have to use different tools to
generate those too?

Thanks,

Jun

On Wed, Dec 30, 2015 at 2:29 PM, Xavier Stevens  wrote:

> Hey Jun,
>
> I was expecting that you just used md5sum (GNU version).
>
> The nice part of using it is that when scripting a check it has a -c
> option:
>
> md5sum -c kafka_2.11-0.9.0.0.tgz.md5
>
> The difficult bit with what is currently there is that it has a whole
> bunch of newlines and spacing in it. So I had to do some janky shell
> scripting to parse out what I wanted. Here's the basic gist of it if anyone
> finds this useful:
>
> SOURCE_CHECKSUM=`md5sum kafka_2.11-0.9.0.0.tgz | awk '{print $1}'`
> TARGET_CHECKSUM=`cat kafka_2.11-0.9.0.0.tgz.md5 | tr -d '\n' | awk '{ line
> = sprintf("%s", $0); gsub(/[[:space:]]/, "", line); split(line, parts,
> ":"); print tolower(parts[2]) }'`
> if [ "$SOURCE_CHECKSUM" == "$TARGET_CHECKSUM" ]
> ...
> fi
>
> Not a huge deal; I just think it would make it easier for folks to use the
> former rather than GPG.
>
> Cheers,
>
>
> Xavier
>
>
> On Wed, Dec 30, 2015 at 2:00 PM, Jun Rao  wrote:
>
> > Xavier,
> >
> > The md5 checksum is generated by running "gpg --print-md MD5". Is there a
> > command that generates the output that you wanted?
> >
> > Thanks,
> >
> > Jun
> >
> > On Tue, Dec 29, 2015 at 5:13 PM, Xavier Stevens 
> wrote:
> >
> > > The current md5 checksums of the release downloads all seem to be
> > returning
> > > in an atypical format. Anyone know what's going on there?
> > >
> > > Example:
> > >
> > >
> >
> https://dist.apache.org/repos/dist/release/kafka/0.9.0.0/kafka_2.11-0.9.0.0.tgz.md5
> > >
> > > I see:
> > >
> > > kafka_2.11-0.9.0.0.tgz: 08 4F B8
> > > 0C DC 8C
> > > 72 DC  75
> > > BC 35 19
> > > A5 D2 CC
> > > 5C
> > >
> > > I would expect to see something more like:
> > >
> > > 084fb80cdc8c72dc75bc3519a5d2cc5c kafka_2.11-0.9.0.0.tgz
> > >
> >
>


Re: MD5 checksum on release

2015-12-30 Thread Xavier Stevens
Hey Jun,

I was expecting that you just used md5sum (GNU version).

The nice part of using it is that when scripting a check it has a -c option:

md5sum -c kafka_2.11-0.9.0.0.tgz.md5

The difficult bit with what is currently there is that it has a whole
bunch of newlines and spacing in it. So I had to do some janky shell
scripting to parse out what I wanted. Here's the basic gist of it if anyone
finds this useful:

SOURCE_CHECKSUM=`md5sum kafka_2.11-0.9.0.0.tgz | awk '{print $1}'`
TARGET_CHECKSUM=`cat kafka_2.11-0.9.0.0.tgz.md5 | tr -d '\n' | awk '{ line = sprintf("%s", $0); gsub(/[[:space:]]/, "", line); split(line, parts, ":"); print tolower(parts[2]) }'`
if [ "$SOURCE_CHECKSUM" == "$TARGET_CHECKSUM" ]
...
fi

Not a huge deal; I just think it would make it easier for folks to use the
former rather than GPG.
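
If anyone would rather do the check from the JVM instead of shell, a rough
equivalent is below (standard java.security/java.nio APIs; the expected hex
is whatever the .md5 file contains once lower-cased and stripped of
whitespace):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class Md5Check {
    public static void main(String[] args) throws Exception {
        byte[] data = Files.readAllBytes(Paths.get("kafka_2.11-0.9.0.0.tgz"));
        byte[] digest = MessageDigest.getInstance("MD5").digest(data);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b)); // hex-encode each byte
        }
        // compare against e.g. 084fb80cdc8c72dc75bc3519a5d2cc5c
        System.out.println(hex);
    }
}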

Cheers,


Xavier


On Wed, Dec 30, 2015 at 2:00 PM, Jun Rao  wrote:

> Xavier,
>
> The md5 checksum is generated by running "gpg --print-md MD5". Is there a
> command that generates the output that you wanted?
>
> Thanks,
>
> Jun
>
> On Tue, Dec 29, 2015 at 5:13 PM, Xavier Stevens  wrote:
>
> > The current md5 checksums of the release downloads all seem to be
> returning
> > in an atypical format. Anyone know what's going on there?
> >
> > Example:
> >
> >
> https://dist.apache.org/repos/dist/release/kafka/0.9.0.0/kafka_2.11-0.9.0.0.tgz.md5
> >
> > I see:
> >
> > kafka_2.11-0.9.0.0.tgz: 08 4F B8
> > 0C DC 8C
> > 72 DC  75
> > BC 35 19
> > A5 D2 CC
> > 5C
> >
> > I would expect to see something more like:
> >
> > 084fb80cdc8c72dc75bc3519a5d2cc5c kafka_2.11-0.9.0.0.tgz
> >
>


Re: MD5 checksum on release

2015-12-30 Thread Jun Rao
Xavier,

The md5 checksum is generated by running "gpg --print-md MD5". Is there a
command that generates the output that you wanted?

Thanks,

Jun

On Tue, Dec 29, 2015 at 5:13 PM, Xavier Stevens  wrote:

> The current md5 checksums of the release downloads all seem to be returning
> in an atypical format. Anyone know what's going on there?
>
> Example:
>
> https://dist.apache.org/repos/dist/release/kafka/0.9.0.0/kafka_2.11-0.9.0.0.tgz.md5
>
> I see:
>
> kafka_2.11-0.9.0.0.tgz: 08 4F B8
> 0C DC 8C
> 72 DC  75
> BC 35 19
> A5 D2 CC
> 5C
>
> I would expect to see something more like:
>
> 084fb80cdc8c72dc75bc3519a5d2cc5c kafka_2.11-0.9.0.0.tgz
>


Re: EOF Warning

2015-12-30 Thread Dana Powers
I was thinking kafka logs, but KAFKA-2078 suggests it may be a deeper
issue. Sorry, I don't have any better suggestions / ideas right now than
you found in that JIRA ticket.

-Dana

On Wed, Dec 30, 2015 at 10:10 AM, Birendra Kumar Singh 
wrote:

> Looks like there is an open issue related to the same.
> https://issues.apache.org/jira/browse/KAFKA-2078
>
> @Dana
> Which server logs do you want me to check, Zookeeper or Kafka? I didn't
> find any stack trace over there though.
> It's only in my application logs that I see them. And it comes as a WARN
> rather than an ERROR.
>
> On Wed, Dec 30, 2015 at 9:51 PM, Dana Powers 
> wrote:
>
> > Do you have access to the server logs? Any error is likely recorded there
> > with a stack trace. You also might check what server version you are
> > connecting to.
> >
> > -Dana
> > On Dec 30, 2015 3:49 AM, "Birendra Kumar Singh" 
> > wrote:
> >
> > > I keep getting such warnings intermittently in my application. The
> > > application connects to a kafka server and pushes messages. None of my
> > > messages have failed, however.
> > >
> > > The application is a spring application and it uses kafka-clients to
> > > establish the connection and send messages to kafka.
> > > The kafka-clients dependency used is as below:
> > >
> > > <dependency>
> > >   <groupId>org.apache.kafka</groupId>
> > >   <artifactId>kafka-clients</artifactId>
> > >   <version>0.8.2.0</version>
> > > </dependency>
> > >
> > > ! java.io.EOFException: null
> > > ! at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62) ~[publisher2-0.0.1-SNAPSHOT.jar:na]
> > > ! at org.apache.kafka.common.network.Selector.poll(Selector.java:248) ~[publisher2-0.0.1-SNAPSHOT.jar:na]
> > > ! at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) [publisher2-0.0.1-SNAPSHOT.jar:na]
> > > ! at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) [publisher2-0.0.1-SNAPSHOT.jar:na]
> > > ! at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) [publisher2-0.0.1-SNAPSHOT.jar:na]
> > > ! at java.lang.Thread.run(Thread.java:745) [na:1.7.0_91]
> > >
> >
>


Re: consuming 0 records

2015-12-30 Thread Dana Powers
A few thoughts from a non-expert:

connections are also processed asynchronously in the poll loop. If you are
not enabling any timeout, you may be seeing a few initial iterations spent
on setting up the channel connections. Also you probably need a few loop
iterations to get through an initial metadata request / response.

also, if I recall, records should be returned in batches per
topic-partition; not one-by-one. So if/when records are ready, you would
get as many as were received via completed FetchRequests -- depends on
message size and fetch configs max.partition.fetch.bytes, fetch.min.bytes,
and fetch.max.wait.ms. So you shouldn't expect to poll 500x.

I'd suggest using a small, but non-zero timeout when polling. 100ms is used
in the docs quite a bit.
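
For example, a minimal sketch along these lines (0.9 java client; the
bootstrap server, group, and topic names are just placeholders):

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("test-topic"));
            int total = 0;
            while (total < 500) {
                // a non-zero timeout gives the first iterations time to finish
                // connection setup and the initial metadata request/response
                ConsumerRecords<String, String> records = consumer.poll(100);
                total += records.count(); // records arrive in batches, not one per poll
            }
        }
    }
}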

-Dana

On Wed, Dec 30, 2015 at 10:03 AM, Franco Giacosa  wrote:

> Hi,
>
> I am running kafka 0.9.0 locally.
>
> I am having a particular situation in the following scenario.
>
> (1) 1 Producer inserts 500 records (approx. 300 bytes each) into 1 topic,
> partition 0 (or 1, as you prefer)
> (2) After the producer finishes inserting the 500 records, 1 Consumer reads
> in a loop from this topic with consumer.poll(0)
> and max.partition.fetch.bytes=500; sometimes that call brings records and
> sometimes the loop has to go over a few times until it brings something.
> Can someone explain to me why it doesn't fetch a record each time that it
> polls? Can a poll operation affect another poll operation?
> Why, if I've inserted 500 records, do I have to poll more than 500 times?
>
> I have tried using poll(0), because in the documentation it says, "if 0,
> returns with any records that are available now".
>
> Thanks
>


Re: How to reset a consumer-group's offset in kafka 0.9?

2015-12-30 Thread Marko Bonaći
Hi Han,
if it doesn't work you should file an issue, since it explicitly says in
the readme that it works with:
  1. zookeeper: built-in high-level consumer (based on Zookeeper)
  2. kafka: built-in offset management API (based on Kafka internal topic)
  3. Storm Kafka Spout (based on Zookeeper by default)

I'm still on Kafka 0.8, so I can't shed any light on your issue.
Thx for the AdminClient info.

Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext  | Contact


On Wed, Dec 30, 2015 at 6:10 PM, Han JU  wrote:

> Hi Marko,
>
> Yes we're currently using this on our production kafka 0.8. But it does not
> seem to work with the new consumer API in 0.9.
> To answer my own question about deleting a consumer group in the new
> consumer API, it seems that it's currently not possible (there's no
> delete-related method in `AdminClient` of the new consumer API).
>
>
> 2015-12-30 17:02 GMT+01:00 Marko Bonaći :
>
> > If you want to monitor offsets (ZK or Kafka based), try Quantifind's
> > Kafka Offset Monitor.
> > If you use Docker, it's as easy as:
> >
> > docker run -p 8080:8080 -e ZK=zk_hostname:2181 jpodeszwik/kafka-offset-monitor
> >
> > and then opening a browser to dockerhost:8080.
> >
> > If not in the Docker mood, use instructions here:
> > https://github.com/quantifind/KafkaOffsetMonitor
> >
> > Marko Bonaći
> > Monitoring | Alerting | Anomaly Detection | Centralized Log Management
> > Solr & Elasticsearch Support
> > Sematext  | Contact
> > 
> >
> > On Wed, Dec 30, 2015 at 12:54 PM, Han JU  wrote:
> >
> > > Thanks guys. The `seek` seems like a solution. But it's more cumbersome
> > > than in 0.8 because I have to plug in some extra code in my consumer
> > > abstractions rather than simply deleting a zk node.
> > > And one more question: where does kafka 0.9 store the consumer-group
> > > information? In fact I also tried to delete the consumer group but the
> > > `AdminUtils.deleteConsumerGroupInZK` does not seem to work in 0.9. And
> > > also `bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper
> > > localhost:2181 --group group-name` seems broken.
> > >
> > > Thanks!
> > >
> > > 2015-12-29 16:46 GMT+01:00 Marko Bonaći :
> > >
> > > > I was referring to Dana Powers's answer in the link I posted (to use a
> > > > client API). You can find an example here:
> > > >
> > > >
> > >
> >
> http://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html
> > > >
> > > > Marko Bonaći
> > > > Monitoring | Alerting | Anomaly Detection | Centralized Log
> Management
> > > > Solr & Elasticsearch Support
> > > > Sematext  | Contact
> > > > 
> > > >
> > > > On Tue, Dec 29, 2015 at 4:41 PM, Stevo Slavić 
> > wrote:
> > > >
> > > > > Then I guess @Before test, explicitly commit offset of 0.
> > > > >
> > > > > There doesn't seem to be a tool for committing offset, only for
> > > > > checking/fetching current offset (see
> > > > > http://kafka.apache.org/documentation.html#operations )
> > > > >
> > > > > On Tue, Dec 29, 2015 at 4:35 PM, Han JU 
> > > wrote:
> > > > >
> > > > > > Hi Stevo,
> > > > > >
> > > > > > But by deleting and recreating the topic, do I also remove the
> > > > > > messages ingested?
> > > > > > My use case is that I ingest prepared messages once and run
> > > > > > consumer tests multiple times; between each test run I reset the
> > > > > > consumer group's offset so that each run starts from the beginning
> > > > > > and consumes all the messages.
> > > > > >
> > > > > > 2015-12-29 16:19 GMT+01:00 Stevo Slavić :
> > > > > >
> > > > > > > Have you considered deleting and recreating topic used in test?
> > > > > > > Once topic is clean, read/poll once - any committed offset
> should
> > > be
> > > > > > > outside of the range, and consumer should reset offset.
> > > > > > >
> > > > > > > On Tue, Dec 29, 2015 at 4:11 PM, Han JU <
> ju.han.fe...@gmail.com>
> > > > > wrote:
> > > > > > >
> > > > > > > > Hello,
> > > > > > > >
> > > > > > > > For local testing purposes I need to frequently reset the
> > > > > > > > offset for a consumer group. In 0.8 I just delete the consumer
> > > > > > > > group's zk node under `/consumers`. But with the redesign in
> > > > > > > > 0.9, how could I achieve the same thing?
> > > > > > > >
> > > > > > > > Thanks!
> > > > > > > >
> > > > > > > > --
> > > > > > > > *JU Han*
> > > > > > > >
> > > > > > > > Software Engineer @ Teads.tv
> > > > > > > >
> > > > > > > > +33 061960
> > > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > *JU Han*
> > > > > >
> > > > > > Software Engineer @ Teads.tv
> > > > > >
> > > > > > +33 061960
> > > > > >
> > > > >
> > > >
> > >
>

Re: EOF Warning

2015-12-30 Thread Birendra Kumar Singh
Looks like there is an open issue related to the same.
https://issues.apache.org/jira/browse/KAFKA-2078

@Dana
Which server logs do you want me to check, Zookeeper or Kafka? I didn't
find any stack trace over there though.
It's only in my application logs that I see them. And it comes as a WARN
rather than an ERROR.

On Wed, Dec 30, 2015 at 9:51 PM, Dana Powers  wrote:

> Do you have access to the server logs? Any error is likely recorded there
> with a stack trace. You also might check what server version you are
> connecting to.
>
> -Dana
> On Dec 30, 2015 3:49 AM, "Birendra Kumar Singh" 
> wrote:
>
> > I keep getting such warnings intermittently in my application. The
> > application connects to a kafka server and pushes messages. None of my
> > messages have failed, however.
> >
> > The application is a spring application and it uses kafka-clients to
> > establish the connection and send messages to kafka.
> > The kafka-clients dependency used is as below:
> >
> > <dependency>
> >   <groupId>org.apache.kafka</groupId>
> >   <artifactId>kafka-clients</artifactId>
> >   <version>0.8.2.0</version>
> > </dependency>
> >
> > ! java.io.EOFException: null
> > ! at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62) ~[publisher2-0.0.1-SNAPSHOT.jar:na]
> > ! at org.apache.kafka.common.network.Selector.poll(Selector.java:248) ~[publisher2-0.0.1-SNAPSHOT.jar:na]
> > ! at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) [publisher2-0.0.1-SNAPSHOT.jar:na]
> > ! at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) [publisher2-0.0.1-SNAPSHOT.jar:na]
> > ! at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) [publisher2-0.0.1-SNAPSHOT.jar:na]
> > ! at java.lang.Thread.run(Thread.java:745) [na:1.7.0_91]
> >
>


consuming 0 records

2015-12-30 Thread Franco Giacosa
Hi,

I am running kafka 0.9.0 locally.

I am having a particular situation in the following scenario.

(1) 1 Producer inserts 500 records (approx. 300 bytes each) into 1 topic,
partition 0 (or 1, as you prefer)
(2) After the producer finishes inserting the 500 records, 1 Consumer reads
in a loop from this topic with consumer.poll(0)
and max.partition.fetch.bytes=500; sometimes that call brings records and
sometimes the loop has to go over a few times until it brings something.
Can someone explain to me why it doesn't fetch a record each time that it
polls? Can a poll operation affect another poll operation?
Why, if I've inserted 500 records, do I have to poll more than 500 times?

I have tried using poll(0), because in the documentation it says, "if 0,
returns with any records that are available now".

Thanks


Reactive Kafka now supports Kafka 0.9

2015-12-30 Thread Krzysztof Ciesielski
Hi there,

I'd like to announce that our open source library, Reactive Kafka (which
wraps Akka Streams for Java/Scala around Kafka consumers/producers), now
supports Kafka 0.9. More details:
https://softwaremill.com/reactive-kafka-09/


Re: How to reset a consumer-group's offset in kafka 0.9?

2015-12-30 Thread Han JU
Hi Marko,

Yes we're currently using this on our production kafka 0.8. But it does not
seem to work with the new consumer API in 0.9.
To answer my own question about deleting a consumer group in the new consumer
API, it seems that it's currently not possible (there's no delete-related
method in `AdminClient` of the new consumer API).


2015-12-30 17:02 GMT+01:00 Marko Bonaći :

> If you want to monitor offsets (ZK or Kafka based), try Quantifind's
> Kafka Offset Monitor.
> If you use Docker, it's as easy as:
>
> docker run -p 8080:8080 -e ZK=zk_hostname:2181 jpodeszwik/kafka-offset-monitor
>
> and then opening a browser to dockerhost:8080.
>
> If not in the Docker mood, use instructions here:
> https://github.com/quantifind/KafkaOffsetMonitor
>
> Marko Bonaći
> Monitoring | Alerting | Anomaly Detection | Centralized Log Management
> Solr & Elasticsearch Support
> Sematext  | Contact
> 
>
> On Wed, Dec 30, 2015 at 12:54 PM, Han JU  wrote:
>
> > Thanks guys. The `seek` seems like a solution. But it's more cumbersome
> > than in 0.8 because I have to plug in some extra code in my consumer
> > abstractions rather than simply deleting a zk node.
> > And one more question: where does kafka 0.9 store the consumer-group
> > information? In fact I also tried to delete the consumer group but the
> > `AdminUtils.deleteConsumerGroupInZK` does not seem to work in 0.9. And
> > also `bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper
> > localhost:2181 --group group-name` seems broken.
> >
> > Thanks!
> >
> > 2015-12-29 16:46 GMT+01:00 Marko Bonaći :
> >
> > > I was referring to Dana Powers's answer in the link I posted (to use a
> > > client API). You can find an example here:
> > >
> > >
> >
> http://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html
> > >
> > > Marko Bonaći
> > > Monitoring | Alerting | Anomaly Detection | Centralized Log Management
> > > Solr & Elasticsearch Support
> > > Sematext  | Contact
> > > 
> > >
> > > On Tue, Dec 29, 2015 at 4:41 PM, Stevo Slavić 
> wrote:
> > >
> > > > Then I guess @Before test, explicitly commit offset of 0.
> > > >
> > > > There doesn't seem to be a tool for committing offset, only for
> > > > checking/fetching current offset (see
> > > > http://kafka.apache.org/documentation.html#operations )
> > > >
> > > > On Tue, Dec 29, 2015 at 4:35 PM, Han JU 
> > wrote:
> > > >
> > > > > Hi Stevo,
> > > > >
> > > > > But by deleting and recreating the topic, do I also remove the
> > > > > messages ingested?
> > > > > My use case is that I ingest prepared messages once and run
> > > > > consumer tests multiple times; between each test run I reset the
> > > > > consumer group's offset so that each run starts from the beginning
> > > > > and consumes all the messages.
> > > > >
> > > > > 2015-12-29 16:19 GMT+01:00 Stevo Slavić :
> > > > >
> > > > > > Have you considered deleting and recreating topic used in test?
> > > > > > Once topic is clean, read/poll once - any committed offset should
> > be
> > > > > > outside of the range, and consumer should reset offset.
> > > > > >
> > > > > > On Tue, Dec 29, 2015 at 4:11 PM, Han JU 
> > > > wrote:
> > > > > >
> > > > > > > Hello,
> > > > > > >
> > > > > > > For local testing purposes I need to frequently reset the
> > > > > > > offset for a consumer group. In 0.8 I just delete the consumer
> > > > > > > group's zk node under `/consumers`. But with the redesign in
> > > > > > > 0.9, how could I achieve the same thing?
> > > > > > >
> > > > > > > Thanks!
> > > > > > >
> > > > > > > --
> > > > > > > *JU Han*
> > > > > > >
> > > > > > > Software Engineer @ Teads.tv
> > > > > > >
> > > > > > > +33 061960
> > > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > *JU Han*
> > > > >
> > > > > Software Engineer @ Teads.tv
> > > > >
> > > > > +33 061960
> > > > >
> > > >
> > >
> >
> >
> >
> > --
> > *JU Han*
> >
> > Software Engineer @ Teads.tv
> >
> > +33 061960
> >
>



-- 
*JU Han*

Software Engineer @ Teads.tv

+33 061960


Re: EOF Warning

2015-12-30 Thread Dana Powers
Do you have access to the server logs? Any error is likely recorded there
with a stack trace. You also might check what server version you are
connecting to.

-Dana
On Dec 30, 2015 3:49 AM, "Birendra Kumar Singh"  wrote:

> I keep getting such warnings intermittently in my application. The
> application connects to a kafka server and pushes messages. None of my
> messages have failed, however.
>
> The application is a spring application and it uses kafka-clients to
> establish the connection and send messages to kafka.
> The kafka-clients dependency used is as below:
>
> <dependency>
>   <groupId>org.apache.kafka</groupId>
>   <artifactId>kafka-clients</artifactId>
>   <version>0.8.2.0</version>
> </dependency>
>
> ! java.io.EOFException: null
> ! at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62) ~[publisher2-0.0.1-SNAPSHOT.jar:na]
> ! at org.apache.kafka.common.network.Selector.poll(Selector.java:248) ~[publisher2-0.0.1-SNAPSHOT.jar:na]
> ! at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) [publisher2-0.0.1-SNAPSHOT.jar:na]
> ! at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) [publisher2-0.0.1-SNAPSHOT.jar:na]
> ! at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) [publisher2-0.0.1-SNAPSHOT.jar:na]
> ! at java.lang.Thread.run(Thread.java:745) [na:1.7.0_91]
>


Re: How to reset a consumer-group's offset in kafka 0.9?

2015-12-30 Thread Marko Bonaći
If you want to monitor offsets (ZK or Kafka based), try Quantifind's
Kafka Offset Monitor.
If you use Docker, it's as easy as:

docker run -p 8080:8080 -e ZK=zk_hostname:2181 jpodeszwik/kafka-offset-monitor

and then opening a browser to dockerhost:8080.

If not in the Docker mood, use instructions here:
https://github.com/quantifind/KafkaOffsetMonitor

Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext  | Contact


On Wed, Dec 30, 2015 at 12:54 PM, Han JU  wrote:

> Thanks guys. The `seek` seems like a solution. But it's more cumbersome than
> in 0.8 because I have to plug in some extra code in my consumer abstractions
> rather than simply deleting a zk node.
> And one more question: where does kafka 0.9 store the consumer-group
> information? In fact I also tried to delete the consumer group but the
> `AdminUtils.deleteConsumerGroupInZK` does not seem to work in 0.9. And also
> `bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper
> localhost:2181 --group group-name` seems broken.
>
> Thanks!
>
> 2015-12-29 16:46 GMT+01:00 Marko Bonaći :
>
> > I was referring to Dana Powers's answer in the link I posted (to use a
> > client API). You can find an example here:
> >
> >
> http://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html
> >
> > Marko Bonaći
> > Monitoring | Alerting | Anomaly Detection | Centralized Log Management
> > Solr & Elasticsearch Support
> > Sematext  | Contact
> > 
> >
> > On Tue, Dec 29, 2015 at 4:41 PM, Stevo Slavić  wrote:
> >
> > > Then I guess @Before test, explicitly commit offset of 0.
> > >
> > > There doesn't seem to be a tool for committing offset, only for
> > > checking/fetching current offset (see
> > > http://kafka.apache.org/documentation.html#operations )
> > >
> > > On Tue, Dec 29, 2015 at 4:35 PM, Han JU 
> wrote:
> > >
> > > > Hi Stevo,
> > > >
> > > > But by deleting and recreating the topic, do I also remove the
> > > > messages ingested?
> > > > My use case is that I ingest prepared messages once and run
> > > > consumer tests multiple times; between each test run I reset the
> > > > consumer group's offset so that each run starts from the beginning
> > > > and consumes all the messages.
> > > >
> > > > 2015-12-29 16:19 GMT+01:00 Stevo Slavić :
> > > >
> > > > > Have you considered deleting and recreating topic used in test?
> > > > > Once topic is clean, read/poll once - any committed offset should
> be
> > > > > outside of the range, and consumer should reset offset.
> > > > >
> > > > > On Tue, Dec 29, 2015 at 4:11 PM, Han JU 
> > > wrote:
> > > > >
> > > > > > Hello,
> > > > > >
> > > > > > For local testing purposes I need to frequently reset the
> > > > > > offset for a consumer group. In 0.8 I just delete the consumer
> > > > > > group's zk node under `/consumers`. But with the redesign in
> > > > > > 0.9, how could I achieve the same thing?
> > > > > >
> > > > > > Thanks!
> > > > > >
> > > > > > --
> > > > > > *JU Han*
> > > > > >
> > > > > > Software Engineer @ Teads.tv
> > > > > >
> > > > > > +33 061960
> > > > > >
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > *JU Han*
> > > >
> > > > Software Engineer @ Teads.tv
> > > >
> > > > +33 061960
> > > >
> > >
> >
>
>
>
> --
> *JU Han*
>
> Software Engineer @ Teads.tv
>
> +33 061960
>


Re: Consumer - Failed to find leader

2015-12-30 Thread Harsha
Can you add your jaas file details? Your jaas file might have
useTicketCache=true and storeKey=true as well. Example of a
KafkaServer jaas file:

KafkaServer {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   storeKey=true
   serviceName="kafka"
   keyTab="/vagrant/keytabs/kafka1.keytab"
   principal="kafka/kafka1.witzend@witzend.com";
};

and KafkaClient:

KafkaClient {
   com.sun.security.auth.module.Krb5LoginModule required
   useTicketCache=true
   serviceName="kafka";
};
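
Also make sure the process is actually pointed at the jaas file, something
like this (the path here is just an example):

export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_jaas.conf"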

On Wed, Dec 30, 2015, at 03:10 AM, prabhu v wrote:
> Hi Harsha,
>
> I have used the fully qualified domain name. Just for security
> concerns, before sending this mail, I have replaced our FQDN hostname
> with localhost.
>
> Yes, I have tried kinit and I am able to view the tickets using the
> klist command as well.
>
> Thanks, Prabhu
>
> On Wed, Dec 30, 2015 at 11:27 AM, Harsha  wrote:
>> Prabhu,
>> When using SASL/kerberos always make sure you give FQDN of
>> the hostname. In your command you are using --zookeeper
>> localhost:2181 and make sure you change that hostname.
>>
>> "javax.security.auth.login.LoginException: No key to store Will continue
>> connection to Zookeeper server without SASL authentication, if Zookeeper"
>>
>> did you try kinit with that keytab at the command line.
>>
>> -Harsha
>> On Mon, Dec 28, 2015, at 04:07 AM, prabhu v wrote:
>> > Thanks for the input Ismael.
>> >
>> > I will try and let you know.
>> >
>> > Also need your valuable inputs for the below issue :)
>> >
>> > I am not able to run kafka-topics.sh (0.9.0.0 version)
>> >
>> > [root@localhost bin]# ./kafka-topics.sh --list --zookeeper localhost:2181
>> > [2015-12-28 12:41:32,589] WARN SASL configuration failed:
>> > javax.security.auth.login.LoginException: No key to store Will continue
>> > connection to Zookeeper server without SASL authentication, if Zookeeper
>> > server allows it. (org.apache.zookeeper.ClientCnxn)
>> > ^Z
>> >
>> > I am sure the key is present in its keytab file (I have cross verified
>> > using the kinit command as well).
>> >
>> > Am I missing anything while calling kafka-topics.sh?
>> >
>> > On Mon, Dec 28, 2015 at 3:53 PM, Ismael Juma  wrote:
>> >
>> > > Hi Prabhu,
>> > >
>> > > kafka-console-consumer.sh uses the old consumer by default, but only the
>> > > new consumer supports security. Use --new-consumer to change this.
>> > >
>> > > Hope this helps.
>> > >
>> > > Ismael
>> > > On 28 Dec 2015 05:48, "prabhu v"  wrote:
>> > >
>> > > > Hi Experts,
>> > > >
>> > > > I am getting the below error when running the consumer
>> > > > "kafka-console-consumer.sh".
>> > > >
>> > > > I am using the new version 0.9.0.1.
>> > > > Topic name: test
>> > > >
>> > > > [2015-12-28 06:13:34,409] WARN
>> > > > [console-consumer-61657_localhost-1451283204993-5512891d-leader-finder-thread],
>> > > > Failed to find leader for Set([test,0])
>> > > > (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
>> > > > kafka.common.BrokerEndPointNotAvailableException: End point PLAINTEXT not
>> > > > found for broker 0
>> > > > at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:136)
>> > > >
>> > > > Please find the current configuration below.
>> > > >
>> > > > Configuration:
>> > > >
>> > > > [root@localhost config]# grep -v "^#" consumer.properties
>> > > > zookeeper.connect=localhost:2181
>> > > > zookeeper.connection.timeout.ms=6
>> > > > group.id=test-consumer-group
>> > > > security.protocol=SASL_PLAINTEXT
>> > > > sasl.kerberos.service.name="kafka"
>> > > >
>> > > > [root@localhost config]# grep -v "^#" producer.properties
>> > > > metadata.broker.list=localhost:9094,localhost:9095
>> > > > producer.type=sync
>> > > > compression.codec=none
>> > > > serializer.class=kafka.serializer.DefaultEncoder
>> > > > security.protocol=SASL_PLAINTEXT
>> > > > sasl.kerberos.service.name="kafka"
>> > > >
>> > > > [root@localhost config]# grep -v "^#" server1.properties
>> > > > broker.id=0
>> > > > listeners=SASL_PLAINTEXT://localhost:9094
>> > > > delete.topic.enable=true
>> > > > num.network.threads=3
>> > > > num.io.threads=8
>> > > > socket.send.buffer.bytes=102400
>> > > > socket.receive.buffer.bytes=102400
>> > > > socket.request.max.bytes=104857600
>> > > > log.dirs=/data/kafka_2.11-0.9.0.0/kafka-logs
>> > > > num.partitions=1
>> > > > num.recovery.threads.per.data.dir=1
>> > > > log.retention.hours=168
>> > > > log.segment.bytes=1073741824
>> > > > log.retention.check.interval.ms=30
>> > > > log.cleaner.enable=false
>> > > > zookeeper.connect=localhost:2181
>> > > > zookeeper.connection.timeout.ms=6
>> > > > inter.broker.protocol.version=0.9.0.0
>> > > > security.inter.broker.protocol=SASL_PLAINTEXT
>> > > > allow.everyone.if.no.acl.found=true
>> > > >
>> > > > [root@localhost config]# grep -v "^#" server4.properties
>> > > > broker.id=1
>> > > > listeners=SASL_PLAINTEXT://localhost:9095
>> > > > delete.topic.enable=tru

__consumer_offsets Topic files are not getting deleted after log retention hrs

2015-12-30 Thread Madhukar Bharti
Dear Team,

We are using Kafka 0.8.2.1 with log.retention.hours=168, but files of the
__consumer_offsets topic are not getting deleted; because of this, a lot of
disk space is used.

Please help: how can files of the offset storage topic be deleted after the
specified time?
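
One thing we noticed while investigating (not yet confirmed as the cause):
__consumer_offsets is a compacted topic, so it is cleaned by the log cleaner
rather than by log.retention.hours, and in 0.8.2 the cleaner is off by
default:

# server.properties -- the log cleaner must be enabled for compacted topics
# such as __consumer_offsets to be cleaned up
log.cleaner.enable=true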


Thanks and Regards,
Madhukar


Re: How to reset a consumer-group's offset in kafka 0.9?

2015-12-30 Thread Han JU
Thanks guys. The `seek` seems like a solution. But it's more cumbersome than in
0.8 because I have to plug in some extra code in my consumer abstractions
rather than simply deleting a zk node.
And one more question: where does kafka 0.9 store the consumer-group
information? In fact I also tried to delete the consumer group but the
`AdminUtils.deleteConsumerGroupInZK` does not seem to work in 0.9. And also
`bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper
localhost:2181 --group group-name` seems broken.
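
For reference, the extra code amounts to something like this sketch (0.9 new
consumer; the topic name is a placeholder, and the rewound position only
takes effect on the next poll):

import java.util.Arrays;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ResetToBeginning {
    static void reset(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(Arrays.asList("test-topic"));
        consumer.poll(0); // first poll joins the group so assignment() is populated
        for (TopicPartition tp : consumer.assignment()) {
            consumer.seekToBeginning(tp); // the 0.9 API takes varargs of partitions
        }
        // the new position is picked up by the next poll() and committed
        // through the normal commit path afterwards
    }
}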

Thanks!

2015-12-29 16:46 GMT+01:00 Marko Bonaći :

> I was referring to Dana Powers's answer in the link I posted (to use a
> client API). You can find an example here:
>
> http://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html
>
> Marko Bonaći
> Monitoring | Alerting | Anomaly Detection | Centralized Log Management
> Solr & Elasticsearch Support
> Sematext  | Contact
> 
>
> On Tue, Dec 29, 2015 at 4:41 PM, Stevo Slavić  wrote:
>
> > Then I guess @Before test, explicitly commit offset of 0.
> >
> > There doesn't seem to be a tool for committing offset, only for
> > checking/fetching current offset (see
> > http://kafka.apache.org/documentation.html#operations )
> >
> > On Tue, Dec 29, 2015 at 4:35 PM, Han JU  wrote:
> >
> > > Hi Stevo,
> > >
> > > But by deleting and recreating the topic, do I also remove the
> > > messages ingested?
> > > My use case is that I ingest prepared messages once and run
> > > consumer tests multiple times; between each test run I reset the
> > > consumer group's offset so that each run starts from the beginning
> > > and consumes all the messages.
> > >
> > > 2015-12-29 16:19 GMT+01:00 Stevo Slavić :
> > >
> > > > Have you considered deleting and recreating topic used in test?
> > > > Once topic is clean, read/poll once - any committed offset should be
> > > > outside of the range, and consumer should reset offset.
> > > >
> > > > On Tue, Dec 29, 2015 at 4:11 PM, Han JU 
> > wrote:
> > > >
> > > > > Hello,
> > > > >
> > > > > For local testing purposes I need to frequently reset the
> > > > > offset for a consumer group. In 0.8 I just delete the consumer
> > > > > group's zk node under `/consumers`. But with the redesign in
> > > > > 0.9, how could I achieve the same thing?
> > > > >
> > > > > Thanks!
> > > > >
> > > > > --
> > > > > *JU Han*
> > > > >
> > > > > Software Engineer @ Teads.tv
> > > > >
> > > > > +33 061960
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > *JU Han*
> > >
> > > Software Engineer @ Teads.tv
> > >
> > > +33 061960
> > >
> >
>



-- 
*JU Han*

Software Engineer @ Teads.tv

+33 061960


EOF Warning

2015-12-30 Thread Birendra Kumar Singh
I keep getting such warnings intermittently in my application. The
application connects to a kafka server and pushes messages. None of my
messages have failed, however.

The application is a spring application and it uses kafka-clients to
establish the connection and send messages to kafka.
The kafka-clients dependency used is as below:

<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>0.8.2.0</version>
</dependency>

! java.io.EOFException: null
! at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62) ~[publisher2-0.0.1-SNAPSHOT.jar:na]
! at org.apache.kafka.common.network.Selector.poll(Selector.java:248) ~[publisher2-0.0.1-SNAPSHOT.jar:na]
! at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) [publisher2-0.0.1-SNAPSHOT.jar:na]
! at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) [publisher2-0.0.1-SNAPSHOT.jar:na]
! at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) [publisher2-0.0.1-SNAPSHOT.jar:na]
! at java.lang.Thread.run(Thread.java:745) [na:1.7.0_91]


Re: Consumer - Failed to find leader

2015-12-30 Thread prabhu v
Hi Harsha,

I have used the fully qualified domain name. Just for security concerns,
before sending this mail, I have replaced our FQDN hostname with localhost.

Yes, I have tried kinit and I am able to view the tickets using the klist
command as well.

Thanks,
Prabhu

On Wed, Dec 30, 2015 at 11:27 AM, Harsha  wrote:

> Prabhu,
> When using SASL/kerberos always make sure you give FQDN of
> the hostname. In your command you are using --zookeeper
> localhost:2181 and make sure you change that hostname.
>
> "javax.security.auth.login.LoginException: No key to store Will continue
> connection to Zookeeper server without SASL authentication, if Zookeeper"
>
> did you try kinit with that keytab at the command line.
>
> -Harsha
> On Mon, Dec 28, 2015, at 04:07 AM, prabhu v wrote:
> > Thanks for the input Ismael.
> >
> > I will try and let you know.
> >
> > Also need your valuable inputs for the below issue :)
> >
> > I am not able to run kafka-topics.sh (0.9.0.0 version)
> >
> > [root@localhost bin]# ./kafka-topics.sh --list --zookeeper localhost:2181
> > [2015-12-28 12:41:32,589] WARN SASL configuration failed:
> > javax.security.auth.login.LoginException: No key to store Will continue
> > connection to Zookeeper server without SASL authentication, if Zookeeper
> > server allows it. (org.apache.zookeeper.ClientCnxn)
> > ^Z
> >
> > I am sure the key is present in its keytab file (I have cross verified
> > using the kinit command as well).
> >
> > Am I missing anything while calling kafka-topics.sh?
> >
> >
> >
> > On Mon, Dec 28, 2015 at 3:53 PM, Ismael Juma  wrote:
> >
> > > Hi Prabhu,
> > >
> > > kafka-console-consumer.sh uses the old consumer by default, but only the
> > > new consumer supports security. Use --new-consumer to change this.
> > >
> > > Hope this helps.
> > >
> > > Ismael
> > > On 28 Dec 2015 05:48, "prabhu v"  wrote:
> > >
> > > > Hi Experts,
> > > >
> > > > I am getting the below error when running the consumer
> > > > "kafka-console-consumer.sh" .
> > > >
> > > > I am using the new version 0.9.0.1.
> > > > Topic name: test
> > > >
> > > >
> > > > [2015-12-28 06:13:34,409] WARN
> > > > [console-consumer-61657_localhost-1451283204993-5512891d-leader-finder-thread],
> > > > Failed to find leader for Set([test,0])
> > > > (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
> > > > kafka.common.BrokerEndPointNotAvailableException: End point PLAINTEXT not
> > > > found for broker 0
> > > > at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:136)
> > > >
> > > >
> > > > Please find the current configuration below.
> > > >
> > > > Configuration:
> > > >
> > > >
> > > > [root@localhost config]# grep -v "^#" consumer.properties
> > > > zookeeper.connect=localhost:2181
> > > > zookeeper.connection.timeout.ms=6
> > > > group.id=test-consumer-group
> > > > security.protocol=SASL_PLAINTEXT
> > > > sasl.kerberos.service.name="kafka"
> > > >
> > > >
> > > > [root@localhost config]# grep -v "^#" producer.properties
> > > > metadata.broker.list=localhost:9094,localhost:9095
> > > > producer.type=sync
> > > > compression.codec=none
> > > > serializer.class=kafka.serializer.DefaultEncoder
> > > > security.protocol=SASL_PLAINTEXT
> > > > sasl.kerberos.service.name="kafka"
> > > >
> > > > [root@localhost config]# grep -v "^#" server1.properties
> > > >
> > > > broker.id=0
> > > > listeners=SASL_PLAINTEXT://localhost:9094
> > > > delete.topic.enable=true
> > > > num.network.threads=3
> > > > num.io.threads=8
> > > > socket.send.buffer.bytes=102400
> > > > socket.receive.buffer.bytes=102400
> > > > socket.request.max.bytes=104857600
> > > > log.dirs=/data/kafka_2.11-0.9.0.0/kafka-logs
> > > > num.partitions=1
> > > > num.recovery.threads.per.data.dir=1
> > > > log.retention.hours=168
> > > > log.segment.bytes=1073741824
> > > > log.retention.check.interval.ms=30
> > > > log.cleaner.enable=false
> > > > zookeeper.connect=localhost:2181
> > > > zookeeper.connection.timeout.ms=6
> > > > inter.broker.protocol.version=0.9.0.0
> > > > security.inter.broker.protocol=SASL_PLAINTEXT
> > > > allow.everyone.if.no.acl.found=true
> > > >
> > > >
> > > > [root@localhost config]# grep -v "^#" server4.properties
> > > > broker.id=1
> > > > listeners=SASL_PLAINTEXT://localhost:9095
> > > > delete.topic.enable=true
> > > > num.network.threads=3
> > > > num.io.threads=8
> > > > socket.send.buffer.bytes=102400
> > > > socket.receive.buffer.bytes=102400
> > > > socket.request.max.bytes=104857600
> > > > log.dirs=/data/kafka_2.11-0.9.0.0/kafka-logs-1
> > > > num.partitions=1
> > > > num.recovery.threads.per.data.dir=1
> > > > log.retention.hours=168
> > > > log.segment.bytes=1073741824
> > > > log.retention.check.interval.ms=30
> > > > log.cleaner.enable=false
> > > > zookeeper.connect=localhost:2181
> > > > zookeeper.connection.timeout.ms=6
> > > > inter.broker.protocol.version=0.9.0.0
> > > > security.inter.broker.protocol=SASL_PLAINTEXT
> > > > zoo

Re: High CPU ~ Shrinking ISR for partition ~ skip updating ISR endless loop

2015-12-30 Thread Petri Lehtinen
Manish Sharma wrote:
> One of my brokers (id 0) is continuously emitting such log entries and
> eating up CPU cycles..
> 
> *[2015-11-07 12:49:50,677] INFO Partition [Wmt_Thursday_158,12] on broker
> 0: Shrinking ISR for partition [Wmt_Thursday_158,12] from 0,3 to 0
> (kafka.cluster.Partition)*
> 
> *[2015-11-07 12:49:50,680] INFO Partition [Wmt_Thursday_158,12] on broker
> 0: Cached zkVersion [2] not equal to that in zookeeper, skip updating ISR
> (kafka.cluster.Partition)*
> 
> *[2015-11-07 12:49:50,680] INFO Partition [Wmt_Saturday_139,8] on broker 0:
> Shrinking ISR for partition [Wmt_Saturday_139,8] from 0,8 to 0
> (kafka.cluster.Partition)*
> 
> *[2015-11-07 12:49:50,683] INFO Partition [Wmt_Saturday_139,8] on broker 0:
> Cached zkVersion [2] not equal to that in zookeeper, skip updating ISR
> (kafka.cluster.Partition)*

The same thing happened to me, too. There seems to be a related issue
in the issue tracker:

https://issues.apache.org/jira/browse/KAFKA-3042

