[jira] [Created] (KAFKA-3042) updateIsr should stop after failed several times due to zkVersion issue

2015-12-24 Thread Jiahongchao (JIRA)
Jiahongchao created KAFKA-3042:
--

 Summary: updateIsr should stop after failed several times due to 
zkVersion issue
 Key: KAFKA-3042
 URL: https://issues.apache.org/jira/browse/KAFKA-3042
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.2.1
 Environment: jdk 1.7
centos 6.4
Reporter: Jiahongchao


Sometimes one broker may repeatedly log
"Cached zkVersion 54 not equal to that in zookeeper, skip updating ISR"
I think this happens because the broker considers itself the leader when in 
fact it is a follower.
So after several failed tries, it needs to find out who the current leader is.
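
As context for the suggestion above, here is a minimal sketch (not the actual
kafka.cluster.Partition code) of a bounded ISR update using ZooKeeper's
conditional setData: after a few BadVersion failures the broker stops retrying
so the caller can re-resolve who the leader is. The class and method names and
the retry policy are illustrative assumptions.

    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class BoundedIsrUpdate {
        // Conditional ISR write with a retry cap. Returns false once the cap is
        // reached so the caller can re-read leadership state instead of looping.
        static boolean tryUpdateIsr(ZooKeeper zk, String isrPath, byte[] newIsr,
                                    int cachedZkVersion, int maxRetries) throws Exception {
            int expectedVersion = cachedZkVersion;
            for (int attempt = 0; attempt < maxRetries; attempt++) {
                try {
                    // Succeeds only if the znode still has expectedVersion.
                    zk.setData(isrPath, newIsr, expectedVersion);
                    return true;
                } catch (KeeperException.BadVersionException e) {
                    // Someone else (e.g. the controller) changed the znode; refresh
                    // the version, but only keep retrying a bounded number of times.
                    Stat stat = new Stat();
                    zk.getData(isrPath, false, stat);
                    expectedVersion = stat.getVersion();
                }
            }
            return false; // give up; caller should re-check who the leader is
        }
    }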





[jira] [Updated] (KAFKA-3042) updateIsr should stop after failed several times due to zkVersion issue

2015-12-24 Thread Jiahongchao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiahongchao updated KAFKA-3042:
---
Issue Type: Bug  (was: Improvement)

> updateIsr should stop after failed several times due to zkVersion issue
> ---
>
> Key: KAFKA-3042
> URL: https://issues.apache.org/jira/browse/KAFKA-3042
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: jdk 1.7
> centos 6.4
>Reporter: Jiahongchao
>
> Sometimes one broker may repeatedly log
> "Cached zkVersion 54 not equal to that in zookeeper, skip updating ISR"
> I think this happens because the broker considers itself the leader when in 
> fact it is a follower.
> So after several failed tries, it needs to find out who the current leader is.





[jira] [Commented] (KAFKA-3040) Broker didn't report new data after change in leader

2015-12-24 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15071192#comment-15071192
 ] 

Mayuresh Gharat commented on KAFKA-3040:


Do you have the controller logs for the time period?

> Broker didn't report new data after change in leader
> 
>
> Key: KAFKA-3040
> URL: https://issues.apache.org/jira/browse/KAFKA-3040
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
> Environment: Debian 3.2.54-2 x86_64 GNU/Linux
>Reporter: Imran Patel
>Priority: Critical
>
> Recently we had an event that caused large Kafka backlogs to develop 
> suddenly. This happened across multiple partitions. We noticed that after a 
> brief connection loss to Zookeeper, Kafka brokers were not reporting any new 
> data to our (SimpleConsumer) consumer, although the producers were enqueueing 
> fine. This went on until another zk blip led to a reconfiguration, which 
> suddenly caused the consumers to "see" the data. Our consumers and our 
> monitoring tools did not see the offsets move during the outage window. Here 
> is the sequence of events for a single partition (with logs attached below). 
> The brokers are running 0.9, the producer is using library version 
> kafka_2.10:0.8.2.1, and the consumer is using kafka_2.10:0.8.0 (both are Java 
> programs). Our monitoring tool uses kafka-python-9.0.
> Can you tell us if this could be due to a consumer bug (the libraries being 
> too "old" to operate with a 0.9 broker, for example)? Or does it look like a 
> Kafka core issue? Please note that we recently upgraded the brokers to 0.9 
> and hadn't seen a similar issue prior to that.
> - After a brief connection loss to zookeeper, the partition leader (broker 9 
> for partition 29 in the logs below) came back and shrank the ISR to itself. 
> - Producers kept on successfully sending data to Kafka, and the remaining 
> replicas (brokers 3 and 4) recorded this data. AFAICT, 3 was the new leader. 
> Broker 9 did NOT replicate this data. It repeatedly printed the ISR 
> shrinking message over and over again.
> - The consumer, on the other hand, reported no new data, presumably because 
> it was talking to 9 and that broker was doing nothing.
> - 6 hours later, another zookeeper blip caused the brokers to reconfigure, 
> and consumers then started seeing new data. 
> Broker 9:
> [2015-12-16 19:46:01,523] INFO Partition [messages,29] on broker 9: Expanding 
> ISR for partition [messages,29] from 9,4 to 9,4,3 (kafka.cluster.Partition
> [2015-12-18 00:59:25,511] INFO New leader is 9 
> (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
> [2015-12-18 01:00:18,451] INFO Partition [messages,29] on broker 9: Shrinking 
> ISR for partition [messages,29] from 9,4,3 to 9 (kafka.cluster.Partition)
> [2015-12-18 01:00:18,458] INFO Partition [messages,29] on broker 9: Cached 
> zkVersion [472] not equal to that in zookeeper, skip updating ISR 
> (kafka.cluster.Partition)
> [2015-12-18 07:04:44,552] INFO Truncating log messages-29 to offset 
> 14169556269. (kafka.log.Log)
> [2015-12-18 07:04:44,649] INFO [ReplicaFetcherManager on broker 9] Added 
> fetcher for partitions List([[messages,61], initOffset 14178575900 to broker 
> BrokerEndPoint(6,kafka006-prod.c.foo.internal,9092)] , [[messages,13], 
> initOffset 14156091271 to broker 
> BrokerEndPoint(2,kafka002-prod.c.foo.internal,9092)] , [[messages,45], 
> initOffset 14135826155 to broker 
> BrokerEndPoint(4,kafka004-prod.c.foo.internal,9092)] , [[messages,41], 
> initOffset 14157926400 to broker 
> BrokerEndPoint(1,kafka001-prod.c.foo.internal,9092)] , [[messages,29], 
> initOffset 14169556269 to broker 
> BrokerEndPoint(3,kafka003-prod.c.foo.internal,9092)] , [[messages,57], 
> initOffset 14175218230 to broker 
> BrokerEndPoint(1,kafka001-prod.c.foo.internal,9092)] ) 
> (kafka.server.ReplicaFetcherManager)
> Broker 3:
> [2015-12-18 01:00:01,763] INFO [ReplicaFetcherManager on broker 3] Removed 
> fetcher for partitions [messages,29] (kafka.server.ReplicaFetcherManager)
> [2015-12-18 07:09:04,631] INFO Partition [messages,29] on broker 3: Expanding 
> ISR for partition [messages,29] from 4,3 to 4,3,9 (kafka.cluster.Partition)
> [2015-12-18 07:09:49,693] INFO [ReplicaFetcherManager on broker 3] Removed 
> fetcher for partitions [messages,29] (kafka.server.ReplicaFetcherManager)
> Broker 4:
> [2015-12-18 01:00:01,783] INFO [ReplicaFetcherManager on broker 4] Removed 
> fetcher for partitions [messages,29] (kafka.server.ReplicaFetcherManager)
> [2015-12-18 01:00:01,866] INFO [ReplicaFetcherManager on broker 4] Added 
> fetcher for partitions List([[messages,29], initOffset 14169556262 to broker 
> BrokerEndPoint(3,kafka003-prod.c.foo.internal,9092)] ) 
> (kafka.server.ReplicaFetcherManager)
> [2015-12-18 07:09:50,191] ERROR [Repli
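
As background for the consumer-side behavior described in this report: with
the old SimpleConsumer API, the client itself must re-discover partition
leaders via a metadata request, so a client that keeps fetching from a stale
leader (broker 9 here) sees no new data. Below is a minimal leader-lookup
sketch in the style of the classic 0.8 SimpleConsumer example; the host,
client id, and buffer sizes are illustrative, and exact signatures should be
checked against the client version in use.

    import java.util.Collections;
    import java.util.List;

    import kafka.javaapi.PartitionMetadata;
    import kafka.javaapi.TopicMetadata;
    import kafka.javaapi.TopicMetadataRequest;
    import kafka.javaapi.TopicMetadataResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    public class LeaderLookup {
        // Asks one broker for topic metadata and returns the metadata of the
        // requested partition, including the currently advertised leader.
        static PartitionMetadata findLeader(String brokerHost, int brokerPort,
                                            String topic, int partition) {
            SimpleConsumer consumer =
                    new SimpleConsumer(brokerHost, brokerPort, 100000, 64 * 1024, "leaderLookup");
            try {
                List<String> topics = Collections.singletonList(topic);
                TopicMetadataResponse resp = consumer.send(new TopicMetadataRequest(topics));
                for (TopicMetadata tm : resp.topicsMetadata()) {
                    for (PartitionMetadata pm : tm.partitionsMetadata()) {
                        if (pm.partitionId() == partition) {
                            return pm; // pm.leader() gives the leader's host and port
                        }
                    }
                }
                return null;
            } finally {
                consumer.close();
            }
        }
    }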

[jira] [Comment Edited] (KAFKA-3040) Broker didn't report new data after change in leader

2015-12-24 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15071192#comment-15071192
 ] 

Mayuresh Gharat edited comment on KAFKA-3040 at 12/24/15 7:09 PM:
--

Do you have the controller logs for the time period?
Also, you might want to check:
https://issues.apache.org/jira/browse/KAFKA-3042


was (Author: mgharat):
Do you have the controller logs for the time period?

> Broker didn't report new data after change in leader
> 
>
> Key: KAFKA-3040
> URL: https://issues.apache.org/jira/browse/KAFKA-3040
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
> Environment: Debian 3.2.54-2 x86_64 GNU/Linux
>Reporter: Imran Patel
>Priority: Critical
>
> Recently we had an event that caused large Kafka backlogs to develop 
> suddenly. This happened across multiple partitions. We noticed that after a 
> brief connection loss to Zookeeper, Kafka brokers were not reporting any new 
> data to our (SimpleConsumer) consumer, although the producers were enqueueing 
> fine. This went on until another zk blip led to a reconfiguration, which 
> suddenly caused the consumers to "see" the data. Our consumers and our 
> monitoring tools did not see the offsets move during the outage window. Here 
> is the sequence of events for a single partition (with logs attached below). 
> The brokers are running 0.9, the producer is using library version 
> kafka_2.10:0.8.2.1, and the consumer is using kafka_2.10:0.8.0 (both are Java 
> programs). Our monitoring tool uses kafka-python-9.0.
> Can you tell us if this could be due to a consumer bug (the libraries being 
> too "old" to operate with a 0.9 broker, for example)? Or does it look like a 
> Kafka core issue? Please note that we recently upgraded the brokers to 0.9 
> and hadn't seen a similar issue prior to that.
> - After a brief connection loss to zookeeper, the partition leader (broker 9 
> for partition 29 in the logs below) came back and shrank the ISR to itself. 
> - Producers kept on successfully sending data to Kafka, and the remaining 
> replicas (brokers 3 and 4) recorded this data. AFAICT, 3 was the new leader. 
> Broker 9 did NOT replicate this data. It repeatedly printed the ISR 
> shrinking message over and over again.
> - The consumer, on the other hand, reported no new data, presumably because 
> it was talking to 9 and that broker was doing nothing.
> - 6 hours later, another zookeeper blip caused the brokers to reconfigure, 
> and consumers then started seeing new data. 
> Broker 9:
> [2015-12-16 19:46:01,523] INFO Partition [messages,29] on broker 9: Expanding 
> ISR for partition [messages,29] from 9,4 to 9,4,3 (kafka.cluster.Partition
> [2015-12-18 00:59:25,511] INFO New leader is 9 
> (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
> [2015-12-18 01:00:18,451] INFO Partition [messages,29] on broker 9: Shrinking 
> ISR for partition [messages,29] from 9,4,3 to 9 (kafka.cluster.Partition)
> [2015-12-18 01:00:18,458] INFO Partition [messages,29] on broker 9: Cached 
> zkVersion [472] not equal to that in zookeeper, skip updating ISR 
> (kafka.cluster.Partition)
> [2015-12-18 07:04:44,552] INFO Truncating log messages-29 to offset 
> 14169556269. (kafka.log.Log)
> [2015-12-18 07:04:44,649] INFO [ReplicaFetcherManager on broker 9] Added 
> fetcher for partitions List([[messages,61], initOffset 14178575900 to broker 
> BrokerEndPoint(6,kafka006-prod.c.foo.internal,9092)] , [[messages,13], 
> initOffset 14156091271 to broker 
> BrokerEndPoint(2,kafka002-prod.c.foo.internal,9092)] , [[messages,45], 
> initOffset 14135826155 to broker 
> BrokerEndPoint(4,kafka004-prod.c.foo.internal,9092)] , [[messages,41], 
> initOffset 14157926400 to broker 
> BrokerEndPoint(1,kafka001-prod.c.foo.internal,9092)] , [[messages,29], 
> initOffset 14169556269 to broker 
> BrokerEndPoint(3,kafka003-prod.c.foo.internal,9092)] , [[messages,57], 
> initOffset 14175218230 to broker 
> BrokerEndPoint(1,kafka001-prod.c.foo.internal,9092)] ) 
> (kafka.server.ReplicaFetcherManager)
> Broker 3:
> [2015-12-18 01:00:01,763] INFO [ReplicaFetcherManager on broker 3] Removed 
> fetcher for partitions [messages,29] (kafka.server.ReplicaFetcherManager)
> [2015-12-18 07:09:04,631] INFO Partition [messages,29] on broker 3: Expanding 
> ISR for partition [messages,29] from 4,3 to 4,3,9 (kafka.cluster.Partition)
> [2015-12-18 07:09:49,693] INFO [ReplicaFetcherManager on broker 3] Removed 
> fetcher for partitions [messages,29] (kafka.server.ReplicaFetcherManager)
> Broker 4:
> [2015-12-18 01:00:01,783] INFO [ReplicaFetcherManager on broker 4] Removed 
> fetcher for partitions [messages,29] (kafka.server.ReplicaFetcherManager)
> [2015-12-18 01:00:01,866] INFO [ReplicaFetcherManager on broker 4] Added 
> fetche

[jira] [Created] (KAFKA-3043) Replace request.required.acks with acks in docs

2015-12-24 Thread Sasaki Toru (JIRA)
Sasaki Toru created KAFKA-3043:
--

 Summary: Replace request.required.acks with acks in docs
 Key: KAFKA-3043
 URL: https://issues.apache.org/jira/browse/KAFKA-3043
 Project: Kafka
  Issue Type: Improvement
  Components: website
Affects Versions: 0.9.0.1, 0.9.1.0
Reporter: Sasaki Toru
 Fix For: 0.9.0.0


In Kafka 0.9, the producer configuration request.required.acks=-1 has been 
replaced by acks=all, but the old setting still appears in the docs.
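
For reference, a minimal new-producer snippet showing the replacement setting;
the broker address and topic name are placeholders:

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class AcksAllExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            // acks=all is the 0.9 equivalent of the old request.required.acks=-1
            props.put(ProducerConfig.ACKS_CONFIG, "all");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("test-topic", "key", "value")); // placeholder topic
            }
        }
    }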






[jira] [Updated] (KAFKA-3043) Replace request.required.acks with acks in docs

2015-12-24 Thread Sasaki Toru (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sasaki Toru updated KAFKA-3043:
---
Description: 
In Kafka 0.9, the producer configuration request.required.acks=-1 has been 
replaced by acks=all, but the old setting still appears in the docs.


  was:
In 0.9 request.required.acks=-1 which configration of producer is replaced by 
acks=all, but this old config is remained in docs.



> Replace request.required.acks with acks in docs
> ---
>
> Key: KAFKA-3043
> URL: https://issues.apache.org/jira/browse/KAFKA-3043
> Project: Kafka
>  Issue Type: Improvement
>  Components: website
>Affects Versions: 0.9.0.1, 0.9.1.0
>Reporter: Sasaki Toru
> Fix For: 0.9.0.0
>
>
> In Kafka 0.9, the producer configuration request.required.acks=-1 has been 
> replaced by acks=all, but the old setting still appears in the docs.





[GitHub] kafka pull request: KAFKA-3043: Replace request.required.acks with...

2015-12-24 Thread sasakitoa
GitHub user sasakitoa opened a pull request:

https://github.com/apache/kafka/pull/716

KAFKA-3043: Replace request.required.acks with acks in docs.

In Kafka 0.9, the producer configuration request.required.acks=-1 has been 
replaced by acks=all, but the old setting still appears in the docs.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sasakitoa/kafka acks_doc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/716.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #716


commit ab7d1b041f7c9472757672c7d2e62ac533390c85
Author: Sasaki Toru 
Date:   2015-12-25T05:27:44Z

Replace request.required.acks with acks in docs.






[jira] [Updated] (KAFKA-3043) Replace request.required.acks with acks in docs

2015-12-24 Thread Sasaki Toru (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sasaki Toru updated KAFKA-3043:
---
Status: Patch Available  (was: Open)

> Replace request.required.acks with acks in docs
> ---
>
> Key: KAFKA-3043
> URL: https://issues.apache.org/jira/browse/KAFKA-3043
> Project: Kafka
>  Issue Type: Improvement
>  Components: website
>Affects Versions: 0.9.0.1, 0.9.1.0
>Reporter: Sasaki Toru
> Fix For: 0.9.0.0
>
>
> In Kafka 0.9, the producer configuration request.required.acks=-1 has been 
> replaced by acks=all, but the old setting still appears in the docs.





[jira] [Commented] (KAFKA-3043) Replace request.required.acks with acks in docs

2015-12-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15071365#comment-15071365
 ] 

ASF GitHub Bot commented on KAFKA-3043:
---

GitHub user sasakitoa opened a pull request:

https://github.com/apache/kafka/pull/716

KAFKA-3043: Replace request.required.acks with acks in docs.

In Kafka 0.9, the producer configuration request.required.acks=-1 has been 
replaced by acks=all, but the old setting still appears in the docs.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sasakitoa/kafka acks_doc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/716.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #716


commit ab7d1b041f7c9472757672c7d2e62ac533390c85
Author: Sasaki Toru 
Date:   2015-12-25T05:27:44Z

Replace request.required.acks with acks in docs.




> Replace request.required.acks with acks in docs
> ---
>
> Key: KAFKA-3043
> URL: https://issues.apache.org/jira/browse/KAFKA-3043
> Project: Kafka
>  Issue Type: Improvement
>  Components: website
>Affects Versions: 0.9.0.1, 0.9.1.0
>Reporter: Sasaki Toru
> Fix For: 0.9.0.0
>
>
> In Kafka 0.9, the producer configuration request.required.acks=-1 has been 
> replaced by acks=all, but the old setting still appears in the docs.





[GitHub] kafka pull request: MINOR: Improve document of MirrorMaker

2015-12-24 Thread sasakitoa
GitHub user sasakitoa opened a pull request:

https://github.com/apache/kafka/pull/717

MINOR: Improve document of MirrorMaker



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sasakitoa/kafka mirrorMaker_doc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/717.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #717


commit 32161fb582394595544be9ad8e1051e8111cf286
Author: Sasaki Toru 
Date:   2015-12-25T07:01:14Z

Improve document of MirrorMaker



