[jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

2017-01-20 Thread postmas...@inn.ru (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832867#comment-15832867
 ] 

postmas...@inn.ru commented on KAFKA-1207:
--

Delivery is delayed to these recipients or groups:

e...@inn.ru

Subject: [jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

This message hasn't been delivered yet. Delivery will continue to be attempted.

The server will keep trying to deliver this message for the next 1 days, 19 
hours and 52 minutes. You'll be notified if the message can't be delivered by 
that time.







Diagnostic information for administrators:

Generating server: lc-exch-04.inn.local
Receiving server: inn.ru (109.105.153.25)

e...@inn.ru
Server at inn.ru (109.105.153.25) returned '400 4.4.7 Message delayed'
1/21/2017 7:32:34 AM - Server at inn.ru (109.105.153.25) returned '441 4.4.1 
Error communicating with target host: "Failed to connect. Winsock error code: 
10060, Win32 error code: 10060." Last endpoint attempted was 109.105.153.25:25'

Original message headers:

Received: from lc-exch-04.inn.local (10.64.37.99) by lc-exch-04.inn.local
 (10.64.37.99) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384) id 15.1.669.32; Sat, 21
 Jan 2017 06:34:40 +0300
Received: from lc-asp-02.inn.ru (10.64.37.105) by lc-exch-04.inn.local
 (10.64.37.100) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384) id 15.1.669.32 via
 Frontend Transport; Sat, 21 Jan 2017 06:34:40 +0300
Received-SPF: None (no SPF record) identity=mailfrom; client-ip=209.188.14.142; 
helo=spamd1-us-west.apache.org; envelope-from=j...@apache.org; 
receiver=e...@inn.ru
X-Envelope-From: 
Received: from spamd1-us-west.apache.org (pnap-us-west-generic-nat.apache.org 
[209.188.14.142])
by lc-asp-02.inn.ru (Postfix) with ESMTP id 3EC47400C3
for ; Sat, 21 Jan 2017 04:34:39 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
by spamd1-us-west.apache.org (ASF Mail Server at 
spamd1-us-west.apache.org) with ESMTP id 5FDA1C0477
for ; Sat, 21 Jan 2017 03:34:39 + (UTC)
X-Virus-Scanned: Debian amavisd-new at spamd1-us-west.apache.org
X-Spam-Flag: NO
X-Spam-Score: -1.999
X-Spam-Level:
X-Spam-Status: No, score=-1.999 tagged_above=-999 required=6.31
tests=[KAM_LAZY_DOMAIN_SECURITY=1, RP_MATCHES_RCVD=-2.999]
autolearn=disabled
Received: from mx1-lw-us.apache.org ([10.40.0.8])
by localhost (spamd1-us-west.apache.org [10.40.0.7]) (amavisd-new, port 
10024)
with ESMTP id 53kdGzUJ6OzL for ;
Sat, 21 Jan 2017 03:34:38 + (UTC)
Received: from mailrelay1-us-west.apache.org (mailrelay1-us-west.apache.org 
[209.188.14.139])
by mx1-lw-us.apache.org (ASF Mail Server at mx1-lw-us.apache.org) with 
ESMTP id DBCB65F54F
for ; Sat, 21 Jan 2017 03:34:37 + (UTC)
Received: from jira-lw-us.apache.org (unknown [207.244.88.139])
by mailrelay1-us-west.apache.org (ASF Mail Server at 
mailrelay1-us-west.apache.org) with ESMTP id 4E166E0272
for ; Sat, 21 Jan 2017 03:34:27 + (UTC)
Received: from jira-lw-us.apache.org (localhost [127.0.0.1])
by jira-lw-us.apache.org (ASF Mail Server at jira-lw-us.apache.org) 
with ESMTP id AA6E92528D
for ; Sat, 21 Jan 2017 03:34:26 + (UTC)
Date: Sat, 21 Jan 2017 03:34:26 +
From: "postmas...@inn.ru (JIRA)" 
To: 
Message-ID: 
In-Reply-To: 
References:  

Subject: [jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache
 Mesos
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-JIRA-FingerPrint: 30527f35849b9dde25b450d4833f0394
X-inn-MailScanner-ESVA-Information: Please contact  for more information
X-inn-MailScanner-ESVA-ID: 3EC47400C3.A8BD1
X-inn-MailScanner-ESVA: Found to be clean
X-inn-MailScanner-ESVA-From: j...@apache.org
X-inn-MailScanner-ESVA-Watermark: 1485574479.8939@OQ9b9aQ2T2RLOMN9FZT10g
Return-Path: j...@apache.org
X-OrganizationHeadersPreserved: lc-exch-04.inn.local
X-CrossPremisesHeadersFilteredByDsnGenerator: lc-exch-04.inn.local



> Launch Kafka from within Apache Mesos
> -
>
> Key: KAFKA-1207
> URL: https://issues.apache.org/jira/browse/KAFKA-1207
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: mesos
> Attachments: KAFKA-1207_2014-01-19_00:04:58.patch, 
> KAFKA-1207_2014-01-19_00:48:49.patch, KAFKA-1207.patch
>
>
> There are a few components to this.


Re: [VOTE] KIP-74: Add FetchResponse size limit in bytes

2017-01-20 Thread Apurva Mehta
+1

On Fri, Jan 20, 2017 at 5:19 PM, Jason Gustafson  wrote:

> +1
>
> On Fri, Jan 20, 2017 at 4:51 PM, Ismael Juma  wrote:
>
> > Good catch, Colin. +1 to editing the wiki to match the desired behaviour
> > and what was implemented in 0.10.1.
> >
> > Ismael
> >
> > On Sat, Jan 21, 2017 at 12:19 AM, Colin McCabe 
> wrote:
> >
> > > Hi all,
> > >
> > > While looking at some code related to KIP-74, I noticed a slight
> > > discrepancy between the text on the wiki and the implementation.  The
> > > wiki says that "If max_bytes is Int.MAX_INT, new request behaves
> exactly
> > > like old one."  This would mean that if there was a single message that
> > > was larger than the maximum bytes per partition, zero messages would be
> > > returned, and clients would throw MessageSizeTooLargeException.
> > > However, the code does not implement this.  Instead, it implements the
> > > "new" behavior where the client always gets at least one message.
> > >
> > > The new behavior seems to be more desirable, since clients do not "get
> > > stuck" on messages that are too big.  I propose that we edit the wiki
> to
> > > reflect the implemented behavior by deleting the references to special
> > > behavior when max_bytes is MAX_INT.
> > >
> > > cheers,
> > > Colin
> > >
> >
>

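The implemented behavior Colin describes (the fetcher always returns at least one message, even when that message alone exceeds the per-partition byte limit) can be sketched as follows. This is a simplified model of the selection logic, not Kafka's actual fetcher code; the class and method names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class FetchLimitSketch {
    // Select the message sizes a fetch response would include: stop once the
    // byte budget would be exceeded, but always include the first message so
    // a single oversized message can never wedge the consumer.
    static List<Integer> selectMessages(List<Integer> sizes, int maxBytes) {
        List<Integer> included = new ArrayList<>();
        int total = 0;
        for (int size : sizes) {
            if (!included.isEmpty() && total + size > maxBytes) {
                break;
            }
            included.add(size);
            total += size;
        }
        return included;
    }

    public static void main(String[] args) {
        // A 2 MB first message still comes back even with a 1 MiB budget.
        System.out.println(selectMessages(List.of(2_000_000, 100), 1_048_576));
        // Otherwise messages accumulate until the budget would be exceeded.
        System.out.println(selectMessages(List.of(100, 200, 900), 1_000));
    }
}
```

Under the wiki's old wording, the first call would return nothing and the client would throw; the implemented behavior returns the oversized message and the consumer makes progress.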

[GitHub] kafka pull request #2416: MINOR: Refactor partition lag metric for cleaner e...

2017-01-20 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/2416

MINOR: Refactor partition lag metric for cleaner encapsulation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka refactor-partition-lag-cleanup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2416.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2416


commit fb973739315d18b1e593ed351b39214282472072
Author: Jason Gustafson 
Date:   2017-01-21T05:10:53Z

MINOR: Refactor partition lag metric for cleaner encapsulation




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2415: KAFKA-4547 (0.10.1 hotfix): Avoid unnecessary offs...

2017-01-20 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/2415

KAFKA-4547 (0.10.1 hotfix): Avoid unnecessary offset commit that could lead 
to an invalid offset position if partition is paused



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-4547-0.10.1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2415.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2415


commit 6acbf27aebbe5b6b4f230ce44ea5151c4c53ffbd
Author: Jason Gustafson 
Date:   2016-09-20T02:59:03Z

MINOR: Bump to version 0.10.2

commit f396fdac197409fb955f00a6f642f04e4926ba41
Author: Ben Stopford 
Date:   2016-09-20T06:17:23Z

KAFKA-4193; Fix for intermittent failure in FetcherTest

Author: Ben Stopford 

Reviewers: Jason Gustafson 

Closes #1881 from benstopford/KAFKA-4193

commit c195003cb6e05f2d8c49285ff7e77b1cb3aa4361
Author: Eno Thereska 
Date:   2016-09-20T10:33:50Z

HOTFIX: Added check for metadata unavailable

Author: Eno Thereska 

Reviewers: Damian Guy , Ismael Juma 


Closes #1887 from enothereska/hotfix-metadata-unavailable

commit 3663275cf066b7715cc11b26fd9c144bbff1c373
Author: Ben Stopford 
Date:   2016-09-20T13:53:48Z

KAFKA-4184; Intermittent failures in 
ReplicationQuotasTest.shouldBootstrapTwoBrokersWithFollowerThrottle

Build is unstable, so it's hard to validate this change. Of the various 
builds up until 11am BST the test ran twice and passed twice.

Author: Ben Stopford 

Reviewers: Ismael Juma 

Closes #1873 from benstopford/KAFKA-4184

commit 4f821830bc6b726cddf90999fff76006745b1a3f
Author: Ben Stopford 
Date:   2016-09-20T14:41:14Z

KAFKA-4197; Make ReassignPartitionsTest System Test move data

The ReassignPartitionsTest system test doesn't reassign any replicas (i.e. 
move data).

This is a simple issue. It uses a 3 node cluster with replication factor of 
3, so whilst the replicas are jumbled around, nothing actually is moved from 
machine to machine when the assignment is executed.

This fix just ups the number of nodes to 4 so things move.

Tests pass locally.
There are runs pending on the two branch builders

Passes:
https://jenkins.confluent.io/job/system-test-kafka-branch-builder/551/
https://jenkins.confluent.io/job/system-test-kafka-branch-builder-2/94/
https://jenkins.confluent.io/job/system-test-kafka-branch-builder/553/
https://jenkins.confluent.io/job/system-test-kafka-branch-builder/554/
https://jenkins.confluent.io/job/system-test-kafka-branch-builder-2/95

Failures:
https://jenkins.confluent.io/job/system-test-kafka-branch-builder/552 => 
_RuntimeError: There aren't enough available nodes to satisfy the resource 
request. Total cluster size: 1, Requested: 4, Already allocated: 1, Available: 
0._ Which I assume has to do with the test environment.

Author: Ben Stopford 

Reviewers: Ismael Juma 

Closes #1892 from benstopford/fix_reassignment_test

commit 24f81ea764a493b4422b6a3ef6b3e771d0e4d63b
Author: Damian Guy 
Date:   2016-09-21T18:11:12Z

MINOR: add javadoc comment to PersistenKeyValueFactory.enableCaching

missing javadoc on public API method PersistenKeyValueFactory.enableCaching

Author: Damian Guy 

Reviewers: Eno Thereska, Guozhang Wang

Closes #1891 from dguy/minor-java-doc

commit a632716a3c9a871f325c6f13aefa9aed0add4b82
Author: Damian Guy 
Date:   2016-09-21T18:13:39Z

MINOR: remove unused code from InternalTopicManager

Remove isValidCleanupPolicy and related fields as they are never used.

Author: Damian Guy 

Reviewers: Eno Thereska, Guozhang Wang

Closes #1888 from dguy/minor-remove-unused

commit 732fabf94ebc9631d31f2feb2116ee8b63beabef
Author: Jason Gustafson 
Date:   2016-09-22T17:07:50Z

KAFKA-3782: Ensure heartbeat thread restarted after rebalance woken up

Author: Jason Gustafson 

Reviewers: Guozhang Wang

Closes #1898 from hachikuji/KAFKA-3782

commit 27e3edc791760dea7ff4d048f87d1585f9e235d7
Author: Elias Levy 
Date:   2016-09-22T17:33:23Z

MINOR: Fix comments in KStreamKStreamJoinTest

Minor comment fixes.

Author: Elias Levy 

[jira] [Commented] (KAFKA-4547) Consumer.position returns incorrect results for Kafka 0.10.1.0 client

2017-01-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832806#comment-15832806
 ] 

ASF GitHub Bot commented on KAFKA-4547:
---

Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/2415


> Consumer.position returns incorrect results for Kafka 0.10.1.0 client
> -
>
> Key: KAFKA-4547
> URL: https://issues.apache.org/jira/browse/KAFKA-4547
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.0, 0.10.0.2, 0.10.1.1
> Environment: Windows Kafka 0.10.1.0
>Reporter: Pranav Nakhe
>Assignee: Vahid Hashemian
>Priority: Blocker
>  Labels: clients
> Fix For: 0.10.2.0
>
> Attachments: issuerep.zip
>
>
> Consider the following code -
>   KafkaConsumer consumer = new KafkaConsumer(props);
>   List<TopicPartition> listOfPartitions = new ArrayList<>();
>   for (int i = 0; i < consumer.partitionsFor("IssueTopic").size(); i++) {
>       listOfPartitions.add(new TopicPartition("IssueTopic", i));
>   }
>   consumer.assign(listOfPartitions);
>   consumer.pause(listOfPartitions);
>   consumer.seekToEnd(listOfPartitions);
>   // consumer.resume(listOfPartitions); -- commented out
>   for (int i = 0; i < listOfPartitions.size(); i++) {
>       System.out.println(consumer.position(listOfPartitions.get(i)));
>   }
> I have created a topic IssueTopic with 3 partitions and a single replica on 
> my single-node Kafka installation (0.10.1.0).
> The behavior observed for Kafka client 0.10.1.0 versus Kafka client 0.10.0.1:
> A) Initially, when there are no messages on IssueTopic, running the above 
> program returns
> 0.10.1.0: 0, 0, 0
> 0.10.0.1: 0, 0, 0
> B) Next I send 6 messages and see that they have been evenly distributed 
> across the three partitions. Running the above program now returns
> 0.10.1.0: 0, 0, 2
> 0.10.0.1: 2, 2, 2
> Clearly there is a difference in behavior between the two clients.
> Now, if after the seekToEnd call I make a call to resume (uncomment the resume 
> call in the code above), the behavior is
> 0.10.1.0: 2, 2, 2
> 0.10.0.1: 2, 2, 2
> This is an issue I came across when using the Spark Kafka integration for 
> 0.10. When I used Kafka 0.10.1.0 I started seeing this issue. I had raised a 
> pull request to resolve it [SPARK-18779], but looking at the Kafka client 
> implementation/documentation now, the issue seems to be with Kafka rather 
> than Spark. There does not seem to be any documentation that specifies or 
> implies that we need to call resume after seekToEnd for position to return 
> the correct value. There is also a clear difference in behavior between the 
> two Kafka client implementations.
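One way to picture the reported difference: if seekToEnd only records an intent that is resolved lazily, a paused partition never gives the client a chance to resolve it until resume is called. The following is a hypothetical model of that interaction, written to reproduce the symptom; it is not the actual consumer internals:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical model: seekToEnd() records a pending seek instead of resolving
// an offset immediately; a paused partition reports its stale position until
// resume() (or normal activity) forces the pending seek to be evaluated.
public class LazySeekModel {
    private final Map<Integer, Long> logEnd;                     // partition -> log end offset
    private final Map<Integer, Long> position = new HashMap<>(); // partition -> current position
    private final Set<Integer> paused = new HashSet<>();
    private final Set<Integer> pendingSeek = new HashSet<>();

    LazySeekModel(Map<Integer, Long> logEnd) {
        this.logEnd = logEnd;
        logEnd.keySet().forEach(p -> position.put(p, 0L));
    }

    void pause(int p) { paused.add(p); }

    void resume(int p) {
        paused.remove(p);
        resolvePendingSeek(p); // resuming lets the seek take effect
    }

    void seekToEnd(int p) { pendingSeek.add(p); }

    private void resolvePendingSeek(int p) {
        if (pendingSeek.remove(p)) {
            position.put(p, logEnd.get(p));
        }
    }

    long position(int p) {
        if (!paused.contains(p)) {
            resolvePendingSeek(p); // an active partition resolves the seek here
        }
        return position.get(p);    // a paused one reports the stale position
    }
}
```

With a partition whose log end offset is 2, the sequence pause, seekToEnd, position yields the stale 0 from the report, while inserting resume before position yields 2, matching the 0.10.0.1 output.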



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2415: KAFKA-4547 (0.10.1 hotfix): Avoid unnecessary offs...

2017-01-20 Thread vahidhashemian
Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/2415




[jira] [Commented] (KAFKA-4547) Consumer.position returns incorrect results for Kafka 0.10.1.0 client

2017-01-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832805#comment-15832805
 ] 

ASF GitHub Bot commented on KAFKA-4547:
---

GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/2415

KAFKA-4547 (0.10.1 hotfix): Avoid unnecessary offset commit that could lead 
to an invalid offset position if partition is paused



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-4547-0.10.1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2415.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2415



[jira] [Commented] (KAFKA-4680) min.insync.replicas can be set higher than replication factor

2017-01-20 Thread huxi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832729#comment-15832729
 ] 

huxi commented on KAFKA-4680:
-

If you send records with acks=all to a topic whose 'min.insync.replicas' is 
larger than its replication factor, the client callback returns 
`NOT_ENOUGH_REPLICAS`, which already signals that you should adjust the 
topic-level min.insync.replicas or check broker availability, doesn't it?

I do agree that it would be better to add a check before creating/altering a 
topic. What concerns me is completeness. If a check is introduced when 
creating/altering topics, do we also need checks before changing the 
replication factor, especially when reducing it (we can do this with 
kafka-reassign-partitions.sh, although it's somewhat inconvenient to use), to 
ensure the reduced factor is still at least min.insync.replicas? If so, it 
seems we have to identify all the affected code paths.

You could of course assign this JIRA to yourself and get rolling :-)
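The create/alter-time check discussed above can be sketched as a simple pre-validation step. This is a hypothetical illustration, not actual Kafka broker code; the class and method names are made up:

```java
// Hypothetical sketch of the proposed check: reject a topic configuration
// whose min.insync.replicas exceeds its replication factor, since acks=all
// produces to such a topic can only fail with NOT_ENOUGH_REPLICAS.
public class MinIsrValidator {

    static void validate(int replicationFactor, int minInsyncReplicas) {
        if (minInsyncReplicas > replicationFactor) {
            throw new IllegalArgumentException(
                "min.insync.replicas (" + minInsyncReplicas + ") must not exceed "
                + "the replication factor (" + replicationFactor + ")");
        }
    }

    public static void main(String[] args) {
        validate(3, 2);      // fine: 2 in-sync replicas out of 3 possible
        try {
            validate(2, 3);  // impossible to satisfy with acks=all
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The same check would need to run on every path that can change either value (topic creation, config alteration, and partition reassignment), which is exactly the completeness concern raised above.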





> min.insync.replicas can be set higher than replication factor
> -
>
> Key: KAFKA-4680
> URL: https://issues.apache.org/jira/browse/KAFKA-4680
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: James Cheng
>
> It is possible to specify a min.insync.replicas for a topic that is higher 
> than the replication factor of the topic. If you do this, you will not be 
> able to produce to the topic with acks=all.
> Furthermore, each produce request (including retries) to the topic will emit 
> an ERROR-level message in the broker logs. If this is not noticed quickly 
> enough, it can cause the logs to balloon. 
> We actually hosed one of our Kafka clusters because of this. A topic got 
> configured with min.insync.replicas > replication factor. It had partitions 
> on all brokers of our cluster. The broker logs ballooned and filled up the 
> disks. We run these clusters on CoreOS, and CoreOS's etcd database got 
> corrupted. (Kafka didn't get corrupted, though.)
> I think Kafka should do validation when someone tries to change a topic to 
> min.insync.replicas > replication factor, and reject the change.
> This would presumably affect kafka-topics.sh, kafka-configs.sh, as well as 
> the CreateTopics operation that came in KIP-4.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Kafka 10 Stability Issue

2017-01-20 Thread Jason Gustafson
Hi there,

This sounds similar to https://issues.apache.org/jira/browse/KAFKA-4477.
Have you tried 0.10.1.1?

-Jason

On Fri, Jan 20, 2017 at 5:27 PM, Hui Yang  wrote:

> Hi, Kafka Team
>
> This is Hui Yang from the Expedia engineering team, and I want to ask a
> question about a Kafka 0.10 issue.
> Our team uses Kafka as core infrastructure and recently upgraded from
> Kafka 0.8.2.2 to Kafka 0.10.1.0, but we hit an issue after the upgrade.
>
> The issue is as below:
> Kafka 0.10 worked well for a couple of days after the upgrade, but then we
> started to see "java.io.IOException: Connection to 3 was disconnected before
> the response was read" on each Kafka broker when trying to communicate with
> the controller (as you may know, one of the Kafka brokers acts as the
> controller to handle topic/partition assignment and state-change tasks; in
> our case it is broker 3).
> Even in the controller log I found "[Controller-3-to-broker-3-send-thread],
> Controller 3 epoch 3 fails to send request, java.io.IOException: Connection
> to 3 was disconnected before the response was read", so it looks like the
> controller cannot even send messages to itself.
> After seeing those exceptions on the brokers for a while, we started to see
> timeout exceptions on the producer side: our producers were not able to
> send messages to the brokers.
>
> When I checked the JMX metrics, I found that the controller's CPU usage has
> been consistently higher than the other brokers' since we upgraded to Kafka
> 0.10 (the brokers had similar CPU usage on Kafka 0.8), and the controller's
> memory usage spiked during the issue. I assume the controller may not have
> enough memory left to create new connections for the producers and other
> brokers.
>
> One more thing to mention: we use the Kafka 0.8 protocol and message format
> on the Kafka 0.10 brokers so that we can still use 0.8 clients.
>
> Details for the exception:
> " WARN [ReplicaFetcherThread-0-3], Error in fetch kafka.server.
> ReplicaFetcherThread$FetchRequest@87d8e00 (kafka.server.
> ReplicaFetcherThread)
> java.io.IOException: Connection to 3 was disconnected before the response
> was read
> at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$
> extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:115)
> at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$
> extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:112)
> at scala.Option.foreach(Option.scala:257)
> at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$
> extension$1.apply(NetworkClientBlockingOps.scala:112)
> at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$
> extension$1.apply(NetworkClientBlockingOps.scala:108)
> at kafka.utils.NetworkClientBlockingOps$.recursivePoll$1(
> NetworkClientBlockingOps.scala:137)
> at kafka.utils.NetworkClientBlockingOps$.kafka$utils$
> NetworkClientBlockingOps$$pollContinuously$extension(
> NetworkClientBlockingOps.scala:143)
> at kafka.utils.NetworkClientBlockingOps$.blockingSendAndReceive$extension(
> NetworkClientBlockingOps.scala:108)
> at kafka.server.ReplicaFetcherThread.sendRequest(
> ReplicaFetcherThread.scala:253)
> at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:238)
> at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
> at kafka.server.AbstractFetcherThread.processFetchRequest(
> AbstractFetcherThread.scala:118)
> at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:
> 103)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)"
>
> "WARN [Controller-3-to-broker-3-send-thread], Controller 3 epoch 1 fails
> to send request
> java.io.IOException: Connection to 2 was disconnected before the response
> was read
> at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$
> extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:115)
> at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$
> extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:112)
> at scala.Option.foreach(Option.scala:257)
> at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$
> extension$1.apply(NetworkClientBlockingOps.scala:112)
> at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$
> extension$1.apply(NetworkClientBlockingOps.scala:108)
> at kafka.utils.NetworkClientBlockingOps$.recursivePoll$1(
> NetworkClientBlockingOps.scala:137)
> at kafka.utils.NetworkClientBlockingOps$.kafka$utils$
> NetworkClientBlockingOps$$pollContinuously$extension(
> NetworkClientBlockingOps.scala:143)
> at kafka.utils.NetworkClientBlockingOps$.blockingSendAndReceive$extension(
> NetworkClientBlockingOps.scala:108)
> at kafka.controller.RequestSendThread.liftedTree1$
> 1(ControllerChannelManager.scala:190)
> at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.
> scala:181)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)"
>
> In production, we build 6 

Kafka 10 Stability Issue

2017-01-20 Thread Hui Yang
Hi, Kafka Team

This is Hui Yang from the Expedia engineering team, and I want to ask a 
question about a Kafka 0.10 issue.
Our team uses Kafka as core infrastructure and recently upgraded from Kafka 
0.8.2.2 to Kafka 0.10.1.0, but we hit an issue after the upgrade.

The issue is as below:
Kafka 0.10 worked well for a couple of days after the upgrade, but then we 
started to see "java.io.IOException: Connection to 3 was disconnected before 
the response was read" on each Kafka broker when trying to communicate with 
the controller (as you may know, one of the Kafka brokers acts as the 
controller to handle topic/partition assignment and state-change tasks; in 
our case it is broker 3).
Even in the controller log I found "[Controller-3-to-broker-3-send-thread], 
Controller 3 epoch 3 fails to send request, java.io.IOException: Connection 
to 3 was disconnected before the response was read", so it looks like the 
controller cannot even send messages to itself.
After seeing those exceptions on the brokers for a while, we started to see 
timeout exceptions on the producer side: our producers were not able to send 
messages to the brokers.

When I checked the JMX metrics, I found that the controller's CPU usage has 
been consistently higher than the other brokers' since we upgraded to Kafka 
0.10 (the brokers had similar CPU usage on Kafka 0.8), and the controller's 
memory usage spiked during the issue. I assume the controller may not have 
enough memory left to create new connections for the producers and other 
brokers.

One more thing to mention: we use the Kafka 0.8 protocol and message format 
on the Kafka 0.10 brokers so that we can still use 0.8 clients.

Details for the exception:
" WARN [ReplicaFetcherThread-0-3], Error in fetch 
kafka.server.ReplicaFetcherThread$FetchRequest@87d8e00 
(kafka.server.ReplicaFetcherThread)
java.io.IOException: Connection to 3 was disconnected before the response was 
read
at 
kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:115)
at 
kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:112)
at scala.Option.foreach(Option.scala:257)
at 
kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:112)
at 
kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:108)
at 
kafka.utils.NetworkClientBlockingOps$.recursivePoll$1(NetworkClientBlockingOps.scala:137)
at 
kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollContinuously$extension(NetworkClientBlockingOps.scala:143)
at 
kafka.utils.NetworkClientBlockingOps$.blockingSendAndReceive$extension(NetworkClientBlockingOps.scala:108)
at kafka.server.ReplicaFetcherThread.sendRequest(ReplicaFetcherThread.scala:253)
at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:238)
at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
at 
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:118)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:103)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)"

"WARN [Controller-3-to-broker-3-send-thread], Controller 3 epoch 1 fails to 
send request
java.io.IOException: Connection to 2 was disconnected before the response was 
read
at 
kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:115)
at 
kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:112)
at scala.Option.foreach(Option.scala:257)
at 
kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:112)
at 
kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:108)
at 
kafka.utils.NetworkClientBlockingOps$.recursivePoll$1(NetworkClientBlockingOps.scala:137)
at 
kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollContinuously$extension(NetworkClientBlockingOps.scala:143)
at 
kafka.utils.NetworkClientBlockingOps$.blockingSendAndReceive$extension(NetworkClientBlockingOps.scala:108)
at 
kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:190)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:181)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)"

In production, we run 6 Kafka brokers with 3 ZooKeeper nodes on AWS, using 
the c3.xlarge instance type.
Our JVM settings are as follows: -Xmx1G -Xms1G -server -XX:+UseCompressedOops 
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled 
-XX:+CMSScavengeBeforeRemark.
Our traffic is 500 TPS, and each message averages 100 KB in size.

I appreciate your time to give 

[jira] [Created] (KAFKA-4683) Mismatch between Stream windowed store and broker log retention logic

2017-01-20 Thread Elias Levy (JIRA)
Elias Levy created KAFKA-4683:
-

 Summary: Mismatch between Stream windowed store and broker log 
retention logic
 Key: KAFKA-4683
 URL: https://issues.apache.org/jira/browse/KAFKA-4683
 Project: Kafka
  Issue Type: Bug
  Components: log, streams
Affects Versions: 0.10.1.1
Reporter: Elias Levy


The RocksDBWindowStore keeps key-value entries for a configurable retention 
period.  The leading edge of the time period kept is determined by the newest 
timestamp of an inserted KV.  The trailing edge is this leading edge minus 
the requested retention period.

If logging is enabled, changes to the store are written to a change log topic 
that is configured with a retention.ms value equal to the store retention 
period.  The leading edge of the time period kept by the log is the current 
time.  The trailing edge is the leading edge minus the requested retention 
period.

The difference on how the leading edge is determined can result in unexpected 
behavior.

If the stream application is processing data older than the retention period 
and storing it in a windowed store, the store will have data for the 
retention period looking back from the newest timestamp of the processed 
messages.  But the messages written to the state change log will almost 
immediately be deleted by the broker, as they fall outside the retention 
window as the broker computes it.

If the application is stopped and restarted in this state, and the local 
state has been lost for some reason, the application won't be able to recover 
the state from the broker, as the broker has deleted it.


In addition, I've noticed a discrepancy in which timestamp is used by the 
store and by the change log.  The store uses the timestamp passed as an 
argument to {{put}}, or, if no timestamp is passed, falls back to 
{{context.timestamp}}.  But {{StoreChangeLogger.logChange}} does not take a 
timestamp; it always uses {{context.timestamp}} to write the change to the 
broker.  Thus the state store and the change log may use different timestamps 
for the same KV.
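The mismatch between the two retention "leading edges" can be shown with a small numeric sketch. This is an illustration of the semantics described above (assumed behavior, not actual Kafka code): the windowed store keeps roughly [maxObservedTs - retention, maxObservedTs], while the broker keeps roughly [now - retention, now].

```java
// Illustrates how a record can be inside the store's retention window but
// outside the broker's: the store anchors retention at the newest observed
// record timestamp, the broker anchors it at wall-clock time.
public class RetentionEdges {

    static boolean storeKeeps(long ts, long maxObservedTs, long retentionMs) {
        return ts > maxObservedTs - retentionMs;   // store's trailing edge
    }

    static boolean brokerKeeps(long ts, long nowMs, long retentionMs) {
        return ts > nowMs - retentionMs;           // broker's trailing edge
    }

    public static void main(String[] args) {
        long retention = 7L * 24 * 3600_000L;            // 7-day retention
        long now = 1_600_000_000_000L;                   // wall clock (ms)
        long maxObserved = now - 30L * 24 * 3600_000L;   // reprocessing month-old data
        long ts = maxObserved - 3600_000L;               // record 1h behind stream time

        System.out.println(storeKeeps(ts, maxObserved, retention)); // true
        System.out.println(brokerKeeps(ts, now, retention));        // false
    }
}
```

Any record in that gap survives locally but is deleted from the changelog, which is exactly why restoring lost local state fails.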




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-74: Add FetchResponse size limit in bytes

2017-01-20 Thread Jason Gustafson
+1

On Fri, Jan 20, 2017 at 4:51 PM, Ismael Juma  wrote:

> Good catch, Colin. +1 to editing the wiki to match the desired behaviour
> and what was implemented in 0.10.1.
>
> Ismael
>
> On Sat, Jan 21, 2017 at 12:19 AM, Colin McCabe  wrote:
>
> > Hi all,
> >
> > While looking at some code related to KIP-74, I noticed a slight
> > discrepancy between the text on the wiki and the implementation.  The
> > wiki says that "If max_bytes is Int.MAX_INT, new request behaves exactly
> > like old one."  This would mean that if there was a single message that
> > was larger than the maximum bytes per partition, zero messages would be
> > returned, and clients would throw MessageSizeTooLargeException.
> > However, the code does not implement this.  Instead, it implements the
> > "new" behavior where the client always gets at least one message.
> >
> > The new behavior seems to be more desirable, since clients do not "get
> > stuck" on messages that are too big.  I propose that we edit the wiki to
> > reflect the implemented behavior by deleting the references to special
> > behavior when max_bytes is MAX_INT.
> >
> > cheers,
> > Colin
> >
>


[jira] [Comment Edited] (KAFKA-4682) Committed offsets should not be deleted if a consumer is still active

2017-01-20 Thread Jeff Widman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832697#comment-15832697
 ] 

Jeff Widman edited comment on KAFKA-4682 at 1/21/17 1:08 AM:
-

Now that consumers have a background heartbeat thread, it should be much 
easier to identify whether a consumer is dead or alive, so this makes sense 
to me. However, it would make KAFKA-2000 more important, because you could no 
longer count on offsets expiring.

We also had a production problem where a couple of topics' log files were 
completely cleared but the offsets weren't, so we had negative lag: the 
consumer offset was higher than the broker's high watermark. That was with 
ZooKeeper offset storage, but regardless, I could envision something going 
wrong, or someone resetting a cluster without understanding what they're 
doing, leaving the offsets corrupted. If this proposal were implemented, 
those stale offsets would never go away unless they were also manually 
cleared. So I'd want to make sure that's protected against somehow... like if 
a broker ever encounters a consumer offset higher than the high watermark, 
either an exception is thrown or the consumer offsets get reset to the 
broker's high watermark. Probably safest to just throw an exception, in case 
something else funky is going on.


was (Author: jeffwidman):
Now that consumers have background heartbeat thread, it should be much easier 
to identify when consumer dies vs alive. So this makes sense to me. However, 
this would make KAFKA-2000 more important because you can't count on offsets 
expiring.

We also had a production problem where a couple of topics log files were 
totally cleared, but the offsets weren't cleared, so we had negative lag where 
consumer offset was higher than broker highwater. This was with zookeeper 
offset storage, but regardless I could envision something getting screwed up or 
someone resetting a cluster w/o understanding what they're doing and making 
offsets screwed up. If this was implemented those old offsets would never go 
away unless manually cleared up also. So I'd want to make sure that's protected 
against somehow... like if a broker ever encounters consumer offset that's 
higher than highwater mark, that gets removed from the topic.

> Committed offsets should not be deleted if a consumer is still active
> -
>
> Key: KAFKA-4682
> URL: https://issues.apache.org/jira/browse/KAFKA-4682
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>
> Kafka will delete committed offsets that are older than 
> offsets.retention.minutes
> If there is an active consumer on a low traffic partition, it is possible 
> that Kafka will delete the committed offset for that consumer. Once the 
> offset is deleted, a restart or a rebalance of that consumer will cause the 
> consumer to not find any committed offset and start consuming from 
> earliest/latest (depending on auto.offset.reset). I'm not sure, but a broker 
> failover might also cause you to start reading from auto.offset.reset (due to 
> broker restart, or coordinator failover).
> I think that Kafka should only delete offsets for inactive consumers. The 
> timer should only start after a consumer group goes inactive. For example, if 
> a consumer group goes inactive, then after 1 week, delete the offsets for 
> that consumer group. This is a solution that [~junrao] mentioned in 
> https://issues.apache.org/jira/browse/KAFKA-3806?focusedCommentId=15323521&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15323521
> The current workarounds are to:
> # Commit an offset on every partition you own on a regular basis, making sure 
> that it is more frequent than offsets.retention.minutes (a broker-side 
> setting that a consumer might not be aware of)
> or
> # Turn the value of offsets.retention.minutes up really really high. You have 
> to make sure it is higher than any valid low-traffic rate that you want to 
> support. For example, if you want to support a topic where someone produces 
> once a month, you would have to set offsets.retention.minutes to 1 month. 
> or
> # Turn on enable.auto.commit (this is essentially #1, but easier to 
> implement).
> None of these are ideal. 
> #1 can be spammy. It requires your consumers know something about how the 
> brokers are configured. Sometimes it is out of your control. Mirrormaker, for 
> example, only commits offsets on partitions where it receives data. And it is 
> duplication that you need to put into all of your consumers.
> #2 has disk-space impact on the broker (in __consumer_offsets) as well as 
> memory-size on the broker (to answer OffsetFetch).
> #3 I think has the potential for message loss (the consumer might commit on 
> messages that are not yet fully processed)



--
This message was sent by Atlassian 

[jira] [Commented] (KAFKA-4682) Committed offsets should not be deleted if a consumer is still active

2017-01-20 Thread Jeff Widman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832697#comment-15832697
 ] 

Jeff Widman commented on KAFKA-4682:


Now that consumers have a background heartbeat thread, it should be much 
easier to identify whether a consumer is dead or alive, so this makes sense 
to me. However, it would make KAFKA-2000 more important, because you could no 
longer count on offsets expiring.

We also had a production problem where a couple of topics' log files were 
completely cleared but the offsets weren't, so we had negative lag: the 
consumer offset was higher than the broker's high watermark. That was with 
ZooKeeper offset storage, but regardless, I could envision something going 
wrong, or someone resetting a cluster without understanding what they're 
doing, leaving the offsets corrupted. If this proposal were implemented, 
those stale offsets would never go away unless they were also manually 
cleared. So I'd want to make sure that's protected against somehow... like if 
a broker ever encounters a consumer offset higher than the high watermark, 
that offset gets removed from the topic.
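The sanity check suggested here can be sketched as a small guard. This is a hypothetical illustration of the idea (made-up names, not Kafka behavior): treat a committed offset ahead of the partition's high watermark as corrupt rather than trusting it.

```java
// Sketch of the proposed safeguard: a committed offset can never legitimately
// be ahead of the partition's high watermark, so flag it instead of using it.
public class OffsetSanity {

    static long checkCommitted(long committedOffset, long highWatermark) {
        if (committedOffset > highWatermark) {
            throw new IllegalStateException(
                "Committed offset " + committedOffset
                + " is past the high watermark " + highWatermark);
        }
        return committedOffset;
    }

    public static void main(String[] args) {
        System.out.println(checkCommitted(5, 10));   // normal case: offset is valid
        try {
            checkCommitted(11, 10);                  // "negative lag" case
        } catch (IllegalStateException e) {
            System.out.println("detected: " + e.getMessage());
        }
    }
}
```

Throwing (rather than silently resetting to the high watermark) surfaces the corruption instead of masking whatever caused it.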

> Committed offsets should not be deleted if a consumer is still active
> -
>
> Key: KAFKA-4682
> URL: https://issues.apache.org/jira/browse/KAFKA-4682
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>
> Kafka will delete committed offsets that are older than 
> offsets.retention.minutes
> If there is an active consumer on a low traffic partition, it is possible 
> that Kafka will delete the committed offset for that consumer. Once the 
> offset is deleted, a restart or a rebalance of that consumer will cause the 
> consumer to not find any committed offset and start consuming from 
> earliest/latest (depending on auto.offset.reset). I'm not sure, but a broker 
> failover might also cause you to start reading from auto.offset.reset (due to 
> broker restart, or coordinator failover).
> I think that Kafka should only delete offsets for inactive consumers. The 
> timer should only start after a consumer group goes inactive. For example, if 
> a consumer group goes inactive, then after 1 week, delete the offsets for 
> that consumer group. This is a solution that [~junrao] mentioned in 
> https://issues.apache.org/jira/browse/KAFKA-3806?focusedCommentId=15323521&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15323521
> The current workarounds are to:
> # Commit an offset on every partition you own on a regular basis, making sure 
> that it is more frequent than offsets.retention.minutes (a broker-side 
> setting that a consumer might not be aware of)
> or
> # Turn the value of offsets.retention.minutes up really really high. You have 
> to make sure it is higher than any valid low-traffic rate that you want to 
> support. For example, if you want to support a topic where someone produces 
> once a month, you would have to set offsets.retention.minutes to 1 month. 
> or
> # Turn on enable.auto.commit (this is essentially #1, but easier to 
> implement).
> None of these are ideal. 
> #1 can be spammy. It requires your consumers know something about how the 
> brokers are configured. Sometimes it is out of your control. Mirrormaker, for 
> example, only commits offsets on partitions where it receives data. And it is 
> duplication that you need to put into all of your consumers.
> #2 has disk-space impact on the broker (in __consumer_offsets) as well as 
> memory-size on the broker (to answer OffsetFetch).
> #3 I think has the potential for message loss (the consumer might commit on 
> messages that are not yet fully processed)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4682) Committed offsets should not be deleted if a consumer is still active

2017-01-20 Thread James Cheng (JIRA)
James Cheng created KAFKA-4682:
--

 Summary: Committed offsets should not be deleted if a consumer is 
still active
 Key: KAFKA-4682
 URL: https://issues.apache.org/jira/browse/KAFKA-4682
 Project: Kafka
  Issue Type: Bug
Reporter: James Cheng


Kafka will delete committed offsets that are older than 
offsets.retention.minutes

If there is an active consumer on a low traffic partition, it is possible that 
Kafka will delete the committed offset for that consumer. Once the offset is 
deleted, a restart or a rebalance of that consumer will cause the consumer to 
not find any committed offset and start consuming from earliest/latest 
(depending on auto.offset.reset). I'm not sure, but a broker failover might 
also cause you to start reading from auto.offset.reset (due to broker restart, 
or coordinator failover).

I think that Kafka should only delete offsets for inactive consumers. The timer 
should only start after a consumer group goes inactive. For example, if a 
consumer group goes inactive, then after 1 week, delete the offsets for that 
consumer group. This is a solution that [~junrao] mentioned in 
https://issues.apache.org/jira/browse/KAFKA-3806?focusedCommentId=15323521&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15323521

The current workarounds are to:
# Commit an offset on every partition you own on a regular basis, making sure 
that it is more frequent than offsets.retention.minutes (a broker-side setting 
that a consumer might not be aware of)
or
# Turn the value of offsets.retention.minutes up really really high. You have 
to make sure it is higher than any valid low-traffic rate that you want to 
support. For example, if you want to support a topic where someone produces 
once a month, you would have to set offsets.retention.minutes to 1 month. 
or
# Turn on enable.auto.commit (this is essentially #1, but easier to implement).

None of these are ideal. 

#1 can be spammy. It requires your consumers know something about how the 
brokers are configured. Sometimes it is out of your control. Mirrormaker, for 
example, only commits offsets on partitions where it receives data. And it is 
duplication that you need to put into all of your consumers.

#2 has disk-space impact on the broker (in __consumer_offsets) as well as 
memory-size on the broker (to answer OffsetFetch).

#3 I think has the potential for message loss (the consumer might commit on 
messages that are not yet fully processed)
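The proposed retention rule ("the timer should only start after a consumer group goes inactive") can be sketched as a pure decision function. This is an assumption-laden illustration of the proposal, not implemented Kafka behavior:

```java
// Sketch of the proposed expiry rule: offsets of an active group never
// expire; for an inactive group, the retention clock starts at the moment
// the group went inactive.
public class OffsetExpiry {

    static boolean shouldExpire(boolean groupActive, long inactiveSinceMs,
                                long nowMs, long retentionMs) {
        if (groupActive) {
            return false;  // never expire offsets of a live group
        }
        return nowMs - inactiveSinceMs > retentionMs;
    }

    public static void main(String[] args) {
        long week = 7L * 24 * 3600_000L;
        // Active group: kept no matter how old the commit is.
        System.out.println(shouldExpire(true, 0, 100 * week, week));  // false
        // Inactive for half the retention period: still kept.
        System.out.println(shouldExpire(false, 0, week / 2, week));   // false
        // Inactive for twice the retention period: expired.
        System.out.println(shouldExpire(false, 0, 2 * week, week));   // true
    }
}
```

Under this rule, a low-traffic but live consumer keeps its offsets indefinitely, which removes the need for workarounds #1-#3.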




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-74: Add FetchResponse size limit in bytes

2017-01-20 Thread Ismael Juma
Good catch, Colin. +1 to editing the wiki to match the desired behaviour
and what was implemented in 0.10.1.

Ismael

On Sat, Jan 21, 2017 at 12:19 AM, Colin McCabe  wrote:

> Hi all,
>
> While looking at some code related to KIP-74, I noticed a slight
> discrepancy between the text on the wiki and the implementation.  The
> wiki says that "If max_bytes is Int.MAX_INT, new request behaves exactly
> like old one."  This would mean that if there was a single message that
> was larger than the maximum bytes per partition, zero messages would be
> returned, and clients would throw MessageSizeTooLargeException.
> However, the code does not implement this.  Instead, it implements the
> "new" behavior where the client always gets at least one message.
>
> The new behavior seems to be more desirable, since clients do not "get
> stuck" on messages that are too big.  I propose that we edit the wiki to
> reflect the implemented behavior by deleting the references to special
> behavior when max_bytes is MAX_INT.
>
> cheers,
> Colin
>


[jira] [Created] (KAFKA-4681) Disallow reassign-partitions script to assign partitions to non-existent brokers

2017-01-20 Thread Nick Travers (JIRA)
Nick Travers created KAFKA-4681:
---

 Summary: Disallow reassign-partitions script to assign partitions 
to non-existent brokers
 Key: KAFKA-4681
 URL: https://issues.apache.org/jira/browse/KAFKA-4681
 Project: Kafka
  Issue Type: Improvement
Reporter: Nick Travers
Priority: Minor


When running the {{kafka-reassign-partitions.sh}} script, it is possible for 
partitions to be accidentally assigned to brokers that do not exist.

This results in partitions with an indefinitely reduced ISR set, as the 
partition can never be replicated to the non-existent broker.

Our workaround was to add a new broker with the "bogus" broker ID, reassign 
the partitions to other brokers, and then decommission the bogus broker. This 
is not ideal due to the increased operational burden of adding and removing 
brokers, in addition to manually moving partitions around.

I suggest patching the script to disallow assigning partitions to brokers 
that either do not exist, or are dead and not participating in the ISR of any 
other partition.
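The suggested pre-check can be sketched as a validation over the proposed assignment. This is a hypothetical illustration (made-up names), not the actual kafka-reassign-partitions.sh logic:

```java
import java.util.List;
import java.util.Set;

// Sketch of the suggested guard: before submitting a reassignment, verify
// that every target broker id is in the set of known live brokers.
public class ReassignmentCheck {

    static void validateTargets(List<Integer> targetBrokers, Set<Integer> liveBrokers) {
        for (int id : targetBrokers) {
            if (!liveBrokers.contains(id)) {
                throw new IllegalArgumentException("Unknown or dead broker id: " + id);
            }
        }
    }

    public static void main(String[] args) {
        Set<Integer> live = Set.of(1, 2, 3);
        validateTargets(List.of(1, 3), live);       // valid reassignment
        try {
            validateTargets(List.of(1, 9), live);   // broker 9 does not exist
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());     // Unknown or dead broker id: 9
        }
    }
}
```

Rejecting the plan up front avoids the permanently under-replicated partitions described above.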



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[VOTE] KIP-74: Add FetchResponse size limit in bytes

2017-01-20 Thread Colin McCabe
Hi all,

While looking at some code related to KIP-74, I noticed a slight
discrepancy between the text on the wiki and the implementation.  The
wiki says that "If max_bytes is Int.MAX_INT, new request behaves exactly
like old one."  This would mean that if there was a single message that
was larger than the maximum bytes per partition, zero messages would be
returned, and clients would throw MessageSizeTooLargeException. 
However, the code does not implement this.  Instead, it implements the
"new" behavior where the client always gets at least one message.

The new behavior seems to be more desirable, since clients do not "get
stuck" on messages that are too big.  I propose that we edit the wiki to
reflect the implemented behavior by deleting the references to special
behavior when max_bytes is MAX_INT.

cheers,
Colin
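The "client always gets at least one message" behavior described above can be sketched as pure selection logic. This is an illustration of the semantics, not the broker's actual fetch path:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of per-partition fetch sizing under the implemented KIP-74
// behavior: the first message is always returned, even if it alone exceeds
// max_bytes, so consumers never get stuck on an oversized message.
public class FetchSizing {

    static List<Integer> selectMessages(List<Integer> messageSizes, int maxBytes) {
        List<Integer> selected = new ArrayList<>();
        int used = 0;
        for (int size : messageSizes) {
            // Always take the first message; afterwards, respect the limit.
            if (selected.isEmpty() || used + size <= maxBytes) {
                selected.add(size);
                used += size;
            } else {
                break;
            }
        }
        return selected;
    }

    public static void main(String[] args) {
        // Oversized first message is still returned, not dropped.
        System.out.println(selectMessages(List.of(2000, 100), 1024));     // [2000]
        // Otherwise, messages accumulate up to the byte limit.
        System.out.println(selectMessages(List.of(100, 200, 900), 1024)); // [100, 200]
    }
}
```

Under the old wiki wording, the first call would have returned an empty list and the client would have thrown MessageSizeTooLargeException, which is exactly the "stuck consumer" case the implemented behavior avoids.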


[jira] [Resolved] (KAFKA-3209) Support single message transforms in Kafka Connect

2017-01-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3209.
--
Resolution: Fixed

Fixed in https://github.com/apache/kafka/pull/2374

> Support single message transforms in Kafka Connect
> --
>
> Key: KAFKA-3209
> URL: https://issues.apache.org/jira/browse/KAFKA-3209
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Neha Narkhede
>Assignee: Shikhar Bhushan
>  Labels: needs-kip
> Fix For: 0.10.2.0
>
>
> Users should be able to perform light transformations on messages between a 
> connector and Kafka. This is needed because some transformations must be 
> performed before the data hits Kafka (e.g. filtering certain types of events 
> or PII filtering). It's also useful for very light, single-message 
> modifications that are easier to perform inline with the data import/export.
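The kind of "light transformation" the issue describes boils down to a pure function applied to each record between the connector and Kafka. A minimal illustration (not Connect's actual Transformation API; the names are made up):

```java
import java.util.function.Function;

public class SmtSketch {
    // A single message transform as a pure function on record values:
    // here, masking a PII field before the data reaches Kafka.
    static final Function<String, String> MASK_SSN =
        v -> v.replaceAll("\\d{3}-\\d{2}-\\d{4}", "***-**-****");

    public static void main(String[] args) {
        System.out.println(MASK_SSN.apply("user=jane ssn=123-45-6789"));
        // -> user=jane ssn=***-**-****
    }
}
```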





[jira] [Commented] (KAFKA-3209) Support single message transforms in Kafka Connect

2017-01-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832655#comment-15832655
 ] 

ASF GitHub Bot commented on KAFKA-3209:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2374


> Support single message transforms in Kafka Connect
> --
>
> Key: KAFKA-3209
> URL: https://issues.apache.org/jira/browse/KAFKA-3209
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Neha Narkhede
>Assignee: Shikhar Bhushan
>  Labels: needs-kip
> Fix For: 0.10.2.0
>
>
> Users should be able to perform light transformations on messages between a 
> connector and Kafka. This is needed because some transformations must be 
> performed before the data hits Kafka (e.g. filtering certain types of events 
> or PII filtering). It's also useful for very light, single-message 
> modifications that are easier to perform inline with the data import/export.





[GitHub] kafka pull request #2374: KAFKA-3209: KIP-66: more single message transforms

2017-01-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2374


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-111 : Kafka should preserve the Principal generated by the PrincipalBuilder while processing the request received on socket channel, on the broker.

2017-01-20 Thread Mayuresh Gharat
Hi,

Just wanted to see if anyone had any concerns with this KIP.
I would like to put this to vote soon, if there are no concerns.

Thanks,

Mayuresh

On Thu, Jan 12, 2017 at 11:21 AM, Mayuresh Gharat <
gharatmayures...@gmail.com> wrote:

> Hi Ismael,
>
> Fair point. I will update it.
>
> Thanks,
>
> Mayuresh
>
> On Thu, Jan 12, 2017 at 11:07 AM, Ismael Juma  wrote:
>
>> Hi Mayuresh,
>>
>> Thanks for the KIP. A quick comment before I do a more detailed analysis,
>> the KIP says:
>>
>> `This KIP is a pure addition to existing functionality and does not
>> include
>> any backward incompatible changes.`
>>
>> However, the KIP is proposing the addition of a method to the
>> PrincipalBuilder pluggable interface, which is not a compatible change.
>> Existing implementations would no longer compile, for example. It would be
>> good to make this clear in the KIP.
>>
>> Ismael
>>
>> On Thu, Jan 12, 2017 at 5:44 PM, Mayuresh Gharat <
>> gharatmayures...@gmail.com
>> > wrote:
>>
>> > Hi all.
>> >
>> > We created KIP-111 to propose that Kafka should preserve the Principal
>> > generated by the PrincipalBuilder while processing the request received
>> on
>> > socket channel, on the broker.
>> >
>> > Please find the KIP wiki in the link
>> > https://cwiki.apache.org/confluence/pages/viewpage.action?
>> pageId=67638388.
>> > We would love to hear your comments and suggestions.
>> >
>> >
>> > Thanks,
>> >
>> > Mayuresh
>> >
>>
>
>
>
> --
> -Regards,
> Mayuresh R. Gharat
> (862) 250-7125
>



-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125
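Ismael's compatibility point generalizes: adding an abstract method to a pluggable interface is source-incompatible, because existing implementations no longer compile. A minimal sketch (illustrative names only, not Kafka's actual PrincipalBuilder API) of how a Java 8 default method avoids the break:

```java
// A pluggable interface as originally shipped, plus an added method.
// If the new method were abstract, LegacyBuilder below would stop compiling;
// a default method lets old plugins inherit a fallback instead.
interface PrincipalBuilderSketch {
    String buildPrincipal(String authenticatedUser);

    default String buildPrincipal(String authenticatedUser, String channelInfo) {
        return buildPrincipal(authenticatedUser); // fall back to old behavior
    }
}

public class PrincipalCompat {
    // An "existing" plugin that only knows the original one-argument method.
    static class LegacyBuilder implements PrincipalBuilderSketch {
        public String buildPrincipal(String user) { return "User:" + user; }
    }

    public static void main(String[] args) {
        PrincipalBuilderSketch b = new LegacyBuilder();
        // The legacy plugin still satisfies the extended interface.
        System.out.println(b.buildPrincipal("alice", "ssl-channel")); // User:alice
    }
}
```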


[jira] [Commented] (KAFKA-4680) min.insync.replicas can be set higher than replication factor

2017-01-20 Thread James Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832645#comment-15832645
 ] 

James Cheng commented on KAFKA-4680:


I would like to work on this, but I will need some guidance from other devs. 
Can I assign it to myself? 

> min.insync.replicas can be set higher than replication factor
> -
>
> Key: KAFKA-4680
> URL: https://issues.apache.org/jira/browse/KAFKA-4680
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: James Cheng
>
> It is possible to specify a min.insync.replicas for a topic that is higher 
> than the replication factor of the topic. If you do this, you will not be 
> able to produce to the topic with acks=all.
> Furthermore, each produce request (including retries) to the topic will emit 
> an ERROR level message to the broker debug logs. If this is not noticed 
> quickly enough, it can cause the debug logs to balloon.
> We actually hosed one of our Kafka clusters because of this. A topic got 
> configured with min.insync.replicas > replication factor. It had partitions 
> on all brokers of our cluster. The broker logs ballooned and filled up the 
> disks. We run these clusters on CoreOS, and CoreOS's etcd database got 
> corrupted. (Kafka itself didn't get corrupted, though.)
> I think Kafka should do validation when someone tries to change a topic to 
> min.insync.replicas > replication factor, and reject the change.
> This would presumably affect kafka-topics.sh, kafka-configs.sh, as well as 
> the CreateTopics operation that came in KIP-4.





[jira] [Created] (KAFKA-4680) min.insync.replicas can be set higher than replication factor

2017-01-20 Thread James Cheng (JIRA)
James Cheng created KAFKA-4680:
--

 Summary: min.insync.replicas can be set higher than replication 
factor
 Key: KAFKA-4680
 URL: https://issues.apache.org/jira/browse/KAFKA-4680
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.0.1
Reporter: James Cheng


It is possible to specify a min.insync.replicas for a topic that is higher than 
the replication factor of the topic. If you do this, you will not be able to 
produce to the topic with acks=all.

Furthermore, each produce request (including retries) to the topic will emit an 
ERROR level message to the broker debug logs. If this is not noticed quickly 
enough, it can cause the debug logs to balloon.

We actually hosed one of our Kafka clusters because of this. A topic got 
configured with min.insync.replicas > replication factor. It had partitions on 
all brokers of our cluster. The broker logs ballooned and filled up the disks. 
We run these clusters on CoreOS, and CoreOS's etcd database got corrupted. 
(Kafka itself didn't get corrupted, though.)

I think Kafka should do validation when someone tries to change a topic to 
min.insync.replicas > replication factor, and reject the change.

This would presumably affect kafka-topics.sh, kafka-configs.sh, as well as the 
CreateTopics operation that came in KIP-4.
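The validation being proposed is a one-line invariant check at topic create/alter time. A minimal sketch (class and method names are made up, not Kafka's actual code path):

```java
public class TopicConfigValidator {
    // Reject configurations where min.insync.replicas exceeds the
    // replication factor, since acks=all produces could never succeed.
    static void validate(int replicationFactor, int minInsyncReplicas) {
        if (minInsyncReplicas > replicationFactor) {
            throw new IllegalArgumentException(
                "min.insync.replicas (" + minInsyncReplicas
                + ") cannot exceed replication factor (" + replicationFactor + ")");
        }
    }

    public static void main(String[] args) {
        validate(3, 2); // fine: a 3-replica topic can keep 2 replicas in sync
        try {
            validate(2, 3); // would make every acks=all produce fail
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```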





[jira] [Updated] (KAFKA-4635) Client Compatibility follow-up

2017-01-20 Thread Colin P. McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe updated KAFKA-4635:
---
Status: Patch Available  (was: Open)

> Client Compatibility follow-up
> --
>
> Key: KAFKA-4635
> URL: https://issues.apache.org/jira/browse/KAFKA-4635
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Reporter: Ismael Juma
>Assignee: Colin P. McCabe
> Fix For: 0.10.2.0
>
>
> I collected a number of improvements that I think would be good to do before 
> the release. [~cmccabe], please correct if I got anything wrong and feel free 
> to move some items to separate JIRAs.
> 1. OffsetAndTimestamp is a public class and the javadoc should only include 
> the behaviour that users will see. The following (or part of it) should 
> probably be a non-javadoc comment as it only happens internally:
> "* The timestamp should never be negative, unless it is invalid.  This could 
> happen when handling a response from a broker that doesn't support KIP-79."
> 2. There was a bit of a discussion with regards to the name of the exception 
> that is thrown when a broker is too old. The current name is 
> ObsoleteBrokerException. We should decide on the name and then we should 
> update the relevant producer/consumer methods to mention it.
> 3. [~junrao] suggested that it would be a good idea to log when downgrading 
> requests, as the behaviour can be a little different. We should decide on the 
> right logging level and add this.
> 4. We should have a system test against 0.9.0.1 brokers. We don't support it, 
> but we should ideally give a reasonable error message.
> 5. It seems like `Fetcher.listOffset` could use `retrieveOffsetsByTimes` 
> instead of calling `sendListOffsetRequests` directly. I think that would be a 
> little better, but not sure if others disagree.
> 6. [~hachikuji] suggested that a version mismatch in the `offsetsForTimes` 
> call should result in a null entry in the map instead of an exception, for consistency 
> with how we handle the unsupported message format case. I am adding this to 
> make sure we discuss it, but I am not actually sure that is what we should 
> do. Under normal circumstances, the brokers are either too old or not whereas 
> the message format is a topic level configuration and, strictly speaking, 
> independent of the broker version (there is a correlation in practice).
> 7. We log a warning in case of an error while doing an ApiVersions request. 
> Because it is the first request and we retry, the warning in the log is 
> useful. We have a similar warning for Metadata requests, but we only did it 
> for bootstrap brokers. Would it make sense to do the same for ApiVersions?
> 8. It would be good to add a few more tests for the usable versions 
> computation. We have a single simple one at the moment.
> 9. We should add a note to the upgrade notes specifying the change in 
> behaviour with regards to older broker versions.
> cc [~hachikuji].





[jira] [Commented] (KAFKA-4635) Client Compatibility follow-up

2017-01-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832608#comment-15832608
 ] 

ASF GitHub Bot commented on KAFKA-4635:
---

GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/2414

KAFKA-4635: Client Compatibility follow-ups



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-4635

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2414.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2414


commit f3db4b0edc6a50a608c9876573ce6adbb1e5f378
Author: Colin P. Mccabe 
Date:   2017-01-20T22:59:26Z

ObsoleteBrokerException -> OutdatedBrokerException, revise comment for 
OffsetAndTimestamp

commit b39db596d5f0aab7dea31fce0edf7e88cf2bf2a9
Author: Colin P. Mccabe 
Date:   2017-01-20T23:09:38Z

Add debug log message when downgrading the message protocol for an older 
broker

commit 6414fa18f7c695c1801aa93adf6a3a5bc9815d7b
Author: Colin P. Mccabe 
Date:   2017-01-20T23:23:07Z

NodeApiVersionsTest: add more tests

commit b38c79d6129ed76cf73868766765516ab9f3731e
Author: Colin P. Mccabe 
Date:   2017-01-20T23:30:03Z

docs/upgrade.html: add a paragraph about client compatibility in 0.10.2




> Client Compatibility follow-up
> --
>
> Key: KAFKA-4635
> URL: https://issues.apache.org/jira/browse/KAFKA-4635
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Reporter: Ismael Juma
>Assignee: Colin P. McCabe
> Fix For: 0.10.2.0
>
>
> I collected a number of improvements that I think would be good to do before 
> the release. [~cmccabe], please correct if I got anything wrong and feel free 
> to move some items to separate JIRAs.
> 1. OffsetAndTimestamp is a public class and the javadoc should only include 
> the behaviour that users will see. The following (or part of it) should 
> probably be a non-javadoc comment as it only happens internally:
> "* The timestamp should never be negative, unless it is invalid.  This could 
> happen when handling a response from a broker that doesn't support KIP-79."
> 2. There was a bit of a discussion with regards to the name of the exception 
> that is thrown when a broker is too old. The current name is 
> ObsoleteBrokerException. We should decide on the name and then we should 
> update the relevant producer/consumer methods to mention it.
> 3. [~junrao] suggested that it would be a good idea to log when downgrading 
> requests, as the behaviour can be a little different. We should decide on the 
> right logging level and add this.
> 4. We should have a system test against 0.9.0.1 brokers. We don't support it, 
> but we should ideally give a reasonable error message.
> 5. It seems like `Fetcher.listOffset` could use `retrieveOffsetsByTimes` 
> instead of calling `sendListOffsetRequests` directly. I think that would be a 
> little better, but not sure if others disagree.
> 6. [~hachikuji] suggested that a version mismatch in the `offsetsForTimes` 
> call should result in a null entry in the map instead of an exception, for consistency 
> with how we handle the unsupported message format case. I am adding this to 
> make sure we discuss it, but I am not actually sure that is what we should 
> do. Under normal circumstances, the brokers are either too old or not whereas 
> the message format is a topic level configuration and, strictly speaking, 
> independent of the broker version (there is a correlation in practice).
> 7. We log a warning in case of an error while doing an ApiVersions request. 
> Because it is the first request and we retry, the warning in the log is 
> useful. We have a similar warning for Metadata requests, but we only did it 
> for bootstrap brokers. Would it make sense to do the same for ApiVersions?
> 8. It would be good to add a few more tests for the usable versions 
> computation. We have a single simple one at the moment.
> 9. We should add a note to the upgrade notes specifying the change in 
> behaviour with regards to older broker versions.
> cc [~hachikuji].





[GitHub] kafka pull request #2414: KAFKA-4635: Client Compatibility follow-ups

2017-01-20 Thread cmccabe
GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/2414

KAFKA-4635: Client Compatibility follow-ups



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-4635

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2414.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2414


commit f3db4b0edc6a50a608c9876573ce6adbb1e5f378
Author: Colin P. Mccabe 
Date:   2017-01-20T22:59:26Z

ObsoleteBrokerException -> OutdatedBrokerException, revise comment for 
OffsetAndTimestamp

commit b39db596d5f0aab7dea31fce0edf7e88cf2bf2a9
Author: Colin P. Mccabe 
Date:   2017-01-20T23:09:38Z

Add debug log message when downgrading the message protocol for an older 
broker

commit 6414fa18f7c695c1801aa93adf6a3a5bc9815d7b
Author: Colin P. Mccabe 
Date:   2017-01-20T23:23:07Z

NodeApiVersionsTest: add more tests

commit b38c79d6129ed76cf73868766765516ab9f3731e
Author: Colin P. Mccabe 
Date:   2017-01-20T23:30:03Z

docs/upgrade.html: add a paragraph about client compatibility in 0.10.2






[jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

2017-01-20 Thread postmas...@inn.ru (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832606#comment-15832606
 ] 

postmas...@inn.ru commented on KAFKA-1207:
--

Delivery is delayed to these recipients or groups:

e...@inn.ru

Subject: [jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

This message hasn't been delivered yet. Delivery will continue to be attempted.

The server will keep trying to deliver this message for the next 1 days, 19 
hours and 51 minutes. You'll be notified if the message can't be delivered by 
that time.







Diagnostic information for administrators:

Generating server: lc-exch-02.inn.local
Receiving server: inn.ru (109.105.153.25)

e...@inn.ru
Server at inn.ru (109.105.153.25) returned '400 4.4.7 Message delayed'
1/20/2017 11:19:24 PM - Server at inn.ru (109.105.153.25) returned '441 4.4.1 
Error communicating with target host: "Failed to connect. Winsock error code: 
10060, Win32 error code: 10060." Last endpoint attempted was 109.105.153.25:25'



> Launch Kafka from within Apache Mesos
> -
>
> Key: KAFKA-1207
> URL: https://issues.apache.org/jira/browse/KAFKA-1207
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: mesos
> Attachments: KAFKA-1207_2014-01-19_00:04:58.patch, 
> KAFKA-1207_2014-01-19_00:48:49.patch, KAFKA-1207.patch
>
>
> There are a few components to 

[jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

2017-01-20 Thread postmas...@inn.ru (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832605#comment-15832605
 ] 

postmas...@inn.ru commented on KAFKA-1207:
--

Delivery is delayed to these recipients or groups:

e...@inn.ru

Subject: [jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

This message hasn't been delivered yet. Delivery will continue to be attempted.

The server will keep trying to deliver this message for the next 1 days, 19 
hours and 51 minutes. You'll be notified if the message can't be delivered by 
that time.







Diagnostic information for administrators:

Generating server: lc-exch-02.inn.local
Receiving server: inn.ru (109.105.153.25)

e...@inn.ru
Server at inn.ru (109.105.153.25) returned '400 4.4.7 Message delayed'
1/20/2017 11:19:24 PM - Server at inn.ru (109.105.153.25) returned '441 4.4.1 
Error communicating with target host: "Failed to connect. Winsock error code: 
10060, Win32 error code: 10060." Last endpoint attempted was 109.105.153.25:25'



> Launch Kafka from within Apache Mesos
> -
>
> Key: KAFKA-1207
> URL: https://issues.apache.org/jira/browse/KAFKA-1207
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: mesos
> Attachments: KAFKA-1207_2014-01-19_00:04:58.patch, 
> KAFKA-1207_2014-01-19_00:48:49.patch, KAFKA-1207.patch
>
>
> There are a few components to 

[GitHub] kafka-site pull request #42: fix typos on powered by page

2017-01-20 Thread derrickdoo
GitHub user derrickdoo opened a pull request:

https://github.com/apache/kafka-site/pull/42

fix typos on powered by page

- Fix typos in markup for Tumblr, Etsy, Paypal descriptions

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/derrickdoo/kafka-site poweredByAdjustments

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/42.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #42


commit b480957522e12f5d3f083da7eccdfe4832504e40
Author: Derrick Or 
Date:   2017-01-20T23:05:06Z

fix typos on powered by page






[GitHub] kafka-site pull request #41: New design for the "powered by" page

2017-01-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/41




[GitHub] kafka-site issue #41: New design for the "powered by" page

2017-01-20 Thread ewencp
Github user ewencp commented on the issue:

https://github.com/apache/kafka-site/pull/41
  
LGTM




[GitHub] kafka-site issue #39: New europe meetup links

2017-01-20 Thread ewencp
Github user ewencp commented on the issue:

https://github.com/apache/kafka-site/pull/39
  
LGTM




[GitHub] kafka-site pull request #39: New europe meetup links

2017-01-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/39




Jenkins build is back to normal : kafka-trunk-jdk8 #1199

2017-01-20 Thread Apache Jenkins Server
See 



Jenkins build is back to normal : kafka-0.10.2-jdk7 #18

2017-01-20 Thread Apache Jenkins Server
See 



[jira] [Updated] (KAFKA-3450) Producer blocks on send to topic that doesn't exist if auto create is disabled

2017-01-20 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3450:
---
Assignee: (was: Jun Rao)

> Producer blocks on send to topic that doesn't exist if auto create is disabled
> --
>
> Key: KAFKA-3450
> URL: https://issues.apache.org/jira/browse/KAFKA-3450
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.1
>Reporter: Michal Turek
>Priority: Critical
>
> {{producer.send()}} is blocked for {{max.block.ms}} (default 60 seconds) if 
> the destination topic doesn't exist and its automatic creation is 
> disabled. A warning from NetworkClient containing UNKNOWN_TOPIC_OR_PARTITION is 
> logged every 100 ms in a loop until the 60-second timeout expires, but the 
> operation is not recoverable.
> Preconditions
> - Kafka 0.9.0.1 with default configuration and auto.create.topics.enable=false
> - Kafka 0.9.0.1 clients.
> Example minimalist code
> https://github.com/avast/kafka-tests/blob/master/src/main/java/com/avast/kafkatests/othertests/nosuchtopic/NoSuchTopicTest.java
> {noformat}
> /**
>  * Test of sending to a topic that does not exist while automatic creation of 
> topics is disabled in Kafka (auto.create.topics.enable=false).
>  */
> public class NoSuchTopicTest {
> private static final Logger LOGGER = 
> LoggerFactory.getLogger(NoSuchTopicTest.class);
> public static void main(String[] args) {
> Properties properties = new Properties();
> properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, 
> "localhost:9092");
> properties.setProperty(ProducerConfig.CLIENT_ID_CONFIG, 
> NoSuchTopicTest.class.getSimpleName());
> properties.setProperty(ProducerConfig.MAX_BLOCK_MS_CONFIG, "1000"); 
> // Default is 60 seconds
> try (Producer<String, String> producer = new 
> KafkaProducer<>(properties, new StringSerializer(), new StringSerializer())) {
> LOGGER.info("Sending message");
> producer.send(new ProducerRecord<>("ThisTopicDoesNotExist", 
> "key", "value"), (metadata, exception) -> {
> if (exception != null) {
> LOGGER.error("Send failed: {}", exception.toString());
> } else {
> LOGGER.info("Send successful: {}-{}/{}", 
> metadata.topic(), metadata.partition(), metadata.offset());
> }
> });
> LOGGER.info("Sending message");
> producer.send(new ProducerRecord<>("ThisTopicDoesNotExistToo", 
> "key", "value"), (metadata, exception) -> {
> if (exception != null) {
> LOGGER.error("Send failed: {}", exception.toString());
> } else {
> LOGGER.info("Send successful: {}-{}/{}", 
> metadata.topic(), metadata.partition(), metadata.offset());
> }
> });
> }
> }
> }
> {noformat}
> Related output
> {noformat}
> 2016-03-23 12:44:37.725 INFO  c.a.k.o.nosuchtopic.NoSuchTopicTest [main]: Sending message (NoSuchTopicTest.java:26)
> 2016-03-23 12:44:37.830 WARN  o.a.kafka.clients.NetworkClient [kafka-producer-network-thread | NoSuchTopicTest]: Error while fetching metadata with correlation id 0 : {ThisTopicDoesNotExist=UNKNOWN_TOPIC_OR_PARTITION} (NetworkClient.java:582)
> 2016-03-23 12:44:37.928 WARN  o.a.kafka.clients.NetworkClient [kafka-producer-network-thread | NoSuchTopicTest]: Error while fetching metadata with correlation id 1 : {ThisTopicDoesNotExist=UNKNOWN_TOPIC_OR_PARTITION} (NetworkClient.java:582)
> 2016-03-23 12:44:38.028 WARN  o.a.kafka.clients.NetworkClient [kafka-producer-network-thread | NoSuchTopicTest]: Error while fetching metadata with correlation id 2 : {ThisTopicDoesNotExist=UNKNOWN_TOPIC_OR_PARTITION} (NetworkClient.java:582)
> 2016-03-23 12:44:38.130 WARN  o.a.kafka.clients.NetworkClient [kafka-producer-network-thread | NoSuchTopicTest]: Error while fetching metadata with correlation id 3 : {ThisTopicDoesNotExist=UNKNOWN_TOPIC_OR_PARTITION} (NetworkClient.java:582)
> 2016-03-23 12:44:38.231 WARN  o.a.kafka.clients.NetworkClient [kafka-producer-network-thread | NoSuchTopicTest]: Error while fetching metadata with correlation id 4 : {ThisTopicDoesNotExist=UNKNOWN_TOPIC_OR_PARTITION} (NetworkClient.java:582)
> 2016-03-23 12:44:38.332 WARN  o.a.kafka.clients.NetworkClient [kafka-producer-network-thread | NoSuchTopicTest]: Error while fetching metadata with correlation id 5 : {ThisTopicDoesNotExist=UNKNOWN_TOPIC_OR_PARTITION} (NetworkClient.java:582)
> 2016-03-23 12:44:38.433 WARN  o.a.kafka.clients.NetworkClient [kafka-producer-network-thread | NoSuchTopicTest]: Error while fetching metadata

[jira] [Commented] (KAFKA-2000) Delete consumer offsets from kafka once the topic is deleted

2017-01-20 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832410#comment-15832410
 ] 

Sriharsha Chintalapani commented on KAFKA-2000:
---

[~jeffwidman] 

[~omkreddy] is working on it.

> Delete consumer offsets from kafka once the topic is deleted
> 
>
> Key: KAFKA-2000
> URL: https://issues.apache.org/jira/browse/KAFKA-2000
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Manikumar Reddy
>  Labels: newbie++
> Fix For: 0.10.2.0
>
> Attachments: KAFKA-2000_2015-05-03_10:39:11.patch, KAFKA-2000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2000) Delete consumer offsets from kafka once the topic is deleted

2017-01-20 Thread Jeff Widman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832378#comment-15832378
 ] 

Jeff Widman commented on KAFKA-2000:


If neither of them is interested, I'm happy to clean up the existing patch to 
get it merged into 0.10.2. The test suite at my work would benefit from this.

> Delete consumer offsets from kafka once the topic is deleted
> 
>
> Key: KAFKA-2000
> URL: https://issues.apache.org/jira/browse/KAFKA-2000
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Manikumar Reddy
>  Labels: newbie++
> Fix For: 0.10.2.0
>
> Attachments: KAFKA-2000_2015-05-03_10:39:11.patch, KAFKA-2000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2000) Delete consumer offsets from kafka once the topic is deleted

2017-01-20 Thread Jeff Widman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832378#comment-15832378
 ] 

Jeff Widman edited comment on KAFKA-2000 at 1/20/17 8:45 PM:
-

If neither of them is interested, I'm happy to clean up the existing patch to 
get it merged into 0.10.2. The test suite at my work would benefit from this. 
Just let me know.


was (Author: jeffwidman):
If neither of them is interested, I'm happy to cleanup the existing patch to 
get it merged into 0.10.2. The test suite at my work would benefit from this.

> Delete consumer offsets from kafka once the topic is deleted
> 
>
> Key: KAFKA-2000
> URL: https://issues.apache.org/jira/browse/KAFKA-2000
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Manikumar Reddy
>  Labels: newbie++
> Fix For: 0.10.2.0
>
> Attachments: KAFKA-2000_2015-05-03_10:39:11.patch, KAFKA-2000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2000) Delete consumer offsets from kafka once the topic is deleted

2017-01-20 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-2000:
--
Assignee: Manikumar Reddy  (was: Sriharsha Chintalapani)

> Delete consumer offsets from kafka once the topic is deleted
> 
>
> Key: KAFKA-2000
> URL: https://issues.apache.org/jira/browse/KAFKA-2000
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Manikumar Reddy
>  Labels: newbie++
> Fix For: 0.10.2.0
>
> Attachments: KAFKA-2000_2015-05-03_10:39:11.patch, KAFKA-2000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4679) Remove unstable markers from Connect APIs

2017-01-20 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-4679:


 Summary: Remove unstable markers from Connect APIs
 Key: KAFKA-4679
 URL: https://issues.apache.org/jira/browse/KAFKA-4679
 Project: Kafka
  Issue Type: Task
  Components: KafkaConnect
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.10.2.0


Connect has had a stable API for a while now and we are careful about 
compatibility. It's safe to remove the unstable markers now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2000) Delete consumer offsets from kafka once the topic is deleted

2017-01-20 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832295#comment-15832295
 ] 

Jason Gustafson commented on KAFKA-2000:


It would be nice to get this fixed. The patch from [~parth.brahmbhatt] has 
review comments from Feb. 2016 which have not been addressed. Perhaps 
[~omkreddy] can rebase the patch posted above and we can try to get it merged?

> Delete consumer offsets from kafka once the topic is deleted
> 
>
> Key: KAFKA-2000
> URL: https://issues.apache.org/jira/browse/KAFKA-2000
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Fix For: 0.10.2.0
>
> Attachments: KAFKA-2000_2015-05-03_10:39:11.patch, KAFKA-2000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2413: MINOR: update JavaDoc for DSL PAPI-API

2017-01-20 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/2413

MINOR: update JavaDoc for DSL PAPI-API



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka javaDocImprovements6

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2413.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2413


commit 4b61faa1cb29e273dea020207d44abdd16e4dded
Author: Matthias J. Sax 
Date:   2017-01-20T19:21:40Z

MINOR: update JavaDoc for DSL PAPI-API




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

2017-01-20 Thread postmas...@inn.ru (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832279#comment-15832279
 ] 

postmas...@inn.ru commented on KAFKA-1207:
--

Delivery is delayed to these recipients or groups:

e...@inn.ru

Subject: [jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

This message hasn't been delivered yet. Delivery will continue to be attempted.

The server will keep trying to deliver this message for the next 1 days, 19 
hours and 54 minutes. You'll be notified if the message can't be delivered by 
that time.







Diagnostic information for administrators:

Generating server: lc-exch-02.inn.local
Receiving server: inn.ru (109.105.153.25)

e...@inn.ru
Server at inn.ru (109.105.153.25) returned '400 4.4.7 Message delayed'
1/20/2017 7:09:22 PM - Server at inn.ru (109.105.153.25) returned '441 4.4.1 
Error communicating with target host: "Failed to connect. Winsock error code: 
10060, Win32 error code: 10060." Last endpoint attempted was 109.105.153.25:25'

Original message headers:

Received: from lc-exch-04.inn.local (10.64.37.99) by lc-exch-02.inn.local
 (10.64.37.98) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384) id 15.1.669.32; Fri, 20
 Jan 2017 18:13:43 +0300
Received: from lc-asp-02.inn.ru (10.64.37.105) by lc-exch-04.inn.local
 (10.64.37.100) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384) id 15.1.669.32 via
 Frontend Transport; Fri, 20 Jan 2017 18:13:43 +0300
Received-SPF: None (no SPF record) identity=mailfrom; client-ip=209.188.14.142; 
helo=spamd4-us-west.apache.org; envelope-from=j...@apache.org; 
receiver=e...@inn.ru
X-Envelope-From: 
Received: from spamd4-us-west.apache.org (pnap-us-west-generic-nat.apache.org 
[209.188.14.142])
by lc-asp-02.inn.ru (Postfix) with ESMTP id 6538E400C3
for ; Fri, 20 Jan 2017 16:13:31 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
by spamd4-us-west.apache.org (ASF Mail Server at 
spamd4-us-west.apache.org) with ESMTP id D22E7C0D33
for ; Fri, 20 Jan 2017 15:13:30 + (UTC)
X-Virus-Scanned: Debian amavisd-new at spamd4-us-west.apache.org
X-Spam-Flag: NO
X-Spam-Score: -1.999
X-Spam-Level:
X-Spam-Status: No, score=-1.999 tagged_above=-999 required=6.31
tests=[KAM_LAZY_DOMAIN_SECURITY=1, RP_MATCHES_RCVD=-2.999]
autolearn=disabled
Received: from mx1-lw-eu.apache.org ([10.40.0.8])
by localhost (spamd4-us-west.apache.org [10.40.0.11]) (amavisd-new, 
port 10024)
with ESMTP id fWMqiyvF8tkd for ;
Fri, 20 Jan 2017 15:13:29 + (UTC)
Received: from mailrelay1-us-west.apache.org (mailrelay1-us-west.apache.org 
[209.188.14.139])
by mx1-lw-eu.apache.org (ASF Mail Server at mx1-lw-eu.apache.org) with 
ESMTP id 5A13C5FC2F
for ; Fri, 20 Jan 2017 15:13:28 + (UTC)
Received: from jira-lw-us.apache.org (unknown [207.244.88.139])
by mailrelay1-us-west.apache.org (ASF Mail Server at 
mailrelay1-us-west.apache.org) with ESMTP id 50DBFE026E
for ; Fri, 20 Jan 2017 15:13:27 + (UTC)
Received: from jira-lw-us.apache.org (localhost [127.0.0.1])
by jira-lw-us.apache.org (ASF Mail Server at jira-lw-us.apache.org) 
with ESMTP id AACF92528B
for ; Fri, 20 Jan 2017 15:13:26 + (UTC)
Date: Fri, 20 Jan 2017 15:13:26 +
From: "postmas...@inn.ru (JIRA)" 
To: 
Message-ID: 
In-Reply-To: 
References:  

Subject: [jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache
 Mesos
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-JIRA-FingerPrint: 30527f35849b9dde25b450d4833f0394
X-inn-MailScanner-ESVA-Information: Please contact  for more information
X-inn-MailScanner-ESVA-ID: 6538E400C3.A4BEE
X-inn-MailScanner-ESVA: Found to be clean
X-inn-MailScanner-ESVA-From: j...@apache.org
X-inn-MailScanner-ESVA-Watermark: 1485530012.26729@1kYsuwYvxdNqWOavb108nA
Return-Path: j...@apache.org
X-OrganizationHeadersPreserved: lc-exch-02.inn.local
X-CrossPremisesHeadersFilteredByDsnGenerator: lc-exch-02.inn.local



> Launch Kafka from within Apache Mesos
> -
>
> Key: KAFKA-1207
> URL: https://issues.apache.org/jira/browse/KAFKA-1207
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: mesos
> Attachments: KAFKA-1207_2014-01-19_00:04:58.patch, 
> KAFKA-1207_2014-01-19_00:48:49.patch, KAFKA-1207.patch
>
>
> There are a few components to 

[jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

2017-01-20 Thread postmas...@inn.ru (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832280#comment-15832280
 ] 

postmas...@inn.ru commented on KAFKA-1207:
--

Delivery is delayed to these recipients or groups:

e...@inn.ru

Subject: [jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

This message hasn't been delivered yet. Delivery will continue to be attempted.

The server will keep trying to deliver this message for the next 1 days, 19 
hours and 54 minutes. You'll be notified if the message can't be delivered by 
that time.







Diagnostic information for administrators:

Generating server: lc-exch-02.inn.local
Receiving server: inn.ru (109.105.153.25)

e...@inn.ru
Server at inn.ru (109.105.153.25) returned '400 4.4.7 Message delayed'
1/20/2017 7:09:22 PM - Server at inn.ru (109.105.153.25) returned '441 4.4.1 
Error communicating with target host: "Failed to connect. Winsock error code: 
10060, Win32 error code: 10060." Last endpoint attempted was 109.105.153.25:25'




> Launch Kafka from within Apache Mesos
> -
>
> Key: KAFKA-1207
> URL: https://issues.apache.org/jira/browse/KAFKA-1207
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: mesos
> Attachments: KAFKA-1207_2014-01-19_00:04:58.patch, 
> KAFKA-1207_2014-01-19_00:48:49.patch, KAFKA-1207.patch
>
>
> There are a few components to 

[GitHub] kafka pull request #2412: MINOR: reduce verbosity of cache flushes

2017-01-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2412


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-1641) Log cleaner exits if last cleaned offset is lower than earliest offset

2017-01-20 Thread Peter Davis (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832266#comment-15832266
 ] 

Peter Davis commented on KAFKA-1641:


"Me too" on 0.10.0.1 - does this issue need to be reopened?

java.lang.IllegalArgumentException: requirement failed: Last clean offset is 43056300 but segment base offset is 42738384 for log -redacted- -0.
    at scala.Predef$.require(Predef.scala:224)
    at kafka.log.Cleaner.buildOffsetMap(LogCleaner.scala:604)
    at kafka.log.Cleaner.clean(LogCleaner.scala:329)
    at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:237)
    at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:215)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)


> Log cleaner exits if last cleaned offset is lower than earliest offset
> --
>
> Key: KAFKA-1641
> URL: https://issues.apache.org/jira/browse/KAFKA-1641
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Joel Koshy
>Assignee: Guozhang Wang
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1641_2014-10-09_13:04:15.patch, KAFKA-1641.patch
>
>
> Encountered this recently: the log cleaner exited a while ago (I think 
> because the topic had compressed messages). That issue was subsequently 
> addressed by having the producer only send uncompressed. However, on a 
> subsequent restart of the broker we see this:
> In this scenario I think it is reasonable to just emit a warning and have the 
> cleaner round up its first dirty offset to the base offset of the first 
> segment.
> {code}
> [kafka-server] [] [kafka-log-cleaner-thread-0], Error due to
> java.lang.IllegalArgumentException: requirement failed: Last clean offset is 54770438 but segment base offset is 382844024 for log testtopic-0.
>     at scala.Predef$.require(Predef.scala:145)
>     at kafka.log.Cleaner.buildOffsetMap(LogCleaner.scala:491)
>     at kafka.log.Cleaner.clean(LogCleaner.scala:288)
>     at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:202)
>     at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:187)
>     at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
> {code}
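The remedy proposed above, rounding the cleaner's first dirty offset up to the base offset of the first segment instead of failing the requirement check, can be sketched as follows. This is an illustrative sketch, not the actual LogCleaner code; the class and method names are invented:

```java
public class DirtyOffsetSketch {
    // Hedged sketch: if the checkpointed last-clean offset has fallen below the
    // earliest retained segment (e.g. after old segments were deleted), start
    // cleaning from the segment base offset instead of throwing.
    static long firstDirtyOffset(long checkpointedOffset, long earliestSegmentBaseOffset) {
        if (checkpointedOffset < earliestSegmentBaseOffset) {
            // the real fix would emit a warning here rather than fail require(...)
            return earliestSegmentBaseOffset;
        }
        return checkpointedOffset;
    }

    public static void main(String[] args) {
        // Offsets taken from the stack trace in the description above
        System.out.println(firstDirtyOffset(54770438L, 382844024L)); // prints 382844024
    }
}
```

With the offsets from the reported exception, cleaning would simply resume at the segment base offset rather than crash the cleaner thread.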



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4676) Kafka consumers gets stuck for some partitions

2017-01-20 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832255#comment-15832255
 ] 

Jason Gustafson commented on KAFKA-4676:


Logs from the consumer would also be helpful. Thanks.

> Kafka consumers gets stuck for some partitions
> --
>
> Key: KAFKA-4676
> URL: https://issues.apache.org/jira/browse/KAFKA-4676
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Vishal Shukla
>Priority: Critical
>  Labels: consumer, reliability
> Attachments: stuck-topic-thread-dump.log
>
>
> We recently upgraded to Kafka 0.10.1.0. We are frequently facing an issue 
> where Kafka consumers suddenly get stuck for some partitions.
> Thread dump attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4675) Subsequent CreateTopic command could be lost after a DeleteTopic command

2017-01-20 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832248#comment-15832248
 ] 

Guozhang Wang commented on KAFKA-4675:
--

Actually no, I did not see the exception from broker side.

> Subsequent CreateTopic command could be lost after a DeleteTopic command
> 
>
> Key: KAFKA-4675
> URL: https://issues.apache.org/jira/browse/KAFKA-4675
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>  Labels: admin
>
> This was discovered while investigating KAFKA-3896: if an admin client sends a 
> delete topic command and a create topic command consecutively, even if it 
> waits for the response to the first command before issuing the second, there 
> is still a race condition in which the create topic command can be "lost".
> This is because these commands are currently all asynchronous, as defined in 
> KIP-4, and the controller returns its response once it has written the 
> corresponding data to the ZK path. Those writes can be handled by different 
> listener threads at different paces, and if the thread handling the create is 
> faster than the other, the executions can effectively be re-ordered.
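Until delete becomes observably synchronous, a client-side workaround for the race described above is to poll until the deletion has actually taken effect before issuing the create. The sketch below shows just the wait-loop pattern under stated assumptions: a plain in-memory topic set stands in for the real cluster-state check (ZK path or metadata lookup), and all names are invented for illustration:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BooleanSupplier;

public class DeleteThenCreateSketch {
    // Poll until the condition holds or the deadline passes.
    static boolean waitUntil(BooleanSupplier condition, long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) return true;
            Thread.sleep(10); // poll interval
        }
        return condition.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated topic registry standing in for real cluster state.
        Set<String> topics = ConcurrentHashMap.newKeySet();
        topics.add("t1");

        // The "delete" completes asynchronously, as in the JIRA scenario.
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            topics.remove("t1");
        }).start();

        // Only re-create once the delete is observed to have taken effect.
        boolean deleted = waitUntil(() -> !topics.contains("t1"), 1000);
        if (deleted) {
            topics.add("t1"); // safe to re-create now
        }
        System.out.println(deleted);
    }
}
```

The point of the pattern is that the create is gated on the observed effect of the delete, not on the controller's acknowledgement, which is what gets re-ordered.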



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-1776) Re-factor out existing tools that have been implemented behind the CLI

2017-01-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-1776.
-
Resolution: Duplicate

> Re-factor out existing tools that have been implemented behind the CLI
> --
>
> Key: KAFKA-1776
> URL: https://issues.apache.org/jira/browse/KAFKA-1776
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2412: MINOR: reduce verbosity of cache flushes

2017-01-20 Thread xvrl
GitHub user xvrl opened a pull request:

https://github.com/apache/kafka/pull/2412

MINOR: reduce verbosity of cache flushes

This log message tends to be extremely verbose when state stores are being 
restored

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xvrl/kafka reduce-verbosity

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2412.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2412


commit b1565e94e1c9dd4d50ac6e2dbeb7865be642b403
Author: Xavier Léauté 
Date:   2017-01-20T18:40:58Z

MINOR: reduce verbosity of cache flushes




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-4678) Create separate page for Connect docs

2017-01-20 Thread Shikhar Bhushan (JIRA)
Shikhar Bhushan created KAFKA-4678:
--

 Summary: Create separate page for Connect docs
 Key: KAFKA-4678
 URL: https://issues.apache.org/jira/browse/KAFKA-4678
 Project: Kafka
  Issue Type: Improvement
  Components: documentation, KafkaConnect
Reporter: Shikhar Bhushan
Assignee: Ewen Cheslack-Postava
Priority: Minor


The single-page http://kafka.apache.org/documentation/ is quite long, and will 
get even longer with the inclusion of info on Kafka Connect's included 
transformations.

Recently Kafka Streams documentation was split off to its own page with a short 
overview in the main doc page. We should do the same for {{connect.html}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1379) Partition reassignment resets clock for time-based retention

2017-01-20 Thread Andrew Olson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832155#comment-15832155
 ] 

Andrew Olson commented on KAFKA-1379:
-

[~jjkoshy] / [~becket_qin] should this Jira now be closed as a duplicate of 
KAFKA-3163?

https://cwiki.apache.org/confluence/display/KAFKA/KIP-33+-+Add+a+time+based+log+index#KIP-33-Addatimebasedlogindex-Enforcetimebasedlogretention

> Partition reassignment resets clock for time-based retention
> 
>
> Key: KAFKA-1379
> URL: https://issues.apache.org/jira/browse/KAFKA-1379
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>
> Since retention is driven off mod-times reassigned partitions will result in
> data that has been on a leader to be retained for another full retention
> cycle. E.g., if retention is seven days and you reassign partitions on the
> sixth day then those partitions will remain on the replicas for another
> seven days.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Improve License Header Check

2017-01-20 Thread Matthias J. Sax
Hi,

I opened a PR to improve the check for file license headers (the check
is currently quite weak and it's possible to have files with an invalid
header).

https://github.com/apache/kafka/pull/2303/

As some people have IDE settings for adding a header automatically, we
wanted to give a heads-up that you will need to update your IDE settings.



-Matthias




signature.asc
Description: OpenPGP digital signature


[jira] [Commented] (KAFKA-4677) Avoid unnecessary task movement across threads of the same process during rebalance

2017-01-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832090#comment-15832090
 ] 

ASF GitHub Bot commented on KAFKA-4677:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2411

[WIP] KAFKA-4677: Avoid unnecessary task movement across threads of the 
same process during rebalance



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kstreams-446

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2411.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2411


commit a899864e484e859e44a71ebe8c9890dedfc403cc
Author: Damian Guy 
Date:   2017-01-20T15:19:28Z

Avoid unnecessary task movement across threads of the same process during 
rebalances




> Avoid unnecessary task movement across threads of the same process during 
> rebalance
> ---
>
> Key: KAFKA-4677
> URL: https://issues.apache.org/jira/browse/KAFKA-4677
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.3.0
>
>
> StreamPartitionAssigner tries to follow a sticky assignment policy to avoid 
> expensive task migration. Currently, it does this on a best-effort basis.
> We observed a case in which tasks migrated for no good reason, so we assume 
> the current implementation can be improved to be more sticky.
> The concrete scenario is as follows:
> - assume we have a topology with 3 tasks: A, B, C
> - assume we have 3 threads, each executing one task: 1-A, 2-B, 3-C
> - for some reason, thread 1 goes down and a rebalance gets triggered
> - threads 2 and 3 get their partitions revoked
> - sometimes (not sure what the exact condition for this is), the new 
> assignment flips the assignment for tasks B and C (task A is newly assigned 
> to either thread 2 or 3)
> - possible new assignment: 2-(A,C) and 3-B
> There is no obvious reason (like load balancing) why the task assignment for 
> B and C should change to the other thread, resulting in unnecessary task 
> migration.
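The stickiness goal described above can be illustrated with a toy assignor that first hands each surviving thread back the tasks it previously owned and only then distributes orphaned tasks. This is a sketch of the idea only, not the StreamPartitionAssigner implementation; all names are invented:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class StickySketch {
    // prevOwners: thread -> tasks it owned before the rebalance; live: surviving threads.
    static Map<String, List<String>> assign(Map<String, List<String>> prevOwners,
                                            List<String> live, List<String> allTasks) {
        Map<String, List<String>> out = new LinkedHashMap<>();
        for (String t : live) out.put(t, new ArrayList<>());
        List<String> orphaned = new ArrayList<>(allTasks);
        // Phase 1: keep tasks on their previous threads where possible (stickiness).
        for (String t : live) {
            for (String task : prevOwners.getOrDefault(t, List.of())) {
                if (orphaned.remove(task)) out.get(t).add(task);
            }
        }
        // Phase 2: round-robin only the tasks whose owner went away.
        int i = 0;
        for (String task : orphaned) out.get(live.get(i++ % live.size())).add(task);
        return out;
    }

    public static void main(String[] args) {
        Map<String, List<String>> prev = Map.of("1", List.of("A"), "2", List.of("B"), "3", List.of("C"));
        // Thread 1 dies: B stays on 2, C stays on 3, and only orphaned task A moves.
        System.out.println(assign(prev, List.of("2", "3"), List.of("A", "B", "C")));
        // prints {2=[B, A], 3=[C]}
    }
}
```

In the JIRA's scenario this two-phase approach never produces the 2-(A,C)/3-B flip, because B and C are pinned in phase 1 before any redistribution happens.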



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2411: [WIP] KAFKA-4677: Avoid unnecessary task movement ...

2017-01-20 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2411

[WIP] KAFKA-4677: Avoid unnecessary task movement across threads of the 
same process during rebalance



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kstreams-446

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2411.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2411


commit a899864e484e859e44a71ebe8c9890dedfc403cc
Author: Damian Guy 
Date:   2017-01-20T15:19:28Z

Avoid unnecessary task movement across threads of the same process during 
rebalances




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4677) Avoid unnecessary task movement across threads of the same process during rebalance

2017-01-20 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy updated KAFKA-4677:
--
Summary: Avoid unnecessary task movement across threads of the same process 
during rebalance  (was: Make StreamPartitionAssigner more sticky when assigning 
to threads)

> Avoid unnecessary task movement across threads of the same process during 
> rebalance
> ---
>
> Key: KAFKA-4677
> URL: https://issues.apache.org/jira/browse/KAFKA-4677
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.3.0
>
>
> StreamPartitionAssigner tries to follow a sticky assignment policy to avoid 
> expensive task migration. Currently, it does this on a best-effort basis.
> We observed a case in which tasks migrated for no good reason, so we assume 
> the current implementation can be improved to be more sticky.
> The concrete scenario is as follows:
> - assume we have a topology with 3 tasks: A, B, C
> - assume we have 3 threads, each executing one task: 1-A, 2-B, 3-C
> - for some reason, thread 1 goes down and a rebalance gets triggered
> - threads 2 and 3 get their partitions revoked
> - sometimes (not sure what the exact condition for this is), the new 
> assignment flips the assignment for tasks B and C (task A is newly assigned 
> to either thread 2 or 3)
> - possible new assignment: 2-(A,C) and 3-B
> There is no obvious reason (like load balancing) why the task assignment for 
> B and C should change to the other thread, resulting in unnecessary task 
> migration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2410: MINOR: Fix typo in WordCountProcessorDemo

2017-01-20 Thread wmarshall484
Github user wmarshall484 closed the pull request at:

https://github.com/apache/kafka/pull/2410


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2410: MINOR: Fix typo in WordCountProcessorDemo

2017-01-20 Thread wmarshall484
GitHub user wmarshall484 reopened a pull request:

https://github.com/apache/kafka/pull/2410

MINOR: Fix typo in WordCountProcessorDemo

`bin-kafka-console-producer.sh` should be `bin/kafka-console-producer.sh`.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wmarshall484/kafka typo-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2410.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2410


commit 34bf9829cd8215eb7d61d2202ca2bc6795ebe8a8
Author: Will Marshall 
Date:   2017-01-20T08:01:39Z

Fix typo in WordCountProcessorDemo

`bin-kafka-console-producer.sh` should be `bin/kafka-console-producer.sh`.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3450) Producer blocks on send to topic that doesn't exist if auto create is disabled

2017-01-20 Thread Esoga Simmons (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832030#comment-15832030
 ] 

Esoga Simmons commented on KAFKA-3450:
--

This issue is a duplicate of [KAFKA-3539], reported by [~ozhurakousky] and assigned to 
[~omkreddy], and is also related to [KAFKA-1843].


> Producer blocks on send to topic that doesn't exist if auto create is disabled
> --
>
> Key: KAFKA-3450
> URL: https://issues.apache.org/jira/browse/KAFKA-3450
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.1
>Reporter: Michal Turek
>Assignee: Jun Rao
>Priority: Critical
>
> {{producer.send()}} blocks for {{max.block.ms}} (default 60 seconds) if 
> the destination topic doesn't exist and automatic topic creation is 
> disabled. A warning from NetworkClient containing UNKNOWN_TOPIC_OR_PARTITION 
> is logged every 100 ms in a loop until the 60-second timeout expires, and 
> the operation is not recoverable.
> Preconditions:
> - Kafka 0.9.0.1 with default configuration and auto.create.topics.enable=false
> - Kafka 0.9.0.1 clients
> Minimal example code:
> https://github.com/avast/kafka-tests/blob/master/src/main/java/com/avast/kafkatests/othertests/nosuchtopic/NoSuchTopicTest.java
> {noformat}
> /**
>  * Test of sending to a topic that does not exist while automatic creation of
>  * topics is disabled in Kafka (auto.create.topics.enable=false).
>  */
> public class NoSuchTopicTest {
>     private static final Logger LOGGER = LoggerFactory.getLogger(NoSuchTopicTest.class);
>
>     public static void main(String[] args) {
>         Properties properties = new Properties();
>         properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
>         properties.setProperty(ProducerConfig.CLIENT_ID_CONFIG, NoSuchTopicTest.class.getSimpleName());
>         properties.setProperty(ProducerConfig.MAX_BLOCK_MS_CONFIG, "1000"); // Default is 60 seconds
>
>         try (Producer<String, String> producer = new KafkaProducer<>(properties, new StringSerializer(), new StringSerializer())) {
>             LOGGER.info("Sending message");
>             producer.send(new ProducerRecord<>("ThisTopicDoesNotExist", "key", "value"), (metadata, exception) -> {
>                 if (exception != null) {
>                     LOGGER.error("Send failed: {}", exception.toString());
>                 } else {
>                     LOGGER.info("Send successful: {}-{}/{}", metadata.topic(), metadata.partition(), metadata.offset());
>                 }
>             });
>
>             LOGGER.info("Sending message");
>             producer.send(new ProducerRecord<>("ThisTopicDoesNotExistToo", "key", "value"), (metadata, exception) -> {
>                 if (exception != null) {
>                     LOGGER.error("Send failed: {}", exception.toString());
>                 } else {
>                     LOGGER.info("Send successful: {}-{}/{}", metadata.topic(), metadata.partition(), metadata.offset());
>                 }
>             });
>         }
>     }
> }
> {noformat}
> Related output
> {noformat}
> 2016-03-23 12:44:37.725 INFO  c.a.k.o.nosuchtopic.NoSuchTopicTest [main]: 
> Sending message (NoSuchTopicTest.java:26)
> 2016-03-23 12:44:37.830 WARN  o.a.kafka.clients.NetworkClient 
> [kafka-producer-network-thread | NoSuchTopicTest]: Error while fetching 
> metadata with correlation id 0 : 
> {ThisTopicDoesNotExist=UNKNOWN_TOPIC_OR_PARTITION} (NetworkClient.java:582)
> 2016-03-23 12:44:37.928 WARN  o.a.kafka.clients.NetworkClient 
> [kafka-producer-network-thread | NoSuchTopicTest]: Error while fetching 
> metadata with correlation id 1 : 
> {ThisTopicDoesNotExist=UNKNOWN_TOPIC_OR_PARTITION} (NetworkClient.java:582)
> 2016-03-23 12:44:38.028 WARN  o.a.kafka.clients.NetworkClient 
> [kafka-producer-network-thread | NoSuchTopicTest]: Error while fetching 
> metadata with correlation id 2 : 
> {ThisTopicDoesNotExist=UNKNOWN_TOPIC_OR_PARTITION} (NetworkClient.java:582)
> 2016-03-23 12:44:38.130 WARN  o.a.kafka.clients.NetworkClient 
> [kafka-producer-network-thread | NoSuchTopicTest]: Error while fetching 
> metadata with correlation id 3 : 
> {ThisTopicDoesNotExist=UNKNOWN_TOPIC_OR_PARTITION} (NetworkClient.java:582)
> 2016-03-23 12:44:38.231 WARN  o.a.kafka.clients.NetworkClient 
> [kafka-producer-network-thread | NoSuchTopicTest]: Error while fetching 
> metadata with correlation id 4 : 
> {ThisTopicDoesNotExist=UNKNOWN_TOPIC_OR_PARTITION} (NetworkClient.java:582)
> 2016-03-23 12:44:38.332 WARN  o.a.kafka.clients.NetworkClient 
> [kafka-producer-network-thread | NoSuchTopicTest]: Error while fetching 
> metadata with correlation id 5 : 
> {ThisTopicDoesNotExist=UNKNOWN_TOPIC_OR_PARTITION} 

[jira] [Commented] (KAFKA-4477) Node reduces its ISR to itself, and doesn't recover. Other nodes do not take leadership, cluster remains sick until node is restarted.

2017-01-20 Thread Christos Trochalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831955#comment-15831955
 ] 

Christos Trochalakis commented on KAFKA-4477:
-

We also experienced the same issue today. The affected node terminated after 5 
minutes when it ran short of file descriptors.

Are there debian packages for 0.10.1.1? http://packages.confluent.io/deb/3.1 
currently has 0.10.1.0.

> Node reduces its ISR to itself, and doesn't recover. Other nodes do not take 
> leadership, cluster remains sick until node is restarted.
> --
>
> Key: KAFKA-4477
> URL: https://issues.apache.org/jira/browse/KAFKA-4477
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.1.0
> Environment: RHEL7
> java version "1.8.0_66"
> Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
> Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
>Reporter: Michael Andre Pearce (IG)
>Assignee: Apurva Mehta
>Priority: Critical
>  Labels: reliability
> Fix For: 0.10.1.1
>
> Attachments: 2016_12_15.zip, issue_node_1001_ext.log, 
> issue_node_1001.log, issue_node_1002_ext.log, issue_node_1002.log, 
> issue_node_1003_ext.log, issue_node_1003.log, kafka.jstack, 
> state_change_controller.tar.gz
>
>
> We have encountered a critical issue that has re-occurred in different 
> physical environments. We haven't worked out what is going on, though we do 
> have a nasty workaround to keep the service alive.
> We have not had this issue on clusters still running 0.9.0.1.
> We have noticed a node randomly shrinking the ISRs for the partitions it 
> owns down to itself; moments later we see other nodes having disconnects, 
> finally followed by application issues, where producing to these partitions 
> is blocked.
> It seems only restarting the Kafka Java process resolves the issue.
> This has occurred multiple times, and all network and machine monitoring 
> shows the machine never left the network or had any other glitches.
> Below are logs from the issue.
> Node 7:
> [2016-12-01 07:01:28,112] INFO Partition 
> [com_ig_trade_v1_position_event--demo--compacted,10] on broker 7: Shrinking 
> ISR for partition [com_ig_trade_v1_position_event--demo--compacted,10] from 
> 1,2,7 to 7 (kafka.cluster.Partition)
> All other nodes:
> [2016-12-01 07:01:38,172] WARN [ReplicaFetcherThread-0-7], Error in fetch 
> kafka.server.ReplicaFetcherThread$FetchRequest@5aae6d42 
> (kafka.server.ReplicaFetcherThread)
> java.io.IOException: Connection to 7 was disconnected before the response was 
> read
> All clients:
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> After this occurs, we then see an increasing number of close_waits and open 
> file descriptors on the sick machine.
> As a workaround to keep the service alive, we are putting in an automated 
> process that tails the logs and matches the following regex; where 
> new_partitions is just the node itself, we restart the node.
> "\[(?P.+)\] INFO Partition \[.*\] on broker .* Shrinking ISR for 
> partition \[.*\] from (?P.+) to (?P.+) 
> \(kafka.cluster.Partition\)"
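The detection step of that workaround can be sketched as follows. The named groups in the quoted regex were lost in the archive, so the group names here (ts, broker, from, to) are my own; the pattern shape follows the broker log line quoted above.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the tail-and-regex workaround: flag broker log lines where a
// partition's ISR shrinks to just the broker itself.
public class IsrShrinkDetector {

    // Group names are hypothetical; the original names did not survive the archive.
    private static final Pattern SHRINK = Pattern.compile(
            "\\[(?<ts>.+)\\] INFO Partition \\[.*\\] on broker (?<broker>\\d+): " +
            "Shrinking ISR for partition \\[.*\\] from (?<from>[\\d,]+) " +
            "to (?<to>[\\d,]+) \\(kafka\\.cluster\\.Partition\\)");

    // True when the new ISR is exactly the broker itself, the condition the
    // workaround restarts the node on.
    public static boolean shrunkToSelf(String logLine) {
        Matcher m = SHRINK.matcher(logLine);
        return m.matches() && m.group("to").equals(m.group("broker"));
    }

    public static void main(String[] args) {
        String line = "[2016-12-01 07:01:28,112] INFO Partition " +
                "[com_ig_trade_v1_position_event--demo--compacted,10] on broker 7: " +
                "Shrinking ISR for partition " +
                "[com_ig_trade_v1_position_event--demo--compacted,10] from " +
                "1,2,7 to 7 (kafka.cluster.Partition)";
        System.out.println(shrunkToSelf(line)); // true: ISR shrank to broker 7 itself
    }
}
```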



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4675) Subsequent CreateTopic command could be lost after a DeleteTopic command

2017-01-20 Thread huxi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831945#comment-15831945
 ] 

huxi commented on KAFKA-4675:
-

It seems deleting '/brokers/topics/' comes fourth from the bottom during 
topic deletion, while creating it comes first in the createTopic logic. Did 
you encounter a TopicExistsException that failed the creation?

> Subsequent CreateTopic command could be lost after a DeleteTopic command
> 
>
> Key: KAFKA-4675
> URL: https://issues.apache.org/jira/browse/KAFKA-4675
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>  Labels: admin
>
> This was discovered while investigating KAFKA-3896: if an admin client sends 
> a delete-topic command and a create-topic command consecutively, even if it 
> waits for the response of the first command before issuing the second, there 
> is still a race condition in which the create-topic command could be "lost".
> This is because these commands are currently all asynchronous, as defined in 
> KIP-4, and the controller returns the response once it has written the 
> corresponding data to the ZK path. That data can be handled by different 
> listener threads at different paces, and if the thread handling the create 
> is faster than the other, the executions can be effectively re-ordered.
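Given such a race, one common client-side guard is to poll until the deletion is observably complete before issuing the create. A generic sketch of that guard follows; the existence check (e.g. probing the ZK topic path) is passed in as a hypothetical BooleanSupplier, not a real Kafka or ZooKeeper API call.

```java
import java.util.function.BooleanSupplier;

// Client-side guard against the delete-then-create race: wait until the topic
// is observably gone before creating it again.
public class TopicRecreateGuard {

    // Polls topicExists up to maxAttempts times, sleeping backoffMs between
    // attempts. Returns true once the topic is gone (safe to create), false
    // if it never disappeared or the wait was interrupted.
    public static boolean waitUntilGone(BooleanSupplier topicExists,
                                        int maxAttempts,
                                        long backoffMs) {
        for (int i = 0; i < maxAttempts; i++) {
            if (!topicExists.getAsBoolean()) {
                return true;              // deletion completed; safe to create
            }
            try {
                Thread.sleep(backoffMs);  // deletion still propagating; retry
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;             // interrupted; let the caller decide
            }
        }
        return false;                     // gave up; caller should surface an error
    }

    public static void main(String[] args) {
        // Simulate a deletion that completes on the third check.
        int[] checks = {0};
        boolean gone = waitUntilGone(() -> ++checks[0] < 3, 10, 1L);
        System.out.println(gone);
    }
}
```

This does not remove the controller-side race, but it keeps a client from issuing the create while the delete is still being applied.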



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

2017-01-20 Thread postmas...@inn.ru (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831908#comment-15831908
 ] 

postmas...@inn.ru commented on KAFKA-1207:
--

Delivery is delayed to these recipients or groups:

e...@inn.ru

Subject: [jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

This message hasn't been delivered yet. Delivery will continue to be attempted.

The server will keep trying to deliver this message for the next 1 days, 19 
hours and 51 minutes. You'll be notified if the message can't be delivered by 
that time.







Diagnostic information for administrators:

Generating server: lc-exch-04.inn.local
Receiving server: inn.ru (109.105.153.25)

e...@inn.ru
Server at inn.ru (109.105.153.25) returned '400 4.4.7 Message delayed'
1/20/2017 3:02:17 PM - Server at inn.ru (109.105.153.25) returned '441 4.4.1 
Error communicating with target host: "Failed to connect. Winsock error code: 
10060, Win32 error code: 10060." Last endpoint attempted was 109.105.153.25:25'

Original message headers:

Received: from lc-exch-04.inn.local (10.64.37.99) by lc-exch-04.inn.local
 (10.64.37.99) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384) id 15.1.669.32; Fri, 20
 Jan 2017 14:03:33 +0300
Received: from lc-asp-02.inn.ru (10.64.37.105) by lc-exch-04.inn.local
 (10.64.37.100) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384) id 15.1.669.32 via
 Frontend Transport; Fri, 20 Jan 2017 14:03:33 +0300
Received-SPF: None (no SPF record) identity=mailfrom; client-ip=209.188.14.142; 
helo=spamd4-us-west.apache.org; envelope-from=j...@apache.org; 
receiver=e...@inn.ru
X-Envelope-From: 
Received: from spamd4-us-west.apache.org (pnap-us-west-generic-nat.apache.org 
[209.188.14.142])
by lc-asp-02.inn.ru (Postfix) with ESMTP id 236C2400C3
for ; Fri, 20 Jan 2017 12:03:33 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
by spamd4-us-west.apache.org (ASF Mail Server at 
spamd4-us-west.apache.org) with ESMTP id 6156AC12C9
for ; Fri, 20 Jan 2017 11:03:32 + (UTC)
X-Virus-Scanned: Debian amavisd-new at spamd4-us-west.apache.org
X-Spam-Flag: NO
X-Spam-Score: -1.999
X-Spam-Level:
X-Spam-Status: No, score=-1.999 tagged_above=-999 required=6.31
tests=[KAM_LAZY_DOMAIN_SECURITY=1, RP_MATCHES_RCVD=-2.999]
autolearn=disabled
Received: from mx1-lw-eu.apache.org ([10.40.0.8])
by localhost (spamd4-us-west.apache.org [10.40.0.11]) (amavisd-new, 
port 10024)
with ESMTP id Dg54eZI3Quis for ;
Fri, 20 Jan 2017 11:03:30 + (UTC)
Received: from mailrelay1-us-west.apache.org (mailrelay1-us-west.apache.org 
[209.188.14.139])
by mx1-lw-eu.apache.org (ASF Mail Server at mx1-lw-eu.apache.org) with 
ESMTP id 37D535FC5A
for ; Fri, 20 Jan 2017 11:03:29 + (UTC)
Received: from jira-lw-us.apache.org (unknown [207.244.88.139])
by mailrelay1-us-west.apache.org (ASF Mail Server at 
mailrelay1-us-west.apache.org) with ESMTP id 34FFCE027E
for ; Fri, 20 Jan 2017 11:03:28 + (UTC)
Received: from jira-lw-us.apache.org (localhost [127.0.0.1])
by jira-lw-us.apache.org (ASF Mail Server at jira-lw-us.apache.org) 
with ESMTP id A7C622528B
for ; Fri, 20 Jan 2017 11:03:26 + (UTC)
Date: Fri, 20 Jan 2017 11:03:26 +
From: "postmas...@inn.ru (JIRA)" 
To: 
Message-ID: 
In-Reply-To: 
References:  

Subject: [jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache
 Mesos
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-JIRA-FingerPrint: 30527f35849b9dde25b450d4833f0394
X-inn-MailScanner-ESVA-Information: Please contact  for more information
X-inn-MailScanner-ESVA-ID: 236C2400C3.A58B2
X-inn-MailScanner-ESVA: Found to be clean
X-inn-MailScanner-ESVA-From: j...@apache.org
X-inn-MailScanner-ESVA-Watermark: 1485515013.52107@MSvhjLdJKoQT/L9mPEP9kg
Return-Path: j...@apache.org
X-OrganizationHeadersPreserved: lc-exch-04.inn.local
X-CrossPremisesHeadersFilteredByDsnGenerator: lc-exch-04.inn.local



> Launch Kafka from within Apache Mesos
> -
>
> Key: KAFKA-1207
> URL: https://issues.apache.org/jira/browse/KAFKA-1207
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: mesos
> Attachments: KAFKA-1207_2014-01-19_00:04:58.patch, 
> KAFKA-1207_2014-01-19_00:48:49.patch, KAFKA-1207.patch
>
>
> There are a few components to 

[jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

2017-01-20 Thread postmas...@inn.ru (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831909#comment-15831909
 ] 

postmas...@inn.ru commented on KAFKA-1207:
--

Delivery is delayed to these recipients or groups:

e...@inn.ru

Subject: [jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

This message hasn't been delivered yet. Delivery will continue to be attempted.

The server will keep trying to deliver this message for the next 1 days, 19 
hours and 58 minutes. You'll be notified if the message can't be delivered by 
that time.







Diagnostic information for administrators:

Generating server: lc-exch-04.inn.local
Receiving server: inn.ru (109.105.153.25)

e...@inn.ru
Server at inn.ru (109.105.153.25) returned '400 4.4.7 Message delayed'
1/20/2017 3:02:17 PM - Server at inn.ru (109.105.153.25) returned '441 4.4.1 
Error communicating with target host: "Failed to connect. Winsock error code: 
10060, Win32 error code: 10060." Last endpoint attempted was 109.105.153.25:25'

Original message headers:

Received: from lc-exch-04.inn.local (10.64.37.99) by lc-exch-04.inn.local
 (10.64.37.99) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384) id 15.1.669.32; Fri, 20
 Jan 2017 14:10:34 +0300
Received: from lc-asp-02.inn.ru (10.64.37.104) by lc-exch-04.inn.local
 (10.64.37.100) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384) id 15.1.669.32 via
 Frontend Transport; Fri, 20 Jan 2017 14:10:34 +0300
Received-SPF: None (no SPF record) identity=mailfrom; client-ip=209.188.14.142; 
helo=spamd1-us-west.apache.org; envelope-from=j...@apache.org; 
receiver=e...@inn.ru
X-Envelope-From: 
Received: from spamd1-us-west.apache.org (pnap-us-west-generic-nat.apache.org 
[209.188.14.142])
by lc-asp-02.inn.ru (Postfix) with ESMTP id 4DC0B400C6
for ; Fri, 20 Jan 2017 12:10:33 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
by spamd1-us-west.apache.org (ASF Mail Server at 
spamd1-us-west.apache.org) with ESMTP id D2B94C019A
for ; Fri, 20 Jan 2017 11:10:32 + (UTC)
X-Virus-Scanned: Debian amavisd-new at spamd1-us-west.apache.org
X-Spam-Flag: NO
X-Spam-Score: -1.999
X-Spam-Level:
X-Spam-Status: No, score=-1.999 tagged_above=-999 required=6.31
tests=[KAM_LAZY_DOMAIN_SECURITY=1, RP_MATCHES_RCVD=-2.999]
autolearn=disabled
Received: from mx1-lw-eu.apache.org ([10.40.0.8])
by localhost (spamd1-us-west.apache.org [10.40.0.7]) (amavisd-new, port 
10024)
with ESMTP id qETdyFHQcsOw for ;
Fri, 20 Jan 2017 11:10:31 + (UTC)
Received: from mailrelay1-us-west.apache.org (mailrelay1-us-west.apache.org 
[209.188.14.139])
by mx1-lw-eu.apache.org (ASF Mail Server at mx1-lw-eu.apache.org) with 
ESMTP id F37B65F30B
for ; Fri, 20 Jan 2017 11:10:30 + (UTC)
Received: from jira-lw-us.apache.org (unknown [207.244.88.139])
by mailrelay1-us-west.apache.org (ASF Mail Server at 
mailrelay1-us-west.apache.org) with ESMTP id 76733E0272
for ; Fri, 20 Jan 2017 11:10:27 + (UTC)
Received: from jira-lw-us.apache.org (localhost [127.0.0.1])
by jira-lw-us.apache.org (ASF Mail Server at jira-lw-us.apache.org) 
with ESMTP id AA5772528B
for ; Fri, 20 Jan 2017 11:10:26 + (UTC)
Date: Fri, 20 Jan 2017 11:10:26 +
From: "postmas...@inn.ru (JIRA)" 
To: 
Message-ID: 
In-Reply-To: 
References:  

Subject: [jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache
 Mesos
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-JIRA-FingerPrint: 30527f35849b9dde25b450d4833f0394
X-inn-MailScanner-ESVA-Information: Please contact  for more information
X-inn-MailScanner-ESVA-ID: 4DC0B400C6.A7FE6
X-inn-MailScanner-ESVA: Found to be clean
X-inn-MailScanner-ESVA-From: j...@apache.org
X-inn-MailScanner-ESVA-Watermark: 1485515433.65336@8V2j7XLmtllyohnvOGh4MA
Return-Path: j...@apache.org
X-OrganizationHeadersPreserved: lc-exch-04.inn.local
X-CrossPremisesHeadersFilteredByDsnGenerator: lc-exch-04.inn.local



> Launch Kafka from within Apache Mesos
> -
>
> Key: KAFKA-1207
> URL: https://issues.apache.org/jira/browse/KAFKA-1207
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: mesos
> Attachments: KAFKA-1207_2014-01-19_00:04:58.patch, 
> KAFKA-1207_2014-01-19_00:48:49.patch, KAFKA-1207.patch
>
>
> There are a few components to 

[jira] [Created] (KAFKA-4677) Make StreamPartitionAssigner more sticky when assigning to threads

2017-01-20 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-4677:
-

 Summary: Make StreamPartitionAssigner more sticky when assigning 
to threads
 Key: KAFKA-4677
 URL: https://issues.apache.org/jira/browse/KAFKA-4677
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.2.0
Reporter: Damian Guy
Assignee: Damian Guy
 Fix For: 0.10.3.0


StreamPartitionAssigner tries to follow a sticky assignment policy to avoid 
expensive task migration. Currently, it does this on a best-effort basis.
We observed a case in which tasks migrated for no good reason, so we assume 
the current implementation could be made more sticky.
The concrete scenario is as follows:
- assume we have a topology with 3 tasks: A, B, C
- assume we have 3 threads, each executing one task: 1-A, 2-B, 3-C
- for some reason, thread 1 goes down and a rebalance gets triggered
- threads 2 and 3 get their partitions revoked
- sometimes (we are not sure what the exact condition for this is), the new 
assignment flips the assignment for tasks B and C (task A is newly assigned 
to either thread 2 or 3)
- possible new assignment: 2-(A,C) and 3-B
There is no obvious reason (like load balancing) why the task assignment for 
B and C changes to the other thread, resulting in unnecessary task migration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-4676) Kafka consumers gets stuck for some partitions

2017-01-20 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831639#comment-15831639
 ] 

Ismael Juma edited comment on KAFKA-4676 at 1/20/17 12:25 PM:
--

Thanks for the report. Do the broker logs show any issue? Also, we highly 
recommend upgrading to 0.10.1.1 as it includes a number of important fixes.


was (Author: ijuma):
Thanks for the report. Do the broker logs show any issue? Also, we highly 
recommend upgrade to 0.10.1.1 as it includes a number of important fixes.

> Kafka consumers gets stuck for some partitions
> --
>
> Key: KAFKA-4676
> URL: https://issues.apache.org/jira/browse/KAFKA-4676
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Vishal Shukla
>Priority: Critical
>  Labels: consumer, reliability
> Attachments: stuck-topic-thread-dump.log
>
>
> We recently upgraded to Kafka 0.10.1.0. We are frequently facing an issue 
> where Kafka consumers suddenly get stuck for some partitions.
> Attached thread dump.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4676) Kafka consumers gets stuck for some partitions

2017-01-20 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831639#comment-15831639
 ] 

Ismael Juma commented on KAFKA-4676:


Thanks for the report. Do the broker logs show any issue? Also, we highly 
recommend upgrading to 0.10.1.1 as it includes a number of important fixes.

> Kafka consumers gets stuck for some partitions
> --
>
> Key: KAFKA-4676
> URL: https://issues.apache.org/jira/browse/KAFKA-4676
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Vishal Shukla
>Priority: Critical
>  Labels: consumer, reliability
> Attachments: stuck-topic-thread-dump.log
>
>
> We recently upgraded to Kafka 0.10.1.0. We are frequently facing an issue 
> where Kafka consumers suddenly get stuck for some partitions.
> Attached thread dump.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4676) Kafka consumers gets stuck for some partitions

2017-01-20 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4676:
---
Labels: consumer reliability  (was: consumer)

> Kafka consumers gets stuck for some partitions
> --
>
> Key: KAFKA-4676
> URL: https://issues.apache.org/jira/browse/KAFKA-4676
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Vishal Shukla
>Priority: Critical
>  Labels: consumer, reliability
> Attachments: stuck-topic-thread-dump.log
>
>
> We recently upgraded to Kafka 0.10.1.0. We are frequently facing an issue 
> where Kafka consumers suddenly get stuck for some partitions.
> Attached thread dump.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4676) Kafka consumers gets stuck for some partitions

2017-01-20 Thread Vishal Shukla (JIRA)
Vishal Shukla created KAFKA-4676:


 Summary: Kafka consumers gets stuck for some partitions
 Key: KAFKA-4676
 URL: https://issues.apache.org/jira/browse/KAFKA-4676
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.1.0
Reporter: Vishal Shukla
Priority: Critical
 Attachments: stuck-topic-thread-dump.log

We recently upgraded to Kafka 0.10.1.0. We are frequently facing an issue 
where Kafka consumers suddenly get stuck for some partitions.

Attached thread dump.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

2017-01-20 Thread postmas...@inn.ru (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831549#comment-15831549
 ] 

postmas...@inn.ru commented on KAFKA-1207:
--

Delivery is delayed to these recipients or groups:

e...@inn.ru

Subject: [jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

This message hasn't been delivered yet. Delivery will continue to be attempted.

The server will keep trying to deliver this message for the next 1 days, 19 
hours and 54 minutes. You'll be notified if the message can't be delivered by 
that time.







Diagnostic information for administrators:

Generating server: lc-exch-02.inn.local
Receiving server: inn.ru (109.105.153.25)

e...@inn.ru
Server at inn.ru (109.105.153.25) returned '400 4.4.7 Message delayed'
1/20/2017 10:59:09 AM - Server at inn.ru (109.105.153.25) returned '441 4.4.1 
Error communicating with target host: "Failed to connect. Winsock error code: 
10060, Win32 error code: 10060." Last endpoint attempted was 109.105.153.25:25'

Original message headers:

Received: from lc-exch-04.inn.local (10.64.37.99) by lc-exch-02.inn.local
 (10.64.37.98) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384) id 15.1.669.32; Fri, 20
 Jan 2017 10:03:31 +0300
Received: from lc-asp-02.inn.ru (10.64.37.104) by lc-exch-04.inn.local
 (10.64.37.100) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384) id 15.1.669.32 via
 Frontend Transport; Fri, 20 Jan 2017 10:03:31 +0300
Received-SPF: None (no SPF record) identity=mailfrom; client-ip=209.188.14.142; 
helo=spamd3-us-west.apache.org; envelope-from=j...@apache.org; 
receiver=e...@inn.ru
X-Envelope-From: 
Received: from spamd3-us-west.apache.org (pnap-us-west-generic-nat.apache.org 
[209.188.14.142])
by lc-asp-02.inn.ru (Postfix) with ESMTP id B598C400C6
for ; Fri, 20 Jan 2017 08:03:30 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
by spamd3-us-west.apache.org (ASF Mail Server at 
spamd3-us-west.apache.org) with ESMTP id 4FAFD181B9D
for ; Fri, 20 Jan 2017 07:03:30 + (UTC)
X-Virus-Scanned: Debian amavisd-new at spamd3-us-west.apache.org
X-Spam-Flag: NO
X-Spam-Score: -1.999
X-Spam-Level:
X-Spam-Status: No, score=-1.999 tagged_above=-999 required=6.31
tests=[KAM_LAZY_DOMAIN_SECURITY=1, RP_MATCHES_RCVD=-2.999]
autolearn=disabled
Received: from mx1-lw-us.apache.org ([10.40.0.8])
by localhost (spamd3-us-west.apache.org [10.40.0.10]) (amavisd-new, 
port 10024)
with ESMTP id yMSJoOzABaqU for ;
Fri, 20 Jan 2017 07:03:28 + (UTC)
Received: from mailrelay1-us-west.apache.org (mailrelay1-us-west.apache.org 
[209.188.14.139])
by mx1-lw-us.apache.org (ASF Mail Server at mx1-lw-us.apache.org) with 
ESMTP id 7B2155FBD1
for ; Fri, 20 Jan 2017 07:03:28 + (UTC)
Received: from jira-lw-us.apache.org (unknown [207.244.88.139])
by mailrelay1-us-west.apache.org (ASF Mail Server at 
mailrelay1-us-west.apache.org) with ESMTP id 740B7E026E
for ; Fri, 20 Jan 2017 07:03:27 + (UTC)
Received: from jira-lw-us.apache.org (localhost [127.0.0.1])
by jira-lw-us.apache.org (ASF Mail Server at jira-lw-us.apache.org) 
with ESMTP id 91A222528B
for ; Fri, 20 Jan 2017 07:03:26 + (UTC)
Date: Fri, 20 Jan 2017 07:03:26 +
From: "postmas...@inn.ru (JIRA)" 
To: 
Message-ID: 
In-Reply-To: 
References:  

Subject: [jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache
 Mesos
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-JIRA-FingerPrint: 30527f35849b9dde25b450d4833f0394
X-inn-MailScanner-ESVA-Information: Please contact  for more information
X-inn-MailScanner-ESVA-ID: B598C400C6.A8C8F
X-inn-MailScanner-ESVA: Found to be clean
X-inn-MailScanner-ESVA-From: j...@apache.org
X-inn-MailScanner-ESVA-Watermark: 1485500611.19048@Wl0gbdXXLVA9ORX35V9gug
Return-Path: j...@apache.org
X-OrganizationHeadersPreserved: lc-exch-02.inn.local
X-CrossPremisesHeadersFilteredByDsnGenerator: lc-exch-02.inn.local



> Launch Kafka from within Apache Mesos
> -
>
> Key: KAFKA-1207
> URL: https://issues.apache.org/jira/browse/KAFKA-1207
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: mesos
> Attachments: KAFKA-1207_2014-01-19_00:04:58.patch, 
> KAFKA-1207_2014-01-19_00:48:49.patch, KAFKA-1207.patch
>
>
> There are a few components to 

[jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

2017-01-20 Thread postmas...@inn.ru (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831544#comment-15831544
 ] 

postmas...@inn.ru commented on KAFKA-1207:
--

Delivery is delayed to these recipients or groups:

e...@inn.ru

Subject: [jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

This message hasn't been delivered yet. Delivery will continue to be attempted.

The server will keep trying to deliver this message for the next 1 days, 19 
hours and 58 minutes. You'll be notified if the message can't be delivered by 
that time.







Diagnostic information for administrators:

Generating server: lc-exch-04.inn.local
Receiving server: inn.ru (109.105.153.25)

e...@inn.ru
Server at inn.ru (109.105.153.25) returned '400 4.4.7 Message delayed'
1/20/2017 10:52:11 AM - Server at inn.ru (109.105.153.25) returned '441 4.4.1 
Error communicating with target host: "Failed to connect. Winsock error code: 
10060, Win32 error code: 10060." Last endpoint attempted was 109.105.153.25:25'




> Launch Kafka from within Apache Mesos
> -
>
> Key: KAFKA-1207
> URL: https://issues.apache.org/jira/browse/KAFKA-1207
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: mesos
> Attachments: KAFKA-1207_2014-01-19_00:04:58.patch, 
> KAFKA-1207_2014-01-19_00:48:49.patch, KAFKA-1207.patch
>
>
> There are a few components to 

RE: MirrorMaker across Kafka Clusters with different versions

2017-01-20 Thread david.franklin
Thanks for confirming that Gwen.

Best wishes,
David

-Original Message-
From: Gwen Shapira [mailto:g...@confluent.io] 
Sent: 19 January 2017 20:41
To: dev@kafka.apache.org
Subject: Re: MirrorMaker across Kafka Clusters with different versions

As you figured out - 0.10 clients (including Connect and MirrorMaker) will not 
work against Kafka 0.8.

On the other hand, a 0.8 client will work with Kafka 0.10, so you just need to 
use a 0.8 MirrorMaker to do the replication.

Gwen

On Wed, Jan 18, 2017 at 8:14 AM,  wrote:

> Can MirrorMaker work across different major versions of Kafka, 
> specifically from a v10 producer to a v8 consumer?
>
> I suspect, given that the client API is not backwards compatible, that 
> the answer unfortunately is no.
> But it would be useful to get a definitive answer on that, and any 
> suggestions in case the answer is no.
>
> The context is that I have a Kafka cluster at version 10 so that I can 
> use the Kafka Connect capability.  One of the target sinks is a Kafka 
> cluster at version 8 but I suspect Kafka Connect will not be able to 
> communicate with this cluster because the client version doesn't match 
> the target cluster.
>
> There is no scope to upgrade the Kafka 8 cluster.
>



--
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter  | blog 
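
Gwen's suggestion, running a 0.8 MirrorMaker between the two clusters, might look
roughly like the sketch below. The broker/ZooKeeper addresses, file names, and
topic whitelist are illustrative assumptions, not details from this thread:

```shell
# Rough sketch: run the 0.8 MirrorMaker (invoked via kafka-run-class.sh
# from a 0.8 distribution). All addresses and file names are hypothetical.
#
# consumer.properties (0.8 consumer reading from the 0.10 source cluster;
# older clients can talk to newer brokers):
#   zookeeper.connect=source-zk:2181
#   group.id=mirror-maker-group
#
# producer.properties (0.8 producer writing to the 0.8 target cluster):
#   metadata.broker.list=target-broker:9092

bin/kafka-run-class.sh kafka.tools.MirrorMaker \
  --consumer.config consumer.properties \
  --producer.config producer.properties \
  --whitelist '.*'
```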



[jira] [Commented] (KAFKA-4566) Can't Symlink to Kafka bins

2017-01-20 Thread Akhilesh Naidu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831447#comment-15831447
 ] 

Akhilesh Naidu commented on KAFKA-4566:
---

If the above suggestion seems OK, can someone assign this ticket to me?

> Can't Symlink to Kafka bins
> ---
>
> Key: KAFKA-4566
> URL: https://issues.apache.org/jira/browse/KAFKA-4566
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.1.1
>Reporter: Stephane Maarek
>  Labels: newbie
>
> in the kafka consumer for example, the last line is :
> https://github.com/apache/kafka/blob/trunk/bin/kafka-console-consumer.sh#L21
> {code}
> exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleConsumer "$@"
> {code}
> if I create a symlink using 
> {code}
> ln -s
> {code}
> it doesn't resolve the right directory name because of $(dirname $0) 
> I believe the right way is to do:
> {code}
> "$(dirname "$(readlink -e "$0")")"
> {code}
>  
> Any thoughts on that before I do a PR?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
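
The failure mode described above can be reproduced in a few lines of shell.
All paths below are throwaway temp paths, not Kafka's real layout:

```shell
# Reproduce the symlink problem: when a script is invoked through a
# symlink, $(dirname $0) yields the symlink's directory, not the real
# script's directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/kafka/bin" "$tmp/links"

cat > "$tmp/kafka/bin/tool.sh" <<'EOF'
#!/bin/sh
# Print what dirname "$0" resolves to at run time.
dirname "$0"
EOF
chmod +x "$tmp/kafka/bin/tool.sh"
ln -s "$tmp/kafka/bin/tool.sh" "$tmp/links/tool.sh"

# Through the symlink, dirname "$0" points at .../links, so a sibling
# kafka-run-class.sh would not be found there:
"$tmp/links/tool.sh"

# GNU readlink -e (not available on stock macOS) resolves the real location:
dirname "$(readlink -e "$tmp/links/tool.sh")"
```

The `readlink -e` form is GNU-specific, which is what motivates the portable
loop proposed in the follow-up comments on this ticket.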


[jira] [Comment Edited] (KAFKA-4566) Can't Symlink to Kafka bins

2017-01-20 Thread Akhilesh Naidu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831442#comment-15831442
 ] 

Akhilesh Naidu edited comment on KAFKA-4566 at 1/20/17 9:20 AM:


One approach could be to simulate the working of the readlink function in a 
portable manner. Below is the code snippet:

{code:title=Example.sh|borderStyle=solid}
# Function to get target file in case of symlinks
GetTargetFile () {
FILE=$0
cd `dirname $FILE`
FILE=`basename $FILE`

# Iterate down a chain of symlinks
while [ -L "$FILE" ]
do
FILE=`readlink $FILE`
cd `dirname $FILE`
FILE=`basename $FILE`
done

# Get the canonicalized name of the target file.
FILE=`pwd -P`/$FILE
echo $FILE
}

exec $(dirname $(GetTargetFile $0))/kafka-run-class.sh kafka.tools.ConsoleConsumer "$@"
{code}

I have tested this on:
1) MacOS (system details):-
bash -version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin15)
Copyright (C) 2007 Free Software Foundation, Inc.
2) CentOS release 6.8 (system details):-
bash -version
GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 




was (Author: akhilesh_naidu):
One approach could be to simulate the working of the readlink function in a 
portable manner. Below is the code snippet:

{code:title=Example.sh|borderStyle=solid}
# Function to get target file in case of symlinks
GetTargetFile () {
FILE=$0
cd `dirname $FILE`
FILE=`basename $FILE`

# Iterate down a chain of symlinks
while [ -L "$FILE" ]
do
FILE=`readlink $FILE`
cd `dirname $FILE`
FILE=`basename $FILE`
done

# Append the file to the present directory,
# to get the canonicalized name.
FILE=`pwd -P`/$FILE
echo $FILE
}

exec $(dirname $(GetTargetFile $0))/kafka-run-class.sh kafka.tools.ConsoleConsumer "$@"
{code}

I have tested this on:
1) MacOS (system details):-
bash -version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin15)
Copyright (C) 2007 Free Software Foundation, Inc.
2) CentOS release 6.8 (system details):-
bash -version
GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 



> Can't Symlink to Kafka bins
> ---
>
> Key: KAFKA-4566
> URL: https://issues.apache.org/jira/browse/KAFKA-4566
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.1.1
>Reporter: Stephane Maarek
>  Labels: newbie
>
> in the kafka consumer for example, the last line is :
> https://github.com/apache/kafka/blob/trunk/bin/kafka-console-consumer.sh#L21
> {code}
> exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleConsumer "$@"
> {code}
> if I create a symlink using 
> {code}
> ln -s
> {code}
> it doesn't resolve the right directory name because of $(dirname $0) 
> I believe the right way is to do:
> {code}
> "$(dirname "$(readlink -e "$0")")"
> {code}
>  
> Any thoughts on that before I do a PR?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-4566) Can't Symlink to Kafka bins

2017-01-20 Thread Akhilesh Naidu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831442#comment-15831442
 ] 

Akhilesh Naidu edited comment on KAFKA-4566 at 1/20/17 9:16 AM:


One approach could be to simulate the working of the readlink function in a 
portable manner. Below is the code snippet:

{code:title=Example.sh|borderStyle=solid}
# Function to get target file in case of symlinks
GetTargetFile () {
FILE=$0
cd `dirname $FILE`
FILE=`basename $FILE`

# Iterate down a chain of symlinks
while [ -L "$FILE" ]
do
FILE=`readlink $FILE`
cd `dirname $FILE`
FILE=`basename $FILE`
done

# Append the file to the present directory,
# to get the canonicalized name.
FILE=`pwd -P`/$FILE
echo $FILE
}

exec $(dirname $(GetTargetFile $0))/kafka-run-class.sh kafka.tools.ConsoleConsumer "$@"
{code}

I have tested this on:
1) MacOS (system details):-
bash -version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin15)
Copyright (C) 2007 Free Software Foundation, Inc.
2) CentOS release 6.8 (system details):-
bash -version
GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 




was (Author: akhilesh_naidu):
One approach could be to simulate the working of the readlink function in a 
portable manner. Below is the code snippet:

--
# Function to get target file in case of symlinks
GetTargetFile () {
FILE=$0
cd `dirname $FILE`
FILE=`basename $FILE`

# Iterate down a chain of symlinks
while [ -L "$FILE" ]
do
FILE=`readlink $FILE`
cd `dirname $FILE`
FILE=`basename $FILE`
done

# Append the file to the present directory,
# to get the canonicalized name.
FILE=`pwd -P`/$FILE
echo $FILE
}

exec $(dirname $(GetTargetFile $0))/kafka-run-class.sh kafka.tools.ConsoleConsumer "$@"
--

I have tested this on:
1) MacOS (system details):-
bash -version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin15)
Copyright (C) 2007 Free Software Foundation, Inc.
2) CentOS release 6.8 (system details):-
bash -version
GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 



> Can't Symlink to Kafka bins
> ---
>
> Key: KAFKA-4566
> URL: https://issues.apache.org/jira/browse/KAFKA-4566
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.1.1
>Reporter: Stephane Maarek
>  Labels: newbie
>
> in the kafka consumer for example, the last line is :
> https://github.com/apache/kafka/blob/trunk/bin/kafka-console-consumer.sh#L21
> {code}
> exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleConsumer "$@"
> {code}
> if I create a symlink using 
> {code}
> ln -s
> {code}
> it doesn't resolve the right directory name because of $(dirname $0) 
> I believe the right way is to do:
> {code}
> "$(dirname "$(readlink -e "$0")")"
> {code}
>  
> Any thoughts on that before I do a PR?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4566) Can't Symlink to Kafka bins

2017-01-20 Thread Akhilesh Naidu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831442#comment-15831442
 ] 

Akhilesh Naidu commented on KAFKA-4566:
---

One approach could be to simulate the working of the readlink function in a 
portable manner. Below is the code snippet:

--
# Function to get target file in case of symlinks
GetTargetFile () {
FILE=$0
cd `dirname $FILE`
FILE=`basename $FILE`

# Iterate down a chain of symlinks
while [ -L "$FILE" ]
do
FILE=`readlink $FILE`
cd `dirname $FILE`
FILE=`basename $FILE`
done

# Append the file to the present directory,
# to get the canonicalized name.
FILE=`pwd -P`/$FILE
echo $FILE
}

exec $(dirname $(GetTargetFile $0))/kafka-run-class.sh kafka.tools.ConsoleConsumer "$@"
--

I have tested this on:
1) MacOS (system details):-
bash -version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin15)
Copyright (C) 2007 Free Software Foundation, Inc.
2) CentOS release 6.8 (system details):-
bash -version
GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
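
As a quick sanity check, the function can be exercised against a deliberately
constructed symlink chain. The version below is lightly adapted (it takes its
path as `$1` and quotes expansions); the temp-dir layout is illustrative:

```shell
# Lightly adapted GetTargetFile: takes the path as $1 and quotes
# expansions; otherwise the same symlink-chasing loop as proposed above.
GetTargetFile () {
  FILE=$1
  cd "$(dirname "$FILE")" || return 1
  FILE=$(basename "$FILE")

  # Iterate down a chain of symlinks
  while [ -L "$FILE" ]; do
    FILE=$(readlink "$FILE")
    cd "$(dirname "$FILE")" || return 1
    FILE=$(basename "$FILE")
  done

  # Canonicalized name of the target file
  echo "$(pwd -P)/$FILE"
}

# Exercise it against a two-level symlink chain under a temp dir.
tmp=$(mktemp -d)
mkdir -p "$tmp/real" "$tmp/a" "$tmp/b"
touch "$tmp/real/script.sh"
ln -s "$tmp/real/script.sh" "$tmp/a/script.sh"
ln -s "$tmp/a/script.sh" "$tmp/b/script.sh"

resolved=$(GetTargetFile "$tmp/b/script.sh")
echo "$resolved"
```

Because the function is invoked inside `$(...)`, its `cd` calls run in a
subshell and do not change the caller's working directory.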



> Can't Symlink to Kafka bins
> ---
>
> Key: KAFKA-4566
> URL: https://issues.apache.org/jira/browse/KAFKA-4566
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.1.1
>Reporter: Stephane Maarek
>  Labels: newbie
>
> in the kafka consumer for example, the last line is :
> https://github.com/apache/kafka/blob/trunk/bin/kafka-console-consumer.sh#L21
> {code}
> exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleConsumer "$@"
> {code}
> if I create a symlink using 
> {code}
> ln -s
> {code}
> it doesn't resolve the right directory name because of $(dirname $0) 
> I believe the right way is to do:
> {code}
> "$(dirname "$(readlink -e "$0")")"
> {code}
>  
> Any thoughts on that before I do a PR?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4675) Subsequent CreateTopic command could be lost after a DeleteTopic command

2017-01-20 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-4675:


 Summary: Subsequent CreateTopic command could be lost after a 
DeleteTopic command
 Key: KAFKA-4675
 URL: https://issues.apache.org/jira/browse/KAFKA-4675
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang


This was discovered while investigating KAFKA-3896: if an admin client sends a 
delete topic command and a create topic command consecutively, even if it waits 
for the response of the first command before issuing the second, there is still 
a race condition in which the create topic command could be "lost".

This is because these commands are currently all asynchronous, as defined in 
KIP-4: the controller returns the response once it has written the 
corresponding data to the ZK path. Those writes can be handled by different 
listener threads at different paces, and if the thread handling the create is 
faster than the other, the executions could effectively be re-ordered.
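
A hypothetical client-side mitigation (not from this ticket; flag names follow
the 0.10-era kafka-topics.sh, and the ZooKeeper address and topic name are made
up) is to poll until the deletion has visibly completed before re-creating:

```shell
# Rather than trusting the ordering of the two async responses, poll until
# the topic has actually disappeared before issuing the create.
ZK=localhost:2181
TOPIC=my-topic

bin/kafka-topics.sh --zookeeper "$ZK" --delete --topic "$TOPIC"

# Wait for the controller to finish the deletion (the topic may be listed
# as "marked for deletion" in the meantime).
while bin/kafka-topics.sh --zookeeper "$ZK" --list | grep -q "^${TOPIC}"; do
  sleep 1
done

bin/kafka-topics.sh --zookeeper "$ZK" --create --topic "$TOPIC" \
  --partitions 1 --replication-factor 1
```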



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2410: MINOR: Fix typo in WordCountProcessorDemo

2017-01-20 Thread wmarshall484
GitHub user wmarshall484 opened a pull request:

https://github.com/apache/kafka/pull/2410

MINOR: Fix typo in WordCountProcessorDemo

`bin-kafka-console-producer.sh` should be `bin/kafka-console-producer.sh`.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wmarshall484/kafka typo-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2410.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2410


commit 34bf9829cd8215eb7d61d2202ca2bc6795ebe8a8
Author: Will Marshall 
Date:   2017-01-20T08:01:39Z

Fix typo in WordCountProcessorDemo

`bin-kafka-console-producer.sh` should be `bin/kafka-console-producer.sh`.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---