[GitHub] kafka pull request #2093: MINOR: Add description of how consumer wakeup acts...

2016-11-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2093


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #1020

2016-11-04 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4024; Override client metadata backoff on topic changes and avoid

[jason] MINOR: Add description of how consumer wakeup acts if no threads are

--
[...truncated 954 lines...]
kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingOneTopic PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingMultipleTopics STARTED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingMultipleTopics PASSED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK STARTED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForTopicsEntityType STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForTopicsEntityType PASSED

kafka.admin.ConfigCommandTest > testUserClientQuotaOpts STARTED

kafka.admin.ConfigCommandTest > testUserClientQuotaOpts PASSED

kafka.admin.ConfigCommandTest > shouldAddTopicConfig STARTED

kafka.admin.ConfigCommandTest > shouldAddTopicConfig PASSED

kafka.admin.ConfigCommandTest > shouldAddClientConfig STARTED

kafka.admin.ConfigCommandTest > shouldAddClientConfig PASSED

kafka.admin.ConfigCommandTest > shouldDeleteBrokerConfig STARTED

kafka.admin.ConfigCommandTest > shouldDeleteBrokerConfig PASSED

kafka.admin.ConfigCommandTest > testQuotaConfigEntity STARTED

kafka.admin.ConfigCommandTest > testQuotaConfigEntity PASSED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedBracketConfig STARTED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedBracketConfig PASSED

kafka.admin.ConfigCommandTest > shouldFailIfUnrecognisedEntityType STARTED

kafka.admin.ConfigCommandTest > shouldFailIfUnrecognisedEntityType PASSED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedEntityName STARTED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedEntityName PASSED

kafka.admin.ConfigCommandTest > shouldSupportCommaSeparatedValues STARTED

kafka.admin.ConfigCommandTest > shouldSupportCommaSeparatedValues PASSED

kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedConfig 
STARTED

kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedConfig 
PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForBrokersEntityType STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForBrokersEntityType PASSED

kafka.admin.ConfigCommandTest > shouldAddBrokerConfig STARTED

kafka.admin.ConfigCommandTest > shouldAddBrokerConfig PASSED

kafka.admin.ConfigCommandTest > testQuotaDescribeEntities STARTED

kafka.admin.ConfigCommandTest > testQuotaDescribeEntities PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForClientsEntityType STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForClientsEntityType PASSED

kafka.admin.DescribeConsumerGroupTest > testDescribeExistingGroup STARTED

kafka.admin.DescribeConsumerGroupTest > testDescribeExistingGroup PASSED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeConsumersWithNoAssignedPartitions STARTED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeConsumersWithNoAssignedPartitions PASSED

kafka.admin.DescribeConsumerGroupTest > testDescribeNonExistingGroup STARTED

kafka.admin.DescribeConsumerGroupTest > testDescribeNonExistingGroup PASSED

kafka.admin.DescribeConsumerGroupTest > testDescribeExistingGroupWithNoMembers 
STARTED

kafka.admin.DescribeConsumerGroupTest > testDescribeExistingGroupWithNoMembers 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsComma

Build failed in Jenkins: kafka-trunk-jdk7 #1670

2016-11-04 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4024; Override client metadata backoff on topic changes and avoid

[jason] MINOR: Add description of how consumer wakeup acts if no threads are

--
[...truncated 6618 lines...]
kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl PASSED

kafka.api.UserQuotaTest > testProducerConsumerOverrideUnthrottled STARTED

kafka.api.UserQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.UserQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.UserQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.UserQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.UserQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.PlaintextConsumerTest > testEarliestOrLatestOffsets STARTED

kafka.api.PlaintextConsumerTest > testEarliestOrLatestOffsets PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate STARTED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate PASSED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions STARTED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMs STARTED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMs PASSED

kafka.api.PlaintextConsumerTest > testOffsetsForTimes STARTED

kafka.api.PlaintextConsumerTest > testOffsetsForTimes PASSED

kafka.api.PlaintextConsumerTest > testSubsequentPatternSubscription STARTED

kafka.api.PlaintextConsumerTest > testSubsequentPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testAsyncCommit STARTED

kafka.api.PlaintextConsumerTest > testAsyncCommit PASSED

kafka.api.PlaintextConsumerTest > testLowMaxFetchSizeForRequestAndPartition 
STARTED

kafka.api.PlaintextConsumerTest > testLowMaxFetchSizeForRequestAndPartition 
PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling 
STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling 
PASSED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMsDelayInRevocation STARTED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMsDelayInRevocation PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForInvalidTopic STARTED

kafka.api.PlaintextConsumerTest > testPartitionsForInvalidTopic PASSED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance STARTED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.PlaintextConsumerTest > 
testFetchHonoursFetchSizeIfLargeRecordNotFirst STARTED

kafka.api.PlaintextConsumerTest > 
testFetchHonoursFetchSizeIfLargeRecordNotFirst PASSED

kafka.api.PlaintextConsumerTest > testSeek STARTED

kafka.api.PlaintextConsumerTest > testSeek PASSED

kafka.api.PlaintextConsumerTest > testPositionAndCommit STARTED

kafka.api.PlaintextConsumerTest > testPositionAndCommit PASSED

kafka.api.PlaintextConsumerTest > 
testFetchRecordLargerThanMaxPartitionFetchBytes STARTED

kafka.api.PlaintextConsumerTest > 
testFetchRecordLargerThanMaxPartitionFetchBytes PASSED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic STARTED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose PASSED

kafka.api.PlaintextConsumerTest > testFetchRecordLargerThanFetchMaxBytes STARTED

kafka.api.PlaintextConsumerTest > testFetchRecordLargerThanFetchMaxBytes PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose STARTED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose PASSED

kafka.api.PlaintextConsumerTest > testListTopics STARTED

kafka.api.PlaintextConsumerTest > testListTopics PASSED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions STARTED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testInterceptors STARTED

kafka.api.PlaintextConsumerTest > testInterceptors PASSED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription STARTED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription PASSED

kafka.api.PlaintextConsumerTest > testGroupConsumption STARTED

kafka.api.PlaintextConsumerTest > testGroupConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionsFor STARTED

kafka.api.PlaintextConsumerTest > testPartitionsFor PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance STARTED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue STARTED

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue PASSED

kafka.api.PlaintextConsumerTest > testMaxP

[jira] [Commented] (KAFKA-4371) Sporadic ConnectException shuts down the whole connect process

2016-11-04 Thread Sagar Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15636091#comment-15636091
 ] 

Sagar Rao commented on KAFKA-4371:
--

[~ewencp] By shutting down I meant that the processing of data doesn't happen 
anymore. I should have been a bit more careful in writing that. Since this is a 
connect-jdbc issue, I will close this ticket. I have created a separate issue 
on connect-jdbc:

https://github.com/confluentinc/kafka-connect-jdbc/issues/160

> Sporadic ConnectException shuts down the whole connect process
> --
>
> Key: KAFKA-4371
> URL: https://issues.apache.org/jira/browse/KAFKA-4371
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sagar Rao
>Priority: Critical
>
> I had set up a 2-node distributed Kafka Connect process. Everything went well 
> and I could see a lot of data flowing into the relevant Kafka topics.
> After some time, JdbcUtils.getCurrentTimeOnDB threw a ConnectException with 
> the following stacktrace:
> The last packet successfully received from the server was 792 milliseconds 
> ago.  The last packet sent successfully to the server was 286 milliseconds 
> ago. (io.confluent.connect.jdbc.source.JdbcSourceTask:234)
> [2016-11-02 12:42:06,116] ERROR Failed to get current time from DB using 
> query select CURRENT_TIMESTAMP; on database MySQL 
> (io.confluent.connect.jdbc.util.JdbcUtils:226)
> com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link 
> failure
> The last packet successfully received from the server was 1,855 milliseconds 
> ago.  The last packet sent successfully to the server was 557 milliseconds 
> ago.
>at sun.reflect.GeneratedConstructorAccessor51.newInstance(Unknown 
> Source)
>at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
>at 
> com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1117)
>at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3829)
>at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2449)
>at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2629)
>at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2719)
>at 
> com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2155)
>at 
> com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1379)
>at 
> com.mysql.jdbc.StatementImpl.createResultSetUsingServerFetch(StatementImpl.java:651)
>at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1527)
>at 
> io.confluent.connect.jdbc.util.JdbcUtils.getCurrentTimeOnDB(JdbcUtils.java:220)
>at 
> io.confluent.connect.jdbc.source.TimestampIncrementingTableQuerier.executeQuery(TimestampIncrementingTableQuerier.java:157)
>at 
> io.confluent.connect.jdbc.source.TableQuerier.maybeStartQuery(TableQuerier.java:78)
>at 
> io.confluent.connect.jdbc.source.TimestampIncrementingTableQuerier.maybeStartQuery(TimestampIncrementingTableQuerier.java:57)
>at 
> io.confluent.connect.jdbc.source.JdbcSourceTask.poll(JdbcSourceTask.java:207)
>at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:155)
>at 
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
>at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
>at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>at java.lang.Thread.run(Thread.java:745)
> Caused by: java.net.SocketException: Broken pipe (Write failed)
>at java.net.SocketOutputStream.socketWrite0(Native Method)
>at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
>at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
>at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
>at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3810)
>... 20 more
> This was just a minor glitch in the connection, as the EC2 instances are able 
> to connect to the MySQL Aurora instances without any issues.
> But after this exception (which appears a number of times), none of the 
> connectors' tasks are executing. Beyond this, all I see in the logs is 
> [2016-11-02 16:17:41,983] ERROR Failed to run q

[jira] [Resolved] (KAFKA-4371) Sporadic ConnectException shuts down the whole connect process

2016-11-04 Thread Sagar Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sagar Rao resolved KAFKA-4371.
--
Resolution: Not A Problem

> Sporadic ConnectException shuts down the whole connect process
> --
>
> Key: KAFKA-4371
> URL: https://issues.apache.org/jira/browse/KAFKA-4371
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sagar Rao
>Priority: Critical
>
> I had set up a 2-node distributed Kafka Connect process. Everything went well 
> and I could see a lot of data flowing into the relevant Kafka topics.
> After some time, JdbcUtils.getCurrentTimeOnDB threw a ConnectException with 
> the following stacktrace:
> The last packet successfully received from the server was 792 milliseconds 
> ago.  The last packet sent successfully to the server was 286 milliseconds 
> ago. (io.confluent.connect.jdbc.source.JdbcSourceTask:234)
> [2016-11-02 12:42:06,116] ERROR Failed to get current time from DB using 
> query select CURRENT_TIMESTAMP; on database MySQL 
> (io.confluent.connect.jdbc.util.JdbcUtils:226)
> com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link 
> failure
> The last packet successfully received from the server was 1,855 milliseconds 
> ago.  The last packet sent successfully to the server was 557 milliseconds 
> ago.
>at sun.reflect.GeneratedConstructorAccessor51.newInstance(Unknown 
> Source)
>at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
>at 
> com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1117)
>at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3829)
>at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2449)
>at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2629)
>at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2719)
>at 
> com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2155)
>at 
> com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1379)
>at 
> com.mysql.jdbc.StatementImpl.createResultSetUsingServerFetch(StatementImpl.java:651)
>at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1527)
>at 
> io.confluent.connect.jdbc.util.JdbcUtils.getCurrentTimeOnDB(JdbcUtils.java:220)
>at 
> io.confluent.connect.jdbc.source.TimestampIncrementingTableQuerier.executeQuery(TimestampIncrementingTableQuerier.java:157)
>at 
> io.confluent.connect.jdbc.source.TableQuerier.maybeStartQuery(TableQuerier.java:78)
>at 
> io.confluent.connect.jdbc.source.TimestampIncrementingTableQuerier.maybeStartQuery(TimestampIncrementingTableQuerier.java:57)
>at 
> io.confluent.connect.jdbc.source.JdbcSourceTask.poll(JdbcSourceTask.java:207)
>at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:155)
>at 
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
>at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
>at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>at java.lang.Thread.run(Thread.java:745)
> Caused by: java.net.SocketException: Broken pipe (Write failed)
>at java.net.SocketOutputStream.socketWrite0(Native Method)
>at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
>at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
>at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
>at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3810)
>... 20 more
> This was just a minor glitch in the connection, as the EC2 instances are able 
> to connect to the MySQL Aurora instances without any issues.
> But after this exception (which appears a number of times), none of the 
> connectors' tasks are executing. Beyond this, all I see in the logs is 
> [2016-11-02 16:17:41,983] ERROR Failed to run query for table 
> TimestampIncrementingTableQuerier{name='eng_match_series', query='null', 
> topicPrefix='ci-eng-', timestampColumn='modified', incrementingColumn='id'}: 
> com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No 
> operations allowed after statement closed. 
> (io.confluent.connect.jdbc.source.JdbcSourceTask:

[jira] [Created] (KAFKA-4379) Remove caching of dirty and removed keys from StoreChangeLogger

2016-11-04 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-4379:
-

 Summary: Remove caching of dirty and removed keys from 
StoreChangeLogger
 Key: KAFKA-4379
 URL: https://issues.apache.org/jira/browse/KAFKA-4379
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 0.10.1.0
Reporter: Damian Guy
Assignee: Damian Guy
Priority: Minor
 Fix For: 0.10.1.1


The StoreChangeLogger currently keeps a cache of dirty and removed keys and 
will batch the changelog records so that we don't send a record for each 
update. However, with KIP-63 this is unnecessary, as the batching and de-duping 
are done by the caching layer. Further, the StoreChangeLogger relies on 
context.timestamp(), which is likely to be incorrect when caching is enabled.
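
As a rough illustration of the change, a change logger without key caching simply forwards each update to the changelog topic (a hedged Java sketch, not the actual Kafka Streams code; the Collector interface below stands in for Streams' RecordCollector):

{code}
// Hedged sketch: with the KIP-63 record cache doing the batching and
// de-duplication upstream, the change logger can forward every change it
// receives straight to the changelog topic.
class SimpleChangeLogger<K, V> {

    // Assumed stand-in for Kafka Streams' RecordCollector.
    interface Collector<K, V> {
        void send(String topic, K key, V value, long timestamp);
    }

    private final String changelogTopic;
    private final Collector<K, V> collector;

    SimpleChangeLogger(String changelogTopic, Collector<K, V> collector) {
        this.changelogTopic = changelogTopic;
        this.collector = collector;
    }

    // No dirty/removed-key sets and no deferred flush: each change is sent
    // immediately, using the timestamp of the record that triggered it rather
    // than a possibly stale context.timestamp().
    void logChange(K key, V value, long recordTimestamp) {
        collector.send(changelogTopic, key, value, recordTimestamp);
    }
}
{code}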



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-4379) Remove caching of dirty and removed keys from StoreChangeLogger

2016-11-04 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4379 started by Damian Guy.
-
> Remove caching of dirty and removed keys from StoreChangeLogger
> ---
>
> Key: KAFKA-4379
> URL: https://issues.apache.org/jira/browse/KAFKA-4379
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Minor
> Fix For: 0.10.1.1
>
>
> The StoreChangeLogger currently keeps a cache of dirty and removed keys and 
> will batch the changelog records such that we don't send a record for each 
> update. However, with KIP-63 this is unnecessary as the batching and 
> de-duping is done by the caching layer. Further, the StoreChangeLogger relies 
> on context.timestamp() which is likely to be incorrect when caching is enabled



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2103: KAFKA-4379: Remove caching of dirty and removed ke...

2016-11-04 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2103

KAFKA-4379: Remove caching of dirty and removed keys from StoreChangeLogger

The `StoreChangeLogger` currently keeps a cache of dirty and removed keys 
and will batch the changelog records such that we don't send a record for each 
update. However, with KIP-63 this is unnecessary as the batching and de-duping 
is done by the caching layer. Further, the `StoreChangeLogger` relies on 
`context.timestamp()` which is likely to be incorrect when caching is enabled

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka store-change-logger

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2103.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2103


commit 9acd0cf3521b9fabec7b53fe147f445a6bf82b5a
Author: Damian Guy 
Date:   2016-11-04T12:36:59Z

remove caching from StoreChangeLogger




---


[jira] [Commented] (KAFKA-4379) Remove caching of dirty and removed keys from StoreChangeLogger

2016-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15636248#comment-15636248
 ] 

ASF GitHub Bot commented on KAFKA-4379:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2103

KAFKA-4379: Remove caching of dirty and removed keys from StoreChangeLogger

The `StoreChangeLogger` currently keeps a cache of dirty and removed keys 
and will batch the changelog records such that we don't send a record for each 
update. However, with KIP-63 this is unnecessary as the batching and de-duping 
is done by the caching layer. Further, the `StoreChangeLogger` relies on 
`context.timestamp()` which is likely to be incorrect when caching is enabled

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka store-change-logger

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2103.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2103


commit 9acd0cf3521b9fabec7b53fe147f445a6bf82b5a
Author: Damian Guy 
Date:   2016-11-04T12:36:59Z

remove caching from StoreChangeLogger




> Remove caching of dirty and removed keys from StoreChangeLogger
> ---
>
> Key: KAFKA-4379
> URL: https://issues.apache.org/jira/browse/KAFKA-4379
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Minor
> Fix For: 0.10.1.1
>
>
> The StoreChangeLogger currently keeps a cache of dirty and removed keys and 
> will batch the changelog records such that we don't send a record for each 
> update. However, with KIP-63 this is unnecessary as the batching and 
> de-duping is done by the caching layer. Further, the StoreChangeLogger relies 
> on context.timestamp() which is likely to be incorrect when caching is enabled



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3151) kafka-consumer-groups.sh fail with sasl enabled

2016-11-04 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15636247#comment-15636247
 ] 

Balint Molnar commented on KAFKA-3151:
--

[~linbao111] please create a properties file, for example groupprop.properties.
Put the following line into the file:
{code}
security.protocol=SASL_PLAINTEXT
{code} 
Use the following command:
{code}
./bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server 
slave1.otocyon.com:9092 --list --command-config groupprop.properties
{code}

> kafka-consumer-groups.sh fail with sasl enabled 
> 
>
> Key: KAFKA-3151
> URL: https://issues.apache.org/jira/browse/KAFKA-3151
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
> Environment: redhat as6.5
>Reporter: linbao111
>
> ./bin/kafka-consumer-groups.sh --new-consumer  --bootstrap-server 
> slave1.otocyon.com:9092 --list
> Error while executing consumer group command Request METADATA failed on 
> brokers List(Node(-1, slave1.otocyon.com, 9092))
> java.lang.RuntimeException: Request METADATA failed on brokers List(Node(-1, 
> slave1.otocyon.com, 9092))
> at kafka.admin.AdminClient.sendAnyNode(AdminClient.scala:73)
> at kafka.admin.AdminClient.findAllBrokers(AdminClient.scala:93)
> at kafka.admin.AdminClient.listAllGroups(AdminClient.scala:101)
> at 
> kafka.admin.AdminClient.listAllGroupsFlattened(AdminClient.scala:122)
> at 
> kafka.admin.AdminClient.listAllConsumerGroupsFlattened(AdminClient.scala:126)
> at 
> kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.list(ConsumerGroupCommand.scala:310)
> at 
> kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:61)
> at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
> same error for:
> bin/kafka-run-class.sh kafka.admin.ConsumerGroupCommand  --bootstrap-server 
> slave16:9092,app:9092 --describe --group test-consumer-group  --new-consumer
> Error while executing consumer group command Request GROUP_COORDINATOR failed 
> on brokers List(Node(-1, slave16, 9092), Node(-2, app, 9092))
> java.lang.RuntimeException: Request GROUP_COORDINATOR failed on brokers 
> List(Node(-1, slave16, 9092), Node(-2, app, 9092))
> at kafka.admin.AdminClient.sendAnyNode(AdminClient.scala:73)
> at kafka.admin.AdminClient.findCoordinator(AdminClient.scala:78)
> at kafka.admin.AdminClient.describeGroup(AdminClient.scala:130)
> at 
> kafka.admin.AdminClient.describeConsumerGroup(AdminClient.scala:152)
> at 
> kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.describeGroup(ConsumerGroupCommand.scala:314)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$class.describe(ConsumerGroupCommand.scala:84)
> at 
> kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.describe(ConsumerGroupCommand.scala:302)
> at 
> kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:63)
> at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4379) Remove caching of dirty and removed keys from StoreChangeLogger

2016-11-04 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy updated KAFKA-4379:
--
Status: Patch Available  (was: In Progress)

> Remove caching of dirty and removed keys from StoreChangeLogger
> ---
>
> Key: KAFKA-4379
> URL: https://issues.apache.org/jira/browse/KAFKA-4379
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Minor
> Fix For: 0.10.1.1
>
>
> The StoreChangeLogger currently keeps a cache of dirty and removed keys and 
> will batch the changelog records such that we don't send a record for each 
> update. However, with KIP-63 this is unnecessary as the batching and 
> de-duping is done by the caching layer. Further, the StoreChangeLogger relies 
> on context.timestamp() which is likely to be incorrect when caching is enabled



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4366) KafkaStreams.close() blocks indefinitely

2016-11-04 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy updated KAFKA-4366:
--
Status: Patch Available  (was: In Progress)

> KafkaStreams.close() blocks indefinitely
> 
>
> Key: KAFKA-4366
> URL: https://issues.apache.org/jira/browse/KAFKA-4366
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.1, 0.10.1.0
>Reporter: Michal Borowiecki
>Assignee: Damian Guy
>
> KafkaStreams.close() method calls join on all its threads without a timeout, 
> meaning indefinitely, which makes it prone to deadlocks and unfit to be used 
> in shutdown hooks.
> (KafkaStreams::close is used in numerous examples by confluent: 
> https://github.com/confluentinc/examples/tree/kafka-0.10.0.1-cp-3.0.1/kafka-streams/src/main/java/io/confluent/examples/streams
>  and 
> https://www.confluent.io/blog/introducing-kafka-streams-stream-processing-made-simple/
>  so we assumed it to be recommended practice)
> A deadlock happens, for instance, if System.exit() is called from within the 
> uncaughtExceptionHandler. (We need to call System.exit() from the 
> uncaughtExceptionHandler because the KAFKA-4355 issue shuts down the StreamThread 
> and, to recover, we want the process to exit, as our infrastructure will then 
> start it up again.)
> The System.exit call (from the uncaughtExceptionHandler, which runs in the 
> StreamThread) will execute the shutdown hook in a new thread and wait for 
> that thread to join. If the shutdown hook calls KafkaStreams.close, it will 
> in turn block waiting for the StreamThread to join, hence the deadlock.
> Runtime.addShutdownHook javadocs state:
> {quote}
> Shutdown hooks run at a delicate time in the life cycle of a virtual machine 
> and should therefore be coded defensively. They should, in particular, be 
> written to be thread-safe and to avoid deadlocks insofar as possible
> {quote}
> and
> {quote}
> Shutdown hooks should also finish their work quickly.
> {quote}
> Therefore the current implementation of KafkaStreams.close() which waits 
> forever for threads to join is completely unsuitable for use in a shutdown 
> hook. 
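
The reported scenario can be summarized in a short sketch (illustrative only, assuming the 0.10.x Streams API; topic names and configs are placeholders):

{code}
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class ShutdownHookDeadlockExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "deadlock-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        KStreamBuilder builder = new KStreamBuilder();
        builder.stream("input-topic").to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder, props);

        // Shutdown hook: close() joins every StreamThread with no timeout.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));

        // Runs on the failing StreamThread. System.exit() triggers the shutdown
        // hook and waits for it; the hook waits for this very StreamThread to
        // join via close(); neither can proceed, so the process hangs.
        streams.setUncaughtExceptionHandler((thread, throwable) -> System.exit(1));

        streams.start();
    }
}
{code}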



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


ReplicaFetcherThread stopped after ReplicaFetcherThread received a corrupted message

2016-11-04 Thread Jun H.
Hi all,

We recently discovered an issue in Kafka 0.9.0.1 where ReplicaFetcherThread
stopped after it received a corrupted message. As the same logic also exists
in Kafka 0.10.0.0 and 0.10.0.1, they may have a similar issue.

Here are system logs related to this issue.

> 2016-10-30 - 02:33:05,606 ERROR ReplicaFetcherThread-5-1590174474
> ReplicaFetcherThread.apply - Found invalid messages during fetch for
> partition [logs,41] offset 39021512238 error Message is corrupt (stored crc
> = 2028421553, computed crc = 3577227678)
> 2016-10-30 - 02:33:06,582 ERROR ReplicaFetcherThread-5-1590174474
> ReplicaFetcherThread.error - [ReplicaFetcherThread-5-1590174474], Error due
> to kafka.common.KafkaException: - error processing data for partition
> [logs,41] offset 39021512301 Caused - by: java.lang.RuntimeException:
> Offset mismatch: fetched offset = 39021512301, log end offset = 39021512238.


First, ReplicaFetcherThread got a corrupted message (offset 39021512238)
due to some blip.

The line at
https://github.com/apache/kafka/blob/0.9.0.1/core/src/main/scala/kafka/server/AbstractFetcherThread.scala#L138
threw an exception.

Then the line at
https://github.com/apache/kafka/blob/0.9.0.1/core/src/main/scala/kafka/server/AbstractFetcherThread.scala#L145
caught it and logged this error.

Because the line at
https://github.com/apache/kafka/blob/0.9.0.1/core/src/main/scala/kafka/server/AbstractFetcherThread.scala#L134
had already updated the topic partition's offset in partitionMap to the latest
fetched offset, ReplicaFetcherThread skipped the batch containing the corrupted
messages.

Based on
https://github.com/apache/kafka/blob/0.9.0.1/core/src/main/scala/kafka/server/AbstractFetcherThread.scala#L84,
the ReplicaFetcherThread then directly fetched the next batch of messages
(with offset 39021512301).

Next, ReplicaFetcherThread stopped because the log end offset (still
39021512238) didn't match the fetched message (offset 39021512301).

A quick fix is to move line 134 to be after line 138.
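
For illustration, the ordering problem and the proposed fix look roughly like this (a hedged Java sketch; the real code is Scala in AbstractFetcherThread and these names are not Kafka's API):

{code}
public class FetcherOrderingSketch {
    static long logEndOffset = 39021512238L;   // offset of the corrupt batch
    static long fetchOffset  = 39021512238L;   // next offset the fetcher requests

    // Simulates appending a fetched batch; throws if the batch is corrupt.
    static void appendToLog(long batchOffset, boolean corrupt) {
        if (corrupt) throw new RuntimeException("Message is corrupt");
        logEndOffset = batchOffset + 1;
    }

    public static void main(String[] args) {
        try {
            // Buggy order: advance the in-memory fetch offset before the append.
            fetchOffset = 39021512301L;        // offset of the *next* batch
            appendToLog(39021512238L, true);   // corrupt batch: append fails
        } catch (RuntimeException e) {
            // The error is logged and the loop continues, but fetchOffset has
            // already skipped the corrupt batch. The next fetch returns offset
            // 39021512301 while logEndOffset is still 39021512238 -- the
            // "Offset mismatch" that stops the ReplicaFetcherThread.
        }
        System.out.println("fetchOffset=" + fetchOffset + ", logEndOffset=" + logEndOffset);
        // The fix: append first, and advance fetchOffset only after the append
        // succeeds, so a corrupt batch is re-fetched instead of skipped.
    }
}
{code}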

It would be great to have your comments; please let me know if a JIRA issue
is needed. Thanks.

Best,

Jun


Re: why cant SslTransportLayer be muted before handshake completion?

2016-11-04 Thread radai
The root issue is that I may not have enough available memory to read from the
socket.
I'll do the simple thing and just avoid muting handshaking sockets for now.
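
For illustration, the workaround amounts to a guard like the following (a hedged sketch; only KafkaChannel.ready(), KafkaChannel.id() and Selector.mute(String) are real network-layer calls, the rest is assumed):

{code}
import java.util.List;
import org.apache.kafka.common.network.KafkaChannel;
import org.apache.kafka.common.network.Selector;

class BackPressureSketch {
    // Mute established channels when no memory is left for further reads;
    // channels still handshaking are left alone, per the workaround above.
    static void maybeMute(Selector selector, List<KafkaChannel> channels, long availableMemory) {
        if (availableMemory > 0)
            return;                              // enough memory: nothing to do
        for (KafkaChannel channel : channels) {
            if (channel.ready())                 // handshake finished: safe to mute
                selector.mute(channel.id());
        }
    }
}
{code}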

On Wed, Nov 2, 2016 at 6:59 PM, Harsha Chintalapani  wrote:

> Hi Radai,
>   One main reason is to keep the handshake details away from
> the application layer, i.e., the Kafka network layer that sends Kafka
> protocols doesn't need to worry about the handshake details; all it needs
> is a confirmation that the connection is complete and it can start sending
> Kafka protocols over the wire. So when a client tries to connect to a
> broker's SSL port, it goes through the handshake; if we mute the channel,
> the Kafka network layer needs to decide when to unmute it, which means
> leaking some of the SSL connection details into the Kafka Selector code.
> Given that we support multiple secure channels and each has its own
> handshake mechanism, we kept the selector code the same irrespective of
> which channel/port/security it's trying to use. The details are handled by
> the TransportLayer, whose job is to finish the handshake and return ready()
> as true when it's OK for the client to start sending requests.
> As Joel said, it's possible to pause/resume the handshake, but
> I'm not sure why it's needed; you can treat it as a black box and start
> sending your requests once channel.ready() returns true. I haven't gone
> through the KIP-72 proposal so I might be missing something here.
>
> Thanks,
> Harsha
>
> On Wed, Nov 2, 2016 at 5:01 PM Joel Koshy  wrote:
>
> > Sriharsha can validate this, but I think the reason is that if we allow
> > muting/unmuting at will (via those public APIs), that can completely mess up
> > the handshake itself. It should be possible to pause/resume the handshake
> > if that's what you're looking for, but I'm not sure it is worth it for the
> > purposes of KIP-72 given the small volumes of reads/writes involved in
> > handshaking.
> >
> > On Wed, Nov 2, 2016 at 4:24 PM, radai 
> wrote:
> >
> > > Hi,
> > >
> > > as part of testing my code for KIP-72 (broker memory control), i ran
> into
> > > the following code snippet in SslTransportLayer:
> > >
> > > public void removeInterestOps(int ops) {
> > > if (!key.isValid())
> > > throw new CancelledKeyException();
> > > else if (!handshakeComplete)
> > > throw new IllegalStateException("handshake is not
> > completed");
> > >
> > > key.interestOps(key.interestOps() & ~ops);
> > > }
> > >
> > > why can't an SSL socket be muted before the handshake is complete?
> > >
> >
>


Re: [DISCUSS] KIP-80: Kafka REST Server

2016-11-04 Thread Manikumar
Hi Evan,

Thanks for the detailed comments.  Sorry for the late reply.

As most of the community members are not in favor of adding a REST proxy to
Kafka, we decided to drop the KIP proposal. We will work with Confluent's
kafka-rest project community to contribute new features.

Also, thanks for raising valid queries and helping me to improve the KIP
details.
I will discuss some of your notes (new consumer API, HTTP/2, multi-REST-proxy,
etc.) on the kafka-rest project issues page.

Thank you all for participating in this discussion.

Thanks,
Manikumar

On Wed, Oct 26, 2016 at 11:56 AM, Guozhang Wang  wrote:

> I find it a bit hard to just discuss "whether or not we should add a REST
> proxy into the Apache Kafka repo" without discussing "what should be
> included in the Apache Kafka repo", which, as people mentioned, was a
> grey area and deserves ongoing discussion. So I would like to first throw
> my thoughts for the second question before the first one:
>
> 1. Reading this email thread I saw two ideas on extreme sides of the tradeoff,
> i.e. "we should just keep the Kafka broker and the Kafka protocol (or even just
> the Kafka protocol) in core Apache Kafka and leave everything else in other
> repos that depend on it", vs. "we should consider including any
> ecosystems and tools in Apache Kafka, as long as we believe they are
> commonly used along with Kafka". For myself, I am leaning towards not
> having an all-small-projects style of Apache project, but at the same time I
> am not convinced about going to the other extreme either. Instead I would
> love to propose something in between: specifically, I think the Kafka
> protocol, the Kafka broker implementation, plus a JVM-based implementation
> of the integrated clients as default and reference implementations: admin
> (although not many people have talked about that, I feel it is one kind of
> client that should really be considered for addition, as proposed in KIP-4),
> producer, consumer, processor (streams), and native pipe (connect and MM),
> should be included in Apache Kafka. Additional client implementations,
> potentially in other languages -- note we have actually seen other Java
> implementations of the producer / consumer and MM besides the AK-provided
> implementations -- or frameworks, ecosystems, and tools such
> as Kafka Manager, monitoring facilities, and integrations with resource
> managers / metrics collectors should go in separate repos.
>
> 2. A lot has been said about "whether or not to add a REST proxy", so I
> will not try to restate it all. Just following my thoughts around
> "what we should add", I'd think of a REST proxy as a non-JVM pipe client
> since it usually is used as an extra hop to the Kafka brokers from user
> code or other systems, like MM and connect. Personally I would prefer
> keeping such components in a separate repo instead of adding them into
> Apache Kafka.
>
>
>
> Guozhang
>
> On Tue, Oct 25, 2016 at 9:12 PM, Ewen Cheslack-Postava 
> wrote:
>
> > Sorry that I've stayed quiet on this for a bit. My reason for doing so is
> > well summed up by Jason's notes.
> >
> > First, I do want to say thanks for referencing the Confluent REST Proxy
> in
> > the proposal! It makes me proud that it's being used as an important
> > reference point.
> >
> > Second, Nacho, I want to say thanks to you because as I read this
> thread, I
> > think you have brought the most nuanced, gray-area view of this, which is
> > where it really lies. It *is* fuzzy, and an ongoing discussion. See my
> > answer (somewhere below) re: why I would consider Streams & Connect
> > different than a REST proxy (key word being proxy...). My taste
> definitely
> > skews towards yours wrt keeping inclusion of code in the core project
> > minimal. Jay referenced this a bit as well and I think it's partly a
> > comfort with the lots-of-small-projects style of open source.
> >
> > I want to address a few different areas of concern: the motivation from the
> > proposal, a few general observations re: inclusion, and more concrete notes
> > on the details of the proposal. Email, unfortunately, is not ideal for
> > this, but hopefully this won't be too much for one email...
> >
> > ---
> >
> > On the motivation section, I want to address the 3 key points:
> >
> > > 1) Many data infra tools come with a REST interface. It is useful to
> > have built-in REST API support for producing and consuming messages, and an
> > admin interface for integrating with external management and provisioning
> tools.
> >
> > > This doesn't explain why including a REST proxy is a good thing (also, a REST
> > proxy and a REST interface are different things). Just because many tools do
> > this doesn't mean it's the best choice for Kafka. Presumably there are
> > reasons they choose to, some of which might apply to Kafka and some of which may
> > not. It would be helpful to explicitly enumerate these.
> >
> >
> > > 2) Shipping Kafka Rest Server as part of a normal Kafka release makes
> i

[GitHub] kafka pull request #2096: KAFKA-4372: Kafka Connect REST API does not handle...

2016-11-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2096


---


[jira] [Commented] (KAFKA-4372) Kafka Connect REST API does not handle DELETE of connector with slashes in their names

2016-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15637128#comment-15637128
 ] 

ASF GitHub Bot commented on KAFKA-4372:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2096


> Kafka Connect REST API does not handle DELETE of connector with slashes in 
> their names
> --
>
> Key: KAFKA-4372
> URL: https://issues.apache.org/jira/browse/KAFKA-4372
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0, 0.10.0.1
>Reporter: Olivier Girardot
>Assignee: Ewen Cheslack-Postava
>
> Currently there is nothing to prevent someone from registering a Kafka 
> connector with slashes in its name; however, it's impossible to DELETE it 
> afterwards because the DELETE REST API endpoint uses a PathParam and 
> does not allow slashes.
> A few other API endpoints will have a tough time handling connectors with 
> slashes in their names.
> We should allow slashes in the DELETE endpoint so that current setups 
> can be cleaned up without having to drop all the other connectors, and not 
> allow any more connectors to be created with slashes in their names.
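
For illustration, a DELETE against a connector whose name contains a slash cannot be routed as a single path segment (hedged sketch; the worker URL and connector name are hypothetical):

{code}
import java.net.HttpURLConnection;
import java.net.URL;

public class DeleteConnectorWithSlash {
    public static void main(String[] args) throws Exception {
        // Hypothetical connector registered as "team/users-sink".
        URL url = new URL("http://localhost:8083/connectors/team/users-sink");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("DELETE");
        // The REST API matches the connector name as one path segment, so
        // "team" is taken as the name and "users-sink" as an unknown
        // sub-resource; the delete does not reach the intended connector.
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}
{code}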



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4372) Kafka Connect REST API does not handle DELETE of connector with slashes in their names

2016-11-04 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4372.
--
   Resolution: Fixed
 Reviewer: Ewen Cheslack-Postava
Fix Version/s: 0.10.2.0

> Kafka Connect REST API does not handle DELETE of connector with slashes in 
> their names
> --
>
> Key: KAFKA-4372
> URL: https://issues.apache.org/jira/browse/KAFKA-4372
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0, 0.10.0.1
>Reporter: Olivier Girardot
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.10.2.0
>
>
> Currently there is nothing to prevent someone from registering a Kafka 
> connector with slashes in its name; however, it's impossible to DELETE it 
> afterwards because the DELETE REST API endpoint uses a PathParam and 
> does not allow slashes.
> A few other API endpoints will have a tough time handling connectors with 
> slashes in their names.
> We should allow slashes in the DELETE endpoint so that current setups 
> can be cleaned up without having to drop all the other connectors, and not 
> allow any more connectors to be created with slashes in their names.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: 0.10.1.0 - KafkaConsumer.poll() blocks background heartbeat thread causing consumer to be considered dead?

2016-11-04 Thread Jason Gustafson
Hey Jaikiran,

Thanks for the report. We shouldn't hold the lock in poll() longer than the
heartbeat interval, so if you're seeing this, it's probably a bug. Let me
see if I can reproduce it. One quick question about the code snippet. What
kind of executor are you using? Is the load on the topic small enough that
you never have to worry about the executor's queue filling up, or do you
have some approach for back pressure? If so, can you describe it?

Thanks,
Jason
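
(For reference, the kind of hand-off Jason is asking about typically looks like the sketch below: a poll loop feeding an executor with a bounded queue, where CallerRunsPolicy slows the poll loop down when workers fall behind. Illustrative only, not the reporter's actual code.)

{code}
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollWithBoundedExecutor {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Bounded queue plus CallerRunsPolicy: when workers fall behind, the
        // polling thread runs the task itself, which naturally throttles poll().
        ExecutorService executor = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1000),
                new ThreadPoolExecutor.CallerRunsPolicy());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records)
                    executor.submit(() -> process(record));
                consumer.commitSync();
            }
        }
    }

    static void process(ConsumerRecord<String, String> record) {
        // application-specific processing
    }
}
{code}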

On Wed, Nov 2, 2016 at 8:22 AM, Jaikiran Pai 
wrote:

> Thanks Ismael. Just checked, that one doesn't look like it's the same
> issue, but could be a similar one. In that JIRA it looks like the issue was
> probably addressed for the commitSync call. However, in this specific
> instance the KafkaConsumer.poll(...) call itself locks the object
> monitor on the ConsumerNetworkClient. The heartbeat thread in the
> background seems to be waiting to get hold of that object monitor and
> blocks on it.
>
> Setting aside the implementation details, what are the expected semantics
> of the background heartbeat thread: would it fail to send a heartbeat for a
> consumer if the consumer is currently busy with poll(), commitSync() or any
> similar call? If so, would this lack of heartbeats (for a while)
> cause that member to be considered dead by the coordinator? My reading of
> the logs and my limited knowledge of the Kafka code seem to indicate that
> this is what's happening, either as the expected semantics or a possible
> bug.
>
> -Jaikiran
>
>
> On Wednesday 02 November 2016 08:39 PM, Ismael Juma wrote:
>
>> Maybe https://issues.apache.org/jira/browse/KAFKA-4303?
>>
>> On 2 Nov 2016 10:15 am, "Jaikiran Pai"  wrote:
>>
>>> We have been trying to narrow down an issue in Kafka 0.10.1 in our
>>> setups where our consumers are marked as dead very frequently, causing
>>> rebalances almost every few seconds. The consumer (new Java API) then
>>> starts seeing exceptions like:
>>>
>>> org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot
>>> be
>>> completed since the group has already rebalanced and assigned the
>>> partitions to another member. This means that the time between subsequent
>>> calls to poll() was longer than the configured max.poll.interval.ms,
>>> which typically implies that the poll loop is spending too much time
>>> message processing. You can address this either by increasing the session
>>> timeout or by reducing the maximum size of batches returned in poll()
>>> with
>>> max.poll.records.
>>>  at org.apache.kafka.clients.consumer.internals.ConsumerCoordina
>>> tor$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:674)
>>> ~[kafka-clients-0.10.1.0.jar!/:na]
>>>  at org.apache.kafka.clients.consumer.internals.ConsumerCoordina
>>> tor$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:615)
>>> ~[kafka-clients-0.10.1.0.jar!/:na]
>>>  at org.apache.kafka.clients.consumer.internals.AbstractCoordina
>>> tor$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:742)
>>> ~[kafka-clients-0.10.1.0.jar!/:na]
>>>  at org.apache.kafka.clients.consumer.internals.AbstractCoordina
>>> tor$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:722)
>>> ~[kafka-clients-0.10.1.0.jar!/:na]
>>>  at org.apache.kafka.clients.consumer.internals.RequestFuture$1.
>>> onSuccess(RequestFuture.java:186) ~[kafka-clients-0.10.1.0.jar!/:na]
>>>  at org.apache.kafka.clients.consumer.internals.RequestFuture.
>>> fireSuccess(RequestFuture.java:149) ~[kafka-clients-0.10.1.0.jar!/:na]
>>>  at org.apache.kafka.clients.consumer.internals.RequestFuture.
>>> complete(RequestFuture.java:116) ~[kafka-clients-0.10.1.0.jar!/:na]
>>>  at org.apache.kafka.clients.consumer.internals.ConsumerNetworkC
>>> lient$RequestFutureCompletionHandler.fireCompletion(Consumer
>>> NetworkClient.java:479)
>>> ~[kafka-clients-0.10.1.0.jar!/:na]
>>>  at org.apache.kafka.clients.consumer.internals.ConsumerNetworkC
>>> lient.firePendingCompletedRequests(ConsumerNetworkClient.java:316)
>>> ~[kafka-clients-0.10.1.0.jar!/:na]
>>>  at org.apache.kafka.clients.consumer.internals.ConsumerNetworkC
>>> lient.poll(ConsumerNetworkClient.java:256)
>>> ~[kafka-clients-0.10.1.0.jar!/
>>> :na]
>>>  at org.apache.kafka.clients.consumer.internals.ConsumerNetworkC
>>> lient.poll(ConsumerNetworkClient.java:180)
>>> ~[kafka-clients-0.10.1.0.jar!/
>>> :na]
>>>  at org.apache.kafka.clients.consumer.internals.ConsumerCoordina
>>> tor.commitOffsetsSync(ConsumerCoordinator.java:499)
>>> ~[kafka-clients-0.10.1.0.jar!/:na]
>>>
>>>
>>> Our session and heartbeat timeouts are defaults that ship in Kafka 0.10.1
>>> (i.e. we don't set any specific values). Every few seconds, we see
>>> messages
>>> on the broker logs which indicate these consumers are considered dead:
>>>
>>> [2016-11-02 06:09:48,103] TRACE [GroupCoordinator 0]: Member
>>> consumer-1-efde1e11-fdc6-4801-8fba-20d58b7a30b6 in group foo-bar has
>>> faile

[jira] [Updated] (KAFKA-3994) Deadlock between consumer heartbeat expiration and offset commit.

2016-11-04 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-3994:
---
Fix Version/s: (was: 0.10.2.0)
   0.10.1.1

> Deadlock between consumer heartbeat expiration and offset commit.
> -
>
> Key: KAFKA-3994
> URL: https://issues.apache.org/jira/browse/KAFKA-3994
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
>Reporter: Jiangjie Qin
>Assignee: Jason Gustafson
> Fix For: 0.10.1.1
>
>
> I got the following stacktraces from ConsumerBounceTest
> {code}
> ...
> "Test worker" #12 prio=5 os_prio=0 tid=0x7fbb28b7f000 nid=0x427c runnable 
> [0x7fbb06445000]
>java.lang.Thread.State: RUNNABLE
> at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
> at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
> at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
> at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
> - locked <0x0003d48bcbc0> (a sun.nio.ch.Util$2)
> - locked <0x0003d48bcbb0> (a 
> java.util.Collections$UnmodifiableSet)
> - locked <0x0003d48bbd28> (a sun.nio.ch.EPollSelectorImpl)
> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
> at org.apache.kafka.common.network.Selector.select(Selector.java:454)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:277)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:179)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:411)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1086)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1054)
> at 
> kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:103)
> at 
> kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:70)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
> at 
> org.gradle.api.internal.tasks.

Jenkins build is back to normal : kafka-trunk-jdk8 #1021

2016-11-04 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk7 #1671

2016-11-04 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-4372: Kafka Connect REST API does not handle DELETE of connector

--
[...truncated 7941 lines...]

kafka.utils.CommandLineUtilsTest > testParseSingleArg STARTED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs STARTED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr STARTED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition STARTED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.JsonTest > testJsonEncoding STARTED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart STARTED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler STARTED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask STARTED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing STARTED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing PASSED

kafka.utils.IteratorTemplateTest > testIterator STARTED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.UtilsTest > testGenerateUuidAsBase64 STARTED

kafka.utils.UtilsTest > testGenerateUuidAsBase64 PASSED

kafka.utils.UtilsTest > testAbs STARTED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix STARTED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testCircularIterator STARTED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes STARTED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList STARTED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt STARTED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testUrlSafeBase64EncodeUUID STARTED

kafka.utils.UtilsTest > testUrlSafeBase64EncodeUUID PASSED

kafka.utils.UtilsTest > testCsvMap STARTED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock STARTED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow STARTED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue STARTED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic STARTED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired STARTED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents STARTED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize STARTED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents STARTED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize STARTED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner STARTED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration STARTED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition STARTED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker STARTED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed STARTED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer STARTED

kafka.producer.AsyncProducerTest > testJav

[GitHub] kafka pull request #1969: MINOR: missing fullstop in doc for `max.partition....

2016-11-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1969


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-4380) Remove CleanShutdownFile as 0.8.2 has been released

2016-11-04 Thread holdenk (JIRA)
holdenk created KAFKA-4380:
--

 Summary: Remove CleanShutdownFile as 0.8.2 has been released
 Key: KAFKA-4380
 URL: https://issues.apache.org/jira/browse/KAFKA-4380
 Project: Kafka
  Issue Type: Improvement
  Components: log
Reporter: holdenk
Priority: Trivial


There is a TODO in the code to remove CleanShutdownFile after 0.8.2 is shipped, 
which has happened.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4380) Remove CleanShutdownFile as 0.8.2 has been released

2016-11-04 Thread holdenk (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15637658#comment-15637658
 ] 

holdenk commented on KAFKA-4380:


Working on this, but I don't seem to be able to assign it to myself.

> Remove CleanShutdownFile as 0.8.2 has been released
> ---
>
> Key: KAFKA-4380
> URL: https://issues.apache.org/jira/browse/KAFKA-4380
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Reporter: holdenk
>Priority: Trivial
>
> There is a TODO in the code to remove CleanShutdownFile after 0.8.2 is 
> shipped, which has happened.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2104: [KAFKA-4380] Remove cleanshutdownfile

2016-11-04 Thread holdenk
GitHub user holdenk opened a pull request:

https://github.com/apache/kafka/pull/2104

[KAFKA-4380] Remove cleanshutdownfile

This PR removes the CleanShutdownFile, as suggested by the TODO in the code comments.

Use of this seems to be well covered by existing tests (some of which 
needed to be updated).

The gradlew unit tests pass locally on my machine, but since this is my 
first (small) PR to Kafka I may have left something out.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/holdenk/kafka 
KAFKA-4380-remove-cleanshutdownfile

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2104.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2104


commit 157b8523ac5de7f8c954889f0a719b956810fb76
Author: Holden Karau 
Date:   2016-09-28T20:28:31Z

Get rid of CleanShutdownFile now that we are well past 0.8.2 :)

commit 245814cfbf36b19fdeab7900de0e56386a62f6f6
Author: Holden Karau 
Date:   2016-11-01T15:34:10Z

More fixes




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4380) Remove CleanShutdownFile as 0.8.2 has been released

2016-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15637661#comment-15637661
 ] 

ASF GitHub Bot commented on KAFKA-4380:
---

GitHub user holdenk opened a pull request:

https://github.com/apache/kafka/pull/2104

[KAFKA-4380] Remove cleanshutdownfile

This PR removes the CleanShutdownFile, as suggested by the TODO in the code comments.

Use of this seems to be well covered by existing tests (some of which 
needed to be updated).

The gradlew unit tests pass locally on my machine, but since this is my 
first (small) PR to Kafka I may have left something out.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/holdenk/kafka 
KAFKA-4380-remove-cleanshutdownfile

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2104.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2104


commit 157b8523ac5de7f8c954889f0a719b956810fb76
Author: Holden Karau 
Date:   2016-09-28T20:28:31Z

Get rid of CleanShutdownFile now that we are well past 0.8.2 :)

commit 245814cfbf36b19fdeab7900de0e56386a62f6f6
Author: Holden Karau 
Date:   2016-11-01T15:34:10Z

More fixes




> Remove CleanShutdownFile as 0.8.2 has been released
> ---
>
> Key: KAFKA-4380
> URL: https://issues.apache.org/jira/browse/KAFKA-4380
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Reporter: holdenk
>Priority: Trivial
>
> There is a TODO in the code to remove CleanShutdownFile after 0.8.2 is 
> shipped, which has happened.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2105: KAFKA-4322 StateRestoreCallback begin and end indi...

2016-11-04 Thread markcshelton
GitHub user markcshelton opened a pull request:

https://github.com/apache/kafka/pull/2105

KAFKA-4322 StateRestoreCallback begin and end indication

This adds a begin and end callback to StateRestoreCallback.

The contribution is my original work and I license the work to Apache Kafka 
under Kafka's open source license.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markcshelton/kafka KAFKA-4322

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2105.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2105


commit 9f59ea6ff2aac94bdd84eaafda36c860b12e1dcd
Author: Mark Shelton 
Date:   2016-11-04T20:57:37Z

changes KAFKA-4322 StateRestoreCallback begin and end indication




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #1968: MINOR: missing whitespace in doc for `ssl.cipher.s...

2016-11-04 Thread shikhar
Github user shikhar closed the pull request at:

https://github.com/apache/kafka/pull/1968


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4322) StateRestoreCallback begin and end indication

2016-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15637668#comment-15637668
 ] 

ASF GitHub Bot commented on KAFKA-4322:
---

GitHub user markcshelton opened a pull request:

https://github.com/apache/kafka/pull/2105

KAFKA-4322 StateRestoreCallback begin and end indication

This adds a begin and end callback to StateRestoreCallback.

The contribution is my original work and I license the work to Apache Kafka 
under Kafka's open source license.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markcshelton/kafka KAFKA-4322

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2105.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2105


commit 9f59ea6ff2aac94bdd84eaafda36c860b12e1dcd
Author: Mark Shelton 
Date:   2016-11-04T20:57:37Z

changes KAFKA-4322 StateRestoreCallback begin and end indication




> StateRestoreCallback begin and end indication
> -
>
> Key: KAFKA-4322
> URL: https://issues.apache.org/jira/browse/KAFKA-4322
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Mark Shelton
>Assignee: Guozhang Wang
>Priority: Minor
>
> In Kafka Streams, the StateRestoreCallback interface provides only a single 
> method "restore(byte[] key, byte[] value)" that is called for every key-value 
> pair to be restored. 
> It would be nice to have "beginRestore" and "endRestore" methods as part of 
> StateRestoreCallback.
> Kafka Streams would call "beginRestore" before restoring any keys, and would 
> call "endRestore" when it determines that it is done. This allows an 
> implementation, for example, to report on the number of keys restored and 
> perform a commit after the last key was restored. Other uses are conceivable.
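
A minimal Java sketch of what the proposed lifecycle hooks could look like, assuming
the existing single-method StateRestoreCallback interface; the RestoreLifecycleCallback
and CountingRestoreCallback names below are illustrative only and are not part of the
Kafka Streams API:

    import org.apache.kafka.streams.processor.StateRestoreCallback;

    // Hypothetical extension adding the begin/end hooks described above.
    interface RestoreLifecycleCallback extends StateRestoreCallback {
        void beginRestore(); // called once before the first key-value pair is restored
        void endRestore();   // called once after the last key-value pair is restored
    }

    // Example implementation: report how many records were restored once restoration ends.
    class CountingRestoreCallback implements RestoreLifecycleCallback {
        private long restored;

        @Override
        public void beginRestore() {
            restored = 0;
        }

        @Override
        public void restore(byte[] key, byte[] value) {
            restored++;
            // write the key-value pair into the underlying state store here
        }

        @Override
        public void endRestore() {
            System.out.println("restored " + restored + " records");
        }
    }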



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4322) StateRestoreCallback begin and end indication

2016-11-04 Thread Mark Shelton (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Shelton updated KAFKA-4322:

Flags: Patch

> StateRestoreCallback begin and end indication
> -
>
> Key: KAFKA-4322
> URL: https://issues.apache.org/jira/browse/KAFKA-4322
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Mark Shelton
>Assignee: Guozhang Wang
>Priority: Minor
>
> In Kafka Streams, the StateRestoreCallback interface provides only a single 
> method "restore(byte[] key, byte[] value)" that is called for every key-value 
> pair to be restored. 
> It would be nice to have "beginRestore" and "endRestore" methods as part of 
> StateRestoreCallback.
> Kafka Streams would call "beginRestore" before restoring any keys, and would 
> call "endRestore" when it determines that it is done. This allows an 
> implementation, for example, to report on the number of keys restored and 
> perform a commit after the last key was restored. Other uses are conceivable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[DISCUSS] KIP-89: Allow sink connectors to decouple flush and offset commit

2016-11-04 Thread Shikhar Bhushan
Hi all,

I created KIP-89, which proposes a Connect API change that allows sink
connectors to decouple flush and offset commits.


https://cwiki.apache.org/confluence/display/KAFKA/KIP-89%3A+Allow+sink+connectors+to+decouple+flush+and+offset+commit

I'd welcome your input.

Best,

Shikhar


[jira] [Commented] (KAFKA-4161) Decouple flush and offset commits

2016-11-04 Thread Shikhar Bhushan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15637718#comment-15637718
 ] 

Shikhar Bhushan commented on KAFKA-4161:


Created KIP-89 for this 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-89%3A+Allow+sink+connectors+to+decouple+flush+and+offset+commit

> Decouple flush and offset commits
> -
>
> Key: KAFKA-4161
> URL: https://issues.apache.org/jira/browse/KAFKA-4161
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Shikhar Bhushan
>Assignee: Shikhar Bhushan
>  Labels: needs-kip
>
> It is desirable to have, in addition to the time-based flush interval, volume 
> or size-based commits. E.g. a sink connector which is buffering in terms of 
> number of records may want to request a flush when the buffer is full, or 
> when a sufficient amount of data has been buffered in a file.
> Having a method like say {{requestFlush()}} on the {{SinkTaskContext}} would 
> allow for connectors to have flexible policies around flushes. This would be 
> in addition to the time interval based flushes that are controlled with 
> {{offset.flush.interval.ms}}, for which the clock should be reset when any 
> kind of flush happens.
> We should probably also support requesting flushes via the 
> {{SourceTaskContext}} for consistency though a use-case doesn't come to mind 
> off the bat.
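
A rough sketch of how a size-based policy could use the proposed requestFlush() method.
Note that requestFlush() does not exist on SinkTaskContext today, so the context
interface below is a hypothetical stand-in rather than the real Connect API:

    import java.util.Collection;

    // Hypothetical stand-in for SinkTaskContext plus the proposed method.
    interface FlushRequestingContext {
        void requestFlush(); // ask the framework to commit offsets now and reset the interval timer
    }

    // Sketch of a sink task that buffers records and requests a flush when the buffer fills up.
    class BufferingSink {
        private static final int MAX_BUFFERED = 10_000; // illustrative size-based threshold
        private final FlushRequestingContext context;
        private int buffered = 0;

        BufferingSink(FlushRequestingContext context) {
            this.context = context;
        }

        void put(Collection<?> records) {
            buffered += records.size();
            // ... append the records to an internal buffer or output file ...
            if (buffered >= MAX_BUFFERED) {
                context.requestFlush(); // size-based flush instead of waiting for offset.flush.interval.ms
                buffered = 0;
            }
        }
    }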



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4381) Add per partition lag metric to KafkaConsumer.

2016-11-04 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-4381:
---

 Summary: Add per partition lag metric to KafkaConsumer.
 Key: KAFKA-4381
 URL: https://issues.apache.org/jira/browse/KAFKA-4381
 Project: Kafka
  Issue Type: Task
  Components: clients, consumer
Affects Versions: 0.10.1.0
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Fix For: 0.10.2.0


Currently KafkaConsumer only has a metric for the max lag across all the 
partitions. It would be useful to know the per-partition lag as well.

I remember there was a ticket created for this before, but I did not find it, 
so I am creating this one.
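
For illustration, if such per-partition metrics were added they could be read straight
off the consumer's metrics() map. This is only a sketch under that assumption, since
0.10.1 exposes just the aggregate records-lag-max metric:

    import java.util.Map;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.Metric;
    import org.apache.kafka.common.MetricName;

    public class LagPrinter {
        // Print every lag-related metric the consumer currently exposes.
        public static void printLag(KafkaConsumer<?, ?> consumer) {
            for (Map.Entry<MetricName, ? extends Metric> entry : consumer.metrics().entrySet()) {
                MetricName name = entry.getKey();
                if (name.name().contains("records-lag")) {
                    System.out.println(name.group() + " / " + name.name() + " = " + entry.getValue().value());
                }
            }
        }
    }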



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-4381) Add per partition lag metric to KafkaConsumer.

2016-11-04 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4381 started by Jiangjie Qin.
---
> Add per partition lag metric to KafkaConsumer.
> --
>
> Key: KAFKA-4381
> URL: https://issues.apache.org/jira/browse/KAFKA-4381
> Project: Kafka
>  Issue Type: Task
>  Components: clients, consumer
>Affects Versions: 0.10.1.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.2.0
>
>
> Currently KafkaConsumer only has a metric for the max lag across all the 
> partitions. It would be useful to know the per-partition lag as well.
> I remember there was a ticket created for this before, but I did not find it, 
> so I am creating this one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1672

2016-11-04 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: missing fullstop in doc for `max.partition.fetch.bytes`

--
[...truncated 3866 lines...]

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata 
STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetRe

[jira] [Created] (KAFKA-4382) Fix broken fragments in site docs

2016-11-04 Thread Konstantine Karantasis (JIRA)
Konstantine Karantasis created KAFKA-4382:
-

 Summary: Fix broken fragments in site docs
 Key: KAFKA-4382
 URL: https://issues.apache.org/jira/browse/KAFKA-4382
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.10.0.1
Reporter: Konstantine Karantasis
Assignee: Konstantine Karantasis
Priority: Minor
 Fix For: 0.10.0.1


There are just a few broken fragments in the current version of site docs. 

For instance, under documentation.html in 0.10.1 such fragments are: 

{quote}
http://kafka.apache.org/documentation.html#newconsumerapi
http://kafka.apache.org/documentation#config_broker
http://kafka.apache.org/documentation#security_kerberos_sasl_clientconfig
{quote}

A more thorough search in the previous versions of the documentation might 
reveal a few more. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4383) Update API design subsection to reflect the current implementation of Producer/Consumer

2016-11-04 Thread Konstantine Karantasis (JIRA)
Konstantine Karantasis created KAFKA-4383:
-

 Summary: Update API design subsection to reflect the current 
implementation of Producer/Consumer
 Key: KAFKA-4383
 URL: https://issues.apache.org/jira/browse/KAFKA-4383
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.10.0.1
Reporter: Konstantine Karantasis
Assignee: Konstantine Karantasis
 Fix For: 0.10.0.1, 0.10.0.0



After 0.9.0 and 0.10.0 the site docs were updated to reflect the transition to the 
new APIs for producer and consumer. Changes were made in sections such as {{2. 
APIS}} and {{3. CONFIGURATION}}. 

However, the related subsections under {{5.IMPLEMENTATION}} still describe the 
implementation details of the old producer and consumer APIs. This section 
needs to be re-written to reflect the implementation status of the new APIs 
(possibly by retaining the description for the old APIs as well in a separate 
subsection). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #1022

2016-11-04 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: missing fullstop in doc for `max.partition.fetch.bytes`

--
[...truncated 7217 lines...]

kafka.admin.AdminTest > testShutdownBroker STARTED

kafka.admin.AdminTest > testShutdownBroker PASSED

kafka.admin.AdminTest > testTopicCreationWithCollision STARTED

kafka.admin.AdminTest > testTopicCreationWithCollision PASSED

kafka.admin.AdminTest > testTopicCreationInZK STARTED

kafka.admin.AdminTest > testTopicCreationInZK PASSED

kafka.admin.TopicCommandTest > testCreateIfNotExists STARTED

kafka.admin.TopicCommandTest > testCreateIfNotExists PASSED

kafka.admin.TopicCommandTest > testCreateAlterTopicWithRackAware STARTED

kafka.admin.TopicCommandTest > testCreateAlterTopicWithRackAware PASSED

kafka.admin.TopicCommandTest > testTopicDeletion STARTED

kafka.admin.TopicCommandTest > testTopicDeletion PASSED

kafka.admin.TopicCommandTest > testConfigPreservationAcrossPartitionAlteration 
STARTED

kafka.admin.TopicCommandTest > testConfigPreservationAcrossPartitionAlteration 
PASSED

kafka.admin.TopicCommandTest > testAlterIfExists STARTED

kafka.admin.TopicCommandTest > testAlterIfExists PASSED

kafka.admin.TopicCommandTest > testDeleteIfExists STARTED

kafka.admin.TopicCommandTest > testDeleteIfExists PASSED

kafka.admin.DescribeConsumerGroupTest > testDescribeExistingGroup STARTED

kafka.admin.DescribeConsumerGroupTest > testDescribeExistingGroup PASSED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeConsumersWithNoAssignedPartitions STARTED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeConsumersWithNoAssignedPartitions PASSED

kafka.admin.DescribeConsumerGroupTest > testDescribeNonExistingGroup STARTED

kafka.admin.DescribeConsumerGroupTest > testDescribeNonExistingGroup PASSED

kafka.admin.DescribeConsumerGroupTest > testDescribeExistingGroupWithNoMembers 
STARTED

kafka.admin.DescribeConsumerGroupTest > testDescribeExistingGroupWithNoMembers 
PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForTopicsEntityType STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForTopicsEntityType PASSED

kafka.admin.ConfigCommandTest > testUserClientQuotaOpts STARTED

kafka.admin.ConfigCommandTest > testUserClientQuotaOpts PASSED

kafka.admin.ConfigCommandTest > shouldAddTopicConfig STARTED

kafka.admin.ConfigCommandTest > shouldAddTopicConfig PASSED

kafka.admin.ConfigCommandTest > shouldAddClientConfig STARTED

kafka.admin.ConfigCommandTest > shouldAddClientConfig PASSED

kafka.admin.ConfigCommandTest > shouldDeleteBrokerConfig STARTED

kafka.admin.ConfigCommandTest > shouldDeleteBrokerConfig PASSED

kafka.admin.ConfigCommandTest > testQuotaConfigEntity STARTED

kafka.admin.ConfigCommandTest > testQuotaConfigEntity PASSED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedBracketConfig STARTED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedBracketConfig PASSED

kafka.admin.ConfigCommandTest > shouldFailIfUnrecognisedEntityType STARTED

kafka.admin.ConfigCommandTest > shouldFailIfUnrecognisedEntityType PASSED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedEntityName STARTED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedEntityName PASSED

kafka.admin.ConfigCommandTest > shouldSupportCommaSeparatedValues STARTED

kafka.admin.ConfigCommandTest > shouldSupportCommaSeparatedValues PASSED

kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedConfig 
STARTED

kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedConfig 
PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForBrokersEntityType STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForBrokersEntityType PASSED

kafka.admin.ConfigCommandTest > shouldAddBrokerConfig STARTED

kafka.admin.ConfigCommandTest > shouldAddBrokerConfig PASSED

kafka.admin.ConfigCommandTest > testQuotaDescribeEntities STARTED

kafka.admin.ConfigCommandTest > testQuotaDescribeEntities PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForClientsEntityType STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForClientsEntityType PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify PASSED

kafk

[GitHub] kafka pull request #2099: MINOR: Fix re-raise of python error in system test...

2016-11-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2099


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-72 Allow Sizing Incoming Request Queue in Bytes

2016-11-04 Thread radai
here are my proposed changes:

https://github.com/radai-rosenblatt/kafka/commit/8d7744ab8a6c660c4749b495b033b948a68efd3c

at this point I've run this code on a test cluster under load that OOMs
"vanilla" 0.10.1.0 and verified that my code deployed under the same
condition remains stable.

what I've done:

1. configure max heap size to 1.5GB and a single io thread (makes it easier
to DOS)
2. set up a topic with 100 partitions all on the same broker (makes it
easier to focus IO) - ./kafka-topics.sh --zookeeper  --create --topic
dos --replica-assignment [100 times the same broker id]
3. spin up load from 10 machines - ./kafka-producer-perf-test.sh --topic
dos --num-records 100 --record-size 991600 --throughput 10
--producer-props bootstrap.servers= max.request.size=104857600
acks=0 linger.ms=3 buffer.memory=209715200 batch.size=1048576

this would result in single requests that are just under 100MB in size,
times 10 for ~1GB max outstanding memory requirement. on my setup it was
enough to reliably DOS 0.10.1.0. under my patch the broker held up (request
rate was throttled).

performance when not under memory load was roughly the same (note the
longest run was ~1 hour; I haven't done long-term stress tests yet).

At this point I think I've addressed most (all?) of the concerns and would
like to move on to a vote? (obviously the code has not been reviewed yet,
but in terms of high-level approach and changes to public API the KIP is
ready)
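
For readers skimming the thread, the pooling idea boils down to bounding the bytes
outstanding for in-flight requests and refusing further reads (muting sockets) once
the bound is hit. A minimal sketch of such a limiter follows; it is not the patch
linked above (which also adds socket muting and an optional weak-reference leak
detector), and the class and method names are illustrative only:

    import java.nio.ByteBuffer;
    import java.util.concurrent.atomic.AtomicLong;

    // Minimal bounded buffer pool: an AtomicLong tracks available bytes, tryAllocate()
    // returns null when the pool is exhausted (callers then stop reading from sockets),
    // and release() returns capacity once a request has been processed.
    public class BoundedBufferPool {
        private final AtomicLong availableBytes;

        public BoundedBufferPool(long capacityBytes) {
            this.availableBytes = new AtomicLong(capacityBytes);
        }

        public ByteBuffer tryAllocate(int sizeBytes) {
            while (true) {
                long available = availableBytes.get();
                if (available < sizeBytes) {
                    return null; // exhausted: the caller should mute its sockets and retry later
                }
                if (availableBytes.compareAndSet(available, available - sizeBytes)) {
                    return ByteBuffer.allocate(sizeBytes);
                }
            }
        }

        public void release(ByteBuffer buffer) {
            availableBytes.addAndGet(buffer.capacity()); // the buffer must not be used after release
        }
    }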




On Sun, Oct 30, 2016 at 5:05 PM, radai  wrote:

> Hi Jun,
>
> the benchmarks just spawn 16 threads where each thread allocates a chunk
> of memory from the pool and immediately releases it. 16 was chosen because
> it's typical for LinkedIn setups. the benchmarks never "consume" more than
> 16 * [single allocation size] and so do not test out-of-memory performance,
> but rather "normal" operating conditions. tests were run with 4 memory
> allocation sizes - 1k, 10k, 100k and 1M (1M being the largest typical
> single request size setting at LinkedIn). the results are in ops/sec (for
> context - a single request involves a single allocation/release cycle,
> typical LinkedIn setups don't go beyond 20k requests/sec on a single broker).
>
> results show that the GC pool (which is a combination of an AtomicLong
> outstanding bytes count + weak references for allocated buffers) has a
> negligible performance cost vs the simple benchmark (which does nothing,
> same as current code).
>
> the more interesting thing that the results show is that as the requested
> buffer size gets larger a single allocate/release cycle becomes more
> expensive. since the benchmark never holds a lot of outstanding memory (16 *
> buf size tops) I suspect the issue is memory fragmentation - it's harder to
> find larger contiguous chunks of heap.
>
> this indicates that for throughput scenarios (large request batches)
> broker performance may actually be impacted by the overhead of allocating
> and releasing buffers (the situation may even be worse - inter-broker
> requests are much larger), and an implementation of memory pool that
> actually recycles buffers (mine just acts as a limiter and leak detector)
> might improve broker performance under high throughput conditions (but
> thats probably a separate followup change).
>
> I expect to stress test my code this week (though no guarantees).
>
> I'll look at KIP-81.
>
> On Sun, Oct 30, 2016 at 12:27 PM, Jun Rao  wrote:
>
>> Hi, Radai,
>>
>> Sorry for the late response. How should the benchmark results be
>> interpreted? The higher the ops/s, the better? It would also be useful to
>> test this out on LinkedIn's traffic with enough socket connections to see
>> if there is any performance degradation.
>>
>> Also, there is a separate proposal KIP-81 to bound the consumer memory
>> usage. Perhaps you can chime it there on whether this proposal can be
>> utilized there too.
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-81%3A+
>> Bound+Fetch+memory+usage+in+the+consumer
>>
>> Thanks,
>>
>> Jun
>>
>> On Tue, Sep 27, 2016 at 10:23 AM, radai 
>> wrote:
>>
>> > Hi Jun,
>> >
>> > 10 - mute/unmute functionality has been added in
>> > https://github.com/radai-rosenblatt/kafka/tree/broker-
>> > memory-pool-with-muting.
>> > I have yet to run stress tests to see how it behaves versus without
>> muting
>> >
>> > 11 - I've added a SimplePool implementation (nothing more than an
>> > AtomicLong really) and compared it with my GC pool (that uses weak
>> refs) -
>> > https://github.com/radai-rosenblatt/kafka-benchmarks/
>> > tree/master/memorypool-benchmarks.
>> > the results show no noticeable difference. what the results _do_ show
>> > though is that for large requests (1M) performance drops very sharply.
>> > since the SimplePool is essentially identical to current kafka code
> behaviour (the benchmark never reaches out-of-memory conditions) it would
> suggest to me that kafka performance for large requests suffers greatly
>> from
>> > the cost of alloc

[GitHub] kafka-site pull request #29: Update the website repo link in code.html to po...

2016-11-04 Thread becketqin
GitHub user becketqin opened a pull request:

https://github.com/apache/kafka-site/pull/29

Update the website repo link in code.html to point to github.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka-site asf-site

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/29.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #29






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka-site issue #29: Update the website repo link in code.html to point to ...

2016-11-04 Thread ijuma
Github user ijuma commented on the issue:

https://github.com/apache/kafka-site/pull/29
  
Good catch. You probably want to include the link to the Apache Git repo as well.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4381) Add per partition lag metric to KafkaConsumer.

2016-11-04 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15638178#comment-15638178
 ] 

Jiangjie Qin commented on KAFKA-4381:
-

[~hachikuji] [~junrao] [~guozhang] [~ewencp] Do you think we need a KIP for 
this?

> Add per partition lag metric to KafkaConsumer.
> --
>
> Key: KAFKA-4381
> URL: https://issues.apache.org/jira/browse/KAFKA-4381
> Project: Kafka
>  Issue Type: Task
>  Components: clients, consumer
>Affects Versions: 0.10.1.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.2.0
>
>
> Currently KafkaConsumer only has a metric for the max lag across all the 
> partitions. It would be useful to know the per-partition lag as well.
> I remember there was a ticket created for this before, but I did not find it, 
> so I am creating this one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk8 #1023

2016-11-04 Thread Apache Jenkins Server
See 



[GitHub] kafka-site issue #29: Update the website repo link in code.html to point to ...

2016-11-04 Thread becketqin
Github user becketqin commented on the issue:

https://github.com/apache/kafka-site/pull/29
  
@ijuma The GitHub site already states that it is mirrored from the Apache repo. 
I added the link anyway.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---