Re: [UPDATE] 0.10.1 Release Progress

2016-09-19 Thread Gwen Shapira
W00t! :)

On Mon, Sep 19, 2016 at 9:23 PM, Jason Gustafson  wrote:
> Thanks everyone for the hard work! The 0.10.1 release branch has been
> created. We're now entering the stabilization phase of this release which
> means we'll focus on bug fixes and testing.
>
> -Jason
>
> On Fri, Sep 16, 2016 at 5:00 PM, Jason Gustafson  wrote:
>
>> Hi All,
>>
>> Thanks everyone for the hard work! Here's an update on the remaining KIPs
>> that we are hoping to include:
>>
>> KIP-78 (clusterId): Review is basically complete. Assuming no problems
>> emerge, Ismael is planning to merge today.
>> KIP-74 (max fetch size): Review is nearing completion, just a few minor
>> issues remain. This will probably be merged tomorrow or Sunday.
>> KIP-55 (secure quotas): The patch has been rebased and probably needs one
>> more review pass before merging. Jun is confident it can get in before
>> Monday.
>>
>> As for KIP-79, I've made one review pass, but to make it in, we'll need 1)
>> some more votes on the vote thread, and 2) a few review iterations. It's
>> looking a bit doubtful, but let's see how it goes!
>>
>> Since we are nearing the feature freeze, it would be helpful if people
>> begin setting some priorities on the bugs that need to get in before the
>> code freeze. I am going to make an effort to prune the list early next
>> week, so if there are any critical issues you know about, make sure they
>> are marked as such.
>>
>> Thanks,
>> Jason
>>



-- 
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


[jira] [Created] (KAFKA-4195) support throttling on request rate

2016-09-19 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-4195:
--

 Summary: support throttling on request rate
 Key: KAFKA-4195
 URL: https://issues.apache.org/jira/browse/KAFKA-4195
 Project: Kafka
  Issue Type: Improvement
Reporter: Jun Rao


Currently, we can throttle clients by data volume. However, if a client 
sends requests too quickly (e.g., a consumer with fetch.min.bytes configured to 0), it 
can still overwhelm the broker. It would be useful to additionally support 
throttling by request rate. 
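As an illustration of the gap (a sketch only, not part of this ticket; the topic name and
bootstrap address below are placeholders): a Java consumer with fetch.min.bytes set to 0
gets empty fetch responses back immediately, so it can issue requests as fast as the
network allows while transferring almost no data, which byte-rate quotas cannot catch.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class TightPollLoop {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // placeholder
            props.put("group.id", "tight-poll-demo");           // placeholder
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            // With fetch.min.bytes=0 the broker answers fetch requests immediately,
            // even when no data is available, so the request rate is bounded only by
            // the network round trip; byte-rate quotas barely apply.
            props.put("fetch.min.bytes", "0");
            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("topic1"));  // placeholder topic
                while (true) {
                    consumer.poll(0);  // issues fetches as fast as responses come back
                }
            }
        }
    }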



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4195) support throttling on request rate

2016-09-19 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505615#comment-15505615
 ] 

Jun Rao commented on KAFKA-4195:


We will need to do a KIP on the design first.

> support throttling on request rate
> --
>
> Key: KAFKA-4195
> URL: https://issues.apache.org/jira/browse/KAFKA-4195
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>
> Currently, we can throttle clients by data volume. However, if a client 
> sends requests too quickly (e.g., a consumer with fetch.min.bytes configured to 0), 
> it can still overwhelm the broker. It would be useful to additionally support 
> throttling by request rate. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [UPDATE] 0.10.1 Release Progress

2016-09-19 Thread Jason Gustafson
Thanks everyone for the hard work! The 0.10.1 release branch has been
created. We're now entering the stabilization phase of this release which
means we'll focus on bug fixes and testing.

-Jason

On Fri, Sep 16, 2016 at 5:00 PM, Jason Gustafson  wrote:

> Hi All,
>
> Thanks everyone for the hard work! Here's an update on the remaining KIPs
> that we are hoping to include:
>
> KIP-78 (clusterId): Review is basically complete. Assuming no problems
> emerge, Ismael is planning to merge today.
> KIP-74 (max fetch size): Review is nearing completion, just a few minor
> issues remain. This will probably be merged tomorrow or Sunday.
> KIP-55 (secure quotas): The patch has been rebased and probably needs one
> more review pass before merging. Jun is confident it can get in before
> Monday.
>
> As for KIP-79, I've made one review pass, but to make it in, we'll need 1)
> some more votes on the vote thread, and 2) a few review iterations. It's
> looking a bit doubtful, but let's see how it goes!
>
> Since we are nearing the feature freeze, it would be helpful if people
> begin setting some priorities on the bugs that need to get in before the
> code freeze. I am going to make an effort to prune the list early next
> week, so if there are any critical issues you know about, make sure they
> are marked as such.
>
> Thanks,
> Jason
>


[GitHub] kafka pull request #1886: MINOR: Bump to version 0.10.2

2016-09-19 Thread hachikuji
Github user hachikuji closed the pull request at:

https://github.com/apache/kafka/pull/1886


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka-trunk-jdk8 #898

2016-09-19 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request #1886: MINOR: Bump to version 0.10.2

2016-09-19 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1886

MINOR: Bump to version 0.10.2



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka bump-to-0.10.2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1886.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1886


commit 6a7d00c809b004f650d9f6d8ca6a1931882aad80
Author: Jason Gustafson 
Date:   2016-09-20T02:59:03Z

MINOR: Bump to version 0.10.2




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-4191) After the leader broker is down, then start the producer of librdkafka, it cannot produce any data any more

2016-09-19 Thread Leon (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leon resolved KAFKA-4191.
-
Resolution: Not A Problem

Sorry, this is not an issue. If the leader broker is down, then when restarting the 
consumer or producer, they need to connect to an available broker so they can find the 
new leader.
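For example (reusing the commands from the report below, and relying on librdkafka's -b 
option accepting a comma-separated broker list), starting the clients against all three 
brokers lets them bootstrap and find the new leader even while broker 0 is down:

    rdkafka_example.exe -P -t topic1 -b localhost:9092,localhost:9093,localhost:9094
    rdkafka_consumer_example_cpp.exe -g 1 -b localhost:9092,localhost:9093,localhost:9094 topic1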

> After the leader broker is down, then start the producer of librdkafka, it 
> cannot produce any data any more
> ---
>
> Key: KAFKA-4191
> URL: https://issues.apache.org/jira/browse/KAFKA-4191
> Project: Kafka
>  Issue Type: Wish
>  Components: clients
>Affects Versions: 0.10.0.1
> Environment: Windows 7
>Reporter: Leon
>Priority: Minor
>
> Hi,
> I am using kafka_2.11-0.10.0.1 and librdkafka-master on Windows 7,
> and there are 3 brokers, 1 zookeeper, 1 producer (rdkafka_example.exe) and 1 
> consumer (rdkafka_consumer_example_cpp.exe); all of them are on the same PC. 
> But I found an issue where the producer fails to produce any data after the 
> leader broker goes down.
> Here are the steps to reproduce this issue:
> 1.  Start zookeeper.
> 2.  Start the brokers by running the following commands:
>   kafka-server-start.bat .\config\server.properties
>   kafka-server-start.bat .\config\server-1.properties
>   kafka-server-start.bat .\config\server-2.properties
>  The configuration for each server is:
>  config/server.properties:
>  broker.id=0
>  listeners=PLAINTEXT://:9092
>  log.dir=/tmp/kafka-logs-0
>  config/server-1.properties:
>  broker.id=1
>  listeners=PLAINTEXT://:9093
>  log.dir=/tmp/kafka-logs-1
>  config/server-2.properties:
>  broker.id=2
>  listeners=PLAINTEXT://:9094
>  log.dir=/tmp/kafka-logs-2
> 3. Create a new topic
>   kafka-topics.bat --create --zookeeper localhost:2181 
> --replication-factor  3 --partitions 1 --topic topic1  
>  Then you can see that the leader is broker 0 with the following command:
>  kafka-topics.bat --describe --zookeeper localhost:2181 --topic topic1 
> 4. Start consumer:
>   rdkafka_consumer_example_cpp.exe -g 1 -b localhost:9092 topic1 
> 5. Start producer:
>   rdkafka_example.exe -P -t topic1 -b localhost:9092
>   
>  Now you can see that everything works fine.
> 6. Then stop broker0 by closing the command prompt which runs 
> 'kafka-server-start.bat .\config\server.properties', and you can see that the 
> producer and consumer still work fine.
> 7. Then stop the producer and consumer by pressing Ctrl+C and closing the 
> related command prompts, and start them again with the same steps 4 and 5; 
> now you can see that neither the producer nor the consumer works!
> My expected behavior is that even if the leader of the multi-broker cluster is 
> down, we can still restart the librdkafka producer and consumer and have them 
> work.
> Would you please give me any help?
> Thank you!
> Leon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4191) After the leader broker is down, then start the producer of librdkafka, it cannot produce any data any more

2016-09-19 Thread Leon (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leon updated KAFKA-4191:

  Priority: Minor  (was: Major)
Issue Type: Wish  (was: Bug)

> After the leader broker is down, then start the producer of librdkafka, it 
> cannot produce any data any more
> ---
>
> Key: KAFKA-4191
> URL: https://issues.apache.org/jira/browse/KAFKA-4191
> Project: Kafka
>  Issue Type: Wish
>  Components: clients
>Affects Versions: 0.10.0.1
> Environment: Windows 7
>Reporter: Leon
>Priority: Minor
>
> Hi,
> I am using kafka_2.11-0.10.0.1 and librdkafka-master on Windows 7,
> and there are 3 brokers, 1 zookeeper, 1 producer (rdkafka_example.exe) and 1 
> consumer (rdkafka_consumer_example_cpp.exe); all of them are on the same PC. 
> But I found an issue where the producer fails to produce any data after the 
> leader broker goes down.
> Here are the steps to reproduce this issue:
> 1.  Start zookeeper.
> 2.  Start the brokers by running the following commands:
>   kafka-server-start.bat .\config\server.properties
>   kafka-server-start.bat .\config\server-1.properties
>   kafka-server-start.bat .\config\server-2.properties
>  The configuration for each server is:
>  config/server.properties:
>  broker.id=0
>  listeners=PLAINTEXT://:9092
>  log.dir=/tmp/kafka-logs-0
>  config/server-1.properties:
>  broker.id=1
>  listeners=PLAINTEXT://:9093
>  log.dir=/tmp/kafka-logs-1
>  config/server-2.properties:
>  broker.id=2
>  listeners=PLAINTEXT://:9094
>  log.dir=/tmp/kafka-logs-2
> 3. Create a new topic
>   kafka-topics.bat --create --zookeeper localhost:2181 
> --replication-factor  3 --partitions 1 --topic topic1  
>  Then you can see that the leader is broker 0 with the following command:
>  kafka-topics.bat --describe --zookeeper localhost:2181 --topic topic1 
> 4. Start consumer:
>   rdkafka_consumer_example_cpp.exe -g 1 -b localhost:9092 topic1 
> 5. Start producer:
>   rdkafka_example.exe -P -t topic1 -b localhost:9092
>   
>  Now you can see that everything works fine.
> 6. Then stop broker0 by closing the command prompt which runs 
> 'kafka-server-start.bat .\config\server.properties', and you can see that the 
> producer and consumer still work fine.
> 7. Then stop the producer and consumer by pressing Ctrl+C and closing the 
> related command prompts, and start them again with the same steps 4 and 5; 
> now you can see that neither the producer nor the consumer works!
> My expected behavior is that even if the leader of the multi-broker cluster is 
> down, we can still restart the librdkafka producer and consumer and have them 
> work.
> Would you please give me any help?
> Thank you!
> Leon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #897

2016-09-19 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4135; Consumer poll with no subscription or assignment should

[wangguoz] KAFKA-4153: Fix incorrect KStream-KStream join behavior with 
asymmetric

--
[...truncated 3558 lines...]

kafka.network.SocketServerTest > tooBigRequestIsRejected STARTED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAliveBrokerListWithNoTopics 
STARTED

kafka.integration.SaslSslTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslSslTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.SaslSslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig STARTED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 

[jira] [Updated] (KAFKA-4191) After the leader broker is down, then start the producer of librdkafka, it cannot produce any data any more

2016-09-19 Thread Leon (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leon updated KAFKA-4191:

  Priority: Major  (was: Minor)
Issue Type: Bug  (was: Improvement)

> After the leader broker is down, then start the producer of librdkafka, it 
> cannot produce any data any more
> ---
>
> Key: KAFKA-4191
> URL: https://issues.apache.org/jira/browse/KAFKA-4191
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.1
> Environment: Windows 7
>Reporter: Leon
>
> Hi,
> I am using kafka_2.11-0.10.0.1 and librdkafka-master on Windows 7,
> and there are 3 brokers, 1 zookeeper, 1 producer (rdkafka_example.exe) and 1 
> consumer (rdkafka_consumer_example_cpp.exe); all of them are on the same PC. 
> But I found an issue where the producer fails to produce any data after the 
> leader broker goes down.
> Here are the steps to reproduce this issue:
> 1.  Start zookeeper.
> 2.  Start the brokers by running the following commands:
>   kafka-server-start.bat .\config\server.properties
>   kafka-server-start.bat .\config\server-1.properties
>   kafka-server-start.bat .\config\server-2.properties
>  The configuration for each server is:
>  config/server.properties:
>  broker.id=0
>  listeners=PLAINTEXT://:9092
>  log.dir=/tmp/kafka-logs-0
>  config/server-1.properties:
>  broker.id=1
>  listeners=PLAINTEXT://:9093
>  log.dir=/tmp/kafka-logs-1
>  config/server-2.properties:
>  broker.id=2
>  listeners=PLAINTEXT://:9094
>  log.dir=/tmp/kafka-logs-2
> 3. Create a new topic
>   kafka-topics.bat --create --zookeeper localhost:2181 
> --replication-factor  3 --partitions 1 --topic topic1  
>  Then you can see that the leader is broker 0 with the following command:
>  kafka-topics.bat --describe --zookeeper localhost:2181 --topic topic1 
> 4. Start consumer:
>   rdkafka_consumer_example_cpp.exe -g 1 -b localhost:9092 topic1 
> 5. Start producer:
>   rdkafka_example.exe -P -t topic1 -b localhost:9092
>   
>  Now you can see that everything works fine.
> 6. Then stop broker0 by closing the command prompt which runs 
> 'kafka-server-start.bat .\config\server.properties', and you can see that the 
> producer and consumer still work fine.
> 7. Then stop the producer and consumer by pressing Ctrl+C and closing the 
> related command prompts, and start them again with the same steps 4 and 5; 
> now you can see that neither the producer nor the consumer works!
> My expected behavior is that even if the leader of the multi-broker cluster is 
> down, we can still restart the librdkafka producer and consumer and have them 
> work.
> Would you please give me any help?
> Thank you!
> Leon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4148) KIP-79 add ListOffsetRequest v1 and search by timestamp interface to consumer.

2016-09-19 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-4148.

   Resolution: Fixed
Fix Version/s: 0.10.1.0

Issue resolved by pull request 1852
[https://github.com/apache/kafka/pull/1852]

> KIP-79 add ListOffsetRequest v1 and search by timestamp interface to consumer.
> --
>
> Key: KAFKA-4148
> URL: https://issues.apache.org/jira/browse/KAFKA-4148
> Project: Kafka
>  Issue Type: Task
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.1.0
>
>
> This ticket is to implement KIP-79.
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65868090
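For context, the consumer-facing side of KIP-79 is a timestamp-based offset lookup. A 
minimal sketch of using it (the topic, partition, and bootstrap address are placeholders; 
the method shown is the lookup added for 0.10.1):

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
    import org.apache.kafka.common.TopicPartition;

    public class SearchByTimestamp {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // placeholder
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                TopicPartition tp = new TopicPartition("topic1", 0);  // placeholder
                long oneHourAgo = System.currentTimeMillis() - 60 * 60 * 1000L;
                // Uses ListOffsetRequest v1 under the covers: returns the earliest offset
                // whose message timestamp is >= the requested timestamp.
                Map<TopicPartition, OffsetAndTimestamp> result =
                        consumer.offsetsForTimes(Collections.singletonMap(tp, oneHourAgo));
                OffsetAndTimestamp found = result.get(tp);
                if (found != null) {
                    consumer.assign(Collections.singletonList(tp));
                    consumer.seek(tp, found.offset());
                }
            }
        }
    }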



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4148) KIP-79 add ListOffsetRequest v1 and search by timestamp interface to consumer.

2016-09-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505275#comment-15505275
 ] 

ASF GitHub Bot commented on KAFKA-4148:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1852


> KIP-79 add ListOffsetRequest v1 and search by timestamp interface to consumer.
> --
>
> Key: KAFKA-4148
> URL: https://issues.apache.org/jira/browse/KAFKA-4148
> Project: Kafka
>  Issue Type: Task
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>
> This ticket is to implement KIP-79.
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65868090



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1852: KAFKA-4148: KIP-79 ListOffsetRequest v1 and add se...

2016-09-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1852


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-4194) Add more tests for KIP-79

2016-09-19 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-4194:
---

 Summary: Add more tests for KIP-79
 Key: KAFKA-4194
 URL: https://issues.apache.org/jira/browse/KAFKA-4194
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin


This is a follow-up ticket to add more tests for KIP-79, including 
integration tests and client tests if necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1556

2016-09-19 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4135; Consumer poll with no subscription or assignment should

[wangguoz] KAFKA-4153: Fix incorrect KStream-KStream join behavior with 
asymmetric

--
[...truncated 1307 lines...]
kafka.server.ServerStartupTest > testConflictBrokerRegistration STARTED

kafka.server.ServerStartupTest > testConflictBrokerRegistration PASSED

kafka.server.ServerStartupTest > testBrokerSelfAware STARTED

kafka.server.ServerStartupTest > testBrokerSelfAware PASSED

kafka.server.ApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion STARTED

kafka.server.ApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion PASSED

kafka.server.ApiVersionsRequestTest > testApiVersionsRequest STARTED

kafka.server.ApiVersionsRequestTest > testApiVersionsRequest PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForSlowFollowers STARTED

kafka.server.IsrExpirationTest > testIsrExpirationForSlowFollowers PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForStuckFollowers STARTED

kafka.server.IsrExpirationTest > testIsrExpirationForStuckFollowers PASSED

kafka.server.IsrExpirationTest > testIsrExpirationIfNoFetchRequestMade STARTED

kafka.server.IsrExpirationTest > testIsrExpirationIfNoFetchRequestMade PASSED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseToZK STARTED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseToZK PASSED

kafka.server.MetadataRequestTest > testReplicaDownResponse STARTED

kafka.server.MetadataRequestTest > testReplicaDownResponse PASSED

kafka.server.MetadataRequestTest > testRack STARTED

kafka.server.MetadataRequestTest > testRack PASSED

kafka.server.MetadataRequestTest > testIsInternal STARTED

kafka.server.MetadataRequestTest > testIsInternal PASSED

kafka.server.MetadataRequestTest > testControllerId STARTED

kafka.server.MetadataRequestTest > testControllerId PASSED

kafka.server.MetadataRequestTest > testAllTopicsRequest STARTED

kafka.server.MetadataRequestTest > testAllTopicsRequest PASSED

kafka.server.MetadataRequestTest > testClusterIdIsValid STARTED

kafka.server.MetadataRequestTest > testClusterIdIsValid PASSED

kafka.server.MetadataRequestTest > testNoTopicsRequest STARTED

kafka.server.MetadataRequestTest > testNoTopicsRequest PASSED

kafka.server.MetadataRequestTest > testClusterIdWithRequestVersion1 STARTED

kafka.server.MetadataRequestTest > testClusterIdWithRequestVersion1 PASSED

kafka.server.MetadataCacheTest > 
getTopicMetadataWithNonSupportedSecurityProtocol STARTED

kafka.server.MetadataCacheTest > 
getTopicMetadataWithNonSupportedSecurityProtocol PASSED

kafka.server.MetadataCacheTest > getTopicMetadataIsrNotAvailable STARTED

kafka.server.MetadataCacheTest > getTopicMetadataIsrNotAvailable PASSED

kafka.server.MetadataCacheTest > getTopicMetadata STARTED

kafka.server.MetadataCacheTest > getTopicMetadata PASSED

kafka.server.MetadataCacheTest > getTopicMetadataReplicaNotAvailable STARTED

kafka.server.MetadataCacheTest > getTopicMetadataReplicaNotAvailable PASSED

kafka.server.MetadataCacheTest > getTopicMetadataPartitionLeaderNotAvailable 
STARTED

kafka.server.MetadataCacheTest > getTopicMetadataPartitionLeaderNotAvailable 
PASSED

kafka.server.MetadataCacheTest > getAliveBrokersShouldNotBeMutatedByUpdateCache 
STARTED

kafka.server.MetadataCacheTest > getAliveBrokersShouldNotBeMutatedByUpdateCache 
PASSED

kafka.server.MetadataCacheTest > getTopicMetadataNonExistingTopics STARTED

kafka.server.MetadataCacheTest > getTopicMetadataNonExistingTopics PASSED

kafka.server.SaslSslReplicaFetchTest > testReplicaFetcherThread STARTED

kafka.server.SaslSslReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.ProduceRequestTest > testSimpleProduceRequest STARTED

kafka.server.ProduceRequestTest > testSimpleProduceRequest PASSED

kafka.server.ProduceRequestTest > testCorruptLz4ProduceRequest STARTED

kafka.server.ProduceRequestTest > testCorruptLz4ProduceRequest PASSED

kafka.server.FetchRequestTest > testBrokerRespectsPartitionsOrderAndSizeLimits 
STARTED

kafka.server.FetchRequestTest > testBrokerRespectsPartitionsOrderAndSizeLimits 
PASSED

kafka.server.FetchRequestTest > testFetchRequestV2WithOversizedMessage STARTED

kafka.server.FetchRequestTest > testFetchRequestV2WithOversizedMessage PASSED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage STARTED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer 

Build failed in Jenkins: kafka-trunk-jdk8 #896

2016-09-19 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-4079; Documentation for secure quotas

[wangguoz] MINOR: Allow creation of statestore without logging enabled or 
explicit

[jason] KAFKA-4183; Centralize checking for optional and default values in

[jason] MINOR: Add KafkaServerStartable constructor overload for compatibility

[jason] KAFKA-3283; Remove beta from new consumer documentation

--
[...truncated 11626 lines...]
   ^
:311:
 value timestamp in class PartitionData is deprecated: see corresponding 
Javadoc for more information.
  if (partitionData.timestamp == 
OffsetCommitRequest.DEFAULT_TIMESTAMP)
^
:314:
 value timestamp in class PartitionData is deprecated: see corresponding 
Javadoc for more information.
offsetRetention + partitionData.timestamp
^
:291:
 method fromReplica in object FetchRequest is deprecated: see corresponding 
Javadoc for more information.
  else JFetchRequest.fromReplica(replicaId, maxWait, minBytes, requestMap)
 ^
:43:
 class OldProducer in package producer is deprecated: This class has been 
deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.producer.KafkaProducer instead.
new OldProducer(getOldProducerProps(config))
^
:45:
 class NewShinyProducer in package producer is deprecated: This class has been 
deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.producer.KafkaProducer instead.
new NewShinyProducer(getNewProducerProps(config))
^
15 warnings found
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning
:core:processResources UP-TO-DATE
:core:classes
:core:copyDependantLibs
:core:jar
:examples:compileJavawarning: [options] bootstrap class path not set in 
conjunction with -source 1.7
1 warning

:examples:processResources UP-TO-DATE
:examples:classes
:examples:checkstyleMain
:examples:compileTestJava UP-TO-DATE
:examples:processTestResources UP-TO-DATE
:examples:testClasses UP-TO-DATE
:examples:checkstyleTest UP-TO-DATE
:examples:test UP-TO-DATE
:log4j-appender:compileJavawarning: [options] bootstrap class path not set in 
conjunction with -source 1.7
1 warning

:log4j-appender:processResources UP-TO-DATE
:log4j-appender:classes
:log4j-appender:checkstyleMain
:log4j-appender:compileTestJavawarning: [options] bootstrap class path not set 
in conjunction with -source 1.7
1 warning

:log4j-appender:processTestResources UP-TO-DATE
:log4j-appender:testClasses
:log4j-appender:checkstyleTest
:log4j-appender:test

org.apache.kafka.log4jappender.KafkaLog4jAppenderTest > testLog4jAppends STARTED

org.apache.kafka.log4jappender.KafkaLog4jAppenderTest > testLog4jAppends PASSED

org.apache.kafka.log4jappender.KafkaLog4jAppenderTest > testKafkaLog4jConfigs 
STARTED

org.apache.kafka.log4jappender.KafkaLog4jAppenderTest > testKafkaLog4jConfigs 
PASSED
:core:compileTestJava UP-TO-DATE
:core:compileTestScala
:88:
 method createAndShutdownStep in class MetricsTest is deprecated: This test has 
been deprecated and it will be removed in a future release
createAndShutdownStep("group0", "consumer0", "producer0")
^
:158:
 constructor FetchRequest in class FetchRequest is deprecated: see 
corresponding Javadoc for more information.
val fetchRequest = new FetchRequest(Int.MaxValue, 0, 
createPartitionMap(maxPartitionBytes, Seq(topicPartition)))
   ^
two warnings found
:core:processTestResources UP-TO-DATE
:core:testClasses
:connect:api:compileJavawarning: [options] bootstrap class path not set in 
conjunction with -source 1.7
1 warning

:connect:api:processResources UP-TO-DATE
:connect:api:classes
:connect:api:copyDependantLibs
:connect:api:jar
:connect:json:compileJavawarning: [options] bootstrap class path not set in 
conjunction with -source 1.7
1 warning

:connect:json:processResources UP-TO-DATE
:connect:json:classes
:connect:json:copyDependantLibs
:connect:json:jar
:streams:compileJavawarning: [options] bootstrap class path not set in 

[GitHub] kafka pull request #1885: Fix comments in KStreamKStreamJoinTest

2016-09-19 Thread eliaslevy
GitHub user eliaslevy opened a pull request:

https://github.com/apache/kafka/pull/1885

Fix comments in KStreamKStreamJoinTest

Minor comment fixes.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/eliaslevy/kafka fix-test-comments

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1885.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1885


commit 49493aadd4a37aa42b5295aa09eb2cd36082b824
Author: Elias Levy 
Date:   2016-09-20T00:36:14Z

Fix comments in KStreamKStreamJoinTest

Minor comment fixes.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4060) Remove ZkClient dependency in Kafka Streams

2016-09-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505133#comment-15505133
 ] 

ASF GitHub Bot commented on KAFKA-4060:
---

GitHub user hjafarpour opened a pull request:

https://github.com/apache/kafka/pull/1884

KAFKA-4060: Remove zk client dependency in kafka streams

@dguy This is a new PR for KAFKA-4060.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hjafarpour/kafka 
KAFKA-4060-Remove-ZkClient-dependency-in-Kafka-Streams-new

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1884.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1884


commit db0405c6994123417441ac328d2f9e7f96b8e851
Author: Hojjat Jafarpour 
Date:   2016-09-20T00:10:54Z

Removed the Zookeeper dependency from Kafka Streams. Added two tests for 
creating and deleting topics. They work in the IDE but fail during the build. Removing 
the new tests for now.




> Remove ZkClient dependency in Kafka Streams
> ---
>
> Key: KAFKA-4060
> URL: https://issues.apache.org/jira/browse/KAFKA-4060
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Hojjat Jafarpour
>
> In Kafka Streams we need to dynamically create or update those internal 
> topics (i.e. repartition topics) upon rebalance, inside 
> {{InternalTopicManager}} which is triggered by {{StreamPartitionAssignor}}. 
> Currently we are using {{ZkClient}} to talk to ZK directly for such actions.
> With the create and delete topics requests merged in by [~granthenke] as part of 
> KIP-4, we should now be able to remove the ZkClient dependency and use these 
> requests directly.
> Related: 
> 1. KIP-4. 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations
> 2. Consumer Rebalance Protocol. 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1884: KAFKA-4060: Remove zk client dependency in kafka s...

2016-09-19 Thread hjafarpour
GitHub user hjafarpour opened a pull request:

https://github.com/apache/kafka/pull/1884

KAFKA-4060: Remove zk client dependency in kafka streams

@dguy This is a new PR for KAFKA-4060.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hjafarpour/kafka 
KAFKA-4060-Remove-ZkClient-dependency-in-Kafka-Streams-new

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1884.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1884


commit db0405c6994123417441ac328d2f9e7f96b8e851
Author: Hojjat Jafarpour 
Date:   2016-09-20T00:10:54Z

Removed the Zookeeper dependency from Kafka Streams. Added two tests for 
creating and deleting topics. They work in the IDE but fail during the build. Removing 
the new tests for now.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #1799: Kafka 4060 remove zk client dependency in kafka st...

2016-09-19 Thread hjafarpour
Github user hjafarpour closed the pull request at:

https://github.com/apache/kafka/pull/1799


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4153) Incorrect KStream-KStream join behavior with asymmetric time window

2016-09-19 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4153:
-
Assignee: Elias Levy  (was: Guozhang Wang)

> Incorrect KStream-KStream join behavior with asymmetric time window
> ---
>
> Key: KAFKA-4153
> URL: https://issues.apache.org/jira/browse/KAFKA-4153
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Elias Levy
> Fix For: 0.10.1.0
>
>
> Using Kafka 0.10.0.1, if joining records in two streams separated by some 
> time, but only when records from one stream are newer than records from the 
> other, i.e. doing:
> {{stream1.join(stream2, valueJoiner, JoinWindows.of("X").after(1))}}
> One would expect that the following would be equivalent:
> {{stream2.join(stream1, valueJoiner, JoinWindows.of("X").before(1))}}
> Alas, this is not the case. Instead, this generates the same output as 
> the first example:
> {{stream2.join(stream1, valueJoiner, JoinWindows.of("X").after(1))}}
> The problem is that the 
> [{{DefaultJoin}}|https://github.com/apache/kafka/blob/caa9bd0fcd2fab4758791408e2b145532153910e/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java#L692-L697]
>  implementation in {{KStreamImpl}} fails to reverse the {{before}} and 
> {{after}} values when it creates the {{KStreamKStreamJoin}} for the other 
> stream, even though it calls {{reverseJoiner}} to reverse the joiner.
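To make the expected symmetry concrete, here is a small standalone sketch (plain Java, 
not the Streams implementation; the predicate is only an illustration of what before/after 
are meant to express):

    public class AsymmetricJoinWindowSketch {
        // A record from the other stream is joinable if its timestamp lies within
        // [thisTs - beforeMs, thisTs + afterMs].
        static boolean joinable(long thisTs, long otherTs, long beforeMs, long afterMs) {
            return otherTs >= thisTs - beforeMs && otherTs <= thisTs + afterMs;
        }

        public static void main(String[] args) {
            long t1 = 100, t2 = 105;  // the stream2 record is 5 ms newer than the stream1 record
            // stream1.join(stream2, after(10)): the stream2 record may be up to 10 ms newer
            boolean viewFromStream1 = joinable(t1, t2, 0, 10);
            // stream2.join(stream1, before(10)): the stream1 record may be up to 10 ms older
            boolean viewFromStream2 = joinable(t2, t1, 10, 0);
            // The two views must agree for the two join forms to be equivalent.
            System.out.println(viewFromStream1 + " " + viewFromStream2);  // true true
        }
    }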



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4153) Incorrect KStream-KStream join behavior with asymmetric time window

2016-09-19 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505117#comment-15505117
 ] 

Guozhang Wang commented on KAFKA-4153:
--

Thanks [~elevy], I have added you to the contributor list so you can assign 
tickets to yourself moving forward.

> Incorrect KStream-KStream join behavior with asymmetric time window
> ---
>
> Key: KAFKA-4153
> URL: https://issues.apache.org/jira/browse/KAFKA-4153
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Elias Levy
> Fix For: 0.10.1.0
>
>
> Using Kafka 0.10.0.1, if joining records in two streams separated by some 
> time, but only when records from one stream are newer than records from the 
> other, i.e. doing:
> {{stream1.join(stream2, valueJoiner, JoinWindows.of("X").after(1))}}
> One would expect that the following would be equivalent:
> {{stream2.join(stream1, valueJoiner, JoinWindows.of("X").before(1))}}
> Alas, this is not the case. Instead, this generates the same output as 
> the first example:
> {{stream2.join(stream1, valueJoiner, JoinWindows.of("X").after(1))}}
> The problem is that the 
> [{{DefaultJoin}}|https://github.com/apache/kafka/blob/caa9bd0fcd2fab4758791408e2b145532153910e/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java#L692-L697]
>  implementation in {{KStreamImpl}} fails to reverse the {{before}} and 
> {{after}} values when it creates the {{KStreamKStreamJoin}} for the other 
> stream, even though it calls {{reverseJoiner}} to reverse the joiner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4153) Incorrect KStream-KStream join behavior with asymmetric time window

2016-09-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505113#comment-15505113
 ] 

ASF GitHub Bot commented on KAFKA-4153:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1846


> Incorrect KStream-KStream join behavior with asymmetric time window
> ---
>
> Key: KAFKA-4153
> URL: https://issues.apache.org/jira/browse/KAFKA-4153
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Guozhang Wang
> Fix For: 0.10.1.0
>
>
> Using Kafka 0.10.0.1, if joining records in two streams separated by some 
> time, but only when records from one stream are newer than records from the 
> other, i.e. doing:
> {{stream1.join(stream2, valueJoiner, JoinWindows.of("X").after(1))}}
> One would expect that the following would be equivalent:
> {{stream2.join(stream1, valueJoiner, JoinWindows.of("X").before(1))}}
> Alas, this is not the case. Instead, this generates the same output as 
> the first example:
> {{stream2.join(stream1, valueJoiner, JoinWindows.of("X").after(1))}}
> The problem is that the 
> [{{DefaultJoin}}|https://github.com/apache/kafka/blob/caa9bd0fcd2fab4758791408e2b145532153910e/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java#L692-L697]
>  implementation in {{KStreamImpl}} fails to reverse the {{before}} and 
> {{after}} values when it creates the {{KStreamKStreamJoin}} for the other 
> stream, even though it calls {{reverseJoiner}} to reverse the joiner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4153) Incorrect KStream-KStream join behavior with asymmetric time window

2016-09-19 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4153.
--
   Resolution: Fixed
Fix Version/s: 0.10.1.0

Issue resolved by pull request 1846
[https://github.com/apache/kafka/pull/1846]

> Incorrect KStream-KStream join behavior with asymmetric time window
> ---
>
> Key: KAFKA-4153
> URL: https://issues.apache.org/jira/browse/KAFKA-4153
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Guozhang Wang
> Fix For: 0.10.1.0
>
>
> Using Kafka 0.10.0.1, if joining records in two streams separated by some 
> time, but only when records from one stream are newer than records from the 
> other, i.e. doing:
> {{stream1.join(stream2, valueJoiner, JoinWindows.of("X").after(1))}}
> One would expect that the following would be equivalent:
> {{stream2.join(stream1, valueJoiner, JoinWindows.of("X").before(1))}}
> Alas, this is not the case. Instead, this generates the same output as 
> the first example:
> {{stream2.join(stream1, valueJoiner, JoinWindows.of("X").after(1))}}
> The problem is that the 
> [{{DefaultJoin}}|https://github.com/apache/kafka/blob/caa9bd0fcd2fab4758791408e2b145532153910e/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java#L692-L697]
>  implementation in {{KStreamImpl}} fails to reverse the {{before}} and 
> {{after}} values when it creates the {{KStreamKStreamJoin}} for the other 
> stream, even though it calls {{reverseJoiner}} to reverse the joiner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1846: KAFKA-4153: Fix incorrect KStream-KStream join beh...

2016-09-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1846


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #1555

2016-09-19 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-4079; Documentation for secure quotas

[wangguoz] MINOR: Allow creation of statestore without logging enabled or 
explicit

[jason] KAFKA-4183; Centralize checking for optional and default values in

[jason] MINOR: Add KafkaServerStartable constructor overload for compatibility

[jason] KAFKA-3283; Remove beta from new consumer documentation

--
[...truncated 3562 lines...]

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testAuthorization STARTED

kafka.api.AuthorizerIntegrationTest > testAuthorization PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithoutDescribe 
STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithoutDescribe 
PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithExplicitSeekAndNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithExplicitSeekAndNoGroupAccess PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization STARTED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization PASSED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion STARTED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion PASSED

kafka.api.RequestResponseSerializationTest > testProduceResponseVersion STARTED

kafka.api.RequestResponseSerializationTest > testProduceResponseVersion PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED


[jira] [Updated] (KAFKA-4135) Inconsistent javadoc for KafkaConsumer.poll behavior when there is no subscription

2016-09-19 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-4135:
---
   Resolution: Fixed
Fix Version/s: 0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1839
[https://github.com/apache/kafka/pull/1839]

> Inconsistent javadoc for KafkaConsumer.poll behavior when there is no 
> subscription
> --
>
> Key: KAFKA-4135
> URL: https://issues.apache.org/jira/browse/KAFKA-4135
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>Priority: Minor
> Fix For: 0.10.1.0
>
>
> Currently, the javadoc for {{KafkaConsumer.poll}} says the following: 
> "It is an error to not have subscribed to any topics or partitions before 
> polling for data." However, we don't actually raise an exception if this is 
> the case. Perhaps we should?
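For reference, a minimal sketch of the situation described (the bootstrap address is a 
placeholder). Before the fix, the poll below quietly returned no records; the resolving 
change above is meant to make this case fail fast instead:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PollWithoutSubscription {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // placeholder
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Note: no subscribe() or assign() call, which the javadoc calls an error.
                ConsumerRecords<String, String> records = consumer.poll(100);
                System.out.println("records returned: " + records.count());
            }
        }
    }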



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1839: KAFKA-4135: Consumer polls when not subscribed to ...

2016-09-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1839


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4135) Inconsistent javadoc for KafkaConsumer.poll behavior when there is no subscription

2016-09-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505051#comment-15505051
 ] 

ASF GitHub Bot commented on KAFKA-4135:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1839


> Inconsistent javadoc for KafkaConsumer.poll behavior when there is no 
> subscription
> --
>
> Key: KAFKA-4135
> URL: https://issues.apache.org/jira/browse/KAFKA-4135
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>Priority: Minor
>
> Currently, the javadoc for {{KafkaConsumer.poll}} says the following: 
> "It is an error to not have subscribed to any topics or partitions before 
> polling for data." However, we don't actually raise an exception if this is 
> the case. Perhaps we should?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-0.10.0-jdk7 #202

2016-09-19 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk7 #1554

2016-09-19 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-4163: NPE in StreamsMetadataState during re-balance operations

--
[...truncated 1979 lines...]
kafka.api.SaslSslEndToEndAuthorizationTest > testNoConsumeAcl PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testProduceConsumeViaAssign STARTED

kafka.api.SaslSslEndToEndAuthorizationTest > testProduceConsumeViaAssign PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testNoProduceAcl STARTED

kafka.api.SaslSslEndToEndAuthorizationTest > testNoProduceAcl PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SaslSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe 
STARTED

kafka.api.SaslSslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe 
PASSED

kafka.api.PlaintextProducerSendTest > testSerializerConstructors STARTED

kafka.api.PlaintextProducerSendTest > testSerializerConstructors PASSED

kafka.api.PlaintextProducerSendTest > 
testSendCompressedMessageWithLogAppendTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendCompressedMessageWithLogAppendTime PASSED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic STARTED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic PASSED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime STARTED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testWrongSerializer STARTED

kafka.api.PlaintextProducerSendTest > testWrongSerializer PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithLogAppendTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithLogAppendTime PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithCreateTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testClose STARTED

kafka.api.PlaintextProducerSendTest > testClose PASSED

kafka.api.PlaintextProducerSendTest > testFlush STARTED

kafka.api.PlaintextProducerSendTest > testFlush PASSED

kafka.api.PlaintextProducerSendTest > testSendToPartition STARTED

kafka.api.PlaintextProducerSendTest > testSendToPartition PASSED

kafka.api.PlaintextProducerSendTest > testSendOffset STARTED

kafka.api.PlaintextProducerSendTest > testSendOffset PASSED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
STARTED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
STARTED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 
STARTED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 
PASSED

kafka.api.SslConsumerTest > testCoordinatorFailover STARTED

kafka.api.SslConsumerTest > testCoordinatorFailover PASSED

kafka.api.SslConsumerTest > testSimpleConsumption STARTED

kafka.api.SslConsumerTest > testSimpleConsumption PASSED

kafka.api.UserQuotaTest > testProducerConsumerOverrideUnthrottled STARTED

kafka.api.UserQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.UserQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.UserQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.UserQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.UserQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testDeleteWithWildCardAuth STARTED

kafka.api.AuthorizerIntegrationTest > testDeleteWithWildCardAuth PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead STARTED


[jira] [Updated] (KAFKA-3283) Remove beta from new consumer documentation

2016-09-19 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-3283:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1880
[https://github.com/apache/kafka/pull/1880]

> Remove beta from new consumer documentation
> ---
>
> Key: KAFKA-3283
> URL: https://issues.apache.org/jira/browse/KAFKA-3283
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.1.0
>
>
> Ideally, we would:
> * Remove the beta label
> * Fill any critical gaps in functionality
> * Update the documentation on the old consumers to recommend the new consumer 
> (without deprecating the old consumer, however)
> Current target is 0.10.1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1880: KAFKA-3283: Remove beta from new consumer document...

2016-09-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1880


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3283) Remove beta from new consumer documentation

2016-09-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504744#comment-15504744
 ] 

ASF GitHub Bot commented on KAFKA-3283:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1880


> Remove beta from new consumer documentation
> ---
>
> Key: KAFKA-3283
> URL: https://issues.apache.org/jira/browse/KAFKA-3283
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.1.0
>
>
> Ideally, we would:
> * Remove the beta label
> * Fill any critical gaps in functionality
> * Update the documentation on the old consumers to recommend the new consumer 
> (without deprecating the old consumer, however)
> Current target is 0.10.1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4187) Adding a flag to prefix topics with mirror maker

2016-09-19 Thread James Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504734#comment-15504734
 ] 

James Cheng commented on KAFKA-4187:


We do the exact same thing. 

MirrorMaker actually has some built-in support for this. You can provide it a 
MessageHandler jar, which can be used to modify the message after it is 
retrieved from the source cluster but before it is produced into the target 
cluster.

Here is an example of how to write one and how to use it: 
https://github.com/gwenshap/kafka-examples/tree/master/MirrorMakerHandler
(a minimal sketch of the prefixing logic also follows this message).


> Adding a flag to prefix topics with mirror maker
> 
>
> Key: KAFKA-4187
> URL: https://issues.apache.org/jira/browse/KAFKA-4187
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.8.2.1, 0.9.0.1, 0.10.0.0, 0.10.0.1
>Reporter: Vincent Rischmann
>Priority: Minor
>
> So I have a setup where I need to mirror our production cluster to our 
> preproduction cluster, but can't use the original topic names.
> I've patched mirror maker to allow me to define a prefix for each topic and I 
> basically prefix everything with mirror_. I'm wondering if there's interest 
> for this feature upstream ?
> I have a patch available for Kafka 0.9.0.1 (what I'm using) and from what 
> I've seen it should apply well to Kafka 0.10.0.X too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
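
For illustration, here is a minimal sketch of the prefixing logic James describes
above. It deliberately avoids naming the exact MirrorMakerMessageHandler interface
(its package and signature differ slightly across versions; check MirrorMaker.scala
and the linked example for your release), and the "mirror_" prefix is only the value
Vincent mentioned, not a requirement:

{code:java}
import java.util.Collections;
import java.util.List;

import org.apache.kafka.clients.producer.ProducerRecord;

/**
 * Core of a topic-prefixing MirrorMaker message handler: re-wrap each consumed
 * record under a prefixed topic name, leaving key and value untouched. Wire this
 * into the MirrorMakerMessageHandler implementation for the Kafka version you run.
 */
public class TopicPrefixer {

    private final String prefix;

    public TopicPrefixer(String prefix) {
        this.prefix = prefix;                 // e.g. "mirror_"
    }

    public List<ProducerRecord<byte[], byte[]>> rewrite(String sourceTopic, byte[] key, byte[] value) {
        return Collections.singletonList(
                new ProducerRecord<>(prefix + sourceTopic, key, value));
    }
}
{code}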


Build failed in Jenkins: kafka-trunk-jdk8 #895

2016-09-19 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] HOTFIX: logic in 
QuerybaleStateIntegrationTest.shouldBeAbleToQueryState

[wangguoz] KAFKA-4175: Can't have StandbyTasks in KafkaStreams where

[wangguoz] KAFKA-4163: NPE in StreamsMetadataState during re-balance operations

--
[...truncated 13351 lines...]

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekNext STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekNext PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekAndIterate STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekAndIterate PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndPeekNextCalled STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndPeekNextCalled PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndNextCalled STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndNextCalled PASSED

org.apache.kafka.streams.state.internals.MergedSortedCacheWindowStoreIteratorTest
 > shouldIterateOverValueFromBothIterators STARTED

org.apache.kafka.streams.state.internals.MergedSortedCacheWindowStoreIteratorTest
 > shouldIterateOverValueFromBothIterators PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfNoStoresFoundWithName STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfNoStoresFoundWithName PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfKVStoreClosed STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfKVStoreClosed PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfNotAllStoresAvailable STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfNotAllStoresAvailable PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindWindowStores STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindWindowStores PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfStoreExistsButIsNotOfTypeValueStore STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfStoreExistsButIsNotOfTypeValueStore PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindKeyValueStores STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindKeyValueStores PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfWindowStoreClosed STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfWindowStoreClosed PASSED

org.apache.kafka.streams.state.StoresTest > 
shouldCreateInMemoryStoreSupplierWithLoggedConfig STARTED

org.apache.kafka.streams.state.StoresTest > 
shouldCreateInMemoryStoreSupplierWithLoggedConfig PASSED

org.apache.kafka.streams.state.StoresTest > 
shouldCreatePersistenStoreSupplierNotLogged STARTED

org.apache.kafka.streams.state.StoresTest > 
shouldCreatePersistenStoreSupplierNotLogged PASSED

org.apache.kafka.streams.state.StoresTest > 
shouldCreatePersistenStoreSupplierWithLoggedConfig STARTED

org.apache.kafka.streams.state.StoresTest > 
shouldCreatePersistenStoreSupplierWithLoggedConfig PASSED

org.apache.kafka.streams.state.StoresTest > 
shouldCreateInMemoryStoreSupplierNotLogged STARTED

org.apache.kafka.streams.state.StoresTest > 
shouldCreateInMemoryStoreSupplierNotLogged PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduce STARTED


Re: [VOTE] KIP-79 - ListOffsetRequest v1 and search by timestamp methods in new consumer.

2016-09-19 Thread Becket Qin
Thanks everyone for the comments and votes.

KIP-79 has passed with +4 binding (Jason, Neha, Jun, Ismael) and +1
non-binding (Bill).

On Mon, Sep 19, 2016 at 1:54 PM, Ismael Juma  wrote:

> Thanks for the KIP, +1 (binding) to the latest version.
>
> Ismael
>
> On Sat, Sep 10, 2016 at 12:38 AM, Becket Qin  wrote:
>
> > Hi all,
> >
> > I'd like to start the voting for KIP-79
> >
> > In short we propose to :
> > 1. add a ListOffsetRequest/ListOffsetResponse v1, and
> > 2. add earliestOffsts(), latestOffsets() and offsetForTime() methods in
> the
> > new consumer.
> >
> > The KIP wiki is the following:
> > https://cwiki.apache.org/confluence/pages/viewpage.
> action?pageId=65868090
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
>
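
For readers following the thread, a rough sketch of what client code could look
like once this lands in 0.10.1. The method names follow the renaming agreed in the
discussion (beginningOffsets/endOffsets, offsetsForTimes returning an
OffsetAndTimestamp); the broker address, group id and topic are placeholders, and
the exact signatures are subject to the final merge:

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class SearchByTimestampSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");    // placeholder broker
        props.put("group.id", "ts-search-demo");              // placeholder group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);   // placeholder topic
            consumer.assign(Collections.singletonList(tp));

            // Seek to the first offset whose timestamp is >= one hour ago.
            long oneHourAgo = System.currentTimeMillis() - 60 * 60 * 1000L;
            Map<TopicPartition, OffsetAndTimestamp> found =
                    consumer.offsetsForTimes(Collections.singletonMap(tp, oneHourAgo));
            OffsetAndTimestamp ot = found.get(tp);
            if (ot != null)
                consumer.seek(tp, ot.offset());

            // The renamed beginning/end lookups return plain offsets per partition.
            Map<TopicPartition, Long> begin = consumer.beginningOffsets(Collections.singletonList(tp));
            Map<TopicPartition, Long> end = consumer.endOffsets(Collections.singletonList(tp));
            System.out.println("offset range: " + begin.get(tp) + " .. " + end.get(tp));
        }
    }
}
{code}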


Re: [VOTE] KIP-79 - ListOffsetRequest v1 and search by timestamp methods in new consumer.

2016-09-19 Thread Ismael Juma
Thanks for the KIP, +1 (binding) to the latest version.

Ismael

On Sat, Sep 10, 2016 at 12:38 AM, Becket Qin  wrote:

> Hi all,
>
> I'd like to start the voting for KIP-79
>
> In short we propose to :
> 1. add a ListOffsetRequest/ListOffsetResponse v1, and
> 2. add earliestOffsts(), latestOffsets() and offsetForTime() methods in the
> new consumer.
>
> The KIP wiki is the following:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65868090
>
> Thanks,
>
> Jiangjie (Becket) Qin
>


Re: [VOTE] KIP-79 - ListOffsetRequest v1 and search by timestamp methods in new consumer.

2016-09-19 Thread Jun Rao
Hi, Jiangjie,

For 1, yes, since this api is about timestamps, returning any metadata
beyond the timestamp may not make sense anyway. So, we can leave the
api as it is.

Thanks,

Jun

On Mon, Sep 19, 2016 at 12:57 PM, Becket Qin  wrote:

> Thanks for the comment Jun. Please see the reply in line
>
> On Mon, Sep 19, 2016 at 10:21 AM, Jason Gustafson 
> wrote:
>
> > +1 on Jun's suggestion to use "beginning" and "end". The term "latest" is
> > misleading since the last message in the log may not have the largest
> > timestamp.
> >
> > On Mon, Sep 19, 2016 at 9:49 AM, Jun Rao  wrote:
> >
> > > Hi, Jiangjie,
> > >
> > > Thanks for the proposal. Looks good to me overall. Just a couple of
> minor
> > > comments.
> > >
> > > 1. I thought at some point you considered to only return offset in
> > > offsetsForTimes instead of offset and timestamp. One benefit of doing
> > that
> > > is that it will make the return type consistent among all three new
> apis.
> > > It also feels a bit weird that we only return timestamp, but not other
> > > metadata associated with a message.
> >
> I changed the interface for earliestOffsets() and latestOffsets() to return
> only offset instead of OffsetAndTimestamp because the timestamp may not
> always be meaningful.
> For offsetsForTimes(), the primary reason to return the timestamp with
> offset is because the found timestamp may not always be the same as target
> timestamp. Returning timestamp with offset may be useful if users do not
> always want to read the message afterwards. For example, user may not want
> to consume a message if its timestamp is too far away from the target
> timestamp. Also, users may only want to perform some query based on timestamp
> without consuming all the messages, e.g. the total number of messages in a time
> range. I did not think much about other metadata of the message because it
> seems the interface only cares about the timestamp and the offsets.
>
> >
> > > 2. To be consistent with existing seek apis, would it be better to
> rename
> > > earliestOffsets() and latestOffsets() to beginningOffsets() and
> > > endOffsets()?
> >
> Good point, will make the change. This makes the name align with
> seekToBeginning() and seekToEnd().
>
> >
> > > Jun
> > >
> > > On Fri, Sep 9, 2016 at 4:38 PM, Becket Qin 
> wrote:
> > >
> > > > Hi all,
> > > >
> > > > I'd like to start the voting for KIP-79
> > > >
> > > > In short we propose to :
> > > > 1. add a ListOffsetRequest/ListOffsetResponse v1, and
> > > > 2. add earliestOffsts(), latestOffsets() and offsetForTime() methods
> in
> > > the
> > > > new consumer.
> > > >
> > > > The KIP wiki is the following:
> > > > https://cwiki.apache.org/confluence/pages/viewpage.
> > > action?pageId=65868090
> > > >
> > > > Thanks,
> > > >
> > > > Jiangjie (Becket) Qin
> > > >
> > >
> >
>


Re: [VOTE] KIP-79 - ListOffsetRequest v1 and search by timestamp methods in new consumer.

2016-09-19 Thread Ismael Juma
Hi all,

A few comments below.

On Mon, Sep 19, 2016 at 5:49 PM, Jun Rao  wrote:
>
> 1. I thought at some point you considered to only return offset in
> offsetsForTimes instead of offset and timestamp. One benefit of doing that
> is that it will make the return type consistent among all three new apis.
> It also feels a bit weird that we only return timestamp, but not other
> metadata associated with a message.
>

To me, this API is different enough from the other ones that it seems OK
for the return type to be different. And since we are querying by
timestamp, it makes sense to get the actual timestamp that matched in the
result (as Becket said elsewhere).

2. To be consistent with existing seek apis, would it be better to rename
> earliestOffsets() and latestOffsets() to beginningOffsets() and
> endOffsets()?
>

I was actually wondering why we were using `earliest`/`latest` instead of
`beginning`/`end`, so I like this suggestion.

> To ensure accordance with KIP-45, we have changed the
> "earliestOffset()" and "latestOffset()" methods to take
> Collection instead of Set.


I agree with this change too (predictably since I suggested it :)).

Ismael


[GitHub] kafka pull request #1878: MINOR: Add `KafkaServerStartable` constructor over...

2016-09-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1878


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3752) Provide a way for KStreams to recover from unclean shutdown

2016-09-19 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504535#comment-15504535
 ] 

Guozhang Wang commented on KAFKA-3752:
--

[~theduderog] Could you try to validate if this issue is already fixed? There 
are a couple of tickets related to this issue that is just merged recently.

> Provide a way for KStreams to recover from unclean shutdown
> ---
>
> Key: KAFKA-3752
> URL: https://issues.apache.org/jira/browse/KAFKA-3752
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Roger Hoover
>  Labels: architecture
>
> If a KStream application gets killed with SIGKILL (e.g. by the Linux OOM 
> Killer), it may leave behind lock files and fail to recover.
> It would be useful to have an options (say --force) to tell KStreams to 
> proceed even if it finds old LOCK files.
> {noformat}
> [2016-05-24 17:37:52,886] ERROR Failed to create an active task #0_0 in 
> thread [StreamThread-1]:  
> (org.apache.kafka.streams.processor.internals.StreamThread:583)
> org.apache.kafka.streams.errors.ProcessorStateException: Error while creating 
> the state manager
>   at 
> org.apache.kafka.streams.processor.internals.AbstractTask.(AbstractTask.java:71)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.(StreamTask.java:86)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:550)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:577)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$000(StreamThread.java:68)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:123)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:222)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:232)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:227)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:436)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:422)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:243)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:345)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:977)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)
>   at 
> 

Re: [VOTE] KIP-79 - ListOffsetRequest v1 and search by timestamp methods in new consumer.

2016-09-19 Thread Becket Qin
Thanks for the comment Jun. Please see the reply in line

On Mon, Sep 19, 2016 at 10:21 AM, Jason Gustafson 
wrote:

> +1 on Jun's suggestion to use "beginning" and "end". The term "latest" is
> misleading since the last message in the log may not have the largest
> timestamp.
>
> On Mon, Sep 19, 2016 at 9:49 AM, Jun Rao  wrote:
>
> > Hi, Jiangjie,
> >
> > Thanks for the proposal. Looks good to me overall. Just a couple of minor
> > comments.
> >
> > 1. I thought at some point you considered to only return offset in
> > offsetsForTimes instead of offset and timestamp. One benefit of doing
> that
> > is that it will make the return type consistent among all three new apis.
> > It also feels a bit weird that we only return timestamp, but not other
> > metadata associated with a message.
>
I changed the interface for earliestOffsets() and latestOffsets() to return
only offset instead of OffsetAndTimestamp because the timestamp may not
always be meaningful.
For offsetsForTimes(), the primary reason to return the timestamp with
offset is because the found timestamp may not always be the same as target
timestamp. Returning timestamp with offset may be useful if users do not
always want to read the message afterwards. For example, user may not want
to consume a message if its timestamp is too far away from the target
timestamp. Also, users may only want to perform some query based on timestamp
without consuming all the messages, e.g. the total number of messages in a time
range. I did not think much about other metadata of the message because it
seems the interface only cares about the timestamp and the offsets.

>
> > 2. To be consistent with existing seek apis, would it be better to rename
> > earliestOffsets() and latestOffsets() to beginningOffsets() and
> > endOffsets()?
>
Good point, will make the change. This makes the name align with
seekToBeginning() and seekToEnd().

>
> > Jun
> >
> > On Fri, Sep 9, 2016 at 4:38 PM, Becket Qin  wrote:
> >
> > > Hi all,
> > >
> > > I'd like to start the voting for KIP-79
> > >
> > > In short we propose to :
> > > 1. add a ListOffsetRequest/ListOffsetResponse v1, and
> > > 2. add earliestOffsts(), latestOffsets() and offsetForTime() methods in
> > the
> > > new consumer.
> > >
> > > The KIP wiki is the following:
> > > https://cwiki.apache.org/confluence/pages/viewpage.
> > action?pageId=65868090
> > >
> > > Thanks,
> > >
> > > Jiangjie (Becket) Qin
> > >
> >
>


[jira] [Commented] (KAFKA-4081) Consumer API consumer new interface commitSyn does not verify the validity of offset

2016-09-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504492#comment-15504492
 ] 

ASF GitHub Bot commented on KAFKA-4081:
---

Github user mimaison closed the pull request at:

https://github.com/apache/kafka/pull/1827


> Consumer API consumer new interface commitSyn does not verify the validity of 
> offset
> 
>
> Key: KAFKA-4081
> URL: https://issues.apache.org/jira/browse/KAFKA-4081
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: lifeng
>Assignee: Mickael Maison
>
> The commitSync method in the new Consumer API does not validate the offset being 
> committed: the commit succeeds even for illegal offsets (offset < 0 or offset > hw).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
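
As a stop-gap, the validation can be done on the client side before calling
commitSync. A minimal sketch (the bound check via endOffsets() relies on the
KIP-79 API targeted at 0.10.1; on older clients the high watermark has to be
obtained some other way):

{code:java}
import java.util.Collections;
import java.util.Map;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public final class SafeCommit {

    /** Commit only offsets in [0, logEndOffset]; otherwise fail fast instead of silently succeeding. */
    public static void commitIfValid(KafkaConsumer<?, ?> consumer, TopicPartition tp, long offset) {
        long logEnd = consumer.endOffsets(Collections.singletonList(tp)).get(tp);  // KIP-79 lookup
        if (offset < 0 || offset > logEnd) {
            throw new IllegalArgumentException("Refusing to commit out-of-range offset " + offset
                    + " for " + tp + " (log end " + logEnd + ")");
        }
        Map<TopicPartition, OffsetAndMetadata> toCommit =
                Collections.singletonMap(tp, new OffsetAndMetadata(offset));
        consumer.commitSync(toCommit);
    }
}
{code}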


[GitHub] kafka pull request #1827: KAFKA-4081: Consumer API consumer new interface co...

2016-09-19 Thread mimaison
Github user mimaison closed the pull request at:

https://github.com/apache/kafka/pull/1827


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4183) Logical converters in JsonConverter don't properly handle null values

2016-09-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504479#comment-15504479
 ] 

ASF GitHub Bot commented on KAFKA-4183:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1872


> Logical converters in JsonConverter don't properly handle null values
> -
>
> Key: KAFKA-4183
> URL: https://issues.apache.org/jira/browse/KAFKA-4183
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: Randall Hauch
>Assignee: Shikhar Bhushan
> Fix For: 0.10.1.0
>
>
> The {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} map contains 
> {{LogicalTypeConverter}} implementations to convert from the raw value into 
> the corresponding logical type value, and they are used during 
> deserialization of message keys and/or values. However, these implementations 
> do not handle the case when the input raw value is null, which can happen 
> when a key or value has a schema that is or contains a field that is 
> _optional_.
> Consider a Kafka Connect schema of type STRUCT that contains a field "date" 
> with an optional schema of type {{org.apache.kafka.connect.data.Date}}. When 
> the key or value with this schema contains a null "date" field and is 
> serialized, the logical serializer properly will serialize the null value. 
> However, upon _deserialization_, the 
> {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} are used to convert the 
> literal value (which is null) to a logical value. All of the 
> {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} implementations will throw a 
> NullPointerException when the input value is null. 
> For example:
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.kafka.connect.json.JsonConverter$14.convert(JsonConverter.java:224)
>   at 
> org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:731)
>   at 
> org.apache.kafka.connect.json.JsonConverter.access$100(JsonConverter.java:53)
>   at 
> org.apache.kafka.connect.json.JsonConverter$12.convert(JsonConverter.java:200)
>   at 
> org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:727)
>   at 
> org.apache.kafka.connect.json.JsonConverter.jsonToConnect(JsonConverter.java:354)
>   at 
> org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:343)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
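
To make the failure mode concrete, the missing piece is essentially a null check
before the logical conversion runs. A sketch of the pattern (the LogicalConverter
interface below is a hypothetical stand-in for JsonConverter's internal converters,
not the actual class):

{code:java}
import org.apache.kafka.connect.data.Schema;

/**
 * Illustration of the needed null handling: an optional logical field (Date, Time,
 * Timestamp, Decimal) may legitimately deserialize to null, and the conversion must
 * pass that null through instead of throwing a NullPointerException.
 */
public final class NullSafeLogicalConversion {

    /** Hypothetical stand-in for the internal logical-type converter interface. */
    interface LogicalConverter {
        Object convert(Schema schema, Object rawValue);
    }

    static Object convertOrNull(LogicalConverter converter, Schema schema, Object rawValue) {
        if (rawValue == null) {
            return null;                 // optional field with no value stays null
        }
        return converter.convert(schema, rawValue);
    }
}
{code}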


[jira] [Resolved] (KAFKA-4183) Logical converters in JsonConverter don't properly handle null values

2016-09-19 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-4183.

Resolution: Fixed

Issue resolved by pull request 1872
[https://github.com/apache/kafka/pull/1872]

> Logical converters in JsonConverter don't properly handle null values
> -
>
> Key: KAFKA-4183
> URL: https://issues.apache.org/jira/browse/KAFKA-4183
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: Randall Hauch
>Assignee: Shikhar Bhushan
> Fix For: 0.10.1.0
>
>
> The {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} map contains 
> {{LogicalTypeConverter}} implementations to convert from the raw value into 
> the corresponding logical type value, and they are used during 
> deserialization of message keys and/or values. However, these implementations 
> do not handle the case when the input raw value is null, which can happen 
> when a key or value has a schema that is or contains a field that is 
> _optional_.
> Consider a Kafka Connect schema of type STRUCT that contains a field "date" 
> with an optional schema of type {{org.apache.kafka.connect.data.Date}}. When 
> the key or value with this schema contains a null "date" field and is 
> serialized, the logical serializer properly will serialize the null value. 
> However, upon _deserialization_, the 
> {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} are used to convert the 
> literal value (which is null) to a logical value. All of the 
> {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} implementations will throw a 
> NullPointerException when the input value is null. 
> For example:
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.kafka.connect.json.JsonConverter$14.convert(JsonConverter.java:224)
>   at 
> org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:731)
>   at 
> org.apache.kafka.connect.json.JsonConverter.access$100(JsonConverter.java:53)
>   at 
> org.apache.kafka.connect.json.JsonConverter$12.convert(JsonConverter.java:200)
>   at 
> org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:727)
>   at 
> org.apache.kafka.connect.json.JsonConverter.jsonToConnect(JsonConverter.java:354)
>   at 
> org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:343)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1872: KAFKA-4183: centralize checking for optional and d...

2016-09-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1872


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #1883: Use pre-calculated value in trace message

2016-09-19 Thread lukezaparaniuk
GitHub user lukezaparaniuk opened a pull request:

https://github.com/apache/kafka/pull/1883

Use pre-calculated value in trace message

Replaced the math.min(size, sizeInBytes) call with the count variable that 
has already been calculated within the writeTo method. Replaced string 
concatenation with interpolation.

Replaces borked PR #1877 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lukezaparaniuk/kafka patch

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1883.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1883


commit 8fb46248b3bd945c34045503e6deb0ec43a55bef
Author: Luke Zaparaniuk 
Date:   2016-09-19T19:39:04Z

Use pre-calculated value in trace message

Replaced the math.min(size, sizeInBytes) call with the count variable that 
has already been calculated within the writeTo method. Replaced string 
concatenation with interpolation




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #1828: MINOR: allow creation of statestore without loggin...

2016-09-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1828


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #1877: Use pre-calculated value in trace message

2016-09-19 Thread lukezaparaniuk
Github user lukezaparaniuk closed the pull request at:

https://github.com/apache/kafka/pull/1877


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4079) Document quota configuration changes from KIP-55

2016-09-19 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-4079:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1847
[https://github.com/apache/kafka/pull/1847]

> Document quota configuration changes from KIP-55
> 
>
> Key: KAFKA-4079
> URL: https://issues.apache.org/jira/browse/KAFKA-4079
> Project: Kafka
>  Issue Type: Task
>  Components: config
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.1.0
>
>
> Document the configuration changes introduced for KIP-55 in KAFKA-3492



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4079) Document quota configuration changes from KIP-55

2016-09-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504378#comment-15504378
 ] 

ASF GitHub Bot commented on KAFKA-4079:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1847


> Document quota configuration changes from KIP-55
> 
>
> Key: KAFKA-4079
> URL: https://issues.apache.org/jira/browse/KAFKA-4079
> Project: Kafka
>  Issue Type: Task
>  Components: config
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.1.0
>
>
> Document the configuration changes introduced for KIP-55 in KAFKA-3492



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1847: KAFKA-4079: Documentation for secure quotas

2016-09-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1847


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] KIP-79 - ListOffsetRequest v1 and search by timestamp methods in new consumer.

2016-09-19 Thread Jun Rao
+1 assuming the minor comments are addressed.

Thanks,

Jun

On Mon, Sep 19, 2016 at 9:49 AM, Jun Rao  wrote:

> Hi, Jiangjie,
>
> Thanks for the proposal. Looks good to me overall. Just a couple of minor
> comments.
>
> 1. I thought at some point you considered to only return offset in
> offsetsForTimes instead of offset and timestamp. One benefit of doing that
> is that it will make the return type consistent among all three new apis.
> It also feels a bit weird that we only return timestamp, but not other
> metadata associated with a message.
>
> 2. To be consistent with existing seek apis, would it be better to rename
> earliestOffsets() and latestOffsets() to beginningOffsets() and
> endOffsets()?
>
> Jun
>
> On Fri, Sep 9, 2016 at 4:38 PM, Becket Qin  wrote:
>
>> Hi all,
>>
>> I'd like to start the voting for KIP-79
>>
>> In short we propose to :
>> 1. add a ListOffsetRequest/ListOffsetResponse v1, and
>> 2. add earliestOffsts(), latestOffsets() and offsetForTime() methods in
>> the
>> new consumer.
>>
>> The KIP wiki is the following:
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65868090
>>
>> Thanks,
>>
>> Jiangjie (Becket) Qin
>>
>
>


Jenkins build is back to normal : kafka-trunk-jdk7 #1553

2016-09-19 Thread Apache Jenkins Server
See 



Consul / Zookeeper [was Re: any update on this?]

2016-09-19 Thread Dana Powers
[+ dev list]

I have not worked on KAFKA-1793 directly, but I believe most of the
work so far has been in removing all zookeeper dependencies from
clients. The two main areas for that are (1) consumer rebalancing, and
(2) administrative commands.

1) Consumer rebalancing APIs were added to the broker in 0.9. The "new
consumer" uses these apis and does not connect directly to zookeeper
to manage group leadership and rebalancing. So my understanding is
that this work is complete.

2) Admin commands are being converted to direct API calls instead of
direct zookeeper access with KIP-4. A small part of this project was
released in 0.10.0.0, and there are open PRs for additional chunks that may
make it into 0.10.1.0. If you are interested in helping or getting
involved, you can follow the KIP-4 discussions on the dev mailing
list.

When the client issues are completed I think the next step will be to
refactor the broker's zookeeper access (zkUtils) into an abstract
interface that could potentially be provided by consul or etcd. With
an interface in place, it should be possible to write an alternate
implementation of that interface for consul (a sketch of what such an
interface might look like appears at the end of this message).

Hope this helps,

-Dana

On Mon, Sep 19, 2016 at 6:31 AM, Martin Gainty  wrote:
> Jens/Kant
> I'm not aware of any shortfall with Zookeeper, so perhaps you can suggest the
> advantages of Consul vs Zookeeper?
> Maven (I am building, testing and running Kafka internally with Maven)
> implements wagon-providers for URLConnection vs HttpURLConnection wagons:
> https://maven.apache.org/guides/mini/guide-wagon-providers.html
> Thinking a network_provider should work for integrating an external network
> provider. How would you architect this integration?
>
> Would a configurable network-provider such as maven-wagon-provider work for
> Kafka?
> Martin
>
>> From: kanth...@gmail.com
>> To: us...@kafka.apache.org
>> Subject: Re: any update on this?
>> Date: Mon, 19 Sep 2016 09:41:10 +
>>
>> Yes, of course the goal shouldn't be moving towards Consul. It should just be
>> flexible enough for users to pick any distributed coordination system.
>>
>> On Mon, Sep 19, 2016 2:23 AM, Jens Rantil jens.ran...@tink.se
>> wrote:
>> I think I read somewhere that the long-term goal is to make Kafka
>> independent of Zookeeper altogether. Maybe not worth spending time on
>> migrating to Consul in that case.
>>
>> Cheers,
>> Jens
>>
>> On Sat, Sep 17, 2016 at 10:38 PM Jennifer Fountain 
>> wrote:
>>
>> > +2 watching.
>> >
>> > On Sat, Sep 17, 2016 at 2:45 AM, kant kodali  wrote:
>> >
>> > > https://issues.apache.org/jira/browse/KAFKA-1793
>> > > It would be great to use Consul instead of Zookeeper for Kafka, and I
>> > > think it would benefit Kafka a lot from the exponentially growing
>> > > Consul community.
>> >
>> > --
>> > Jennifer Fountain
>> > DevOPS
>> >
>> --
>> Jens Rantil
>> Backend Developer @ Tink
>>
>> Tink AB, Wallingatan 5, 111 60 Stockholm, Sweden
>> For urgent matters you can reach me at +46-708-84 18 32.
>
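
Purely to illustrate the abstraction described above (no such interface exists in
Kafka today; the names and methods below are invented for the sketch), the
broker-side refactor might start from something like:

{code:java}
import java.util.List;

/**
 * Hypothetical sketch of a pluggable coordination layer that the broker's current
 * zkUtils calls could sit behind. A ZooKeeper-backed implementation would preserve
 * today's behaviour; a Consul- or etcd-backed one could be swapped in via config.
 */
public interface CoordinationService extends AutoCloseable {

    /** Create a path/key if it does not exist, optionally ephemeral (tied to this session). */
    void createPath(String path, byte[] data, boolean ephemeral);

    /** Read the data stored at a path, or return null if it does not exist. */
    byte[] readPath(String path);

    /** List the child paths/keys under a parent, e.g. /brokers/ids. */
    List<String> children(String path);

    /** Register a callback that fires when a path or its children change. */
    void watch(String path, Runnable onChange);

    @Override
    void close();
}
{code}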


[jira] [Commented] (KAFKA-4177) Replication Throttling: Remove ThrottledReplicationRateLimit from Server Config

2016-09-19 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504230#comment-15504230
 ] 

Ben Stopford commented on KAFKA-4177:
-

Patch submitted here: https://github.com/apache/kafka/pull/1864

> Replication Throttling: Remove ThrottledReplicationRateLimit from Server 
> Config
> ---
>
> Key: KAFKA-4177
> URL: https://issues.apache.org/jira/browse/KAFKA-4177
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.10.1.0
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>
> Replication throttling included the concept of a dynamic broker config 
> (currently for just one property: ThrottledReplicationRateLimit). 
> On reflection it seems best to not include this in KafkaConfig, but rather 
> validate only in AdminUtils. Remove the property 
> ThrottledReplicationRateLimit and related config from KafkaConfig and add 
> validation in AdminUtils where the value can be applied/changed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4177) Replication Throttling: Remove ThrottledReplicationRateLimit from Server Config

2016-09-19 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford updated KAFKA-4177:

Status: Patch Available  (was: In Progress)

> Replication Throttling: Remove ThrottledReplicationRateLimit from Server 
> Config
> ---
>
> Key: KAFKA-4177
> URL: https://issues.apache.org/jira/browse/KAFKA-4177
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.10.1.0
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>
> Replication throttling included the concept of a dynamic broker config 
> (currently for just one property: ThrottledReplicationRateLimit). 
> On reflection it seems best to not include this in KafkaConfig, but rather 
> validate only in AdminUtils. Remove the property 
> ThrottledReplicationRateLimit and related config from KafkaConfig and add 
> validation in AdminUtils where the value can be applied/changed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4163) NPE in StreamsMetadataState during re-balance operations

2016-09-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504170#comment-15504170
 ] 

ASF GitHub Bot commented on KAFKA-4163:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1845


> NPE in StreamsMetadataState during re-balance operations
> 
>
> Key: KAFKA-4163
> URL: https://issues.apache.org/jira/browse/KAFKA-4163
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.1.0
>
>
> During rebalance operations it is possible that an NPE can be thrown on 
> StreamsMetadataState operations. We should first check if the Cluster object 
> is non-empty. If it is empty we should return StreamsMetadata.NOT_AVAILABLE.
> Also, we should tidy up InvalidStateStoreException messages in the store API 
> to suggest that the store may have migrated to another instance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4163) NPE in StreamsMetadataState during re-balance operations

2016-09-19 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4163:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1845
[https://github.com/apache/kafka/pull/1845]

> NPE in StreamsMetadataState during re-balance operations
> 
>
> Key: KAFKA-4163
> URL: https://issues.apache.org/jira/browse/KAFKA-4163
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.1.0
>
>
> During rebalance operations it is possible that an NPE can be thrown on 
> StreamsMetadataState operations. We should first check if the Cluster object 
> is non-empty. If it is empty we should return StreamsMetadata.NOT_AVAILABLE.
> Also, we should tidy up InvalidStateStoreException messages in the store API 
> to suggest that the store may have migrated to another instance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1845: KAFKA-4163: NPE in StreamsMetadataState during re-...

2016-09-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1845


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4175) Can't have StandbyTasks in KafkaStreams where NUM_STREAM_THREADS_CONFIG > 1

2016-09-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504087#comment-15504087
 ] 

ASF GitHub Bot commented on KAFKA-4175:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1862


> Can't have StandbyTasks in KafkaStreams where NUM_STREAM_THREADS_CONFIG > 1
> ---
>
> Key: KAFKA-4175
> URL: https://issues.apache.org/jira/browse/KAFKA-4175
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.1.0
>
>
> When we have StandbyTasks in a Kafka Streams app and we have > 1 threads per 
> instance we can run into:
> {code}
> Caused by: java.io.IOException: task [1_0] Failed to lock the state 
> directory: /private/tmp/kafka-streams-smoketest/2/SmokeTest/1_0
> {code}
> This is because the same StandbyTask has been assigned to each thread in the 
> same KafkaStreams instance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4175) Can't have StandbyTasks in KafkaStreams where NUM_STREAM_THREADS_CONFIG > 1

2016-09-19 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4175.
--
Resolution: Fixed

Issue resolved by pull request 1862
[https://github.com/apache/kafka/pull/1862]

> Can't have StandbyTasks in KafkaStreams where NUM_STREAM_THREADS_CONFIG > 1
> ---
>
> Key: KAFKA-4175
> URL: https://issues.apache.org/jira/browse/KAFKA-4175
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.1.0
>
>
> When we have StandbyTasks in a Kafka Streams app and we have > 1 threads per 
> instance we can run into:
> {code}
> Caused by: java.io.IOException: task [1_0] Failed to lock the state 
> directory: /private/tmp/kafka-streams-smoketest/2/SmokeTest/1_0
> {code}
> This is because the same StandbyTask has been assigned to each thread in the 
> same KafkaStreams instance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1862: KAFKA-4175: Can't have StandbyTasks in KafkaStream...

2016-09-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1862


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #1879: HOTFIX: logic in QuerybaleStateIntegrationTest.sho...

2016-09-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1879


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] KIP-79 - ListOffsetRequest v1 and search by timestamp methods in new consumer.

2016-09-19 Thread Jason Gustafson
+1 on Jun's suggestion to use "beginning" and "end". The term "latest" is
misleading since the last message in the log may not have the largest
timestamp.

On Mon, Sep 19, 2016 at 9:49 AM, Jun Rao  wrote:

> Hi, Jiangjie,
>
> Thanks for the proposal. Looks good to me overall. Just a couple of minor
> comments.
>
> 1. I thought at some point you considered to only return offset in
> offsetsForTimes instead of offset and timestamp. One benefit of doing that
> is that it will make the return type consistent among all three new apis.
> It also feels a bit weird that we only return timestamp, but not other
> metadata associated with a message.
>
> 2. To be consistent with existing seek apis, would it be better to rename
> earliestOffsets() and latestOffsets() to beginningOffsets() and
> endOffsets()?
>
> Jun
>
> On Fri, Sep 9, 2016 at 4:38 PM, Becket Qin  wrote:
>
> > Hi all,
> >
> > I'd like to start the voting for KIP-79
> >
> > In short we propose to :
> > 1. add a ListOffsetRequest/ListOffsetResponse v1, and
> > 2. add earliestOffsts(), latestOffsets() and offsetForTime() methods in
> the
> > new consumer.
> >
> > The KIP wiki is the following:
> > https://cwiki.apache.org/confluence/pages/viewpage.
> action?pageId=65868090
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
>


Re: ProducerRecord/Consumer MetaData/Headers

2016-09-19 Thread Michael Pearce
Hi Again,

I went to the wiki this afternoon to start writing it up, and it seems I still 
cannot create a page under the KIP area. Could someone assist?

Cheers
Mike


On 9/18/16, 7:07 PM, "Michael Pearce"  wrote:

Hi Ismael

Thanks, my wiki user is michael.andre.pearce.

Re the link, thanks again. We did indeed start off trying to do this 
after we lost the ability to use the key to hold metadata once the compaction 
feature came, but abusing the payload isn't, IMO, a great solution; it has 
some issues that cannot be overcome and that stop us from using it in some 
of our data / message flows. As such I think a solution in the 
broker/message/client needs to be made and formalised. An ecosystem of tools 
could then rely on it.

I will add all details in KIP proposal, once I have access.

Cheers
Mike



From: isma...@gmail.com  on behalf of Ismael Juma 

Sent: Sunday, September 18, 2016 9:01:22 AM
To: dev@kafka.apache.org
Subject: Re: ProducerRecord/Consumer MetaData/Headers

Hi Mike,

If you give me your wiki user name, I can give you the required permissions
to post a KIP. This is definitely a big change and there is no clear
consensus if changing the Kafka message format is the right way (it would
be good not to pay the cost if you don't need it) or if it should be done
via schemas, for example. Gwen shared some thoughts in the following
message:

http://search-hadoop.com/m/uyzND1OXS8EoGCU2

Ismael

On Sun, Sep 18, 2016 at 7:11 AM, Michael Pearce 
wrote:

> Hi All, (again)
>
> If it helps the discussion, an almost-ready patch implementing this is
> available here:
>
> https://github.com/michaelandrepearce/kafka
>
>
> The biggest/most core change is obviously the kafka.message.Message 
object.
>
>
>
> Some key bits in this implementation are the server side and the submodules
> (connect, mirrormaker, streams), all updated to be aware of the new
> "headers".
>
>
>
> As this is a big API change, you have to opt in to the new feature on the
> client side: you keep using ConsumerRecord and ProducerRecord (which now
> extend the new Enhanced versions) for K,V records without any code changes,
> and to use the headers you use the Enhanced versions, HeadersConsumerRecord
> and HeadersProducerRecord. This was needed to avoid causing compilation
> failures just by upgrading. If the patch were accepted I would imagine this
> as a way to transition.
>
>
>
> I am guessing this needs a KIP rather than just me raising a JIRA, as it is a
> fairly substantial API change, but I am unsure who can raise these, so
> assistance with the process would be gratefully accepted.
>
>
>
> Cheers
>
> Mike
>
>
>
>
>
>
> From: Michael Pearce 
> Date: Saturday, September 17, 2016 at 6:40 AM
> To: "dev@kafka.apache.org" 
> Subject: ProducerRecord/Consumer MetaData/Headers
>
> Hi All,
>
> First of all apologies if this has been previously discussed I have just
> joined the mail list (I cannot find a JIRA or KIP related nor via good old
> google search)
>
> In our company we are looking to replace some of our more traditional
> message flows with Kafka.
>
> One thing we have found lacking though compared with most messaging
> systems is the ability to set header/metadata separate from our payload. 
We
> did think about the key, but as this is used for compaction we cannot have
> changing values here which metadata/header values will obviously be.
>
> e.g. these headers/metadata are useful for audit data or platform data
> that is not business-payload related, e.g. storing
> the clientId that generated the message, the correlation id of a
> request/response, the cluster id where the message was generated (a case for
> MirrorMakers), a message uuid, etc.; this list is endless.
>
> We would like to propose extending the Record from
> ProducerRecord<K,V>/ConsumerRecord<K,V> to
> ProducerRecord<K,V,M>/ConsumerRecord<K,V,M>,
> where M is metadata/header, again being, like the key and value, a simple
> byte[] so that it is completely up to the end users how to serialize /
> deserialize it.
>
> What are people's thoughts?
> Any other ideas on how to add headers/metadata?
>
> How can I progress this?
>
> Cheers
> Mike
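
To make the shape of the proposal concrete, here is a hypothetical sketch of a
headers-carrying producer record. This is illustrative only: it is not an existing
Kafka API, and the real change would go through a KIP.

{code:java}
/**
 * Hypothetical illustration of the proposed record shape: key, value and a third
 * "headers"/metadata slot, each an opaque byte[] left entirely to the user to
 * serialize and deserialize. Not an actual Kafka class.
 */
public class HeadersProducerRecordSketch {

    private final String topic;
    private final byte[] key;      // used for partitioning/compaction, as today
    private final byte[] value;    // business payload
    private final byte[] headers;  // platform/audit metadata: clientId, correlation id, uuid, ...

    public HeadersProducerRecordSketch(String topic, byte[] key, byte[] value, byte[] headers) {
        this.topic = topic;
        this.key = key;
        this.value = value;
        this.headers = headers;
    }

    public String topic()   { return topic; }
    public byte[] key()     { return key; }
    public byte[] value()   { return value; }
    public byte[] headers() { return headers; }
}
{code}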

[GitHub] kafka pull request #1882: MINOR: some trace logging for streams debugging

2016-09-19 Thread norwood
GitHub user norwood opened a pull request:

https://github.com/apache/kafka/pull/1882

MINOR: some trace logging for streams debugging



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/norwood/kafka streams-logging

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1882.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1882


commit fc39f86b4eab9ab512021cffed95006cf5df9ec2
Author: Ubuntu 
Date:   2016-09-16T20:28:58Z

some trace logging for streams debugging




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4193) FetcherTest fails intermittently

2016-09-19 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford updated KAFKA-4193:

Status: Patch Available  (was: In Progress)

> FetcherTest fails intermittently 
> -
>
> Key: KAFKA-4193
> URL: https://issues.apache.org/jira/browse/KAFKA-4193
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>
> Running FetcherTest.testFetcher many times results in a fairly predictable 
> failure. 
> This appears to be a regression.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4193) FetcherTest fails intermittently

2016-09-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504017#comment-15504017
 ] 

ASF GitHub Bot commented on KAFKA-4193:
---

GitHub user benstopford opened a pull request:

https://github.com/apache/kafka/pull/1881

KAFKA-4193: Fix for Intermittent failure in FetcherTest



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka KAFKA-4193

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1881.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1881


commit 573bdd41cf954875e23cd53fb65f7ac56dc19345
Author: Ben Stopford 
Date:   2016-09-19T17:03:38Z

KAFKA-4193: Fix for intermittent failure in FetcherTest caused by 
erroneously altered acks setting




> FetcherTest fails intermittently 
> -
>
> Key: KAFKA-4193
> URL: https://issues.apache.org/jira/browse/KAFKA-4193
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>
> Running FetcherTest.testFetcher many times results in a fairly predictable 
> failure. 
> This appears to be a regression.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1881: KAFKA-4193: Fix for Intermittent failure in Fetche...

2016-09-19 Thread benstopford
GitHub user benstopford opened a pull request:

https://github.com/apache/kafka/pull/1881

KAFKA-4193: Fix for Intermittent failure in FetcherTest



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka KAFKA-4193

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1881.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1881


commit 573bdd41cf954875e23cd53fb65f7ac56dc19345
Author: Ben Stopford 
Date:   2016-09-19T17:03:38Z

KAFKA-4193: Fix for intermittent failure in FetcherTest caused by 
erroneously altered acks setting




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] KIP-79 - ListOffsetRequest v1 and search by timestamp methods in new consumer.

2016-09-19 Thread Jun Rao
Hi, Jiangjie,

Thanks for the proposal. Looks good to me overall. Just a couple of minor
comments.

1. I thought at some point you considered returning only the offset in
offsetsForTimes instead of the offset and timestamp. One benefit of doing that
is that it would make the return type consistent among all three new APIs.
It also feels a bit weird that we only return the timestamp, but no other
metadata associated with a message.

2. To be consistent with the existing seek APIs, would it be better to rename
earliestOffsets() and latestOffsets() to beginningOffsets() and
endOffsets()?

Jun

On Fri, Sep 9, 2016 at 4:38 PM, Becket Qin  wrote:

> Hi all,
>
> I'd like to start the voting for KIP-79
>
> In short we propose to :
> 1. add a ListOffsetRequest/ListOffsetResponse v1, and
> 2. add earliestOffsets(), latestOffsets() and offsetForTime() methods in the
> new consumer.
>
> The KIP wiki is the following:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65868090
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
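
For readers following the vote, here is a minimal Java sketch of how the proposed
methods would be used from the new consumer. The method names follow the KIP-79
proposal quoted above (with Jun's suggested renames noted in a comment), the
OffsetAndTimestamp return type is assumed from the discussion, and the broker
address, group id and topic are placeholders; none of this is a released API yet,
and the final signatures depend on the outcome of this vote:

```
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp; // assumed return type of the proposed lookup
import org.apache.kafka.common.TopicPartition;

public class SearchByTimestampSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "kip79-sketch");            // assumed group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0); // assumed topic
            consumer.assign(Collections.singletonList(tp));

            // Proposed: look up the earliest offset whose timestamp is >= the target timestamp.
            Map<TopicPartition, Long> query =
                    Collections.singletonMap(tp, System.currentTimeMillis() - 3600 * 1000L);
            Map<TopicPartition, OffsetAndTimestamp> byTime = consumer.offsetsForTimes(query);

            // Proposed bulk lookups of the log start/end offsets; Jun's comment above
            // suggests renaming these to beginningOffsets()/endOffsets().
            Map<TopicPartition, Long> earliest = consumer.earliestOffsets(query.keySet());
            Map<TopicPartition, Long> latest = consumer.latestOffsets(query.keySet());

            OffsetAndTimestamp hit = byTime.get(tp);
            if (hit != null)
                consumer.seek(tp, hit.offset()); // start consuming from the matched position
        }
    }
}
```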


[jira] [Work started] (KAFKA-4193) FetcherTest fails intermittently

2016-09-19 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4193 started by Ben Stopford.
---
> FetcherTest fails intermittently 
> -
>
> Key: KAFKA-4193
> URL: https://issues.apache.org/jira/browse/KAFKA-4193
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>
> Running FetcherTest.testFetcher many times results in a fairly predictable 
> failure. 
> This appears to be a regression.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4176) Node stopped receiving heartbeat responses once another node started within the same group

2016-09-19 Thread Marek Svitok (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marek Svitok updated KAFKA-4176:

Priority: Blocker  (was: Critical)

> Node stopped receiving heartbeat responses once another node started within 
> the same group
> --
>
> Key: KAFKA-4176
> URL: https://issues.apache.org/jira/browse/KAFKA-4176
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.1
> Environment: Centos 7: 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 
> 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
> Java: java version "1.8.0_101"
> Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
> Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
>Reporter: Marek Svitok
>Priority: Blocker
>
> I have 3 nodes working in the same group. I started them one after the other. 
> As I can see from the log, the node, once started, receives heartbeat responses
> for the group it is part of. However, once I start another node, the former one 
> stops receiving these responses and the new one keeps receiving them. 
> Moreover, it stops consuming any messages from previously assigned partitions:
> Node0
> 03:14:36.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:39.223 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:39.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:39.429 [main-SendThread(mujsignal-03:2182)] DEBUG 
> org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
> 0x256bc1ce8c30170 after 0ms
> 03:14:39.462 [main-SendThread(mujsignal-03:2182)] DEBUG 
> org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
> 0x256bc1ce8c30171 after 0ms
> 03:14:42.224 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:42.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:45.224 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:45.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:48.224 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - Attempt 
> to heart beat failed for group test_streams_id since it is rebalancing.
> 03:14:48.224 [StreamThread-2] INFO  o.a.k.c.c.i.ConsumerCoordinator - 
> Revoking previously assigned partitions [StreamTopic-2] for group 
> test_streams_id
> 03:14:48.224 [StreamThread-2] INFO  o.a.k.s.p.internals.StreamThread - 
> Removing a task 0_2
> Node1
> 03:22:18.710 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:18.716 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:21.709 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:21.716 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:24.710 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:24.717 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:24.872 [main-SendThread(mujsignal-03:2182)] DEBUG 
> org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
> 0x256bc1ce8c30172 after 0ms
> 03:22:24.992 [main-SendThread(mujsignal-03:2182)] DEBUG 
> org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
> 0x256bc1ce8c30173 after 0ms
> 03:22:27.710 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:27.717 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:30.710 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:30.716 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> Configuration used:
> 03:14:24.520 [main] INFO  o.a.k.c.producer.ProducerConfig - ProducerConfig 
> values: 
>   metric.reporters = []
>   

[jira] [Created] (KAFKA-4193) FetcherTest fails intermittently in the presence of the fixed message size change

2016-09-19 Thread Ben Stopford (JIRA)
Ben Stopford created KAFKA-4193:
---

 Summary: FetcherTest fails intermittently in the presence of the 
fixed message size change
 Key: KAFKA-4193
 URL: https://issues.apache.org/jira/browse/KAFKA-4193
 Project: Kafka
  Issue Type: Bug
Reporter: Ben Stopford


Running FetcherTest.testFetcher many times results in a fairly predictable 
failure. 

This appears to be a regression.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-4193) FetcherTest fails intermittently

2016-09-19 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford reassigned KAFKA-4193:
---

Assignee: Ben Stopford

> FetcherTest fails intermittently 
> -
>
> Key: KAFKA-4193
> URL: https://issues.apache.org/jira/browse/KAFKA-4193
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>
> Running FetcherTest.testFetcher many times results in a fairly predictable 
> failure. 
> This appears to be a regression.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4193) FetcherTest fails intermittently

2016-09-19 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford updated KAFKA-4193:

Summary: FetcherTest fails intermittently   (was: FetcherTest fails 
intermittently in the presence of the fixed message size change)

> FetcherTest fails intermittently 
> -
>
> Key: KAFKA-4193
> URL: https://issues.apache.org/jira/browse/KAFKA-4193
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>
> Running FetcherTest.testFetcher many times results in a fairly predictable 
> failure. 
> This appears to be a regression.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Kafka usecase

2016-09-19 Thread kant kodali

Why does Comcast need to do better than 1-2 seconds?

On Sun, Sep 18, 2016 8:08 PM, Ghosh, Achintya (Contractor) 
achintya_gh...@comcast.com

wrote:
Hi there,

We have a use case where we do a lot of business logic to process each message,
and sometimes it takes 1-2 seconds, so will Kafka fit our use case?

Thanks
Achintya

RE: Kafka usecase

2016-09-19 Thread Ghosh, Achintya (Contractor)
Please find my response here.

1. Kafka can be used as a message store.
2. What is the message arrival rate per second? 20 per sec
3. What is the SLA for the messages to be processed? 500 ms per message
4. If your messages arrive faster than they are consumed, you will get a 
backlog of messages. In that case, you may need to grow your cluster so that 
more messages are processed in parallel.
 Do you mean we need to create more partitions here, or is there anything else we need to do?

-Original Message-
From: Lohith Samaga M [mailto:lohith.sam...@mphasis.com] 
Sent: Monday, September 19, 2016 12:24 AM
To: us...@kafka.apache.org
Cc: dev@kafka.apache.org
Subject: RE: Kafka usecase

Hi Achintya,
1. Kafka can be used as a message store.
2. What is the message arrival rate per second?
3. What is the SLA for the messages to be processed?
4. If your messages arrive faster than they are consumed, you will get 
a backlog of messages. In that case, you may need to grow your cluster so that 
more messages are processed in parallel.

Best regards / Mit freundlichen Grüßen / Sincères salutations M. Lohith Samaga



-Original Message-
From: Ghosh, Achintya (Contractor) [mailto:achintya_gh...@comcast.com]
Sent: Monday, September 19, 2016 08.39
To: us...@kafka.apache.org
Cc: dev@kafka.apache.org
Subject: Kafka usecase

Hi there,

We have a use case where we do a lot of business logic to process each message 
and sometimes it takes 1-2 seconds, so will Kafka fit our use case?

Thanks
Achintya
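
To make the sizing discussion concrete, here is a minimal sketch of a consumer loop
tuned for slow per-message processing, assuming the 0.10.x new consumer, manual
commits, and hypothetical broker, group and topic names. Partitions bound the
parallelism: at roughly 20 messages/second arriving and 1-2 seconds of work per
message, one consumer keeps up with only about 0.5-1 message/second, so on the order
of 20-40 partitions, each with its own consumer instance or thread, would be needed:

```
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SlowProcessingConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "slow-processing-group");     // assumed group id
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Keep each poll() batch small enough that 1-2 s per record still fits
        // inside the consumer timeouts (max.poll.records is available in 0.10.0+).
        props.put("max.poll.records", "5");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    process(record);        // the 1-2 s of business logic
                }
                consumer.commitSync();      // commit only after processing succeeds
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // placeholder for the application's business logic
    }
}
```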




[jira] [Resolved] (KAFKA-4118) StreamsSmokeTest.test_streams started failing since 18 August build

2016-09-19 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska resolved KAFKA-4118.
-
Resolution: Fixed

> StreamsSmokeTest.test_streams started failing since 18 August build
> ---
>
> Key: KAFKA-4118
> URL: https://issues.apache.org/jira/browse/KAFKA-4118
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Ismael Juma
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> Link to the first failure on 18 August: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-08-18--001.1471540190--apache--trunk--40b1dd3/report.html
> The commit corresponding to the 18 August build was 
> https://github.com/apache/kafka/commit/40b1dd3f495a59ab, which is KIP-62 (and 
> before KIP-33)
> KAFKA-3807 tracks another test that started failing at the same time and 
> there's a possibility that the PR for that JIRA fixes this one too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4151) Update public docs for KIP-78

2016-09-19 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502958#comment-15502958
 ] 

Ismael Juma commented on KAFKA-4151:


We should also add a note to the "upgrade.html" page.

> Update public docs for KIP-78
> -
>
> Key: KAFKA-4151
> URL: https://issues.apache.org/jira/browse/KAFKA-4151
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sumit Arrawatia
>Assignee: Sumit Arrawatia
> Fix For: 0.10.1.0
>
>
> Add documentation to include details on Cluster Id in "Implementation" 
> section. The actual implementation is tracked in KAFKA-4093. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4151) Update public docs for KIP-78

2016-09-19 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4151:
---
Issue Type: Improvement  (was: New Feature)

> Update public docs for KIP-78
> -
>
> Key: KAFKA-4151
> URL: https://issues.apache.org/jira/browse/KAFKA-4151
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sumit Arrawatia
>Assignee: Sumit Arrawatia
> Fix For: 0.10.1.0
>
>
> Add documentation to include details on Cluster Id in "Implementation" 
> section. The actual implementation is tracked in KAFKA-4093. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4151) Update public docs for KIP-78

2016-09-19 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4151:
---
Fix Version/s: 0.10.1.0

> Update public docs for KIP-78
> -
>
> Key: KAFKA-4151
> URL: https://issues.apache.org/jira/browse/KAFKA-4151
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Sumit Arrawatia
>Assignee: Sumit Arrawatia
> Fix For: 0.10.1.0
>
>
> Add documentation to include details on Cluster Id in "Implementation" 
> section. The actual implementation is tracked in KAFKA-4093. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4192) Update upgrade documentation to mention inter.broker.protocol.version

2016-09-19 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-4192:
--

 Summary: Update upgrade documentation to mention 
inter.broker.protocol.version
 Key: KAFKA-4192
 URL: https://issues.apache.org/jira/browse/KAFKA-4192
 Project: Kafka
  Issue Type: Task
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Critical
 Fix For: 0.10.1.0


Because of KIP-74, the upgrade instructions need to mention the need to use 
inter.broker.protocol.version during the upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3282) Change tools to use new consumer if zookeeper is not specified

2016-09-19 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3282:
---
Status: Patch Available  (was: Open)

> Change tools to use new consumer if zookeeper is not specified
> --
>
> Key: KAFKA-3282
> URL: https://issues.apache.org/jira/browse/KAFKA-3282
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Ismael Juma
>Assignee: Arun Mahadevan
> Fix For: 0.10.1.0
>
>
> This only applies to tools that support the new consumer and it's similar to 
> what we did with the producer for 0.9.0.0, but with a better compatibility 
> story.
> Part of this JIRA is updating the documentation to remove `--new-consumer` 
> from command invocations where appropriate. An example where this will be the 
> case is in the security documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4188) compilation issues with org.apache.kafka.clients.consumer.internals.Fetcher.java

2016-09-19 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502917#comment-15502917
 ] 

Rajini Sivaram commented on KAFKA-4188:
---

Can you try a clean build?

> compilation issues with 
> org.apache.kafka.clients.consumer.internals.Fetcher.java
> 
>
> Key: KAFKA-4188
> URL: https://issues.apache.org/jira/browse/KAFKA-4188
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.1
> Environment: Maven home: /maven325
> Java version: 1.8.0_40, vendor: Oracle Corporation
>Reporter: Martin Gainty
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> From the client module, 
> org.apache.kafka.clients.consumer.internals.Fetcher.java won't compile; here is 
> *one* of the errors:
> private PartitionRecords parseFetchedData(CompletedFetch 
> completedFetch) {
> //later on highWatermark is referenced in partition and produces ERROR
> this.sensors.recordsFetchLag.record(partition.highWatermark - 
> record.offset());
> /kafka/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java:[590,66]
>  cannot find symbol
> [ERROR] symbol:   variable highWatermark
> //assuming partition is TopicPartition partition I can correct by inserting :
> public long highWatermark =0L; //into TopicPartition
> is Fetcher.java producing correct behaviour?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3283) Remove beta from new consumer documentation

2016-09-19 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3283:
---
Status: Patch Available  (was: Open)

> Remove beta from new consumer documentation
> ---
>
> Key: KAFKA-3283
> URL: https://issues.apache.org/jira/browse/KAFKA-3283
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.1.0
>
>
> Ideally, we would:
> * Remove the beta label
> * Fill any critical gaps in functionality
> * Update the documentation on the old consumers to recommend the new consumer 
> (without deprecating the old consumer, however)
> Current target is 0.10.1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3283) Remove beta from new consumer documentation

2016-09-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502863#comment-15502863
 ] 

ASF GitHub Bot commented on KAFKA-3283:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1880

KAFKA-3283: Remove beta from new consumer documentation

Include a few clean-ups (also in producer section), mention deprecation 
plans and reorder so that the new consumer documentation is before the old 
consumers.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
remove-beta-from-new-consumer-documentation

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1880.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1880


commit 5833a592955af4a6211f9ead0187ea7c57875022
Author: Ismael Juma 
Date:   2016-09-19T09:37:06Z

Remove beta from new consumer, clean-up a few things and list the new 
consumer before the older ones




> Remove beta from new consumer documentation
> ---
>
> Key: KAFKA-3283
> URL: https://issues.apache.org/jira/browse/KAFKA-3283
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.1.0
>
>
> Ideally, we would:
> * Remove the beta label
> * Fill any critical gaps in functionality
> * Update the documentation on the old consumers to recommend the new consumer 
> (without deprecating the old consumer, however)
> Current target is 0.10.1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1880: KAFKA-3283: Remove beta from new consumer document...

2016-09-19 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1880

KAFKA-3283: Remove beta from new consumer documentation

Include a few clean-ups (also in producer section), mention deprecation 
plans and reorder so that the new consumer documentation is before the old 
consumers.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
remove-beta-from-new-consumer-documentation

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1880.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1880


commit 5833a592955af4a6211f9ead0187ea7c57875022
Author: Ismael Juma 
Date:   2016-09-19T09:37:06Z

Remove beta from new consumer, clean-up a few things and list the new 
consumer before the older ones




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4189) Consumer poll hangs forever if kafka is disabled

2016-09-19 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502860#comment-15502860
 ] 

Umesh Chaudhary commented on KAFKA-4189:


Is it similar to [KAFKA-1894|https://issues.apache.org/jira/browse/KAFKA-1894] ?

> Consumer poll hangs forever if kafka is disabled
> 
>
> Key: KAFKA-4189
> URL: https://issues.apache.org/jira/browse/KAFKA-4189
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 0.9.0.1, 0.10.0.1
>Reporter: Tomas Benc
>Priority: Critical
>
> We develop a web application where a client sends a REST request and our 
> application downloads messages from Kafka and sends those messages back to the 
> client. In our web application we use the "New Consumer API" (not the High Level 
> nor the Simple Consumer API).
> The problem occurs when Kafka is disabled while the web application keeps running. 
> The application receives a request and tries to poll messages from Kafka. 
> Processing blocks on that line until Kafka is enabled again.
> ConsumerRecords records = consumer.poll(1000);
> The timeout parameter of the poll method has no effect in this case. I would expect 
> the poll method to throw some exception describing the connection issue.
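
Until poll() reliably honors its timeout when no broker is reachable, one common
mitigation is to bound the call from a watchdog thread with KafkaConsumer.wakeup().
The sketch below is an assumption-laden illustration (timeout values, byte[]
generics and thread handling are not part of the report above), not a fix for this
issue:

```
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class BoundedPollSketch {
    // Bounds a single poll() call: if it has not returned within maxWaitMs,
    // wakeup() forces it to abort with a WakeupException.
    static ConsumerRecords<byte[], byte[]> pollWithDeadline(
            final KafkaConsumer<byte[], byte[]> consumer, long pollMs, long maxWaitMs) {
        ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();
        try {
            watchdog.schedule(consumer::wakeup, maxWaitMs, TimeUnit.MILLISECONDS);
            return consumer.poll(pollMs);
        } catch (WakeupException e) {
            // poll() did not return in time, e.g. because no broker is reachable
            return ConsumerRecords.empty();
        } finally {
            watchdog.shutdownNow();
        }
    }
}
```

Note the small race: a wakeup scheduled just as poll() returns will surface as a
WakeupException on the next blocking call, so callers reusing the consumer should be
prepared to catch it there as well.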



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3283) Remove beta from new consumer documentation

2016-09-19 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3283:
---
Reviewer: Jason Gustafson  (was: Gwen Shapira)

> Remove beta from new consumer documentation
> ---
>
> Key: KAFKA-3283
> URL: https://issues.apache.org/jira/browse/KAFKA-3283
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.1.0
>
>
> Ideally, we would:
> * Remove the beta label
> * Fill any critical gaps in functionality
> * Update the documentation on the old consumers to recommend the new consumer 
> (without deprecating the old consumer, however)
> Current target is 0.10.1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3283) Remove beta from new consumer documentation

2016-09-19 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3283:
---
Summary: Remove beta from new consumer documentation  (was: Consider 
marking the new consumer as production-ready)

> Remove beta from new consumer documentation
> ---
>
> Key: KAFKA-3283
> URL: https://issues.apache.org/jira/browse/KAFKA-3283
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Ismael Juma
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0
>
>
> Ideally, we would:
> * Remove the beta label
> * Fill any critical gaps in functionality
> * Update the documentation on the old consumers to recommend the new consumer 
> (without deprecating the old consumer, however)
> Current target is 0.10.1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3283) Remove beta from new consumer documentation

2016-09-19 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-3283:
--

Assignee: Ismael Juma  (was: Jason Gustafson)

> Remove beta from new consumer documentation
> ---
>
> Key: KAFKA-3283
> URL: https://issues.apache.org/jira/browse/KAFKA-3283
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.1.0
>
>
> Ideally, we would:
> * Remove the beta label
> * Fill any critical gaps in functionality
> * Update the documentation on the old consumers to recommend the new consumer 
> (without deprecating the old consumer, however)
> Current target is 0.10.1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1879: HOTFIX: logic in QuerybaleStateIntegrationTest.sho...

2016-09-19 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/1879

HOTFIX: logic in QuerybaleStateIntegrationTest.shouldBeAbleToQueryState 
incorrect

The logic in `verifyCanGetByKey` was incorrect. It was 
```
windowState.size() < keys.length &&
countState.size() < keys.length &&
System.currentTimeMillis() < timeout
```
but should be:
```
(windowState.size() < keys.length || countState.size() < keys.length) && 
System.currentTimeMillis() < timeout
```

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka minor-fix-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1879.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1879


commit d9eb06b407ad9d25ee037752eb533000f4e68630
Author: Damian Guy 
Date:   2016-09-19T08:48:37Z

logic in test incorrect




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Streams support for Serdes

2016-09-19 Thread Jeyhun Karimov
Hi community,

When using Kafka Streams with POJO data types we write our own
de/serializers. However, I think that if we had built-in Serdes support for
Tuple-n data types (e.g. Serdes.Tuple2) we could easily
use tuples, and the built-in Serdes would help shorten the development cycle.
Please correct me if I am wrong, or if there is a similar solution within the
library please let me know.

Cheers
Jeyhun
-- 
-Cheers

Jeyhun
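
There is no Tuple2 or Serdes.Tuple2 in the library today, but as a rough sketch of
what such a built-in could look like, the existing Serializer/Deserializer and
Serdes.serdeFrom() pieces are enough to compose one. The Tuple2 class and the
length-prefixed wire format below are assumptions for illustration only (null
handling omitted):

```
import java.nio.ByteBuffer;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.Serializer;

// Hypothetical: Kafka ships no Tuple2 type, so we define a minimal one here.
final class Tuple2<A, B> {
    final A first;
    final B second;
    Tuple2(A first, B second) { this.first = first; this.second = second; }
}

class Tuple2Serde {
    // Builds a Serde<Tuple2<A, B>> from two element serdes by length-prefixing
    // the first element's bytes.
    static <A, B> Serde<Tuple2<A, B>> of(final Serde<A> aSerde, final Serde<B> bSerde) {
        Serializer<Tuple2<A, B>> ser = new Serializer<Tuple2<A, B>>() {
            @Override public void configure(Map<String, ?> configs, boolean isKey) {}
            @Override public byte[] serialize(String topic, Tuple2<A, B> t) {
                byte[] a = aSerde.serializer().serialize(topic, t.first);
                byte[] b = bSerde.serializer().serialize(topic, t.second);
                return ByteBuffer.allocate(4 + a.length + b.length)
                        .putInt(a.length).put(a).put(b).array();
            }
            @Override public void close() {}
        };
        Deserializer<Tuple2<A, B>> de = new Deserializer<Tuple2<A, B>>() {
            @Override public void configure(Map<String, ?> configs, boolean isKey) {}
            @Override public Tuple2<A, B> deserialize(String topic, byte[] bytes) {
                ByteBuffer buf = ByteBuffer.wrap(bytes);
                byte[] a = new byte[buf.getInt()];
                buf.get(a);
                byte[] b = new byte[buf.remaining()];
                buf.get(b);
                return new Tuple2<>(aSerde.deserializer().deserialize(topic, a),
                                    bSerde.deserializer().deserialize(topic, b));
            }
            @Override public void close() {}
        };
        return Serdes.serdeFrom(ser, de);
    }
}
```

Usage could then be as simple as Serde<Tuple2<String, Long>> serde =
Tuple2Serde.of(Serdes.String(), Serdes.Long()), passed wherever Kafka Streams expects
a Serde.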


[jira] [Updated] (KAFKA-4176) Node stopped receiving heartbeat responses once another node started within the same group

2016-09-19 Thread Marek Svitok (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marek Svitok updated KAFKA-4176:

Priority: Critical  (was: Major)

> Node stopped receiving heartbeat responses once another node started within 
> the same group
> --
>
> Key: KAFKA-4176
> URL: https://issues.apache.org/jira/browse/KAFKA-4176
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.1
> Environment: Centos 7: 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 
> 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
> Java: java version "1.8.0_101"
> Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
> Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
>Reporter: Marek Svitok
>Priority: Critical
>
> I have 3 nodes working in the same group. I started them one after the other. 
> As I can see from the log, the node, once started, receives heartbeat responses
> for the group it is part of. However, once I start another node, the former one 
> stops receiving these responses and the new one keeps receiving them. 
> Moreover, it stops consuming any messages from previously assigned partitions:
> Node0
> 03:14:36.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:39.223 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:39.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:39.429 [main-SendThread(mujsignal-03:2182)] DEBUG 
> org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
> 0x256bc1ce8c30170 after 0ms
> 03:14:39.462 [main-SendThread(mujsignal-03:2182)] DEBUG 
> org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
> 0x256bc1ce8c30171 after 0ms
> 03:14:42.224 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:42.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:45.224 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:45.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:14:48.224 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - Attempt 
> to heart beat failed for group test_streams_id since it is rebalancing.
> 03:14:48.224 [StreamThread-2] INFO  o.a.k.c.c.i.ConsumerCoordinator - 
> Revoking previously assigned partitions [StreamTopic-2] for group 
> test_streams_id
> 03:14:48.224 [StreamThread-2] INFO  o.a.k.s.p.internals.StreamThread - 
> Removing a task 0_2
> Node1
> 03:22:18.710 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:18.716 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:21.709 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:21.716 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:24.710 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:24.717 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:24.872 [main-SendThread(mujsignal-03:2182)] DEBUG 
> org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
> 0x256bc1ce8c30172 after 0ms
> 03:22:24.992 [main-SendThread(mujsignal-03:2182)] DEBUG 
> org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
> 0x256bc1ce8c30173 after 0ms
> 03:22:27.710 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:27.717 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:30.710 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> 03:22:30.716 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - 
> Received successful heartbeat response for group test_streams_id
> Configuration used:
> 03:14:24.520 [main] INFO  o.a.k.c.producer.ProducerConfig - ProducerConfig 
> values: 
>   metric.reporters = []
>   

[jira] [Updated] (KAFKA-4191) After the leader broker is down, then start the producer of librdkafka, it cannot produce any data any more

2016-09-19 Thread Leon (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leon updated KAFKA-4191:

Description: 
Hi,
I am using kafka_2.11-0.10.0.1 and librdkafka-master on Windows 7,
and there are 3 brokers, 1 zookeeper, 1 producer (rdkafka_example.exe) and 1 
consumer(rdkafka_consumer_example_cpp.exe), All of them are on the same PC. 
But I found an issue that the producer failed to produce any data after the 
leader of the brokers is down.
Here are the steps to reproduce this issue:
1.  Start zookeeper.
2.  Start the brokers by running the following commands:
  kafka-server-start.bat .\config\server.properties
  kafka-server-start.bat .\config\server-1.properties
  kafka-server-start.bat .\config\server-2.properties

 The configurations for each server are:
 config/server.properties:
 broker.id=0
 listeners=PLAINTEXT://:9092
 log.dir=/tmp/kafka-logs-0

 config/server-1.properties:
 broker.id=1
 listeners=PLAINTEXT://:9093
 log.dir=/tmp/kafka-logs-1

 config/server-2.properties:
 broker.id=2
 listeners=PLAINTEXT://:9094
 log.dir=/tmp/kafka-logs-2

3. Create a new topic
  kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 
 3 --partitions 1 --topic topic1  
 Then you can see that the leader is broker 0 with following command
 kafka-topics.bat --describe --zookeeper localhost:2181 --topic topic1 

4. Start consumer:
  rdkafka_consumer_example_cpp.exe -g 1 -b localhost:9092 topic1 

5. Start producer:
  rdkafka_example.exe -P -t topic1 -b localhost:9092
  
 Now you can see that everything works fine.
6. Then stop broker0 by closing the command prompt which runs 
'kafka-server-start.bat .\config\server.properties', and you can see that the 
producer and consumer still work fine.

7. Then stop the producer and consumer by pressing Ctrl+C and then closing the 
related command prompt, and start them again with the same step 4 and 5, now 
you can see that both the producer and consumer do not work!
My expected behavior is that even the leader of multi-broker cluster is down, 
we can still restart the producer and consumer of librdkafka and make them work.

Would you please give me any help?
Thank you!

Leon

  was:
Hi,
I am using kafka_2.11-0.10.0.1 and librdkafka-master on Windows 7,
and there are 3 brokers, 1 zookeeper, 1 producer (rdkafka_example.exe) and 1 
consumer(rdkafka_consumer_example_cpp.exe), All of them are on the same PC. 
But I found an issue that the producer failed to produce any data after the 
leader of the brokers is down.
Here are the steps to reproduce this issue:
1.  Start zookeeper.
2.  Start the brokers by running the following commands:
  kafka-server-start.bat .\config\server.properties
  kafka-server-start.bat .\config\server-1.properties
  kafka-server-start.bat .\config\server-2.properties

 The configurations for each server are:
 config/server.properties:
 broker.id=0
 listeners=PLAINTEXT://:9092
 log.dir=/tmp/kafka-logs-0

 config/server-1.properties:
 broker.id=1
 listeners=PLAINTEXT://:9093
 log.dir=/tmp/kafka-logs-1

 config/server-2.properties:
 broker.id=2
 listeners=PLAINTEXT://:9094
 log.dir=/tmp/kafka-logs-2

3. Create a new topic
  kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 
 3 --partitions 1 --topic topic1  
 Then you can see that the leader is broker 0 with following command
 kafka-topics.bat --describe --zookeeper localhost:2181 --topic topic1 

4. Start consumer:
  rdkafka_consumer_example_cpp.exe -g 1 -b localhost:9092 topic1 

5. Start producer:
  rdkafka_example.exe -P -t topic1 -b localhost:9092
  
 Now you can see that everything works fine.
6. Then stop broker0 by closing the command prompt which runs 
'kafka-server-start.bat .\config\server.properties', and you can see that the 
producer and consumer still work fine.

7. Then stop the producer and consumer by pressing Ctrl+C and then closing the 
related command prompt, and start them again with the same step 4 and 5, now 
you can see that both the producer and consumer do not work!
My expected behavior is that even the leader of multi-broker cluster is down, 
we still can start the producer and consumer of librdkafka.

Would you please give me any help?
Thank you!

Leon


> After the leader broker is down, then start the producer of librdkafka, it 
> cannot produce any data any more
> ---
>
> Key: KAFKA-4191
> URL: https://issues.apache.org/jira/browse/KAFKA-4191
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.0.1
> Environment: Windows 7
>Reporter: Leon
>Priority: Minor
>
> Hi,

[jira] [Updated] (KAFKA-4191) After the leader broker is down, then start the producer of librdkafka, it cannot produce any data any more

2016-09-19 Thread Leon (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leon updated KAFKA-4191:

Description: 
Hi,
I am using kafka_2.11-0.10.0.1 and librdkafka-master on Windows 7,
and there are 3 brokers, 1 zookeeper, 1 producer (rdkafka_example.exe) and 1 
consumer(rdkafka_consumer_example_cpp.exe), All of them are on the same PC. 
But I found an issue that the producer failed to produce any data after the 
leader of the brokers is down.
Here are the steps to reproduce this issue:
1.  Start zookeeper.
2.  Start the brokers by running the following commands:
  kafka-server-start.bat .\config\server.properties
  kafka-server-start.bat .\config\server-1.properties
  kafka-server-start.bat .\config\server-2.properties

 The configurations for each server are:
 config/server.properties:
 broker.id=0
 listeners=PLAINTEXT://:9092
 log.dir=/tmp/kafka-logs-0

 config/server-1.properties:
 broker.id=1
 listeners=PLAINTEXT://:9093
 log.dir=/tmp/kafka-logs-1

 config/server-2.properties:
 broker.id=2
 listeners=PLAINTEXT://:9094
 log.dir=/tmp/kafka-logs-2

3. Create a new topic
  kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 
 3 --partitions 1 --topic topic1  
 Then you can see that the leader is broker 0 with following command
 kafka-topics.bat --describe --zookeeper localhost:2181 --topic topic1 

4. Start consumer:
  rdkafka_consumer_example_cpp.exe -g 1 -b localhost:9092 topic1 

5. Start producer:
  rdkafka_example.exe -P -t topic1 -b localhost:9092
  
 Now you can see that everything works fine.
6. Then stop broker0 by closing the command prompt which runs 
'kafka-server-start.bat .\config\server.properties', and you can see that the 
producer and consumer still work fine.

7. Then stop the producer and consumer by pressing Ctrl+C and then closing the 
related command prompt, and start them again with the same step 4 and 5, now 
you can see that both the producer and consumer do not work!
My expected behavior is that even the leader of multi-broker cluster is down, 
we still can start the producer and consumer of librdkafka.

Would you please give me any help?
Thank you!

Leon

  was:
Hi,
I am using kafka_2.11-0.10.0.1 and librdkafka-master on Windows 7,
and there are 3 brokers, 1 zookeeper, 1 producer (rdkafka_example.exe) and 1 
consumer(rdkafka_consumer_example_cpp.exe), All of them are on the same PC. 
But I found an issue that the producer failed to produce any data after the 
leader of the brokers is down.
Here are the steps to reproduce this issue:
1.  Start zookeeper.
2.  Start the brokers by running the following commands:
  kafka-server-start.bat .\config\server.properties
  kafka-server-start.bat .\config\server-1.properties
  kafka-server-start.bat .\config\server-2.properties

 The configurations for each server are:
 config/server.properties:
 broker.id=0
 listeners=PLAINTEXT://:9092
 log.dir=/tmp/kafka-logs-0

 config/server-1.properties:
 broker.id=1
 listeners=PLAINTEXT://:9093
 log.dir=/tmp/kafka-logs-1

 config/server-2.properties:
 broker.id=2
 listeners=PLAINTEXT://:9094
 log.dir=/tmp/kafka-logs-2

3. Create a new topic
  kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 
 3 --partitions 1 --topic topic1  
 Then you can see that the leader is broker 0 with following command
 kafka-topics.bat --describe --zookeeper localhost:2181 --topic topic1 

4. Start consumer:
  rdkafka_consumer_example_cpp.exe -g 1 -b localhost:9092 topic1 

5. Start producer:
  rdkafka_example.exe -P -t topic1 -b localhost:9092
  
 Now you can see that everything works fine.
6. Then stop broker0 by closing the command prompt which runs 
'kafka-server-start.bat .\config\server.properties', and you can see that the 
producer and consumer still work fine.

7. Then stop the producer and consumer by pressing Ctrl+C and then closing the 
related command prompt, and start them again with the same step 4 and 5, now 
you can see that both the producer and consumer do not work!
My expected behavior is that even the leader of multi-broker cluster is down, 
we still can start the producer and consumer of librdkafka.

Would you please give me any help?
Thank you!



> After the leader broker is down, then start the producer of librdkafka, it 
> cannot produce any data any more
> ---
>
> Key: KAFKA-4191
> URL: https://issues.apache.org/jira/browse/KAFKA-4191
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.0.1
> Environment: Windows 7
>Reporter: Leon
>Priority: Minor
>
> Hi,
> I am using 
