[GitHub] kafka pull request #3943: MINOR: always set Serde.Long on count operations

2017-09-21 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3943

MINOR: always set Serde.Long on count operations
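The PR body is otherwise empty; as a hedged illustration of what the title 
implies (a sketch, not the actual diff): in the Streams DSL, count() always 
produces Long values, so the value serde of the resulting store can be pinned 
to Serdes.Long() regardless of what the caller configured:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class CountSerdeSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("input-topic")
               .groupByKey()
               // count() values are always Long, so Serdes.Long() is always
               // the correct value serde for the materialized count store:
               .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-store")
                                  .withValueSerde(Serdes.Long()));
    }
}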



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka count-materialized

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3943.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3943


commit f2471f1caaccc1262c830fdc6f10c5b202bd7f2e
Author: Damian Guy 
Date:   2017-09-22T06:32:08Z

always set Serde.Long on count operations




---


Build failed in Jenkins: kafka-trunk-jdk9 #49

2017-09-21 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-5330: Use per-task converters in Connect

--
[...truncated 1.91 MB...]

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testAllowedConnectFrameworkClasses STARTED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testAllowedConnectFrameworkClasses PASSED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testJavaLibraryClasses STARTED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testJavaLibraryClasses PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > 
testReloadOnStartWithNoNewRecordsPresent STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > 
testReloadOnStartWithNoNewRecordsPresent PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
STARTED

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
STARTED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
PASSED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldNotCreateTopicWhenItAlreadyExists STARTED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldNotCreateTopicWhenItAlreadyExists PASSED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldCreateTopicWhenItDoesNotExist STARTED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldCreateTopicWhenItDoesNotExist PASSED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldReturnFalseWhenSuppliedNullTopicDescription STARTED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldReturnFalseWhenSuppliedNullTopicDescription PASSED

org.apache.kafka.connect.util.TopicAdminTest > returnNullWithApiVersionMismatch 
STARTED

org.apache.kafka.connect.util.TopicAdminTest > returnNullWithApiVersionMismatch 
PASSED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldCreateOneTopicWhenProvidedMultipleDefinitionsWithSameTopicName STARTED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldCreateOneTopicWhenProvidedMultipleDefinitionsWithSameTopicName PASSED

org.apache.kafka.connect.util.TableTest > basicOperations STARTED

org.apache.kafka.connect.util.TableTest > basicOperations PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutConnectorConfig STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsZeroTasks STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsZeroTasks PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreTargetState STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreTargetState PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testBackgroundUpdateTargetState STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testBackgroundUpdateTargetState PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testBackgroundConnectorDeletion STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testBackgroundConnectorDeletion PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreTargetStateUnexpectedDeletion STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreTargetStateUnexpectedDeletion PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestore 
STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestore 
PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreConnectorDeletion STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreConnectorDeletion PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreZeroTasks STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreZeroTasks PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies STARTED

org.apach

Build failed in Jenkins: kafka-trunk-jdk8 #2060

2017-09-21 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-5330: Use per-task converters in Connect

--
[...truncated 360.70 KB...]
kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.TopicMetadataTest > testAutoCreat

[GitHub] kafka pull request #3196: KAFKA-5330: Use per-task converters in Connect

2017-09-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3196


---


[jira] [Resolved] (KAFKA-5330) Use per-task converters in Connect

2017-09-21 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5330.
--
   Resolution: Fixed
Fix Version/s: 1.0.0

Issue resolved by pull request 3196
[https://github.com/apache/kafka/pull/3196]

> Use per-task converters in Connect
> --
>
> Key: KAFKA-5330
> URL: https://issues.apache.org/jira/browse/KAFKA-5330
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Ewen Cheslack-Postava
> Fix For: 1.0.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Because Connect started with a worker-wide model of data formats, we 
> currently allocate a single Converter per worker and only allocate an 
> independent one when the user overrides the converter.
> This can lead to performance problems when the worker-level default converter 
> is used by a large number of tasks because converters need to be threadsafe 
> to support this model and they may spend a lot of time just on 
> synchronization.
> We could, instead, simply allocate one converter per task. There is some 
> overhead involved, but generally it shouldn't be that large. For example, 
> Confluent's Avro converters will each have their own schema cache and have to 
> make their own calls to the schema registry API, but these are relatively 
> small, likely inconsequential compared to any normal overhead we would 
> already have for creating and managing each task. 
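As a rough illustration of the rationale above (hypothetical classes, not 
Connect's actual code): a worker-wide converter must synchronize its internal 
state across all tasks, while a per-task instance can drop the locking 
entirely:

{code}
// Hypothetical sketch of the contention argument; not Connect internals.
import java.util.HashMap;
import java.util.Map;

class SharedConverter {
    private final Map<String, Object> schemaCache = new HashMap<>();
    // One instance shared by all tasks: every lookup contends on one lock.
    synchronized Object lookup(String schemaId) {
        return schemaCache.computeIfAbsent(schemaId, id -> new Object());
    }
}

class PerTaskConverter {
    private final Map<String, Object> schemaCache = new HashMap<>();
    // One instance per task: single-threaded access, no synchronization.
    Object lookup(String schemaId) {
        return schemaCache.computeIfAbsent(schemaId, id -> new Object());
    }
}
{code}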



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5821) Intermittent test failure in SaslPlainSslEndToEndAuthorizationTest.testAcls

2017-09-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved KAFKA-5821.
---
Resolution: Cannot Reproduce

> Intermittent test failure in SaslPlainSslEndToEndAuthorizationTest.testAcls
> ---
>
> Key: KAFKA-5821
> URL: https://issues.apache.org/jira/browse/KAFKA-5821
> Project: Kafka
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>
> From 
> https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/7245/testReport/junit/kafka.api/SaslPlainSslEndToEndAuthorizationTest/testAcls/
>  :
> {code}
> java.lang.SecurityException: zookeeper.set.acl is true, but the verification 
> of the JAAS login file failed.
>   at kafka.server.KafkaServer.initZk(KafkaServer.scala:329)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:192)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:134)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:94)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:93)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at 
> kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:93)
>   at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:66)
>   at 
> kafka.api.EndToEndAuthorizationTest.setUp(EndToEndAuthorizationTest.scala:158)
>   at 
> kafka.api.SaslEndToEndAuthorizationTest.setUp(SaslEndToEndAuthorizationTest.scala:48)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy1.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.jav

[jira] [Created] (KAFKA-5960) Producer uses unsupported ProduceRequest version against older brokers

2017-09-21 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5960:
--

 Summary: Producer uses unsupported ProduceRequest version against 
older brokers
 Key: KAFKA-5960
 URL: https://issues.apache.org/jira/browse/KAFKA-5960
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Reporter: Jason Gustafson
Assignee: Jason Gustafson
Priority: Blocker


Errors recently reported from a trunk producer against a 0.11.0.0 broker:
{code}
org.apache.kafka.common.errors.UnsupportedVersionException: The broker does not 
support the requested version 5 for api PRODUCE. Supported versions are 0 to 3.
{code}
This is likely a regression introduced in KAFKA-5793. We should be using the 
latest version that the broker supports.
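
A minimal sketch of the intended behavior (illustrative arithmetic only, not 
the client's actual version-negotiation code):

{code}
// The client should cap its request version at what the broker advertises:
short clientMax = 5;  // e.g. a trunk producer that knows PRODUCE v5
short brokerMax = 3;  // e.g. a 0.11.0.0 broker supporting versions 0..3
short useVersion = (short) Math.min(clientMax, brokerMax);  // -> 3, not 5
{code}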



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5959) NPE in NetworkClient

2017-09-21 Thread Xavier Léauté (JIRA)
Xavier Léauté created KAFKA-5959:


 Summary: NPE in NetworkClient
 Key: KAFKA-5959
 URL: https://issues.apache.org/jira/browse/KAFKA-5959
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 1.0.0
Reporter: Xavier Léauté
Assignee: Jason Gustafson


I'm experiencing the following error when running trunk clients against a 
0.11.0 cluster configured with SASL_PLAINTEXT

{code}
[2017-09-21 23:07:09,072] ERROR [kafka-producer-network-thread | xxx] [Producer 
clientId=xxx] Uncaught error in request completion: 
(org.apache.kafka.clients.NetworkClient)
java.lang.NullPointerException
at 
org.apache.kafka.clients.producer.internals.Sender.canRetry(Sender.java:639)
at 
org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:522)
at 
org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:473)
at 
org.apache.kafka.clients.producer.internals.Sender.access$100(Sender.java:76)
at 
org.apache.kafka.clients.producer.internals.Sender$1.onComplete(Sender.java:693)
at 
org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:101)
at 
org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:453)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:241)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:166)
at java.lang.Thread.run(Thread.java:748)
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5958) User StoreListener not available for global stores

2017-09-21 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-5958:
--

 Summary: User StoreListener not available for global stores
 Key: KAFKA-5958
 URL: https://issues.apache.org/jira/browse/KAFKA-5958
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 1.0.0
Reporter: Matthias J. Sax


In 1.0 we added the ability to register a state restore listener, such that 
users can monitor state restore progress via this callback.

However, we do not use this listener for global stores. Strictly speaking, 
global stores are never recovered; however, at startup they are bootstrapped. 
We might want to consider using the same listener callback to allow users to 
monitor the bootstrapping of global stores, too.
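
For reference, a sketch of registering the listener this ticket wants applied 
to global-store bootstrapping as well, using the 1.0 API mentioned above:

{code}
// Sketch only; the proposal is to invoke this same listener while global
// stores are bootstrapped at startup.
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.processor.StateRestoreListener;

void registerListener(KafkaStreams streams) {
    streams.setGlobalStateRestoreListener(new StateRestoreListener() {
        @Override
        public void onRestoreStart(TopicPartition tp, String store, long start, long end) {
            System.out.println("restore of " + store + " starting at offset " + start);
        }
        @Override
        public void onBatchRestored(TopicPartition tp, String store, long endOffset, long numRestored) {
            System.out.println(numRestored + " records restored for " + store);
        }
        @Override
        public void onRestoreEnd(TopicPartition tp, String store, long totalRestored) {
            System.out.println("restore of " + store + " finished: " + totalRestored);
        }
    });
}
{code}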



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3942: KAFKA-5957: Prevent second deallocate if response ...

2017-09-21 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3942

KAFKA-5957: Prevent second deallocate if response for aborted batch returns



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-5957

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3942.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3942


commit 4974842e629c098b0d36bc42e189bff211e7faac
Author: Jason Gustafson 
Date:   2017-09-22T00:09:30Z

KAFKA-5957: Prevent second deallocate if produce response for aborted batch 
returns




---


[GitHub] kafka pull request #3941: KAFKA-3999: Record raw size of fetch responses as ...

2017-09-21 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/3941

KAFKA-3999: Record raw size of fetch responses as part of consumer metrics

Currently, only the decompressed size of fetch responses is recorded. This 
PR adds a sensor to keep track of the raw size as well.
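
For context, a sketch of recording such a metric with the client Metrics API 
(the sensor and metric names here are invented, not necessarily the PR's):

import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Avg;
import org.apache.kafka.common.metrics.stats.Max;

Metrics metrics = new Metrics();
Sensor rawFetchSize = metrics.sensor("fetch-raw-size");
rawFetchSize.add(metrics.metricName("fetch-size-raw-avg", "consumer-fetch-manager-metrics",
        "Average raw (pre-decompression) size of fetch responses"), new Avg());
rawFetchSize.add(metrics.metricName("fetch-size-raw-max", "consumer-fetch-manager-metrics",
        "Maximum raw (pre-decompression) size of fetch responses"), new Max());

double fetchResponseBytes = 1024.0;  // on-the-wire size of one response
rawFetchSize.record(fetchResponseBytes);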

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3999

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3941.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3941


commit 0eb84a3c83530081748d4bf9ffe7fcba9e41f50b
Author: Vahid Hashemian 
Date:   2017-09-22T00:01:48Z

KAFKA-3999: Record raw size of fetch responses as part of consumer metrics

Currently, only the decompressed size of fetch responses is recorded. This 
PR adds a sensor to keep track of the raw size as well.




---


[GitHub] kafka pull request #1698: KAFKA-3999: Record raw size of fetch responses as ...

2017-09-21 Thread vahidhashemian
Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/1698


---


Jenkins build is back to normal : kafka-trunk-jdk7 #2803

2017-09-21 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-5957) Producer IllegalStateException due to second deallocate after aborting a batch

2017-09-21 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5957:
--

 Summary: Producer IllegalStateException due to second deallocate 
after aborting a batch
 Key: KAFKA-5957
 URL: https://issues.apache.org/jira/browse/KAFKA-5957
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson
 Fix For: 1.0.0


Saw this recently in a system test failure:

{code}
[2017-09-21 05:04:52,033] ERROR [Producer clientId=producer-1, 
transactionalId=my-second-transactional-id] Aborting producer batches due to 
fatal error (org.apache.kafka.clients.producer.internals.Sender)
org.apache.kafka.common.KafkaException: The client hasn't received 
acknowledgment for some previously sent messages and can no longer retry them. 
It isn't safe to continue.
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:211)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:164)
at java.lang.Thread.run(Thread.java:745)
[2017-09-21 05:04:52,033] TRACE Aborting batch for partition output-topic-2 
(org.apache.kafka.clients.producer.internals.ProducerBatch)
org.apache.kafka.common.KafkaException: The client hasn't received 
acknowledgment for some previously sent messages and can no longer retry them. 
It isn't safe to continue.
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:211)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:164)
at java.lang.Thread.run(Thread.java:745)
[2017-09-21 05:04:52,134] TRACE [Producer clientId=producer-1, 
transactionalId=my-second-transactional-id] Not sending transactional request 
(type=EndTxnRequest, transactionalId=my-second-transactional-id, 
producerId=1000, producerEpoch=0, result=COMMIT) because we are in an error 
state (org.apache.kafka.clients.producer.internals.TransactionManager)
[2017-09-21 05:04:52,134] INFO [Producer clientId=producer-1, 
transactionalId=my-second-transactional-id] Closing the Kafka producer with 
timeoutMillis = 9223372036854775807 ms. 
(org.apache.kafka.clients.producer.KafkaProducer)
[2017-09-21 05:04:52,134] DEBUG [Producer clientId=producer-1, 
transactionalId=my-second-transactional-id] Beginning shutdown of Kafka 
producer I/O thread, sending remaining records. 
(org.apache.kafka.clients.producer.internals.Sender)
[2017-09-21 05:04:52,360] TRACE [Producer clientId=producer-1, 
transactionalId=my-second-transactional-id] Received produce response from node 
1 with correlation id 245 (org.apache.kafka.clients.producer.internals.Sender)
[2017-09-21 05:04:52,360] DEBUG [Producer clientId=producer-1, 
transactionalId=my-second-transactional-id] ProducerId: 1000; Set last ack'd 
sequence number for topic-partition output-topic-2 to 136 
(org.apache.kafka.clients.producer.internals.Sender)
[2017-09-21 05:04:52,360] TRACE Successfully produced messages to 
output-topic-2 with base offset 387. 
(org.apache.kafka.clients.producer.internals.ProducerBatch)
[2017-09-21 05:04:52,360] DEBUG ProduceResponse returned for output-topic-2 
after batch had already been aborted. 
(org.apache.kafka.clients.producer.internals.ProducerBatch)
[2017-09-21 05:04:52,360] ERROR [Producer clientId=producer-1, 
transactionalId=my-second-transactional-id] Uncaught error in request 
completion: (org.apache.kafka.clients.NetworkClient)
java.lang.IllegalStateException: Remove from the incomplete set failed. This 
should be impossible.
at 
org.apache.kafka.clients.producer.internals.IncompleteBatches.remove(IncompleteBatches.java:44)
at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.deallocate(RecordAccumulator.java:612)
at 
org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:585)
at 
org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:561)
at 
org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:475)
at 
org.apache.kafka.clients.producer.internals.Sender.access$100(Sender.java:74)
at 
org.apache.kafka.clients.producer.internals.Sender$1.onComplete(Sender.java:685)
at 
org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:101)
at 
org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:473)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:225)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:177)
at java.lang.Thread.run(Thread.java:745)
{code}
Although we allow a batch to be aborted before it returns, we are not careful 
about preventing a second call to {{deallocate()}}, which causes this error.
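
A hypothetical guard illustrating one way to make the second call harmless 
(names invented; not necessarily what the fix in PR #3942 does):

{code}
// Sketch: make deallocate() idempotent per batch, so a late produce response
// for an already-aborted batch is ignored instead of tripping the
// "Remove from the incomplete set failed" IllegalStateException.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class Accumulator {
    private final Set<Object> incomplete = ConcurrentHashMap.newKeySet();

    void deallocate(Object batch) {
        if (!incomplete.remove(batch)) {
            return;  // second call: batch already deallocated, nothing to do
        }
        releaseBuffer(batch);  // return the batch's buffer to the pool
    }

    private void releaseBuffer(Object batch) { /* ... */ }
}
{code}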



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3913: KAFKA-5937: Improve ProcessorStateManager exceptio...

2017-09-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3913


---


[GitHub] kafka-site pull request #83: Line logo to Powered-by page + minor edits to s...

2017-09-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/83


---


[GitHub] kafka-site issue #83: Line logo to Powered-by page + minor edits to styles +...

2017-09-21 Thread guozhangwang
Github user guozhangwang commented on the issue:

https://github.com/apache/kafka-site/pull/83
  
LGTM. Merged to asf-site.


---


[GitHub] kafka pull request #3940: Adding LINE corp logo to streams page

2017-09-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3940


---


[GitHub] kafka pull request #3940: Adding LINE corp logo to streams page

2017-09-21 Thread manjuapu
GitHub user manjuapu opened a pull request:

https://github.com/apache/kafka/pull/3940

Adding LINE corp logo to streams page



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka customer-logo-stream

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3940.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3940


commit dd4789b35187b4820f67d092011c85e9bebd5a10
Author: Manjula K 
Date:   2017-09-20T00:51:44Z

Adding See how Kafka Streams is being used section to Streams page

commit c5382a47fa25e115c245845cf69173399497ca93
Author: manjuapu 
Date:   2017-09-20T15:23:07Z

Update index.html

commit c7c9e8a9d04cf63673d885d9bd6f357fc4d84cf2
Author: Manjula K 
Date:   2017-09-21T22:30:33Z

Adding LINE corp logo to Streams page




---


Build failed in Jenkins: kafka-trunk-jdk7 #2802

2017-09-21 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Follow-up improvements on top of KAFKA-5793

--
[...truncated 354.64 KB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateSavedOffsetWhenOffsetToClearToIsBetweenEpochs PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLe

Re: [DISCUSS]: KIP-159: Introducing Rich functions to Streams

2017-09-21 Thread Jeyhun Karimov
Hi Damian,

Thanks for the update. I am working on it and will provide an update soon.

Cheers,
Jeyhun

On Thu, Sep 21, 2017 at 4:50 PM Damian Guy  wrote:

> Hi Jeyhun,
>
> All KIP-182 API PRs have now been merged. So you can consider it as stable.
> Thanks,
> Damian
>
> On Thu, 21 Sep 2017 at 15:23 Jeyhun Karimov  wrote:
>
> > Hi all,
> >
> > Thanks a lot for your comments. For the single-interface (RichXXX and
> > XXXWithKey) solution, I have already submitted a PR, but it is probably
> > outdated (it dates from when the KIP was first proposed), so I need to
> > revisit it.
> >
> > @Guozhang, from our (offline) discussion, I understood that we may not
> > manage to merge this KIP into the upcoming release, as KIP-159 is not
> > voted yet (because we want both KIP-149 and KIP-159 to be an "atomic"
> > merge). So I decided to wait until KIP-182 gets stable (there are some
> > minor updates AFAIK) and update the KIP accordingly. Please correct me
> > if I am wrong or I misunderstood.
> >
> > Cheers,
> > Jeyhun
> >
> >
> > On Thu, Sep 21, 2017 at 4:11 PM Damian Guy  wrote:
> >
> > > +1
> > >
> > > On Thu, 21 Sep 2017 at 13:46 Guozhang Wang  wrote:
> > >
> > > > +1 for me as well for collapsing.
> > > >
> > > > Jeyhun, could you update the wiki accordingly to show what's the
> final
> > > > updates post KIP-182 that needs to be done in KIP-159 including
> > KIP-149?
> > > > The child page I made is just a suggestion, but you would still need
> to
> > > > update your proposal for people to comment and vote on.
> > > >
> > > >
> > > > Guozhang
> > > >
> > > >
> > > > On Thu, Sep 14, 2017 at 10:37 PM, Ted Yu 
> wrote:
> > > >
> > > > > +1
> > > > >
> > > > > One interface is cleaner.
> > > > >
> > > > > On Thu, Sep 14, 2017 at 7:26 AM, Bill Bejeck 
> > > wrote:
> > > > >
> > > > > > +1 for me on collapsing the Rich and ValueWithKey
> > interfaces
> > > > > into 1
> > > > > > interface.
> > > > > >
> > > > > > Thanks,
> > > > > > Bill
> > > > > >
> > > > > > On Wed, Sep 13, 2017 at 11:31 AM, Jeyhun Karimov <
> > > je.kari...@gmail.com
> > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Hi Damian,
> > > > > > >
> > > > > > > Thanks for your feedback. Actually, this (what you propose) was
> > the
> > > > > first
> > > > > > > idea of KIP-149. Then we decided to divide it into two KIPs. I
> > also
> > > > > > > expressed my opinion that keeping the two interfaces (Rich and
> > > > withKey)
> > > > > > > separate would add more overloads. So, email discussion
> resulted
> > > that
> > > > > > this
> > > > > > > would not be a problem.
> > > > > > >
> > > > > > > Our initial idea was similar to :
> > > > > > >
> > > > > > > public abstract class RichValueMapper  implements
> > > > > > > ValueMapperWithKey, RichFunction {
> > > > > > > ..
> > > > > > > }
> > > > > > >
> > > > > > >
> > > > > > > So, we check the type of object, whether it is RichXXX or
> > > XXXWithKey
> > > > > > inside
> > > > > > > the called method and continue accordingly.
> > > > > > >
> > > > > > > If this is ok with the community, I would like to revert the
> > > current
> > > > > > design
> > > > > > > to this again.
> > > > > > >
> > > > > > > Cheers,
> > > > > > > Jeyhun
> > > > > > >
> > > > > > > On Wed, Sep 13, 2017 at 3:02 PM Damian Guy <
> damian@gmail.com
> > >
> > > > > wrote:
> > > > > > >
> > > > > > > > Hi Jeyhun,
> > > > > > > >
> > > > > > > > Thanks for sending out the update. I guess i was thinking
> more
> > > > along
> > > > > > the
> > > > > > > > lines of option 2 where we collapse the Rich and
> > > > ValueWithKey
> > > > > > etc
> > > > > > > > interfaces into 1 interface that has all of the arguments. I
> > > think
> > > > we
> > > > > > > then
> > > > > > > > only need to add one additional overload for each operator?
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Damian
> > > > > > > >
> > > > > > > > On Wed, 13 Sep 2017 at 10:59 Jeyhun Karimov <
> > > je.kari...@gmail.com>
> > > > > > > wrote:
> > > > > > > >
> > > > > > > > > Dear all,
> > > > > > > > >
> > > > > > > > > I would like to resume the discussion on KIP-159. I (and
> > > > Guozhang)
> > > > > > > think
> > > > > > > > > that releasing KIP-149 and KIP-159 in the same release
> would
> > > make
> > > > > > sense
> > > > > > > > to
> > > > > > > > > avoid a release with "partial" public APIs. There is a KIP
> > [1]
> > > > > > proposed
> > > > > > > > by
> > > > > > > > > Guozhang (and approved by me) to unify both KIPs.
> > > > > > > > > Please feel free to comment on this.
> > > > > > > > >
> > > > > > > > > [1]
> > > > > > > > >
> > > > > > > > https://cwiki.apache.org/confluence/pages/viewpage.
> > > > > > > action?pageId=73637757
> > > > > > > > >
> > > > > > > > > Cheers,
> > > > > > > > > Jeyhun
> > > > > > > > >
> > > > > > > > > On Fri, Jul 21, 2017 at 2:00 AM Jeyhun Karimov <
> > > > > je.kari...@gmail.com
> > > > > > >
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Hi Matthias, Damian, all,
> > > > > > > > > >
>
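
As a rough illustration of the single-interface dispatch idea in the quoted 
thread (hypothetical interfaces, not the KIP's final API): the implementation 
can check at runtime which flavor the user supplied and act accordingly:

// Hypothetical sketch of runtime dispatch between the two interface styles.
interface ValueMapperWithKey<K, V, VR> { VR apply(K readOnlyKey, V value); }
interface RichFunction { void init(); void close(); }

abstract class RichValueMapper<K, V, VR>
        implements ValueMapperWithKey<K, V, VR>, RichFunction { }

class Dispatcher {
    static <K, V, VR> VR map(ValueMapperWithKey<K, V, VR> mapper, K key, V value) {
        if (mapper instanceof RichFunction) {
            ((RichFunction) mapper).init();  // rich variant gets lifecycle callbacks
        }
        return mapper.apply(key, value);
    }
}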

[GitHub] kafka-site pull request #83: Line logo to Powered-by page + minor edits to s...

2017-09-21 Thread manjuapu
GitHub user manjuapu opened a pull request:

https://github.com/apache/kafka-site/pull/83

Line logo to Powered-by page + minor edits to styles +google webmaster file

@dguy @guozhangwang  Can you please review this PR?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/manjuapu/kafka-site asf-site

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/83.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #83


commit d96dcad69896db4f7b8eb43613aed6ce895ee562
Author: Manjula K 
Date:   2017-09-21T22:07:14Z

Line logo to Powered-by page + minor edits to styles +google webmaster file




---


[GitHub] kafka pull request #3878: KAFKA-5913: Add hasOffset() and hasTimestamp() met...

2017-09-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3878


---


Jenkins build is back to normal : kafka-trunk-jdk7 #2801

2017-09-21 Thread Apache Jenkins Server
See 




Re: Please add me to contributor list in JIRA and Confluence

2017-09-21 Thread Jakub Scholz
Hi,

It looks like I now have the JIRA rights. But I'm still missing the Wiki
rights to start with my first KIP. Can someone have a look at it please?

Thanks & Regards
Jakub

On Sun, Sep 17, 2017 at 11:33 PM, Jakub Scholz  wrote:

> Hi,
>
> I would like to try to start contributing to the Kafka project, look at
> some JIRAs and maybe raise some KIPs.
>
> Could you please give me the JIRA rights to pick up some issues and the
> rights to raise KIPs in Confluence Wiki? My username is scholzj for both
> JIRA and Confluence.
>
> Thanks & Regards
> Jakub
>


[GitHub] kafka pull request #3939: KAFKA-5949: User Callback Exceptions need to be ha...

2017-09-21 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/3939

KAFKA-5949: User Callback Exceptions need to be handled properly

 - catch user exceptions thrown in user callbacks (TimestampExtractor, 
DeserializationHandler, StateRestoreListener) and wrap them with 
StreamsException (see the sketch below)

Additional cleanup:
 - rename globalRestoreListener to userRestoreListener
 - remove unnecessary interface -> collapse SourceNodeRecordDeserializer 
and RecordDeserializer
 - removed unused parameter loggingEnabled from ProcessorContext#register
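
A minimal sketch of the wrapping pattern described above, shown for the 
TimestampExtractor callback (the other callbacks are analogous):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.errors.StreamsException;
import org.apache.kafka.streams.processor.TimestampExtractor;

long extractTimestamp(TimestampExtractor extractor,
                      ConsumerRecord<Object, Object> record,
                      long previousTimestamp) {
    try {
        return extractor.extract(record, previousTimestamp);
    } catch (final RuntimeException e) {
        // surface a StreamsException with context instead of the raw user error
        throw new StreamsException("Fatal user code error in TimestampExtractor", e);
    }
}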

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka 
kafka-5949-exceptions-user-callbacks

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3939.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3939


commit 0dbfeb64781de1a65a1d5f2c0567f6035c49e2ce
Author: Matthias J. Sax 
Date:   2017-09-21T21:10:26Z

KAFKA-5949: User Callback Exceptions need to be handled properly




---


[GitHub] kafka pull request #3935: MINOR: improvement on top of KAFKA-5793: Tighten u...

2017-09-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3935


---


[GitHub] kafka-site pull request #79: Adding Customer section to Streams page

2017-09-21 Thread manjuapu
Github user manjuapu closed the pull request at:

https://github.com/apache/kafka-site/pull/79


---


[GitHub] kafka pull request #3906: KAFKA-5735: KIP-190: Handle client-ids consistentl...

2017-09-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3906


---


[GitHub] kafka pull request #3938: HOTFIX: ConsumerGroupCommand - Offset and partitio...

2017-09-21 Thread eu657
GitHub user eu657 opened a pull request:

https://github.com/apache/kafka/pull/3938

HOTFIX: ConsumerGroupCommand - Offset and partition numbers are not 
converted to long and int correctly

Running the command line with the --from-file option causes the following 
exception:

java.lang.ClassCastException: java.lang.String cannot be cast to 
java.lang.Integer

Reason: asInstanceOf is used for the conversion.

Also, the unit test uses --to-earliest and --from-file together. This executes 
the --to-earliest option only and ignores --from-file. Since the preparation 
part also uses --to-earliest to create the file, the unit test passes anyway. 
Fixed the unit test too.
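
For illustration only, an analogous Java snippet (the failing command code is 
Scala's asInstanceOf): a String read from the file must be parsed, not cast:

Object field = "42";                               // value read from the CSV line
// Integer partition = (Integer) field;            // ClassCastException, like asInstanceOf
int partition = Integer.parseInt((String) field);  // parse instead of cast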



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/eu657/kafka eu657-patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3938.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3938






---


[GitHub] kafka pull request #3937: KAFKA-5856 AdminClient.createPartitions() follow u...

2017-09-21 Thread tombentley
GitHub user tombentley opened a pull request:

https://github.com/apache/kafka/pull/3937

KAFKA-5856 AdminClient.createPartitions() follow up

Adds support for noop requests. When assignments are given with a request 
that would be a noop, we validate that the given assignments match the actual 
ones, so that the state of the partitions after a successful call definitely 
matches what was requested.
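
For context, a hedged sketch of exercising the API in question (KIP-195 
AdminClient; the topic name and broker address are made up):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.CreatePartitionsOptions;
import org.apache.kafka.clients.admin.NewPartitions;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
try (AdminClient admin = AdminClient.create(props)) {
    // validateOnly = true checks the request (including any given
    // assignments) without changing broker state.
    admin.createPartitions(
            Collections.singletonMap("my-topic", NewPartitions.increaseTo(6)),
            new CreatePartitionsOptions().validateOnly(true))
         .all().get();
}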

I can put the additional Javadoc on the exception in a separate PR if you 
prefer.

The tests have the [improvements 
requested](https://github.com/apache/kafka/pull/3930#issuecomment-331130475) by 
@ijuma

The Javadoc is improved too. Putting the expected exceptions on the 
AdminClient method is rather distant from where they're actually thrown (the 
Future from the Map from the Results from the call), but it keeps the 
documentation about the method as a whole in one place. I didn't know whether 
to include the detailed possible causes for each Exception. The more detail, 
the harder it is to maintain, but the more useful it is to the client.

/cc @ijuma 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tombentley/kafka 
KAFKA-5856-AdminClient.createPartitions-follow-up

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3937.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3937


commit 1fad83a82bd1477cf6b46ac26755bfcb37b4914a
Author: Tom Bentley 
Date:   2017-09-21T13:09:55Z

End exception messages with a period

commit cfcf1dbd59d3707035e15f2e6a3f857d22aadcdd
Author: Tom Bentley 
Date:   2017-09-21T13:18:12Z

Javadoc the difference between InvalidTopic and UnknownTopicOrPartition

commit 3b2bb532a1097ffc58b8ca519dd42fcee71b63de
Author: Tom Bentley 
Date:   2017-09-21T16:38:05Z

Handling for noop requests

commit 08625b187c7713b41a49f5a12c2ca5221a24fb38
Author: Tom Bentley 
Date:   2017-09-21T17:08:32Z

Javadoc, plus change a couple of exceptions to be more consistent

commit 512308b563a192aa2a924ae732ef9b14e0b1b49a
Author: Tom Bentley 
Date:   2017-09-21T19:46:46Z

Improve tests

Ensure state doesn't change for error cases whether or not 
validateOnly=true

Add a mixed success and failure case




---


[GitHub] kafka pull request #3936: KAFKA-5956: use serdes from materialized in table ...

2017-09-21 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3936

KAFKA-5956: use serdes from materialized in table and globalTable

The new overloads `StreamsBuilder.table(String, Materialized)` and 
`StreamsBuilder.globalTable(String, Materialized)` need to set the serdes from 
`Materialized` on the internal `Consumed` instance that is created; otherwise 
the defaults will be used and may result in serialization errors.
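
A sketch of the affected call (topic and store names invented), where the 
serdes given via Materialized should also drive consumption of the source 
topic:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

StreamsBuilder builder = new StreamsBuilder();
// The key/value serdes set here must flow through to the internal Consumed
// for "source-topic"; otherwise the configured defaults are used.
KTable<String, Long> table = builder.table("source-topic",
        Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("table-store")
                    .withKeySerde(Serdes.String())
                    .withValueSerde(Serdes.Long()));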

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka table-materialized

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3936.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3936


commit a9dac7ef6499312d7e2a333fee80f3363c9f573b
Author: Damian Guy 
Date:   2017-09-21T18:16:52Z

use serdes from materialized in table and globaltable




---


[jira] [Created] (KAFKA-5956) StreamBuilder#table and StreamsBuilder#globalTable should use serdes from Materialized

2017-09-21 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-5956:
-

 Summary: StreamBuilder#table and StreamsBuilder#globalTable should 
use serdes from Materialized
 Key: KAFKA-5956
 URL: https://issues.apache.org/jira/browse/KAFKA-5956
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Damian Guy
Assignee: Damian Guy
 Fix For: 1.0.0


The new overloads {{StreamsBuilder.table(String, Materialized)}} and 
{{StreamsBuilder.globalTable(String, Materialized)}} need to set the serdes 
from {{Materialized}} on the internal {{Consumed}} instance that is created; 
otherwise the defaults will be used and may result in serialization errors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Jenkins build is back to normal : kafka-trunk-jdk9 #45

2017-09-21 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #3935: MINOR: improvement on top of KAFKA-5793: Tighten u...

2017-09-21 Thread tedyu
GitHub user tedyu opened a pull request:

https://github.com/apache/kafka/pull/3935

MINOR: improvement on top of KAFKA-5793: Tighten up the semantics of the 
OutOfOrderSequenceException

Simplified the condition in Sender#failBatch()
Added log in TransactionManager#updateLastAckedOffset()

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tedyu/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3935.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3935


commit 7875acaa1f52b568aa190ac9b00c5d77f35b8219
Author: tedyu 
Date:   2017-09-21T17:00:39Z

MINOR: improvement on top of KAFKA-5793: Tighten up the semantics of the 
OutOfOrderSequenceException




---


Jenkins build is back to normal : kafka-trunk-jdk8 #2055

2017-09-21 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk7 #2800

2017-09-21 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-5954; Correct Connect REST API system test

--
[...truncated 354.37 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurvive

[jira] [Created] (KAFKA-5955) Transient failure: AuthorizerIntegrationTest.testTransactionalProducerTopicAuthorizationExceptionInCommit

2017-09-21 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-5955:
--

 Summary: Transient failure: 
AuthorizerIntegrationTest.testTransactionalProducerTopicAuthorizationExceptionInCommit
 Key: KAFKA-5955
 URL: https://issues.apache.org/jira/browse/KAFKA-5955
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ismael Juma
 Fix For: 1.0.0


{code}
org.apache.kafka.common.KafkaException: Cannot execute transactional method 
because we are in an error state
at 
org.apache.kafka.clients.producer.internals.TransactionManager.maybeFailWithError(TransactionManager.java:754)
at 
org.apache.kafka.clients.producer.internals.TransactionManager.beginCommit(TransactionManager.java:218)
at 
org.apache.kafka.clients.producer.KafkaProducer.commitTransaction(KafkaProducer.java:618)
at 
kafka.api.AuthorizerIntegrationTest.testTransactionalProducerTopicAuthorizationExceptionInCommit(AuthorizerIntegrationTest.scala:1079)
{code}

https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/1193/testReport/junit/kafka.api/AuthorizerIntegrationTest/testTransactionalProducerTopicAuthorizationExceptionInCommit/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3934: KAFKA-5954: Correct Connect REST API system test

2017-09-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3934


---


Build failed in Jenkins: kafka-trunk-jdk7 #2799

2017-09-21 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-5856; AdminClient.createPartitions() follow-up (KIP-195)

--
[...truncated 1.79 MB...]
org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin PASSED

org.apache.kafka.streams.integration.FanoutIntegrationTest > 
shouldFanoutTheInput STARTED

org.apache.kafka.streams.integration.FanoutIntegrationTest > 
shouldFanoutTheInput PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithEosEnabled STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithEosEnabled PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToCommitToMultiplePartitions STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToCommitToMultiplePartitions PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFails STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFails PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToPerformMultipleTransactions STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToPerformMultipleTransactions PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToCommitMultiplePartitionOffsets STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToCommitMultiplePartitionOffsets PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologies STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologies PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskGetsFencedUsingIsolatedAppInstances STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskGetsFencedUsingIsolatedAppInstances PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFailsWithState STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFailsWithState PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologiesAndMultiplePartitions STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologiesAndMultiplePartitions PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRestartAfterClose STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRestartAfterClose PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegionWithZeroByteCache STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegionWithZeroByteCache PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegionWithNonZeroByteCache STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegionWithNonZeroByteCache PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregation

Build failed in Jenkins: kafka-trunk-jdk9 #44

2017-09-21 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-5856; AdminClient.createPartitions() follow-up (KIP-195)

--
[...truncated 1.39 MB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldAddEpochAndMessageOffsetToCache STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldAddEpochAndMessageOffsetToCache PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldDropEntriesOnEpochBoundaryWhenRemovingLatestEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldDropEntriesOnEpochBoundaryWhenRemovingLatestEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateSavedOffsetWhenOffsetToClearToIsBetweenEpochs STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateSavedOffsetWhenOffsetToClearToIsBetweenEpochs PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse 

Re: [DISCUSS]: KIP-159: Introducing Rich functions to Streams

2017-09-21 Thread Damian Guy
Hi Jeyhun,

All KIP-182 API PRs have now been merged, so you can consider it stable.
Thanks,
Damian

On Thu, 21 Sep 2017 at 15:23 Jeyhun Karimov  wrote:

> Hi all,
>
> Thanks a lot for your comments. For the single interface (RichXXX and
> XXXWithKey) solution, I have already submitted a PR, but it is probably
> outdated (it dates from when the KIP was first proposed), so I need to revisit it.
>
> @Guozhang, from our (offline) discussion, I understood that we may not be
> able to merge this KIP into the upcoming release, as KIP-159 has not been
> voted on yet (because we want KIP-149 and KIP-159 to go in as one "atomic"
> merge). So I decided to wait until KIP-182 becomes stable (there are some
> minor updates outstanding, AFAIK) and then update the KIP accordingly.
> Please correct me if I am wrong or have misunderstood.
>
> Cheers,
> Jeyhun
>
>
> On Thu, Sep 21, 2017 at 4:11 PM Damian Guy  wrote:
>
> > +1
> >
> > On Thu, 21 Sep 2017 at 13:46 Guozhang Wang  wrote:
> >
> > > +1 for me as well for collapsing.
> > >
> > > Jeyhun, could you update the wiki accordingly to show what the final
> > > updates post KIP-182 are that need to be done in KIP-159, including
> KIP-149?
> > > The child page I made is just a suggestion, but you would still need to
> > > update your proposal for people to comment and vote on.
> > >
> > >
> > > Guozhang
> > >
> > >
> > > On Thu, Sep 14, 2017 at 10:37 PM, Ted Yu  wrote:
> > >
> > > > +1
> > > >
> > > > One interface is cleaner.
> > > >
> > > > On Thu, Sep 14, 2017 at 7:26 AM, Bill Bejeck 
> > wrote:
> > > >
> > > > > +1 for me on collapsing the Rich and ValueWithKey
> interfaces
> > > > into 1
> > > > > interface.
> > > > >
> > > > > Thanks,
> > > > > Bill
> > > > >
> > > > > On Wed, Sep 13, 2017 at 11:31 AM, Jeyhun Karimov <
> > je.kari...@gmail.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > Hi Damian,
> > > > > >
> > > > > > Thanks for your feedback. Actually, this (what you propose) was
> the
> > > > first
> > > > > > idea of KIP-149. Then we decided to divide it into two KIPs. I
> also
> > > > > > expressed my opinion that keeping the two interfaces (Rich and
> > > withKey)
> > > > > > separate would add more overloads. So the email discussion
> > > > > > concluded that this would not be a problem.
> > > > > >
> > > > > > Our initial idea was similar to :
> > > > > >
> > > > > > public abstract class RichValueMapper<K, V, VR> implements
> > > > > > ValueMapperWithKey<K, V, VR>, RichFunction {
> > > > > > ..
> > > > > > }
> > > > > >
> > > > > >
> > > > > > So, we check the type of object, whether it is RichXXX or
> > XXXWithKey
> > > > > inside
> > > > > > the called method and continue accordingly.
> > > > > >
> > > > > > If this is ok with the community, I would like to revert the
> > current
> > > > > design
> > > > > > to this again.
> > > > > >
> > > > > > Cheers,
> > > > > > Jeyhun
> > > > > >
> > > > > > On Wed, Sep 13, 2017 at 3:02 PM Damian Guy  >
> > > > wrote:
> > > > > >
> > > > > > > Hi Jeyhun,
> > > > > > >
> > > > > > > Thanks for sending out the update. I guess I was thinking more
> > > along
> > > > > the
> > > > > > > lines of option 2 where we collapse the Rich and
> > > ValueWithKey
> > > > > etc
> > > > > > > interfaces into 1 interface that has all of the arguments. I
> > think
> > > we
> > > > > > then
> > > > > > > only need to add one additional overload for each operator?
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Damian
> > > > > > >
> > > > > > > On Wed, 13 Sep 2017 at 10:59 Jeyhun Karimov <
> > je.kari...@gmail.com>
> > > > > > wrote:
> > > > > > >
> > > > > > > > Dear all,
> > > > > > > >
> > > > > > > > I would like to resume the discussion on KIP-159. I (and
> > > Guozhang)
> > > > > > think
> > > > > > > > that releasing KIP-149 and KIP-159 in the same release would
> > make
> > > > > sense
> > > > > > > to
> > > > > > > > avoid a release with "partial" public APIs. There is a KIP
> [1]
> > > > > proposed
> > > > > > > by
> > > > > > > > Guozhang (and approved by me) to unify both KIPs.
> > > > > > > > Please feel free to comment on this.
> > > > > > > >
> > > > > > > > [1]
> > > > > > > >
> > > > > > > https://cwiki.apache.org/confluence/pages/viewpage.
> > > > > > action?pageId=73637757
> > > > > > > >
> > > > > > > > Cheers,
> > > > > > > > Jeyhun
> > > > > > > >
> > > > > > > > On Fri, Jul 21, 2017 at 2:00 AM Jeyhun Karimov <
> > > > je.kari...@gmail.com
> > > > > >
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > Hi Matthias, Damian, all,
> > > > > > > > >
> > > > > > > > > Thanks for your comments and sorry for super-late update.
> > > > > > > > >
> > > > > > > > > Sure, the DSL refactoring is not blocking for this KIP.
> > > > > > > > > I made some changes to KIP document based on my prototype.
> > > > > > > > >
> > > > > > > > > Please feel free to comment.
> > > > > > > > >
> > > > > > > > > Cheers,
> > > > > > > > > Jeyhun
> > > > > > > > >
> > > > > > > > > On Fri, Jul 7, 2017 at 9:35 PM Matthias J. Sax <
> > > > > > matth...@confluent.io>
> > > >

Build failed in Jenkins: kafka-trunk-jdk8 #2054

2017-09-21 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-5922: Add SessionWindowedKStream

[wangguoz] MINOR: add upgrade note for KIP-173 topic configs

[ismael] KAFKA-5947; Handle authentication failure in admin client, txn producer

--
[...truncated 1.80 MB...]
org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin PASSED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithIntermediateUserTopic STARTED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithIntermediateUserTopic PASSED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic STARTED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQu

[GitHub] kafka pull request #3934: KAFKA-5954 Correct Connect REST API system test

2017-09-21 Thread rhauch
GitHub user rhauch opened a pull request:

https://github.com/apache/kafka/pull/3934

KAFKA-5954 Correct Connect REST API system test



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rhauch/kafka kafka-5954

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3934.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3934


commit 2f6f949fdd32b6bc0cadc075589c0c3373b03f82
Author: Randall Hauch 
Date:   2017-09-21T14:38:17Z

KAFKA-5954 Correct Connect REST API system test




---


[GitHub] kafka pull request #3933: all topics should be in paused states

2017-09-21 Thread lisa2lisa
GitHub user lisa2lisa opened a pull request:

https://github.com/apache/kafka/pull/3933

all topics should be in paused states



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lisa2lisa/kafka log-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3933.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3933


commit 2dcd77e21817fd2e3142d17e55cc01750513
Author: Xin Li 
Date:   2017-09-21T13:59:08Z

all topics should be in paused states




---


Re: [DISCUSS]: KIP-159: Introducing Rich functions to Streams

2017-09-21 Thread Jeyhun Karimov
Hi all,

Thanks a lot for your comments. For the single interface (RichXXX and
XXXWithKey) solution, I have already submitted a PR, but it is probably
outdated (it dates from when the KIP was first proposed), so I need to revisit it.
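
For illustration, here is a minimal and purely hypothetical sketch of the
collapsed single-interface idea (the interface names and lifecycle hooks are
illustrative only, not the final KIP-149/KIP-159 API):

interface ValueMapperWithKey<K, V, VR> {
    VR apply(final K readOnlyKey, final V value);
}

interface RichFunction {
    void init();    // lifecycle hooks; the exact signatures are still open
    void close();
}

// One abstract class carries both the key-aware apply() signature and the
// rich lifecycle, so the DSL needs only a single additional overload per
// operator and can check the concrete type of the user object at call time.
abstract class RichValueMapper<K, V, VR>
        implements ValueMapperWithKey<K, V, VR>, RichFunction {
    @Override
    public void init() { }

    @Override
    public void close() { }
}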

@Guozhang, from our (offline) discussion, I understood that we may not be
able to merge this KIP into the upcoming release, as KIP-159 has not been
voted on yet (because we want KIP-149 and KIP-159 to go in as one "atomic"
merge). So I decided to wait until KIP-182 becomes stable (there are some
minor updates outstanding, AFAIK) and then update the KIP accordingly.
Please correct me if I am wrong or have misunderstood.

Cheers,
Jeyhun


On Thu, Sep 21, 2017 at 4:11 PM Damian Guy  wrote:

> +1
>
> On Thu, 21 Sep 2017 at 13:46 Guozhang Wang  wrote:
>
> > +1 for me as well for collapsing.
> >
> > Jeyhun, could you update the wiki accordingly to show what the final
> > updates post KIP-182 are that need to be done in KIP-159, including KIP-149?
> > The child page I made is just a suggestion, but you would still need to
> > update your proposal for people to comment and vote on.
> >
> >
> > Guozhang
> >
> >
> > On Thu, Sep 14, 2017 at 10:37 PM, Ted Yu  wrote:
> >
> > > +1
> > >
> > > One interface is cleaner.
> > >
> > > On Thu, Sep 14, 2017 at 7:26 AM, Bill Bejeck 
> wrote:
> > >
> > > > +1 for me on collapsing the Rich and ValueWithKey interfaces
> > > into 1
> > > > interface.
> > > >
> > > > Thanks,
> > > > Bill
> > > >
> > > > On Wed, Sep 13, 2017 at 11:31 AM, Jeyhun Karimov <
> je.kari...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > Hi Damian,
> > > > >
> > > > > Thanks for your feedback. Actually, this (what you propose) was the
> > > first
> > > > > idea of KIP-149. Then we decided to divide it into two KIPs. I also
> > > > > expressed my opinion that keeping the two interfaces (Rich and
> > withKey)
> > > > > separate would add more overloads. So the email discussion
> > > > > concluded that this would not be a problem.
> > > > >
> > > > > Our initial idea was similar to :
> > > > >
> > > > > public abstract class RichValueMapper<K, V, VR> implements
> > > > > ValueMapperWithKey<K, V, VR>, RichFunction {
> > > > > ..
> > > > > }
> > > > >
> > > > >
> > > > > So, we check the type of object, whether it is RichXXX or
> XXXWithKey
> > > > inside
> > > > > the called method and continue accordingly.
> > > > >
> > > > > If this is ok with the community, I would like to revert the
> current
> > > > design
> > > > > to this again.
> > > > >
> > > > > Cheers,
> > > > > Jeyhun
> > > > >
> > > > > On Wed, Sep 13, 2017 at 3:02 PM Damian Guy 
> > > wrote:
> > > > >
> > > > > > Hi Jeyhun,
> > > > > >
> > > > > > Thanks for sending out the update. I guess I was thinking more
> > along
> > > > the
> > > > > > lines of option 2 where we collapse the Rich and
> > ValueWithKey
> > > > etc
> > > > > > interfaces into 1 interface that has all of the arguments. I
> think
> > we
> > > > > then
> > > > > > only need to add one additional overload for each operator?
> > > > > >
> > > > > > Thanks,
> > > > > > Damian
> > > > > >
> > > > > > On Wed, 13 Sep 2017 at 10:59 Jeyhun Karimov <
> je.kari...@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > > > Dear all,
> > > > > > >
> > > > > > > I would like to resume the discussion on KIP-159. I (and
> > Guozhang)
> > > > > think
> > > > > > > that releasing KIP-149 and KIP-159 in the same release would
> make
> > > > sense
> > > > > > to
> > > > > > > avoid a release with "partial" public APIs. There is a KIP [1]
> > > > proposed
> > > > > > by
> > > > > > > Guozhang (and approved by me) to unify both KIPs.
> > > > > > > Please feel free to comment on this.
> > > > > > >
> > > > > > > [1]
> > > > > > >
> > > > > > https://cwiki.apache.org/confluence/pages/viewpage.
> > > > > action?pageId=73637757
> > > > > > >
> > > > > > > Cheers,
> > > > > > > Jeyhun
> > > > > > >
> > > > > > > On Fri, Jul 21, 2017 at 2:00 AM Jeyhun Karimov <
> > > je.kari...@gmail.com
> > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi Matthias, Damian, all,
> > > > > > > >
> > > > > > > > Thanks for your comments and sorry for super-late update.
> > > > > > > >
> > > > > > > > Sure, the DSL refactoring is not blocking for this KIP.
> > > > > > > > I made some changes to KIP document based on my prototype.
> > > > > > > >
> > > > > > > > Please feel free to comment.
> > > > > > > >
> > > > > > > > Cheers,
> > > > > > > > Jeyhun
> > > > > > > >
> > > > > > > > On Fri, Jul 7, 2017 at 9:35 PM Matthias J. Sax <
> > > > > matth...@confluent.io>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > >> I would not block this KIP with regard to DSL refactoring.
> > IMHO,
> > > > we
> > > > > > can
> > > > > > > >> just finish this one and the DSL refactoring will help later
> > on
> > > to
> > > > > > > >> reduce the number of overloads.
> > > > > > > >>
> > > > > > > >> -Matthias
> > > > > > > >>
> > > > > > > >> On 7/7/17 5:28 AM, Jeyhun Karimov wrote:
> > > > > > > >> > I am following the related thread in the mail

[GitHub] kafka pull request #3930: KAFKA-5856; AdminClient.createPartitions() follow-...

2017-09-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3930


---


[GitHub] kafka-site pull request #82: Adding minor style changes

2017-09-21 Thread manjuapu
Github user manjuapu closed the pull request at:

https://github.com/apache/kafka-site/pull/82


---


Re: [VOTE] KIP-182 - Reduce Streams DSL overloads and allow easier use of custom storage engines

2017-09-21 Thread Damian Guy
Hi All,

There has been one further update to the KIP. WindowedKStream has been
renamed to TimeWindowedKStream.
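
For reference, a short usage sketch of the updated surface (a sketch only,
assuming the 1.0.0 Streams API as the KIP describes it; the topic and store
names are made up):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.*;
import org.apache.kafka.streams.state.KeyValueStore;

public class Kip182Sketch {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();

        // New StreamsBuilder#table overload that takes Materialized directly:
        final KTable<String, String> table = builder.table(
            "input-topic",
            Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("table-store")
                .withKeySerde(Serdes.String())
                .withValueSerde(Serdes.String()));

        // WindowedKStream is now TimeWindowedKStream; it is what
        // groupByKey().windowedBy(...) returns:
        final KTable<Windowed<String>, Long> counts =
            builder.<String, String>stream("clicks")
                   .groupByKey()
                   .windowedBy(TimeWindows.of(60_000L))
                   .count();
    }
}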

Thanks,
Damian

On Tue, 19 Sep 2017 at 12:21 Damian Guy  wrote:

> All, a small addition to the KIP. During implementation we realized it
> would be good to add the below two methods to StreamBuilder
>
> public synchronized <K, V> GlobalKTable<K, V> globalTable(final String
> topic, final Materialized<K, V, KeyValueStore<Bytes, byte[]>> materialized)
> public synchronized <K, V> KTable<K, V> table(final String topic, final
> Materialized<K, V, KeyValueStore<Bytes, byte[]>> materialized)
>
> Thanks,
> Damian
>
> On Fri, 15 Sep 2017 at 12:37 Damian Guy  wrote:
>
>> Sounds good to me.
>>
>> On Thu, 14 Sep 2017 at 19:55 Guozhang Wang  wrote:
>>
>>> I'd suggest we remove both to and through together in KIP-182, since the
>>> operator "KTable#to" is as confusing as "KTable#through", which outweighs
>>> its benefit as syntactic sugar. I think the extra "toStream" step
>>> is actually better at reminding the caller that it is sending its
>>> changelog stream to a topic, plus it is not that many characters.
>>>
>>>
>>> Guozhang
>>>
>>> On Wed, Sep 13, 2017 at 12:40 AM, Damian Guy 
>>> wrote:
>>>
>>> > Hi Guozhang,
>>> >
>>> > I had an offline discussion with Matthias and Bill about it. It is
>>> thought
>>> > that `to` offers some benefit, i.e., syntactic sugar, so perhaps no
>>> harm in
>>> > keeping it. However, `through` less so, seeing as we can materialize
>>> stores
>>> > via `filter`, `map` etc, so one of the main benefits of `through` no
>>> longer
>>> > exists. WDYT?
>>> >
>>> > Thanks,
>>> > Damian
>>> >
>>> > On Tue, 12 Sep 2017 at 18:17 Guozhang Wang  wrote:
>>> >
>>> > > Hi Damian,
>>> > >
>>> > > Why we are deprecating KTable.through while keeping KTable.to?
>>> Should we
>>> > > either keep both of them or deprecate both of them in favor or
>>> > > KTable.toStream if people agree that it is confusing to users?
>>> > >
>>> > >
>>> > > Guozhang
>>> > >
>>> > >
>>> > > On Tue, Sep 12, 2017 at 1:18 AM, Damian Guy 
>>> > wrote:
>>> > >
>>> > > > Hi All,
>>> > > >
>>> > > > A minor update to the KIP, i needed to add KTable.to(Produced) for
>>> > > > consistency. KTable.through will be deprecated in favour of using
>>> > > > KTable.toStream().through()
>>> > > >
>>> > > > Thanks,
>>> > > > Damian
>>> > > >
>>> > > > On Thu, 7 Sep 2017 at 08:52 Damian Guy 
>>> wrote:
>>> > > >
>>> > > > > Thanks all. The vote is now closed and the KIP has been accepted
>>> > with:
>>> > > > > 2 non binding votes - bill and matthias
>>> > > > > 3 binding  - Damian, Guozhang, Sriram
>>> > > > >
>>> > > > > Regards,
>>> > > > > Damian
>>> > > > >
>>> > > > > On Tue, 5 Sep 2017 at 22:24 Sriram Subramanian >> >
>>> > > wrote:
>>> > > > >
>>> > > > >> +1
>>> > > > >>
>>> > > > >> On Tue, Sep 5, 2017 at 1:33 PM, Guozhang Wang <
>>> wangg...@gmail.com>
>>> > > > wrote:
>>> > > > >>
>>> > > > >> > +1
>>> > > > >> >
>>> > > > >> > On Fri, Sep 1, 2017 at 3:45 PM, Matthias J. Sax <
>>> > > > matth...@confluent.io>
>>> > > > >> > wrote:
>>> > > > >> >
>>> > > > >> > > +1
>>> > > > >> > >
>>> > > > >> > > On 9/1/17 2:53 PM, Bill Bejeck wrote:
>>> > > > >> > > > +1
>>> > > > >> > > >
>>> > > > >> > > > On Thu, Aug 31, 2017 at 10:20 AM, Damian Guy <
>>> > > > damian@gmail.com>
>>> > > > >> > > wrote:
>>> > > > >> > > >
>>> > > > >> > > >> Thanks everyone for voting! Unfortunately I've had to
>>> make a
>>> > > bit
>>> > > > >> of an
>>> > > > >> > > >> update based on some issues found during implementation.
>>> > > > >> > > >> The main changes are:
>>> > > > >> > > >> BytesStoreSupplier -> StoreSupplier
>>> > > > >> > > >> Addition of:
>>> > > > >> > > >> WindowBytesStoreSupplier, KeyValueBytesStoreSupplier,
>>> > > > >> > > >> SessionBytesStoreSupplier that will restrict store types to
>>> > > > >> > > >> <Bytes, byte[]>
>>> > > > >> > > >> 3 new overloads added to Materialized to enable
>>> developers to
>>> > > > >> create a
>>> > > > >> > > >> Materialized of the appropriate type, i.e., WindowStore
>>> etc
>>> > > > >> > > >> Update DSL where Materialized is used such that the stores
>>> > > > >> > > >> have generic types of <K, V>
>>> > > > >> > > >> Some minor changes to the arguments to
>>> > > > Store#persistentWindowStore
>>> > > > >> and
>>> > > > >> > > >> Store#persistentSessionStore
>>> > > > >> > > >>
>>> > > > >> > > >> Please take a look and recast the votes.
>>> > > > >> > > >>
>>> > > > >> > > >> Thanks for your time,
>>> > > > >> > > >> Damian
>>> > > > >> > > >>
>>> > > > >> > > >> On Fri, 25 Aug 2017 at 17:05 Matthias J. Sax <
>>> > > > >> matth...@confluent.io>
>>> > > > >> > > >> wrote:
>>> > > > >> > > >>
>>> > > > >> > > >>> Thanks Damian. Great KIP!
>>> > > > >> > > >>>
>>> > > > >> > > >>> +1
>>> > > > >> > > >>>
>>> > > > >> > > >>>
>>> > > > >> > > >>> -Matthias
>>> > > > >> > > >>>
>>> > > > >> > > >>> On 8/25/17 6:45 AM, Damian Guy wrote:
>>> > > > >> > >  Hi,
>>> > > > >> > > 
>>> > > > >> > >  I've just realised we need to add two methods to
>>> > > >

Re: [DISCUSS]: KIP-159: Introducing Rich functions to Streams

2017-09-21 Thread Damian Guy
+1

On Thu, 21 Sep 2017 at 13:46 Guozhang Wang  wrote:

> +1 for me as well for collapsing.
>
> Jeyhun, could you update the wiki accordingly to show what the final
> updates post KIP-182 are that need to be done in KIP-159, including KIP-149?
> The child page I made is just a suggestion, but you would still need to
> update your proposal for people to comment and vote on.
>
>
> Guozhang
>
>
> On Thu, Sep 14, 2017 at 10:37 PM, Ted Yu  wrote:
>
> > +1
> >
> > One interface is cleaner.
> >
> > On Thu, Sep 14, 2017 at 7:26 AM, Bill Bejeck  wrote:
> >
> > > +1 for me on collapsing the Rich and ValueWithKey interfaces
> > into 1
> > > interface.
> > >
> > > Thanks,
> > > Bill
> > >
> > > On Wed, Sep 13, 2017 at 11:31 AM, Jeyhun Karimov  >
> > > wrote:
> > >
> > > > Hi Damian,
> > > >
> > > > Thanks for your feedback. Actually, this (what you propose) was the
> > first
> > > > idea of KIP-149. Then we decided to divide it into two KIPs. I also
> > > > expressed my opinion that keeping the two interfaces (Rich and
> withKey)
> > > > separate would add more overloads. So the email discussion concluded
> > > > that this would not be a problem.
> > > >
> > > > Our initial idea was similar to :
> > > >
> > > > public abstract class RichValueMapper<K, V, VR> implements
> > > > ValueMapperWithKey<K, V, VR>, RichFunction {
> > > > ..
> > > > }
> > > >
> > > >
> > > > So, we check the type of object, whether it is RichXXX or XXXWithKey
> > > inside
> > > > the called method and continue accordingly.
> > > >
> > > > If this is ok with the community, I would like to revert the current
> > > design
> > > > to this again.
> > > >
> > > > Cheers,
> > > > Jeyhun
> > > >
> > > > On Wed, Sep 13, 2017 at 3:02 PM Damian Guy 
> > wrote:
> > > >
> > > > > Hi Jeyhun,
> > > > >
> > > > > Thanks for sending out the update. I guess I was thinking more
> along
> > > the
> > > > > lines of option 2 where we collapse the Rich and
> ValueWithKey
> > > etc
> > > > > interfaces into 1 interface that has all of the arguments. I think
> we
> > > > then
> > > > > only need to add one additional overload for each operator?
> > > > >
> > > > > Thanks,
> > > > > Damian
> > > > >
> > > > > On Wed, 13 Sep 2017 at 10:59 Jeyhun Karimov 
> > > > wrote:
> > > > >
> > > > > > Dear all,
> > > > > >
> > > > > > I would like to resume the discussion on KIP-159. I (and
> Guozhang)
> > > > think
> > > > > > that releasing KIP-149 and KIP-159 in the same release would make
> > > sense
> > > > > to
> > > > > > avoid a release with "partial" public APIs. There is a KIP [1]
> > > proposed
> > > > > by
> > > > > > Guozhang (and approved by me) to unify both KIPs.
> > > > > > Please feel free to comment on this.
> > > > > >
> > > > > > [1]
> > > > > >
> > > > > https://cwiki.apache.org/confluence/pages/viewpage.
> > > > action?pageId=73637757
> > > > > >
> > > > > > Cheers,
> > > > > > Jeyhun
> > > > > >
> > > > > > On Fri, Jul 21, 2017 at 2:00 AM Jeyhun Karimov <
> > je.kari...@gmail.com
> > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Hi Matthias, Damian, all,
> > > > > > >
> > > > > > > Thanks for your comments and sorry for super-late update.
> > > > > > >
> > > > > > > Sure, the DSL refactoring is not blocking for this KIP.
> > > > > > > I made some changes to KIP document based on my prototype.
> > > > > > >
> > > > > > > Please feel free to comment.
> > > > > > >
> > > > > > > Cheers,
> > > > > > > Jeyhun
> > > > > > >
> > > > > > > On Fri, Jul 7, 2017 at 9:35 PM Matthias J. Sax <
> > > > matth...@confluent.io>
> > > > > > > wrote:
> > > > > > >
> > > > > > >> I would not block this KIP with regard to DSL refactoring.
> IMHO,
> > > we
> > > > > can
> > > > > > >> just finish this one and the DSL refactoring will help later
> on
> > to
> > > > > > >> reduce the number of overloads.
> > > > > > >>
> > > > > > >> -Matthias
> > > > > > >>
> > > > > > >> On 7/7/17 5:28 AM, Jeyhun Karimov wrote:
> > > > > > >> > I am following the related thread in the mailing list and
> > > looking
> > > > > > >> forward
> > > > > > >> > for one-shot solution for overloads issue.
> > > > > > >> >
> > > > > > >> > Cheers,
> > > > > > >> > Jeyhun
> > > > > > >> >
> > > > > > >> > On Fri, Jul 7, 2017 at 10:32 AM Damian Guy <
> > > damian@gmail.com>
> > > > > > >> wrote:
> > > > > > >> >
> > > > > > >> >> Hi Jeyhun,
> > > > > > >> >>
> > > > > > >> >> About overrides, what other alternatives do we have? For
> > > > > > >> >>> backwards-compatibility we have to add extra methods to
> the
> > > > > existing
> > > > > > >> >> ones.
> > > > > > >> >>>
> > > > > > >> >>>
> > > > > > >> >> It wasn't clear to me in the KIP if these are new methods
> or
> > > > > > replacing
> > > > > > >> >> existing ones.
> > > > > > >> >> Also, we are currently discussing options for replacing the
> > > > > > overrides.
> > > > > > >> >>
> > > > > > >> >> Thanks,
> > > > > > >> >> Damian
> > > > > > >> >>
> > > > > > >> >>
> > > > > > >> >>> About ProcessorContext vs RecordContext, you are right. 

[jira] [Created] (KAFKA-5954) Failure in Connect system test: ConnectRestApiTest

2017-09-21 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-5954:


 Summary: Failure in Connect system test: ConnectRestApiTest
 Key: KAFKA-5954
 URL: https://issues.apache.org/jira/browse/KAFKA-5954
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 1.0.0
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 1.0.0


KAFKA-5657 recently changed the REST response for several endpoints to include 
the connector type. The {{ConnectRestApiTest}} system test checks the response
and compares it to an expected document, and this is now failing:

{noformat}
[INFO:2017-09-21 12:53:42,875]: Triggering test 291 of 311...
[INFO:2017-09-21 12:53:42,883]: RunnerClient: Loading test {'directory': 
'/home/jenkins/workspace/system-test-kafka-trunk/kafka/tests/kafkatest/tests/connect',
 'file_name': 'connect_rest_test.py', 'method_name': 'test_rest_api', 
'cls_name': 'ConnectRestApiTest', 'injected_args': None}
[INFO:2017-09-21 12:53:44,051]: RunnerClient: 
kafkatest.tests.connect.connect_rest_test.ConnectRestApiTest.test_rest_api: 
Setting up...
[INFO:2017-09-21 12:53:53,274]: RunnerClient: 
kafkatest.tests.connect.connect_rest_test.ConnectRestApiTest.test_rest_api: 
Running...
[INFO:2017-09-21 12:54:27,193]: RunnerClient: 
kafkatest.tests.connect.connect_rest_test.ConnectRestApiTest.test_rest_api: 
FAIL: Incorrect info:{"type": "source", "tasks": [{"connector": 
"local-file-source", "task": 0}], "config": {"topic": "test", 
"connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector", 
"name": "local-file-source", "file": "/mnt/connect.input", "tasks.max": "1"}, 
"name": "local-file-source"}
Traceback (most recent call last):
  File 
"/home/jenkins/workspace/system-test-kafka-trunk/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.7.1-py2.7.egg/ducktape/tests/runner_client.py",
 line 132, in run
data = self.run_test()
  File 
"/home/jenkins/workspace/system-test-kafka-trunk/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.7.1-py2.7.egg/ducktape/tests/runner_client.py",
 line 185, in run_test
return self.test_context.function(self.test)
  File 
"/home/jenkins/workspace/system-test-kafka-trunk/kafka/tests/kafkatest/tests/connect/connect_rest_test.py",
 line 121, in test_rest_api
assert expected_source_info == source_info, "Incorrect info:" + 
json.dumps(source_info)
AssertionError: Incorrect info:{"type": "source", "tasks": [{"connector": 
"local-file-source", "task": 0}], "config": {"topic": "test", 
"connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector", 
"name": "local-file-source", "file": "/mnt/connect.input", "tasks.max": "1"}, 
"name": "local-file-source"}

[INFO:2017-09-21 12:54:27,194]: RunnerClient: 
kafkatest.tests.connect.connect_rest_test.ConnectRestApiTest.test_rest_api: 
Tearing down...
[INFO:2017-09-21 12:54:34,249]: RunnerClient: 
kafkatest.tests.connect.connect_rest_test.ConnectRestApiTest.test_rest_api: 
Summary: Incorrect info:{"type": "source", "tasks": [{"connector": 
"local-file-source", "task": 0}], "config": {"topic": "test", 
"connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector", 
"name": "local-file-source", "file": "/mnt/connect.input", "tasks.max": "1"}, 
"name": "local-file-source"}
Traceback (most recent call last):
  File 
"/home/jenkins/workspace/system-test-kafka-trunk/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.7.1-py2.7.egg/ducktape/tests/runner_client.py",
 line 132, in run
data = self.run_test()
  File 
"/home/jenkins/workspace/system-test-kafka-trunk/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.7.1-py2.7.egg/ducktape/tests/runner_client.py",
 line 185, in run_test
return self.test_context.function(self.test)
  File 
"/home/jenkins/workspace/system-test-kafka-trunk/kafka/tests/kafkatest/tests/connect/connect_rest_test.py",
 line 121, in test_rest_api
assert expected_source_info == source_info, "Incorrect info:" + 
json.dumps(source_info)
AssertionError: Incorrect info:{"type": "source", "tasks": [{"connector": 
"local-file-source", "task": 0}], "config": {"topic": "test", 
"connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector", 
"name": "local-file-source", "file": "/mnt/connect.input", "tasks.max": "1"}, 
"name": "local-file-source"}

[INFO:2017-09-21 12:54:34,250]: RunnerClient: 
kafkatest.tests.connect.connect_rest_test.ConnectRestApiTest.test_rest_api: 
Data: None
[INFO:2017-09-21 12:54:34,366]: 
~
{noformat}
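
For reference, an illustrative snippet (not part of the test itself) showing
how the connector info can be fetched from the Connect REST API with plain
JDK classes; the host, port, and connector name here are assumptions:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ConnectorInfoSketch {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8083/connectors/local-file-source");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            StringBuilder body = new StringBuilder();
            for (String line; (line = in.readLine()) != null; ) {
                body.append(line);
            }
            // Since KAFKA-5657 the JSON includes a "type" field ("source" or
            // "sink"); the expected document in the system test must now
            // include it for the comparison to pass.
            System.out.println(body);
        }
    }
}
{code}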





Build failed in Jenkins: kafka-trunk-jdk8 #2053

2017-09-21 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Fix testUniqueErrorCodes unit test failure

--
[...truncated 3.47 MB...]
org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldPerformRangeQueriesWithCachingDisabled PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldCloseOpenIteratorsWhenStoreClosedAndThrowInvalidStateStoreOnHasNextAndNext
 STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldCloseOpenIteratorsWhenStoreClosedAndThrowInvalidStateStoreOnHasNextAndNext
 PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldNotThrowNullPointerExceptionOnPutIfAbsentNullValue STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldNotThrowNullPointerExceptionOnPutIfAbsentNullValue PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldThrowNullPointerExceptionOnPutAllNullKey STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldThrowNullPointerExceptionOnPutAllNullKey PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testSize 
STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testSize 
PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldNotThrowNullPointerExceptionOnPutNullValue STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldNotThrowNullPointerExceptionOnPutNullValue PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldThrowNullPointerExceptionOnPutIfAbsentNullKey STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldThrowNullPointerExceptionOnPutIfAbsentNullKey PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutIfAbsent STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testRestoreWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldThrowNullPointerExceptionOnPutNullKey STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldThrowNullPointerExceptionOnPutNullKey PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testRestore 
STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testRestore 
PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldThrowNullPointerExceptionOnRangeNullToKey STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldThrowNullPointerExceptionOnRangeNullToKey PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldNotThrowNullPointerExceptionOnPutAllNullKey STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldNotThrowNullPointerExceptionOnPutAllNullKey PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRange STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldThrowNullPointerExceptionOnGetNullKey STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldThrowNullPointerExceptionOnGetNullKey PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldThrowNullPointerExceptionOnRangeNullFromKey STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldThrowNullPointerExceptionOnRangeNullFromKey PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldThrowNullPointerExceptionOnDeleteNullKey STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldThrowNullPointerExceptionOnDeleteNullKey PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.KeyValueTest > shouldHaveSaneEqualsAndHashCode STARTED

org.apache.kafka.streams.KeyValueTest > shouldHaveSaneEqualsAndHashCode PASSED

org.apache.kafka.streams.KafkaStreamsTest > testIllegalMetricsConfig STARTED

org.apache.kafka.streams.KafkaStreamsTest > testIllegalMetricsConfig PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotSetGlobalRestoreListenerAfterStarting STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotSetGlobalRestoreListenerAfterStarting PASSED

org.apache.kafka.streams.KafkaStreamsTest > shouldNotGetAllTasksWhenNotRunning 
STARTED

org.apache.kafka.streams.KafkaStreams

[jira] [Resolved] (KAFKA-5947) Handle authentication failures from transactional producer and KafkaAdminClient

2017-09-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5947.

Resolution: Fixed

> Handle authentication failures from transactional producer and 
> KafkaAdminClient
> ---
>
> Key: KAFKA-5947
> URL: https://issues.apache.org/jira/browse/KAFKA-5947
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 1.0.0
>
>
> Follow-on from KAFKA-5854 to handle authentication failures better for the
> transactional producer API and KafkaAdminClient
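
For illustration, a hedged sketch of the behavior this fix enables (the
connection and security settings below are placeholders): an authentication
failure now surfaces promptly as an AuthenticationException from the client
future instead of the request being retried until it times out.

{code}
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.errors.AuthenticationException;

public class AdminAuthSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9093");   // placeholder
        props.put("security.protocol", "SASL_PLAINTEXT");   // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            admin.describeCluster().nodes().get();
        } catch (ExecutionException e) {
            if (e.getCause() instanceof AuthenticationException) {
                // Fail fast with a clear error rather than retrying to timeout.
                System.err.println("Auth failed: " + e.getCause().getMessage());
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
{code}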





[GitHub] kafka pull request #3928: KAFKA-5947: Handle authentication failure in admin...

2017-09-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3928


---


Re: [DISCUSS]: KIP-159: Introducing Rich functions to Streams

2017-09-21 Thread Guozhang Wang
+1 for me as well for collapsing.

Jeyhun, could you update the wiki accordingly to show what the final
updates post KIP-182 are that need to be done in KIP-159, including KIP-149?
The child page I made is just a suggestion, but you would still need to
update your proposal for people to comment and vote on.


Guozhang


On Thu, Sep 14, 2017 at 10:37 PM, Ted Yu  wrote:

> +1
>
> One interface is cleaner.
>
> On Thu, Sep 14, 2017 at 7:26 AM, Bill Bejeck  wrote:
>
> > +1 for me on collapsing the Rich and ValueWithKey interfaces
> into 1
> > interface.
> >
> > Thanks,
> > Bill
> >
> > On Wed, Sep 13, 2017 at 11:31 AM, Jeyhun Karimov 
> > wrote:
> >
> > > Hi Damian,
> > >
> > > Thanks for your feedback. Actually, this (what you propose) was the
> first
> > > idea of KIP-149. Then we decided to divide it into two KIPs. I also
> > > expressed my opinion that keeping the two interfaces (Rich and withKey)
> > > separate would add more overloads. So the email discussion concluded
> > > that this would not be a problem.
> > >
> > > Our initial idea was similar to :
> > >
> > > public abstract class RichValueMapper<K, V, VR> implements
> > > ValueMapperWithKey<K, V, VR>, RichFunction {
> > > ..
> > > }
> > >
> > >
> > > So, we check the type of object, whether it is RichXXX or XXXWithKey
> > inside
> > > the called method and continue accordingly.
> > >
> > > If this is ok with the community, I would like to revert the current
> > design
> > > to this again.
> > >
> > > Cheers,
> > > Jeyhun
> > >
> > > On Wed, Sep 13, 2017 at 3:02 PM Damian Guy 
> wrote:
> > >
> > > > Hi Jeyhun,
> > > >
> > > > Thanks for sending out the update. I guess I was thinking more along
> > the
> > > > lines of option 2 where we collapse the Rich and ValueWithKey
> > etc
> > > > interfaces into 1 interface that has all of the arguments. I think we
> > > then
> > > > only need to add one additional overload for each operator?
> > > >
> > > > Thanks,
> > > > Damian
> > > >
> > > > On Wed, 13 Sep 2017 at 10:59 Jeyhun Karimov 
> > > wrote:
> > > >
> > > > > Dear all,
> > > > >
> > > > > I would like to resume the discussion on KIP-159. I (and Guozhang)
> > > think
> > > > > that releasing KIP-149 and KIP-159 in the same release would make
> > sense
> > > > to
> > > > > avoid a release with "partial" public APIs. There is a KIP [1]
> > proposed
> > > > by
> > > > > Guozhang (and approved by me) to unify both KIPs.
> > > > > Please feel free to comment on this.
> > > > >
> > > > > [1]
> > > > >
> > > > https://cwiki.apache.org/confluence/pages/viewpage.
> > > action?pageId=73637757
> > > > >
> > > > > Cheers,
> > > > > Jeyhun
> > > > >
> > > > > On Fri, Jul 21, 2017 at 2:00 AM Jeyhun Karimov <
> je.kari...@gmail.com
> > >
> > > > > wrote:
> > > > >
> > > > > > Hi Matthias, Damian, all,
> > > > > >
> > > > > > Thanks for your comments and sorry for super-late update.
> > > > > >
> > > > > > Sure, the DSL refactoring is not blocking for this KIP.
> > > > > > I made some changes to KIP document based on my prototype.
> > > > > >
> > > > > > Please feel free to comment.
> > > > > >
> > > > > > Cheers,
> > > > > > Jeyhun
> > > > > >
> > > > > > On Fri, Jul 7, 2017 at 9:35 PM Matthias J. Sax <
> > > matth...@confluent.io>
> > > > > > wrote:
> > > > > >
> > > > > >> I would not block this KIP with regard to DSL refactoring. IMHO,
> > we
> > > > can
> > > > > >> just finish this one and the DSL refactoring will help later on
> to
> > > > > >> reduce the number of overloads.
> > > > > >>
> > > > > >> -Matthias
> > > > > >>
> > > > > >> On 7/7/17 5:28 AM, Jeyhun Karimov wrote:
> > > > > >> > I am following the related thread in the mailing list and
> > looking
> > > > > >> forward
> > > > > >> > for one-shot solution for overloads issue.
> > > > > >> >
> > > > > >> > Cheers,
> > > > > >> > Jeyhun
> > > > > >> >
> > > > > >> > On Fri, Jul 7, 2017 at 10:32 AM Damian Guy <
> > damian@gmail.com>
> > > > > >> wrote:
> > > > > >> >
> > > > > >> >> Hi Jeyhun,
> > > > > >> >>
> > > > > >> >> About overrides, what other alternatives do we have? For
> > > > > >> >>> backwards-compatibility we have to add extra methods to the
> > > > existing
> > > > > >> >> ones.
> > > > > >> >>>
> > > > > >> >>>
> > > > > >> >> It wasn't clear to me in the KIP if these are new methods or
> > > > > replacing
> > > > > >> >> existing ones.
> > > > > >> >> Also, we are currently discussing options for replacing the
> > > > > overrides.
> > > > > >> >>
> > > > > >> >> Thanks,
> > > > > >> >> Damian
> > > > > >> >>
> > > > > >> >>
> > > > > >> >>> About ProcessorContext vs RecordContext, you are right. I
> > think
> > > I
> > > > > >> need to
> > > > > >> >>> implement a prototype to understand the full picture as some
> > > parts
> > > > > of
> > > > > >> the
> > > > > >> >>> KIP might not be as straightforward as I thought.
> > > > > >> >>>
> > > > > >> >>>
> > > > > >> >>> Cheers,
> > > > > >> >>> Jeyhun
> > > > > >> >>>
> > > > > >> >>> On Wed, Jul 5, 2017 at 10:40 AM Damian Guy <

Re: [DISCUSS]: KIP-159: Introducing Rich functions to Streams

2017-09-21 Thread Guozhang Wang
+1

On Thu, Sep 14, 2017 at 10:37 PM, Ted Yu  wrote:

> +1
>
> One interface is cleaner.
>
> On Thu, Sep 14, 2017 at 7:26 AM, Bill Bejeck  wrote:
>
> > +1 for me on collapsing the Rich and ValueWithKey interfaces
> into 1
> > interface.
> >
> > Thanks,
> > Bill
> >
> > On Wed, Sep 13, 2017 at 11:31 AM, Jeyhun Karimov 
> > wrote:
> >
> > > Hi Damian,
> > >
> > > Thanks for your feedback. Actually, this (what you propose) was the
> first
> > > idea of KIP-149. Then we decided to divide it into two KIPs. I also
> > > expressed my opinion that keeping the two interfaces (Rich and withKey)
> > > separate would add more overloads. So the email discussion concluded
> > > that this would not be a problem.
> > >
> > > Our initial idea was similar to :
> > >
> > > public abstract class RichValueMapper<K, V, VR> implements
> > > ValueMapperWithKey<K, V, VR>, RichFunction {
> > > ..
> > > }
> > >
> > >
> > > So, we check the type of object, whether it is RichXXX or XXXWithKey
> > inside
> > > the called method and continue accordingly.
> > >
> > > If this is ok with the community, I would like to revert the current
> > design
> > > to this again.
> > >
> > > Cheers,
> > > Jeyhun
> > >
> > > On Wed, Sep 13, 2017 at 3:02 PM Damian Guy 
> wrote:
> > >
> > > > Hi Jeyhun,
> > > >
> > > > Thanks for sending out the update. I guess I was thinking more along
> > the
> > > > lines of option 2 where we collapse the Rich and ValueWithKey
> > etc
> > > > interfaces into 1 interface that has all of the arguments. I think we
> > > then
> > > > only need to add one additional overload for each operator?
> > > >
> > > > Thanks,
> > > > Damian
> > > >
> > > > On Wed, 13 Sep 2017 at 10:59 Jeyhun Karimov 
> > > wrote:
> > > >
> > > > > Dear all,
> > > > >
> > > > > I would like to resume the discussion on KIP-159. I (and Guozhang)
> > > think
> > > > > that releasing KIP-149 and KIP-159 in the same release would make
> > sense
> > > > to
> > > > > avoid a release with "partial" public APIs. There is a KIP [1]
> > proposed
> > > > by
> > > > > Guozhang (and approved by me) to unify both KIPs.
> > > > > Please feel free to comment on this.
> > > > >
> > > > > [1]
> > > > >
> > > > https://cwiki.apache.org/confluence/pages/viewpage.
> > > action?pageId=73637757
> > > > >
> > > > > Cheers,
> > > > > Jeyhun
> > > > >
> > > > > On Fri, Jul 21, 2017 at 2:00 AM Jeyhun Karimov <
> je.kari...@gmail.com
> > >
> > > > > wrote:
> > > > >
> > > > > > Hi Matthias, Damian, all,
> > > > > >
> > > > > > Thanks for your comments and sorry for super-late update.
> > > > > >
> > > > > > Sure, the DSL refactoring is not blocking for this KIP.
> > > > > > I made some changes to KIP document based on my prototype.
> > > > > >
> > > > > > Please feel free to comment.
> > > > > >
> > > > > > Cheers,
> > > > > > Jeyhun
> > > > > >
> > > > > > On Fri, Jul 7, 2017 at 9:35 PM Matthias J. Sax <
> > > matth...@confluent.io>
> > > > > > wrote:
> > > > > >
> > > > > >> I would not block this KIP with regard to DSL refactoring. IMHO,
> > we
> > > > can
> > > > > >> just finish this one and the DSL refactoring will help later on
> to
> > > > > >> reduce the number of overloads.
> > > > > >>
> > > > > >> -Matthias
> > > > > >>
> > > > > >> On 7/7/17 5:28 AM, Jeyhun Karimov wrote:
> > > > > >> > I am following the related thread in the mailing list and
> > looking
> > > > > >> forward
> > > > > >> > for one-shot solution for overloads issue.
> > > > > >> >
> > > > > >> > Cheers,
> > > > > >> > Jeyhun
> > > > > >> >
> > > > > >> > On Fri, Jul 7, 2017 at 10:32 AM Damian Guy <
> > damian@gmail.com>
> > > > > >> wrote:
> > > > > >> >
> > > > > >> >> Hi Jeyhun,
> > > > > >> >>
> > > > > >> >> About overrides, what other alternatives do we have? For
> > > > > >> >>> backwards-compatibility we have to add extra methods to the
> > > > existing
> > > > > >> >> ones.
> > > > > >> >>>
> > > > > >> >>>
> > > > > >> >> It wasn't clear to me in the KIP if these are new methods or
> > > > > replacing
> > > > > >> >> existing ones.
> > > > > >> >> Also, we are currently discussing options for replacing the
> > > > > overrides.
> > > > > >> >>
> > > > > >> >> Thanks,
> > > > > >> >> Damian
> > > > > >> >>
> > > > > >> >>
> > > > > >> >>> About ProcessorContext vs RecordContext, you are right. I
> > think
> > > I
> > > > > >> need to
> > > > > >> >>> implement a prototype to understand the full picture as some
> > > parts
> > > > > of
> > > > > >> the
> > > > > >> >>> KIP might not be as straightforward as I thought.
> > > > > >> >>>
> > > > > >> >>>
> > > > > >> >>> Cheers,
> > > > > >> >>> Jeyhun
> > > > > >> >>>
> > > > > >> >>> On Wed, Jul 5, 2017 at 10:40 AM Damian Guy <
> > > damian@gmail.com>
> > > > > >> wrote:
> > > > > >> >>>
> > > > > >>  Hi Jeyhun,
> > > > > >> 
> > > > > >>  Is the intention that these methods are new overloads on
> the
> > > > > KStream,
> > > > > >>  KTable, etc?
> > > > > >> 
> > > > > >>  It is worth noting that a ProcessorCo

[GitHub] kafka pull request #3932: KAFKA-5867: Log Kafka Connect worker info during s...

2017-09-21 Thread kkonstantine
GitHub user kkonstantine opened a pull request:

https://github.com/apache/kafka/pull/3932

KAFKA-5867: Log Kafka Connect worker info during startup



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kkonstantine/kafka 
KAFKA-5867-Kafka-Connect-applications-should-log-info-message-when-starting-up

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3932.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3932


commit 793d5820785f4c0d92e5b6e31b8b196d46abfdd8
Author: Konstantine Karantasis 
Date:   2017-09-21T11:37:15Z

KAFKA-5867: Log Kafka Connect worker info during startup




---


Jenkins build is back to normal : kafka-trunk-jdk7 #2797

2017-09-21 Thread Apache Jenkins Server
See 




Re: 1.0.0 KIPs Update

2017-09-21 Thread Ismael Juma
The code freeze is the 3rd of October, but the last week is typically only for
bug fixes.

Ismael

On Thu, Sep 21, 2017 at 11:16 AM, Guozhang Wang  wrote:

> Note that the code freeze date is Oct. 3, so people will have about two
> weeks for that.
>
>
> Guozhang
>
> On Thu, Sep 21, 2017 at 7:46 AM, Ismael Juma  wrote:
>
> > Yes, although it would have to be reviewed and merged by next Wednesday.
> >
> > Ismael
> >
> > On Thu, Sep 21, 2017 at 12:17 AM, James Cheng 
> > wrote:
> >
> > > Thanks Ismael. Yes, you're right, PR 3799 is purely a refactoring.
> > >
> > > There are new subtasks of https://issues.apache.org/jira/browse/KAFKA-3480
> > > that I haven't yet
> > > filed, that I want to work on. They will add auto generation of metrics
> > > documentation for more areas of the codebase. So I was trying to figure
> > > out what my chances were of getting them into 1.0.0, to help me prioritize
> > > when I should work on them.
> > >
> > > But to your general point, it sounds like if I make my case and the
> > > community agrees that it's worth the cost/benefit, it might be eligible
> > > to get into 1.0.0. Sounds perfect. Thanks!
> > >
> > > -James
> > >
> > > > On Sep 20, 2017, at 3:14 PM, Ismael Juma  wrote:
> > > >
> > > > Hi James,
> > > >
> > > > Isn't PR 3799 purely a code refactoring (i.e. no change in
> behaviour)?
> > If
> > > > so, it would probably land in trunk only after today. If it's new
> > > > functionality, then it depends on size, risk and importance. If you
> > think
> > > > it's important and should target 1.0.0, feel free to make your case
> in
> > > the
> > > > JIRA or PR.
> > > >
> > > > Thanks,
> > > > Ismael
> > > >
> > > > On Wed, Sep 20, 2017 at 10:59 PM, James Cheng 
> > > wrote:
> > > >
> > > >> Hi. I'm a little unclear on what types of things are subject to the
> > > >> "Feature Freeze" today (Sept 20th).
> > > >>
> > > >> For changes such as https://github.com/apache/kafka/pull/3799 and
> > > >> https://issues.apache.org/jira/browse/KAFKA-3480
> > > >> that don't require KIPs, is today the freeze date? If I have a PR
> > > >> created but not yet reviewed, does that mean it is eligible to get into
> > > >> 1.0.0?
> > > >>
> > > >> And if I have a PR that comes AFTER today, does that mean it is NOT
> > > >> eligible for 1.0.0?
> > > >>
> > > >> Thanks!
> > > >>
> > > >> -James
> > > >>
> > > >>> On Sep 11, 2017, at 12:14 PM, Guozhang Wang 
> > > wrote:
> > > >>>
> > > >>> Sure! Please feel free to update the wiki.
> > > >>>
> > > >>>
> > > >>> Guozhang
> > > >>>
> > > >>> On Mon, Sep 11, 2017 at 9:28 AM, Rajini Sivaram <
> > > rajinisiva...@gmail.com
> > > >>>
> > > >>> wrote:
> > > >>>
> > >  Hi Guozhang,
> > > 
> > >  Can KIP-188 be added to the list, please? The vote has passed and
> PR
> > > >> should
> > >  be ready soon.
> > > 
> > >  Thank you,
> > > 
> > >  Rajini
> > > 
> > >  On Thu, Sep 7, 2017 at 10:28 PM, Guozhang Wang <
> wangg...@gmail.com>
> > > >> wrote:
> > > 
> > > > Actually my bad, there is already a voting thread and you asked
> > > people
> > > >> to
> > > > recast a vote on a small change.
> > > >
> > > > On Thu, Sep 7, 2017 at 2:27 PM, Guozhang Wang <
> wangg...@gmail.com>
> > >  wrote:
> > > >
> > > >> Hi Tom,
> > > >>
> > > >> It seems KIP-183 is still in the discussion phase, and voting
> has
> > > not
> > > > been
> > > >> started?
> > > >>
> > > >>
> > > >> Guozhang
> > > >>
> > > >>
> > > >> On Thu, Sep 7, 2017 at 1:13 AM, Tom Bentley <
> > t.j.bent...@gmail.com>
> > > > wrote:
> > > >>
> > > >>> Would it be possible to add KIP-183 to the list too, please?
> > > >>>
> > > >>> Thanks,
> > > >>>
> > > >>> Tom
> > > >>>
> > > >>> On 6 September 2017 at 22:04, Guozhang Wang <
> wangg...@gmail.com>
> > >  wrote:
> > > >>>
> > >  Hi Vahid,
> > > 
> > >  Yes I have just added it while sending this email :)
> > > 
> > > 
> > >  Guozhang
> > > 
> > >  On Wed, Sep 6, 2017 at 1:54 PM, Vahid S Hashemian <
> > >  vahidhashem...@us.ibm.com
> > > > wrote:
> > > 
> > > > Hi Guozhang,
> > > >
> > > > Thanks for the heads-up.
> > > >
> > > > Can KIP-163 be added to the list?
> > > > The proposal for this KIP is accepted, and the PR is ready
> for
> > > > review.
> > > >
> > > > Thanks.
> > > > --Vahid
> > > >
> > > >
> > > >
> > > > From:   Guozhang Wang 
> > > > To: "dev@kafka.apache.org" 
> > > > Date:   09/06/2017 01:45 PM
> > > > Subject:1.0.0 KIPs Update
> > > >
> > > >
> > > >>

[GitHub] kafka pull request #3921: MINOR: add upgrade note for KIP-173 topic configs

2017-09-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3921


---


Re: 1.0.0 KIPs Update

2017-09-21 Thread Guozhang Wang
Note that the code freeze date is Oct. 3, so people will have about two
weeks for that.


Guozhang

On Thu, Sep 21, 2017 at 7:46 AM, Ismael Juma  wrote:

> Yes, although it would have to be reviewed and merged by next Wednesday.
>
> Ismael
>
> [...]
>
> > > From:   Guozhang Wang 
> > > To: "dev@kafka.apache.org" 
> > > Date:   09/06/2017 01:45 PM
> > > Subject:1.0.0 KIPs Update
> > >
> > >
> > >
> > > Hello folks,
> > >
> > > This is a heads up on 1.0.0 progress:
> > >
> > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913
> > >
> > >
> > > We have one week

[GitHub] kafka-site issue #82: Adding minor style changes

2017-09-21 Thread guozhangwang
Github user guozhangwang commented on the issue:

https://github.com/apache/kafka-site/pull/82
  
@manjuapu Only the PR submitter can close her own PRs unfortunately.


---


[GitHub] kafka-site issue #79: Adding Customer section to Streams page

2017-09-21 Thread guozhangwang
Github user guozhangwang commented on the issue:

https://github.com/apache/kafka-site/pull/79
  
Some meta comments:

1. The `.DS_Store` should not be included.
2. As @ewencp mentioned, these files under `0110/` should be in the `kafka` 
repo as well. You would need to file a separate PR against trunk adding these 
files to the `kafka` repo. The committer who merges that PR will also 
cherry-pick it to the `0.11.0` branch, so these files will be reflected when 
the 1.0.0 web docs go up.
3. `css/styles.css` lives only in the `kafka-site` repo, so it is good as is.


---


Build failed in Jenkins: kafka-trunk-jdk7 #2796

2017-09-21 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-5922: Add SessionWindowedKStream

--
[...truncated 356.36 KB...]
kafka.message.MessageCompressionTest > testCompressSize STARTED

kafka.message.MessageCompressionTest > testCompressSize PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress STARTED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq 
STARTED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytes STARTED

kafka.message.ByteBufferMessageSetTest > testValidBytes PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression STARTED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression PASSED

kafka.message.ByteBufferMessageSetTest > 
testWriteToChannelThatConsumesPartially STARTED

kafka.message.ByteBufferMessageSetTest > 
testWriteToChannelThatConsumesPartially PASSED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent STARTED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo STARTED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED

kafka.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.message.ByteBufferMessageSetTest > testIterator STARTED

kafka.message.ByteBufferMessageSetTest > testIterator PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testJavaConversions STARTED

kafka.security.auth.PermissionTypeTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
t

[jira] [Created] (KAFKA-5953) Connect classloader isolation may be broken for JDBC drivers

2017-09-21 Thread Jiri Pechanec (JIRA)
Jiri Pechanec created KAFKA-5953:


 Summary: Connect classloader isolation may be broken for JDBC 
drivers
 Key: KAFKA-5953
 URL: https://issues.apache.org/jira/browse/KAFKA-5953
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.11.0.0
Reporter: Jiri Pechanec
Priority: Critical


Let's suppose there are two connectors deployed:
# one using a JDBC driver (the Debezium MySQL connector)
# one using the PostgreSQL JDBC driver (the JDBC sink connector)

Connector 1 is started first; it executes the statement
{code:java}
Connection conn = DriverManager.getConnection(url, props);
{code}
As a result, {{DriverManager}} uses {{ServiceLoader}} to search for all
available JDBC drivers. The PostgreSQL driver from connector 2 is found and
becomes associated with the classloader of connector 1.

Connector 2 is started after that; it executes the statement
{code:java}
connection = DriverManager.getConnection(url, username, password);
{code}

{{DriverManager}} finds the driver that was registered in the previous step,
but because the calling classloader is now different (classloader 2), it
refuses to resolve the driver class and no JDBC driver is found.
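
The underlying mechanism: {{DriverManager.getConnection}} only returns drivers
whose class is resolvable from the caller's classloader, so a driver registered
while one plugin classloader was in effect is invisible to a caller running
under another. A minimal sketch of the usual workaround, with hypothetical
names ({{MyJdbcSinkTask}} and the driver class name are placeholders, not code
from either connector): instantiate the driver through the plugin's own
classloader and bypass the {{DriverManager}} lookup entirely.

{code:java}
// Hypothetical sketch: load the driver via the plugin's classloader and
// connect without consulting DriverManager's registry at all.
Class<?> driverClass = Class.forName(
        "org.postgresql.Driver",                   // placeholder driver class
        true,
        MyJdbcSinkTask.class.getClassLoader());    // the plugin's isolated classloader
java.sql.Driver driver =
        (java.sql.Driver) driverClass.getDeclaredConstructor().newInstance();

java.util.Properties props = new java.util.Properties();
props.setProperty("user", username);       // url/username/password as in the
props.setProperty("password", password);   // snippets above
java.sql.Connection connection = driver.connect(url, props);
{code}

This keeps each connector pinned to whatever driver its own classloader
provides, regardless of registration order.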



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Jenkins build is back to normal : kafka-trunk-jdk9 #40

2017-09-21 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk7 #2795

2017-09-21 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Fix testUniqueErrorCodes unit test failure

--
[...truncated 354.48 KB...]

kafka.server.ServerGenerateBrokerIdTest > testMultipleLogDirsMetaProps STARTED

kafka.server.ServerGenerateBrokerIdTest > testMultipleLogDirsMetaProps PASSED

kafka.server.ServerGenerateBrokerIdTest > testDisableGeneratedBrokerId STARTED

kafka.server.ServerGenerateBrokerIdTest > testDisableGeneratedBrokerId PASSED

kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId 
STARTED

kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId 
PASSED

kafka.server.ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps STARTED

kafka.server.ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps PASSED

kafka.server.DelayedOperationTest > testRequestPurge STARTED

kafka.server.DelayedOperationTest > testRequestPurge PASSED

kafka.server.DelayedOperationTest > testRequestExpiry STARTED

kafka.server.DelayedOperationTest > testRequestExpiry PASSED

kafka.server.DelayedOperationTest > 
shouldReturnNilOperationsOnCancelForKeyWhenKeyDoesntExist STARTED

kafka.server.DelayedOperationTest > 
shouldReturnNilOperationsOnCancelForKeyWhenKeyDoesntExist PASSED

kafka.server.DelayedOperationTest > 
shouldCancelForKeyReturningCancelledOperations STARTED

kafka.server.DelayedOperationTest > 
shouldCancelForKeyReturningCancelledOperations PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction STARTED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.MultipleListenersWithDefaultJaasContextTest > testProduceConsume 
STARTED

kafka.server.MultipleListenersWithDefaultJaasContextTest > testProduceConsume 
PASSED

kafka.server.ThrottledResponseExpirationTest > testThrottledRequest STARTED

kafka.server.ThrottledResponseExpirationTest > testThrottledRequest PASSED

kafka.server.ThrottledResponseExpirationTest > testExpire STARTED

kafka.server.ThrottledResponseExpirationTest > testExpire PASSED

kafka.server.ReplicaManagerQuotasTest > shouldGetBothMessagesIfQuotasAllow 
STARTED

kafka.server.ReplicaManagerQuotasTest > shouldGetBothMessagesIfQuotasAllow 
PASSED

kafka.server.ReplicaManagerQuotasTest > 
shouldExcludeSubsequentThrottledPartitions STARTED

kafka.server.ReplicaManagerQuotasTest > 
shouldExcludeSubsequentThrottledPartitions PASSED

kafka.server.ReplicaManagerQuotasTest > 
shouldGetNoMessagesIfQuotasExceededOnSubsequentPartitions STARTED

kafka.server.ReplicaManagerQuotasTest > 
shouldGetNoMessagesIfQuotasExceededOnSubsequentPartitions PASSED

kafka.server.ReplicaManagerQuotasTest > shouldIncludeInSyncThrottledReplicas 
STARTED

kafka.server.ReplicaManagerQuotasTest > shouldIncludeInSyncThrottledReplicas 
PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion STARTED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestBeforeSaslHandshakeRequest STARTED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestBeforeSaslHandshakeRequest PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestAfterSaslHandshakeRequest STARTED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestAfterSaslHandshakeRequest PASSED

kafka.server.ControlledShutdownLeaderSelectorTest > testSelectLeader STARTED

kafka.server.ControlledShutdownLeaderSelectorTest > testSelectLeader PASSED

kafka.server.MetadataCacheTest > 
getTopicMetadataWithNonSupportedSecurityProtocol STARTED

kafka.server.MetadataCacheTest > 
getTopicMetadataWithNonSupportedSecurityProtocol PASSED

kafka.server.MetadataCacheTest > getTopicMetadataIsrNotAvailable STARTED

kafka.server.MetadataCacheTest > getTopicMetadataIsrNotAvailable PASSED

kafka.server.MetadataCacheTest > getTopicMetadata STARTED

kafka.server.MetadataCacheTest > getTopicMetadata PASSED

kafka.server.MetadataCacheTest > getTopicMetadataReplicaNotAvailable STARTED

kafka.server.MetadataCacheTest > getTopicMetadataReplicaNotAvailable PASSED

kafka.server.MetadataCacheTest > getTopicMetadataPartitionLeaderNotAvailable 
STARTED

kafka.server.MetadataCacheTest > getTopicMetadataPartitionLeaderNotAvailable 
PASSED

kafka.server.MetadataCacheTest > getAliveBrokersShouldNotBeMutatedByUpdateCache 
STARTED

kafka.server.MetadataCacheTest > getAliveBrokersShouldNotBeMutatedByUpdateCache 
PASSED

kafka.server.MetadataCacheTest > getTopicMetadataNonExistingTopics STARTED

kafka.server.MetadataCacheTest > getTopicMetadataNonExistingTopics PASSED

kafka.server.FetchRequestTest > testBrokerRespectsPartitionsOrderAndSizeLimits 
STARTED

kafka.server.FetchRequestTest > testBrokerRespectsPartitionsOrderAndSizeLimits 
PASSED

kafka.server.FetchRequestTest > 
testDownConversionFromBatchedToU

[jira] [Resolved] (KAFKA-5922) Add SessionWindowedKStream

2017-09-21 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-5922.
--
Resolution: Fixed

Issue resolved by pull request 3902
[https://github.com/apache/kafka/pull/3902]

> Add SessionWindowedKStream
> --
>
> Key: KAFKA-5922
> URL: https://issues.apache.org/jira/browse/KAFKA-5922
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 1.0.0
>
>
> Add SessionWindowedKStream interface and implementation



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3902: KAFKA-5922: Add SessionWindowedKStream

2017-09-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3902


---


[GitHub] kafka pull request #3931: Minor: Fix testUniqueErrorCodes unit test failure

2017-09-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3931


---


[GitHub] kafka pull request #3931: Minor: Fix testUniqueErrorCodes unit test failure

2017-09-21 Thread wushujames
GitHub user wushujames opened a pull request:

https://github.com/apache/kafka/pull/3931

Minor: Fix testUniqueErrorCodes unit test failure

I'm not really sure if I'm doing the right thing here, but I thought I'd 
give it a shot and try to fix the unit test breakage. With this change, 
```./gradlew :clients:unitTest``` passes.

Fix error code collision accidentally introduced when merging in 
https://github.com/apache/kafka/commit/5f6393f9b17cce17ded7a00e439599dfa77deb2d#diff-b119227df7efa3ffeb7fe69e49ff1afeR541

There were two error codes with the value 59.
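
For context, a sketch of what such a uniqueness check can look like 
(hypothetical enum and codes, not the actual `Errors` class or the real 
`testUniqueErrorCodes`):

```java
import java.util.HashMap;
import java.util.Map;

// Placeholder error enum; the real Errors class also maps codes to exceptions.
enum SketchErrors {
    NONE((short) 0),
    UNKNOWN((short) -1),
    DUPLICATE_A((short) 59),
    DUPLICATE_B((short) 59); // a collision like the one fixed here

    private final short code;
    SketchErrors(short code) { this.code = code; }
    short code() { return code; }
}

public class UniqueErrorCodesCheck {
    public static void main(String[] args) {
        Map<Short, SketchErrors> seen = new HashMap<>();
        for (SketchErrors e : SketchErrors.values()) {
            SketchErrors prev = seen.put(e.code(), e);
            if (prev != null) {
                // With the deliberate duplicate above, this prints the collision.
                System.out.println("Duplicate error code " + e.code()
                        + ": " + prev + " and " + e);
            }
        }
    }
}
```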

Requesting review by @tombentley 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wushujames/kafka KAFKA-5856.buildfix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3931.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3931


commit b3c738ba1dd8771c1e6a088222e3e9d850758aa1
Author: James Cheng 
Date:   2017-09-21T07:14:47Z

Fix error code collision accidentally introduced when merging in 
https://github.com/apache/kafka/commit/5f6393f9b17cce17ded7a00e439599dfa77deb2d#diff-b119227df7efa3ffeb7fe69e49ff1afeR541




---


[GitHub] kafka-site issue #82: Adding minor style changes

2017-09-21 Thread dguy
Github user dguy commented on the issue:

https://github.com/apache/kafka-site/pull/82
  
@manjuapu you should be able to close it yourself? I can't close it.


---