Re: Please add me to contributor list

2017-09-15 Thread Guozhang Wang
What's your Apache ID?

On Sat, Sep 16, 2017 at 8:24 AM, 鄭紹志  wrote:

> I want to work on some issues; please add me to the contributor list in JIRA.
>
> I also need write permission in Confluence.
>
>
> Thanks!
> Vito
>



-- 
-- Guozhang


[GitHub] kafka-site pull request #77: MINOR: Add streams child topics to left-hand na...

2017-09-15 Thread guozhangwang
Github user guozhangwang commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/77#discussion_r139275915
  
--- Diff: index.html ---
@@ -7,17 +7,17 @@

Publish & subscribe
to streams of data like a messaging 
system
-   Learn more 

+   Learn about APIs 

--- End diff --

`Learn about producer and consumer`?


---


[GitHub] kafka-site pull request #77: MINOR: Add streams child topics to left-hand na...

2017-09-15 Thread guozhangwang
Github user guozhangwang commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/77#discussion_r139275946
  
--- Diff: index.html ---
@@ -7,17 +7,17 @@

Publish & subscribe
to streams of data like a messaging 
system
-   Learn more 

+   Learn about APIs 



Process
streams of data efficiently and in real 
time
-   Learn more 

+   Learn about 
Streams 
--- End diff --

`Learn about Streams API`?


---


[GitHub] kafka-site pull request #77: MINOR: Add streams child topics to left-hand na...

2017-09-15 Thread guozhangwang
Github user guozhangwang commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/77#discussion_r139275996
  
--- Diff: index.html ---
@@ -7,17 +7,17 @@

Publish & subscribe
to streams of data like a messaging 
system
-   Learn more 

+   Learn about APIs 



Process
streams of data efficiently and in real 
time
-   Learn more 

+   Learn about 
Streams 


Store
streams of data safely in a distributed 
replicated cluster
-   Learn more 

+   Learn about 
Storage 
--- End diff --

`Learn about storage architecture design`?


---


[GitHub] kafka-site pull request #77: MINOR: Add streams child topics to left-hand na...

2017-09-15 Thread guozhangwang
Github user guozhangwang commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/77#discussion_r139275998
  
--- Diff: includes/_nav.htm ---
@@ -11,6 +11,12 @@
 getting started
 APIs
 kafka streams
+
--- End diff --

Could you paste a screenshot of this change?


---


[GitHub] kafka pull request #3879: KAFKA-5915: Support unmapping of mapped/direct buf...

2017-09-15 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3879

KAFKA-5915: Support unmapping of mapped/direct buffers in Java 9

As mentioned in MappedByteBuffers' class documentation, its
implementation was inspired by Lucene's MMapDirectory:


https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.1/lucene/core/src/java/org/apache/lucene/store/MMapDirectory.java#L315

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-5915-unmap-mapped-buffers-java-9

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3879.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3879


commit e168f970ed726524b1d7f6ec70dfb4bfff8da754
Author: Ismael Juma 
Date:   2017-09-16T02:05:50Z

KAFKA-5915: Support unmapping of mapped/direct buffers in Java 9




---


[jira] [Created] (KAFKA-5915) Support unmapping of mapped/direct buffers in Java 9

2017-09-15 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-5915:
--

 Summary: Support unmapping of mapped/direct buffers in Java 9
 Key: KAFKA-5915
 URL: https://issues.apache.org/jira/browse/KAFKA-5915
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 1.0.0


This currently fails with:

{code}
java.lang.IllegalAccessError: class kafka.log.AbstractIndex (in unnamed module 
@0x45103d6b) cannot access class jdk.internal.ref.Cleaner (in module java.base) 
because module java.base does not export jdk.internal.ref to unnamed module 
@0x45103d6b
{code}

A commit that shows how Lucene changed their code to run without warnings: 
https://github.com/apache/lucene-solr/commit/7e03427fa14a024ce257babcb8362d2451941e21
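For reference, a condensed sketch of the Java 9 path Lucene takes in the commit above (the lookup is reflective so the code still compiles on Java 8; this is an illustration, not the actual patch in #3879):

{code}
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.reflect.Field;
import java.nio.ByteBuffer;

final class MappedBufferCleaner {
    private static final MethodHandle INVOKE_CLEANER = lookupInvokeCleaner();

    // On Java 9+, sun.misc.Unsafe exposes invokeCleaner(ByteBuffer); look it
    // up reflectively so this class also loads on Java 8 (where it is absent).
    private static MethodHandle lookupInvokeCleaner() {
        try {
            Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
            Field f = unsafeClass.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            Object unsafe = f.get(null);
            return MethodHandles.lookup()
                .findVirtual(unsafeClass, "invokeCleaner",
                             MethodType.methodType(void.class, ByteBuffer.class))
                .bindTo(unsafe);
        } catch (ReflectiveOperationException | RuntimeException e) {
            return null; // pre-Java 9: fall back to the old Cleaner-based unmap
        }
    }

    static void unmap(ByteBuffer buffer) throws Throwable {
        if (INVOKE_CLEANER != null && buffer.isDirect())
            INVOKE_CLEANER.invokeExact(buffer);
    }
}
{code}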




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3878: KAFKA-5913: Add the RecordMetadataNotAvailableExce...

2017-09-15 Thread apurvam
GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/3878

KAFKA-5913: Add the RecordMetadataNotAvailableException

We return this exception from `RecordMetadata.offset()` or 
`RecordMetadata.timestamp()` if these pieces of metadata were not returned by 
the broker. 

This will happen, for instance, when the broker returns a 
`DuplicateSequenceException`.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka 
KAFKA-5913-add-record-metadata-not-available-exception

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3878.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3878


commit e9770a65203f7bdcea8c25a5fbaaa9366d12851c
Author: Apurva Mehta 
Date:   2017-09-16T01:15:15Z

Initial commit




---


[jira] [Created] (KAFKA-5914) Return MessageFormatVersion and MessageMaxBytes in MetadataResponse

2017-09-15 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-5914:
---

 Summary: Return MessageFormatVersion and MessageMaxBytes in 
MetadataResponse
 Key: KAFKA-5914
 URL: https://issues.apache.org/jira/browse/KAFKA-5914
 Project: Kafka
  Issue Type: Sub-task
Reporter: Apurva Mehta
Assignee: Apurva Mehta
 Fix For: 1.0.0


As part of KIP-192, we want to send two additional fields in the 
{{TopicMetadata}} which is part of the {{MetadataResponse}}. These fields are 
the {{MessageFormatVersion}} and the {{MessageMaxBytes}}.

The {{MessageFormatVersion}} is required to implement 
https://issues.apache.org/jira/browse/KAFKA-5794. That work will land in a 
future release, but with the changes proposed here, that future release will 
remain backward compatible with 1.0.0.
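
For illustration only, a hypothetical shape for the two extra fields (the real wire-format schema and field names may differ):

{code}
// Hypothetical carrier for the two fields KIP-192 adds per topic.
final class TopicMetadataExtras {
    final byte messageFormatVersion; // record format "magic" in use (0, 1 or 2)
    final int messageMaxBytes;       // effective max.message.bytes for the topic

    TopicMetadataExtras(byte messageFormatVersion, int messageMaxBytes) {
        this.messageFormatVersion = messageFormatVersion;
        this.messageMaxBytes = messageMaxBytes;
    }
}
{code}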



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5913) Add RecordMetadataNotAvailableException to indicate that ProduceResponse did not contain offset and timestamp information

2017-09-15 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-5913:
---

 Summary: Add RecordMetadataNotAvailableException to indicate that 
ProduceResponse did not contain offset and timestamp information
 Key: KAFKA-5913
 URL: https://issues.apache.org/jira/browse/KAFKA-5913
 Project: Kafka
  Issue Type: Sub-task
Reporter: Apurva Mehta
Assignee: Apurva Mehta
 Fix For: 1.0.0


One of the changes in KIP-192 is to send a {{DUPLICATE_SEQUENCE}} error code 
with a {{ProduceResponse}} when we detect a duplicate on the broker but don't 
have the batch metadata for the sequence in question in memory.

To handle this on the client, we mark the batch as successful, but we cannot 
return the offset and timestamp information in the {{RecordMetadata}} returned 
by the produce future. Instead of returning implicit invalid values (like -1), 
we should throw a {{RecordMetadataNotAvailableException}} so that applications 
don't silently process bogus metadata.
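
A hypothetical sketch of the proposed behavior: the exception name comes from this issue, but the sentinel value and class structure are assumptions, not the actual client code.

{code}
public class RecordMetadataSketch {
    private static final long UNKNOWN_OFFSET = -1L; // assumed sentinel

    private final long offset;

    public RecordMetadataSketch(long offset) {
        this.offset = offset;
    }

    // Throw instead of silently handing back -1 when the broker omitted
    // the offset (e.g. on DUPLICATE_SEQUENCE).
    public long offset() {
        if (offset == UNKNOWN_OFFSET)
            throw new RecordMetadataNotAvailableException(
                    "The broker did not return an offset for this record.");
        return offset;
    }

    public static class RecordMetadataNotAvailableException extends RuntimeException {
        public RecordMetadataNotAvailableException(String message) {
            super(message);
        }
    }
}
{code}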



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Please add me to contributor list

2017-09-15 Thread 鄭紹志
I want to work on some issues; please add me to the contributor list in JIRA.

I also need write permission in Confluence.


Thanks!
Vito


Re: Kip Write Access

2017-09-15 Thread Guozhang Wang
Hi Richard,

It's done. Cheers.

Guozhang

On Fri, Sep 15, 2017 at 10:28 AM, Richard Yu 
wrote:

> Hello, I wish to write a KIP. Could you grant me access?
>
> Thanks
>
> (Wiki username is yohan.richard.yu)
>



-- 
-- Guozhang


Jenkins build is back to normal : kafka-trunk-jdk7 #2766

2017-09-15 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #3877: MINOR: Disable KafkaAdminClientTest.testHandleTime...

2017-09-15 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3877


---


[GitHub] kafka pull request #3877: MINOR: Disable KafkaAdminClientTest.testHandleTime...

2017-09-15 Thread apurvam
GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/3877

MINOR: Disable KafkaAdminClientTest.testHandleTimeout

This test is super flaky in the PR builder. 
https://issues.apache.org/jira/browse/KAFKA-5792 tracks the fix.
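
For context, disabling a flaky JUnit 4 test is usually just an `@Ignore` annotation; a sketch, since the actual diff isn't shown in this digest:

```java
import org.junit.Ignore;
import org.junit.Test;

public class KafkaAdminClientTestSketch {
    @Ignore // KAFKA-5792 tracks the fix; re-enable once it lands
    @Test
    public void testHandleTimeout() {
        // flaky test body elided
    }
}
```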

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka 
MINOR-disable-adminclient-timeout-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3877.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3877


commit d533e0c598fa5a96591ff46c20df29897596d250
Author: Apurva Mehta 
Date:   2017-09-15T20:09:51Z

Disable KafkaAdminClientTest.testHandleTimeout




---


[jira] [Created] (KAFKA-5912) Trogdor AgentTest.testAgentActivatesFaults is flaky

2017-09-15 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-5912:
---

 Summary: Trogdor AgentTest.testAgentActivatesFaults is flaky
 Key: KAFKA-5912
 URL: https://issues.apache.org/jira/browse/KAFKA-5912
 Project: Kafka
  Issue Type: Bug
Reporter: Apurva Mehta
Assignee: Colin P. McCabe


I have seen the following failures occasionally in the PR builder.

{noformat}
Error Message

java.lang.AssertionError: Condition not met within timeout 15000. Timed out 
waiting for expected fault specs {bar: {state: 
{"stateName":"done","doneMs":7,"errorStr":""}}, baz: {state: 
{"stateName":"running","startedMs":7}}, foo: {state: 
{"stateName":"done","doneMs":3,"errorStr":""}}}
Stacktrace

java.lang.AssertionError: Condition not met within timeout 15000. Timed out 
waiting for expected fault specs {bar: {state: 
{"stateName":"done","doneMs":7,"errorStr":""}}, baz: {state: 
{"stateName":"running","startedMs":7}}, foo: {state: 
{"stateName":"done","doneMs":3,"errorStr":""}}}
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:275)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:253)
at 
org.apache.kafka.trogdor.common.ExpectedFaults.waitFor(ExpectedFaults.java:119)
at 
org.apache.kafka.trogdor.common.ExpectedFaults.waitFor(ExpectedFaults.java:109)
at 
org.apache.kafka.trogdor.agent.AgentTest.testAgentActivatesFaults(AgentTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)

{noformat}
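
For context, the failing assertion comes from Kafka's polling wait helper; a simplified sketch of that pattern (not the actual TestUtils implementation):

{code}
static void waitForCondition(java.util.function.BooleanSupplier condition,
                             long maxWaitMs, String message)
        throws InterruptedException {
    long deadline = System.currentTimeMillis() + maxWaitMs;
    while (!condition.getAsBoolean()) {
        if (System.currentTimeMillis() > deadline)
            throw new AssertionError(
                    "Condition not met within timeout " + maxWaitMs + ". " + message);
        Thread.sleep(100L); // poll interval before re-checking
    }
}
{code}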



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5911) Avoid creation of extra Map for futures in KafkaAdminClient

2017-09-15 Thread Ted Yu (JIRA)
Ted Yu created KAFKA-5911:
-

 Summary: Avoid creation of extra Map for futures in 
KafkaAdminClient
 Key: KAFKA-5911
 URL: https://issues.apache.org/jira/browse/KAFKA-5911
 Project: Kafka
  Issue Type: Bug
Reporter: Ted Yu


In various KafkaAdminClient methods, an extra Map is created when constructing 
the XXResult instance, e.g.
{code}
return new DescribeReplicaLogDirResult(new HashMap<>(futures));
{code}
The futures map is already fully populated before it is returned, and calling 
get() and values() on it raises no thread-safety concerns, so the extra map 
doesn't need to be created.
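
A minimal sketch of the suggested change (the class name comes from the issue text; the surrounding AdminClient code is not shown):

{code}
// before: a defensive copy of an already-complete map
return new DescribeReplicaLogDirResult(new HashMap<>(futures));
// after (suggested): pass the populated map through directly
return new DescribeReplicaLogDirResult(futures);
{code}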



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: system test builder

2017-09-15 Thread Apurva Mehta
Hi Ted,

Unfortunately the jenkins.confluent.io address is no longer publicly
accessible.

Thanks,
Apurva

On Thu, Sep 14, 2017 at 7:35 PM, Ted Yu  wrote:

> Hi,
> When I put the following in the address bar of Chrome:
>
> https://jenkins.confluent.io/job/system-test-kafka-branch-builder
>
> I was told:
>
> This site can’t be reached
>
> Are the tests accessible to the public?
>
> Thanks
>


[GitHub] kafka pull request #3876: Force Connect tasks to stop via thread interruptio...

2017-09-15 Thread 56quarters
GitHub user 56quarters opened a pull request:

https://github.com/apache/kafka/pull/3876

Force Connect tasks to stop via thread interruption after a timeout

Interrupt the thread of Kafka Connect tasks that do not stop within
the timeout via `Worker::stopAndAwaitTasks()`. Previously tasks would
be asked to stop via setting a `stopping` flag. It was possible for
tasks to ignore this flag if they were, for example, waiting for
a lock or blocked on I/O.

This prevents issues where tasks may end up with multiple threads
all running and attempting to make progress when there should only
be a single thread running for that task at a time.

Fixes KAFKA-5896

/cc @rhauch @tedyu 
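
A minimal sketch of the stop-then-interrupt pattern described above (the names here are illustrative, not the actual Connect Worker API):

```java
import java.util.concurrent.atomic.AtomicBoolean;

final class TaskStopper {
    // Ask the task to stop cooperatively, give it a grace period, then
    // interrupt its thread to unblock lock waits or blocking I/O.
    static void stopAndAwait(Thread taskThread, AtomicBoolean stopping,
                             long timeoutMs) throws InterruptedException {
        stopping.set(true);           // the existing cooperative stop flag
        taskThread.join(timeoutMs);   // wait for a clean shutdown
        if (taskThread.isAlive()) {
            taskThread.interrupt();   // force-unblock the stuck task
        }
    }
}
```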

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/smarter-travel-media/kafka force-task-stop

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3876.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3876


commit 31c879c1a1f0bd4f5999c021baca8e99e733ffe1
Author: Nick Pillitteri 
Date:   2017-09-13T14:54:40Z

Force Connect tasks to stop via thread interruption after a timeout

Interrupt the thread of Kafka Connect tasks that do not stop within
the timeout via Worker::stopAndAwaitTasks(). Previously tasks would
be asked to stop via setting a `stopping` flag. It was possible for
tasks to ignore this flag if they were, for example, waiting for
a lock or blocked on I/O.

This prevents issues where tasks may end up with multiple threads
all running and attempting to make progress when there should only
be a single thread running for that task at a time.

Fixes KAFKA-5896




---


[GitHub] kafka-site issue #78: MINOR: Add header items

2017-09-15 Thread joel-hamill
Github user joel-hamill commented on the issue:

https://github.com/apache/kafka-site/pull/78
  
@manjuapu


---


Build failed in Jenkins: kafka-trunk-jdk8 #2025

2017-09-15 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-4764; Wrap SASL tokens in Kafka headers to improve 
diagnostics

--
[...truncated 4.46 MB...]

kafka.utils.json.JsonValueTest > testDecodeBoolean STARTED

kafka.utils.json.JsonValueTest > testDecodeBoolean PASSED

kafka.utils.UtilsTest > testGenerateUuidAsBase64 STARTED

kafka.utils.UtilsTest > testGenerateUuidAsBase64 PASSED

kafka.utils.UtilsTest > testAbs STARTED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix STARTED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testCircularIterator STARTED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes STARTED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList STARTED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt STARTED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testUrlSafeBase64EncodeUUID STARTED

kafka.utils.UtilsTest > testUrlSafeBase64EncodeUUID PASSED

kafka.utils.UtilsTest > testCsvMap STARTED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock STARTED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow STARTED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic STARTED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired STARTED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents STARTED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize STARTED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents STARTED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize STARTED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner STARTED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration STARTED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition STARTED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker STARTED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed STARTED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer STARTED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder STARTED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.producer.SyncProducerTest > testReachableServer STARTED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas STARTED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout STARTED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse STARTED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest STARTED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse STARTED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse PASSED

kafka.producer.ProducerTest > testSendToNewTopic STARTED

kafka.producer.ProducerTest > testSendToNewTopic PASSED

kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout STARTED

kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout PASSED

kafka.producer.ProducerTest > testSendNullMessage STARTED

kafka.producer.ProducerTest > testSendNullMessage PASSED

kafka.producer.ProducerTest > testUpdateBrokerPartitionInfo STARTED

kafka.producer.ProducerTest > testUpdateBrokerPartitionInfo PASSED

kafka.producer.ProducerTest > testSendWithDeadBroker STARTED

kafka.producer.ProducerTest > testSendWithDeadBroker PASSED

unit.kafka.server.KafkaApisTest > 

[GitHub] kafka-site issue #78: MINOR: Add header items

2017-09-15 Thread ewencp
Github user ewencp commented on the issue:

https://github.com/apache/kafka-site/pull/78
  
@joel-hamill Something like this should work:

```
diff --git a/css/styles.css b/css/styles.css
index 1e75e91..2aa587e 100644
--- a/css/styles.css
+++ b/css/styles.css
@@ -241,6 +241,14 @@ ol.toc > li {
padding: 2rem 0 1rem;
background-color: #FF;
 }
+.head th {
+   padding-left: .7rem;
+   padding-right: .7rem;
+}
+.head th.logo {
+   padding-left: 0;
+   padding-right: 4rem;
+}
 .footer {
flex: 1;
position: relative;
diff --git a/includes/_top.htm b/includes/_top.htm
index 2f69803..ff07f8f 100644
--- a/includes/_top.htm
+++ b/includes/_top.htm
@@ -7,43 +7,13 @@


  
-	[removed: rows of empty header/spacer cells; HTML markup stripped by the list archive]
+	
	Getting Started
	|
	Documentation
	|
	Downloads
	|
	Community
  

```

Would just push it, but don't have write access to your copy of the repo. 
Complete updated version is here 
https://github.com/ewencp/kafka-site/tree/joel-hamill/header-nav


---


[GitHub] kafka pull request #3875: MINOR: Use full package name when classes referenc...

2017-09-15 Thread KevinLiLu
GitHub user KevinLiLu opened a pull request:

https://github.com/apache/kafka/pull/3875

MINOR: Use full package name when classes referenced in documentation

The `metric.reporters` description in the documentation says to implement 
the `MetricReporter` class, but the actual class is `MetricsReporter`. 
[MetricsReporter.java](https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/metrics/MetricsReporter.java)

The documentation is also inconsistent, as some class references do not 
include the full package name.
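
For illustration, a config snippet using the corrected fully qualified name, with the built-in JmxReporter as the example implementation:

```java
import java.util.Properties;

public class MetricsReporterConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Classes listed here must implement
        // org.apache.kafka.common.metrics.MetricsReporter (note the 's').
        props.put("metric.reporters", "org.apache.kafka.common.metrics.JmxReporter");
    }
}
```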

@ijuma 


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/KevinLiLu/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3875.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3875


commit 8cd5040ffb227e0314db71c89bddad1a7c77c6d5
Author: Kevin Lu 
Date:   2017-09-15T17:51:53Z

Use full package name when classes referenced in documentation




---


[GitHub] kafka pull request #3867: MINOR: Use full package name when classes referenc...

2017-09-15 Thread KevinLiLu
Github user KevinLiLu closed the pull request at:

https://github.com/apache/kafka/pull/3867


---


Jenkins build is back to normal : kafka-trunk-jdk9 #14

2017-09-15 Thread Apache Jenkins Server
See 




[GitHub] kafka-site pull request #78: MINOR: Add header items

2017-09-15 Thread joel-hamill
Github user joel-hamill commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/78#discussion_r139205540
  
--- Diff: includes/_top.htm ---
@@ -1,5 +1,55 @@
 

-   
+   
+   
+   
+ 
+   
+   
--- End diff --


![image](https://user-images.githubusercontent.com/11722533/30495284-4c70bd70-9a00-11e7-88a5-01db7b262269.png)



---


[GitHub] kafka-site pull request #78: MINOR: Add header items

2017-09-15 Thread joel-hamill
Github user joel-hamill commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/78#discussion_r139205416
  
--- Diff: includes/_top.htm ---
@@ -1,5 +1,55 @@
 

-   
+   
+   
+   
+ 
+   
+   
--- End diff --

it's definitely for spacing, and could definitely be handled with CSS more 
elegantly - however,  that's outside of my level of knowledge. i was just 
trying to hack something together quickly. CC: @rajinisivaram 


---


[GitHub] kafka-site pull request #78: MINOR: Add header items

2017-09-15 Thread joel-hamill
Github user joel-hamill commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/78#discussion_r139205053
  
--- Diff: includes/_nav.htm ---
@@ -36,7 +36,7 @@
 http://www.apache.org/security/; target="_blank">security
   
 
-  download
+  
--- End diff --

yeah, it was intentional. i was trying to clean up the page since IMHO 
there are way too many headings on the left-hand nav. but i can see how 
removing this might be too much. i was using this page for inspiration (which 
does have a big ol' icon, but is definitely better design): 
http://mesos.apache.org/


---


[GitHub] kafka-site pull request #78: MINOR: Add header items

2017-09-15 Thread ewencp
Github user ewencp commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/78#discussion_r139203439
  
--- Diff: includes/_nav.htm ---
@@ -36,7 +36,7 @@
 http://www.apache.org/security/; target="_blank">security
   
 
-  download
+  
--- End diff --

is this commenting intentional? i imagine we would want to keep the big ol' 
download button even if we have the header? having it clearly highlighted seems 
important


---


[GitHub] kafka-site pull request #78: MINOR: Add header items

2017-09-15 Thread ewencp
Github user ewencp commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/78#discussion_r139203750
  
--- Diff: includes/_top.htm ---
@@ -1,5 +1,55 @@
 

-   
+   
+   
+   
+ 
+   
+   
--- End diff --

what's with all the empty headers? is this just for spacing or something? 
can we use css instead?


---


Build failed in Jenkins: kafka-trunk-jdk7 #2765

2017-09-15 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-4764; Wrap SASL tokens in Kafka headers to improve 
diagnostics

--
[...truncated 932.07 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED


[GitHub] kafka pull request #3874: KAFKA-5163; Support replicas movement between log ...

2017-09-15 Thread lindong28
GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/3874

KAFKA-5163; Support replicas movement between log directories (KIP-113)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-5163

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3874.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3874


commit d85e65be124aaf30eb39d90906197639be0da128
Author: Dong Lin 
Date:   2017-09-14T01:30:33Z

KAFKA-5163; Support replicas movement between log directories (KIP-113)




---


Build failed in Jenkins: kafka-trunk-jdk9 #13

2017-09-15 Thread Apache Jenkins Server
See 


Changes:

[damian.guy] KAFKA-5908; fix range query in CompositeReadOnlyWindowStore

--
[...truncated 1.15 MB...]
kafka.api.SslConsumerTest > testCoordinatorFailover STARTED

kafka.api.SslConsumerTest > testCoordinatorFailover PASSED

kafka.api.SslConsumerTest > testSimpleConsumption STARTED

kafka.api.SslConsumerTest > testSimpleConsumption PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign 
STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign 
PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithoutDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithoutDescribeAcl PASSED

kafka.api.FetchRequestTest > testShuffleWithSingleTopic STARTED

kafka.api.FetchRequestTest > testShuffleWithSingleTopic PASSED

kafka.api.FetchRequestTest > testShuffle STARTED

kafka.api.FetchRequestTest > testShuffle PASSED

kafka.api.SaslSslConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslSslConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslSslConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslSslConsumerTest > testSimpleConsumption PASSED

kafka.api.test.ProducerCompressionTest > testCompression[0 compressionType = 
none] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[0 compressionType = 
none] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[1 compressionType = 
gzip] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[1 compressionType = 
gzip] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[2 compressionType = 
snappy] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[2 compressionType = 
snappy] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[3 compressionType = 
lz4] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[3 compressionType = 
lz4] PASSED

kafka.api.ConsumerBounceTest > testCloseDuringRebalance STARTED

kafka.api.ConsumerBounceTest > testCloseDuringRebalance PASSED

kafka.api.ConsumerBounceTest > testClose STARTED

kafka.api.ConsumerBounceTest > testClose PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures STARTED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures PASSED

kafka.api.ConsumerBounceTest > testSubscribeWhenTopicUnavailable STARTED

kafka.api.ConsumerBounceTest > testSubscribeWhenTopicUnavailable PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures STARTED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures SKIPPED

kafka.api.PlaintextConsumerTest > testEarliestOrLatestOffsets STARTED

kafka.api.PlaintextConsumerTest > testEarliestOrLatestOffsets PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate STARTED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate PASSED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions STARTED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMs STARTED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMs PASSED

kafka.api.PlaintextConsumerTest > testOffsetsForTimes STARTED

kafka.api.PlaintextConsumerTest > testOffsetsForTimes PASSED

kafka.api.PlaintextConsumerTest > testSubsequentPatternSubscription STARTED

kafka.api.PlaintextConsumerTest > testSubsequentPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithCreateTime STARTED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithCreateTime PASSED

kafka.api.PlaintextConsumerTest > testAsyncCommit STARTED

kafka.api.PlaintextConsumerTest > testAsyncCommit PASSED


[GitHub] kafka pull request #3708: KAFKA-4764: Wrap SASL tokens in Kafka headers to i...

2017-09-15 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3708


---


[GitHub] kafka pull request #1875: KAFKA-4190 kafka-reassign-partitions does not repo...

2017-09-15 Thread chemikadze
Github user chemikadze closed the pull request at:

https://github.com/apache/kafka/pull/1875


---


Re: [DISCUSS] KIP-152 - Improve diagnostics for SASL authentication failures

2017-09-15 Thread Rajini Sivaram
Hi Ramkumar,

This is being fixed for 1.0.0.

a) Retries will be removed for authentication failures.
b) With the current behaviour, retries do add load on the broker as new
connections are established for retries. There is a backoff interval
between connections to reduce the impact.
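
For reference, the backoff between reconnection attempts is controlled by
client configs; a small sketch with illustrative values
(reconnect.backoff.max.ms assumes the KIP-144 exponential backoff is
available):

```java
import java.util.Properties;

public class ReconnectBackoffExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Base wait between reconnect attempts to a broker; this limits the
        // load repeated failed authentications place on the broker.
        props.put("reconnect.backoff.ms", "1000");
        // Upper bound if the backoff grows exponentially (KIP-144).
        props.put("reconnect.backoff.max.ms", "10000");
    }
}
```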

Regards,

Rajini

On Fri, Sep 15, 2017 at 3:31 PM, SEMBAIYAN, RAMKUMAR  wrote:

> Hi,
>
> Can you please let me know if this is resolved or if there is a workaround. I
> am using Kafka 0.11.0.1.
>
>
>
> a.   When incorrect credentials are sent to the publisher or consumer
> API, it logs the warning below but keeps retrying the broker without
> disconnecting.
>
> b.  We are using Kafka as a middleware application for multiple publishers
> and subscribers. So if anyone uses a wrong password, the broker will also
> stay busy, which would show up as a performance issue, right?
>
>
> [kafka-producer-network-thread | producer-1] WARN 
> org.apache.kafka.clients.NetworkClient
> - Connection to node -1 terminated during authentication. This may indicate
> that authentication failed due to invalid credentials.
> [kafka-producer-network-thread | producer-1] WARN 
> org.apache.kafka.clients.NetworkClient
> - Connection to node -1 terminated during authentication. This may indicate
> that authentication failed due to invalid credentials.
> [kafka-producer-network-thread | producer-1] WARN 
> org.apache.kafka.clients.NetworkClient
> - Connection to node -1 terminated during authentication. This may indicate
> that authentication failed due to invalid credentials.
> [kafka-producer-network-thread | producer-1] WARN 
> org.apache.kafka.clients.NetworkClient
> - Connection to node -1 terminated during authentication. This may indicate
> that authentication failed due to invalid credentials.
> [kafka-producer-network-thread | producer-1] WARN 
> org.apache.kafka.clients.NetworkClient
> - Connection to node -1 terminated during authentication. This may indicate
> that authentication failed due to invalid credentials.
> [kafka-producer-network-thread | producer-1] WARN 
> org.apache.kafka.clients.NetworkClient
> - Connection to node -1 terminated during authentication. This may indicate
> that authentication failed due to invalid credentials.
>
>
>
>
>
> Any inputs will be helpful.
>
>
>
> Thanks,
>
> Ramkumar
>
>
>
>
>
>
>
> On 2017-05-04 07:37, Rajini Sivaram >
> wrote:
>
> > Hi all,>
>
> >
>
> > I have created a KIP to improve diagnostics for SASL authentication>
>
> > failures and reduce retries and blocking when authentication fails:>
>
> >
>
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 152+-+Improve+diagnostics+for+SASL+authentication+failures>
>
> >
>
> > Comments and suggestions are welcome.>
>
> >
>
> > Thank you...>
>
> >
>
> > Regards,>
>
> >
>
> > Rajini>
>
> >
>


[GitHub] kafka-site issue #77: MINOR: Add streams child topics to left-hand nav

2017-09-15 Thread joel-hamill
Github user joel-hamill commented on the issue:

https://github.com/apache/kafka-site/pull/77
  
@dguy i don't think these files are in 
https://github.com/apache/kafka/tree/trunk/docs. these are changes specifically 
to the header/footer/nav, and files that are specific to the kafka-site repo.


---


[GitHub] kafka-site issue #78: MINOR: Add header items

2017-09-15 Thread joel-hamill
Github user joel-hamill commented on the issue:

https://github.com/apache/kafka-site/pull/78
  
@dguy i don't think these files are in 
https://github.com/apache/kafka/tree/trunk/docs. these are changes specifically 
to the header/footer/nav.


---


Re: [DISCUSS] KIP-152 - Improve diagnostics for SASL authentication failures

2017-09-15 Thread SEMBAIYAN, RAMKUMAR
Hi,

Can you please let me know if this is resolved or if there is a workaround. I am 
using Kafka 0.11.0.1.



a.   When incorrect credentials are sent to the publisher or consumer API, it 
logs the warning below but keeps retrying the broker without disconnecting.

b.  We are using Kafka as a middleware application for multiple publishers and 
subscribers. So if anyone uses a wrong password, the broker will also stay busy, 
which would show up as a performance issue, right?


[kafka-producer-network-thread | producer-1] WARN 
org.apache.kafka.clients.NetworkClient - Connection to node -1 terminated 
during authentication. This may indicate that authentication failed due to 
invalid credentials.
[kafka-producer-network-thread | producer-1] WARN 
org.apache.kafka.clients.NetworkClient - Connection to node -1 terminated 
during authentication. This may indicate that authentication failed due to 
invalid credentials.
[kafka-producer-network-thread | producer-1] WARN 
org.apache.kafka.clients.NetworkClient - Connection to node -1 terminated 
during authentication. This may indicate that authentication failed due to 
invalid credentials.
[kafka-producer-network-thread | producer-1] WARN 
org.apache.kafka.clients.NetworkClient - Connection to node -1 terminated 
during authentication. This may indicate that authentication failed due to 
invalid credentials.
[kafka-producer-network-thread | producer-1] WARN 
org.apache.kafka.clients.NetworkClient - Connection to node -1 terminated 
during authentication. This may indicate that authentication failed due to 
invalid credentials.
[kafka-producer-network-thread | producer-1] WARN 
org.apache.kafka.clients.NetworkClient - Connection to node -1 terminated 
during authentication. This may indicate that authentication failed due to 
invalid credentials.





Any inputs will be helpful.



Thanks,

Ramkumar







On 2017-05-04 07:37, Rajini Sivaram > 
wrote:

> Hi all,>

>

> I have created a KIP to improve diagnostics for SASL authentication>

> failures and reduce retries and blocking when authentication fails:>

>

> https://cwiki.apache.org/confluence/display/KAFKA/KIP-152+-+Improve+diagnostics+for+SASL+authentication+failures>

>

> Comments and suggestions are welcome.>

>

> Thank you...>

>

> Regards,>

>

> Rajini>

>


[jira] [Created] (KAFKA-5910) Kafka 0.11.0.1 Kafka consumer/producers retries in infinite loop when wrong SASL creds are passed

2017-09-15 Thread Ramkumar (JIRA)
Ramkumar created KAFKA-5910:
---

 Summary: Kafka 0.11.0.1 Kafka consumer/producers retries in 
infinite loop when wrong SASL creds are passed
 Key: KAFKA-5910
 URL: https://issues.apache.org/jira/browse/KAFKA-5910
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.11.0.0
Reporter: Ramkumar


Similar to https://issues.apache.org/jira/browse/KAFKA-4764 , whose status shows 
a patch available, but the client won't disconnect after getting the warning.


Issue 1:
Publisher flow:
The Kafka publisher goes into an infinite loop if the AAF credentials are wrong 
when authenticating with the Kafka broker.
Detail:
If the correct username and password are used on the Kafka publisher client 
side to connect to the Kafka broker, then it authenticates and authorizes fine.
If an incorrect username or password is used on the Kafka publisher client 
side, then the broker log shows a continuous (infinite loop) stream of entries 
with the client trying to reconnect to the broker, as it doesn't get an 
authentication failure exception from the broker.
JIRA defect in Apache:
https://issues.apache.org/jira/browse/KAFKA-4764

Can you please let me know if this issue is resolved in Kafka 0.11.0.1 or still 
open?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3873: MINOR: Add semicolon to 'SASL/SCRAM' doc

2017-09-15 Thread makubi
GitHub user makubi opened a pull request:

https://github.com/apache/kafka/pull/3873

MINOR: Add semicolon to 'SASL/SCRAM' doc
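
The PR body is empty in this digest; for context, a sketch of a SASL/SCRAM client property where the trailing semicolon matters (username and password are placeholders):

```java
import java.util.Properties;

public class ScramJaasExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // JAAS syntax requires the terminating semicolon after the options.
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"alice\" password=\"alice-secret\";");
    }
}
```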



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/makubi/kafka doc-sasl-scram-jaas

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3873.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3873


commit d97395735e8223dbfeab5777e727f3f565804345
Author: Mathias Kub 
Date:   2017-09-15T13:28:58Z

MINOR: Add semicolon to 'SASL/SCRAM' doc




---


[GitHub] kafka pull request #3872: KAFKA-5716 [WIP]: Recent polled offsets may not be...

2017-09-15 Thread steff1193
GitHub user steff1193 opened a pull request:

https://github.com/apache/kafka/pull/3872

KAFKA-5716 [WIP]: Recent polled offsets may not be written/flushed at 
SourceTask.commit

@rhauch @tedyu @hachikuji 

For now, this is just a test showing that the claimed problem is real. It 
should definitely not be committed as-is - it's a failing test. It fails trying 
to assert the SourceTask.commit javadoc contract: "Commit the offsets, up to 
the offsets that have been returned by {@link #poll()}".

The contribution is my (@steff1193) original work and I license the work to 
the project under the project's open source license.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/TeletronicsDotAe/kafka KAFKA-5716

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3872.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3872


commit 9d093478b606f2defdc64ea64e306e46541126e6
Author: Per Steffensen 
Date:   2017-09-15T13:07:04Z

KAFKA-5716 Added a test showing that the claim in KAFKA-5716 is true. It is 
possible that the offsets of records from very recent polls are not included 
in the offsets written and flushed at the time SourceTask.commit is called.




---


Jenkins build is back to normal : kafka-trunk-jdk8 #2024

2017-09-15 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-182 - Reduce Streams DSL overloads and allow easier use of custom storage engines

2017-09-15 Thread Damian Guy
Sounds good to me.
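
(For reference, a minimal sketch of the replacement agreed on below, assuming
the KIP-182 API under discussion; topic names and serdes are illustrative.)

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.Consumed;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class ThroughReplacementSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KTable<String, Long> table = builder.table(
                "input-topic", Consumed.with(Serdes.String(), Serdes.Long()));
        // Instead of the deprecated table.through("changelog-topic"),
        // make the extra step explicit:
        table.toStream().to("changelog-topic",
                Produced.with(Serdes.String(), Serdes.Long()));
    }
}
```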

On Thu, 14 Sep 2017 at 19:55 Guozhang Wang  wrote:

> I'd suggest we remove both to and through together in KIP-182, since the
> operator "KTable#to" is as confusing as "KTable#through", which
> outweighs its benefit as syntactic sugar. I think the extra step "toStream"
> is actually better at reminding the caller that it is sending its changelog
> stream to a topic, plus it is not that many characters.
>
>
> Guozhang
>
> On Wed, Sep 13, 2017 at 12:40 AM, Damian Guy  wrote:
>
> > Hi Guozhang,
> >
> > I had an offline discussion with Matthias and Bill about it. It is
> thought
> > that `to` offers some benefit, i.e., syntactic sugar, so perhaps no harm
> in
> > keeping it. However, `through` less so, seeing as we can materialize
> stores
> > via `filter`, `map` etc, so one of the main benefits of `through` no
> longer
> > exists. WDYT?
> >
> > Thanks,
> > Damian
> >
> > On Tue, 12 Sep 2017 at 18:17 Guozhang Wang  wrote:
> >
> > > Hi Damian,
> > >
> > > Why we are deprecating KTable.through while keeping KTable.to? Should
> we
> > > either keep both of them or deprecate both of them in favor or
> > > KTable.toStream if people agree that it is confusing to users?
> > >
> > >
> > > Guozhang
> > >
> > >
> > > On Tue, Sep 12, 2017 at 1:18 AM, Damian Guy 
> > wrote:
> > >
> > > > Hi All,
> > > >
> > > > A minor update to the KIP, i needed to add KTable.to(Produced) for
> > > > consistency. KTable.through will be deprecated in favour of using
> > > > KTable.toStream().through()
> > > >
> > > > Thanks,
> > > > Damian
> > > >
> > > > On Thu, 7 Sep 2017 at 08:52 Damian Guy  wrote:
> > > >
> > > > > Thanks all. The vote is now closed and the KIP has been accepted
> > with:
> > > > > 2 non binding votes - bill and matthias
> > > > > 3 binding  - Damian, Guozhang, Sriram
> > > > >
> > > > > Regards,
> > > > > Damian
> > > > >
> > > > > On Tue, 5 Sep 2017 at 22:24 Sriram Subramanian 
> > > wrote:
> > > > >
> > > > >> +1
> > > > >>
> > > > >> On Tue, Sep 5, 2017 at 1:33 PM, Guozhang Wang  >
> > > > wrote:
> > > > >>
> > > > >> > +1
> > > > >> >
> > > > >> > On Fri, Sep 1, 2017 at 3:45 PM, Matthias J. Sax <
> > > > matth...@confluent.io>
> > > > >> > wrote:
> > > > >> >
> > > > >> > > +1
> > > > >> > >
> > > > >> > > On 9/1/17 2:53 PM, Bill Bejeck wrote:
> > > > >> > > > +1
> > > > >> > > >
> > > > >> > > > On Thu, Aug 31, 2017 at 10:20 AM, Damian Guy <
> > > > damian@gmail.com>
> > > > >> > > wrote:
> > > > >> > > >
> > > > >> > > >> Thanks everyone for voting! Unfortunately i've had to make
> a
> > > bit
> > > > >> of an
> > > > >> > > >> update based on some issues found during implementation.
> > > > >> > > >> The main changes are:
> > > > >> > > >> BytesStoreSupplier -> StoreSupplier
> > > > >> > > >> Addition of:
> > > > >> > > >> WindowBytesStoreSupplier, KeyValueBytesStoreSupplier,
> > > > >> > > >> SessionBytesStoreSupplier that will restrict store types to
> > > > >> > > >> <Bytes, byte[]>
> > > > >> > > >> 3 new overloads added to Materialized to enable developers
> to
> > > > >> create a
> > > > >> > > >> Materialized of the appropriate type, i..e, WindowStore etc
> > > > >> > > >> Update DSL where Materialized is used such that the stores have
> > > > >> > > >> generic types of <Bytes, byte[]>
> > > > >> > > >> Some minor changes to the arguments to
> > > > Store#persistentWindowStore
> > > > >> and
> > > > >> > > >> Store#persistentSessionStore
> > > > >> > > >>
> > > > >> > > >> Please take a look and recast the votes.
> > > > >> > > >>
> > > > >> > > >> Thanks for your time,
> > > > >> > > >> Damian
> > > > >> > > >>
> > > > >> > > >> On Fri, 25 Aug 2017 at 17:05 Matthias J. Sax <
> > > > >> matth...@confluent.io>
> > > > >> > > >> wrote:
> > > > >> > > >>
> > > > >> > > >>> Thanks Damian. Great KIP!
> > > > >> > > >>>
> > > > >> > > >>> +1
> > > > >> > > >>>
> > > > >> > > >>>
> > > > >> > > >>> -Matthias
> > > > >> > > >>>
> > > > >> > > >>> On 8/25/17 6:45 AM, Damian Guy wrote:
> > > > >> > >  Hi,
> > > > >> > > 
> > > > >> > >  I've just realised we need to add two methods to
> > > > >> StateStoreBuilder
> > > > >> > or
> > > > >> > > >> it
> > > > >> > >  isn't going to work:
> > > > >> > > 
> > > > >> > >  Map logConfig();
> > > > >> > >  boolean loggingEnabled();
> > > > >> > > 
> > > > >> > >  These are needed when we are building the topology and
> > > > >> determining
> > > > >> > >  changelog topic names and configs.
> > > > >> > > 
> > > > >> > > 
> > > > >> > >  I've also update the KIP to add
> > > > >> > > 
> > > > >> > >  StreamBuilder#stream(String topic)
> > > > >> > > 
> > > > >> > >  StreamBuilder#stream(String topic, Consumed options)
> > > > >> > > 
> > > > >> > > 
> > > > >> > >  Thanks
> > > > 

Build failed in Jenkins: kafka-trunk-jdk7 #2764

2017-09-15 Thread Apache Jenkins Server
See 


Changes:

[damian.guy] KAFKA-5908; fix range query in CompositeReadOnlyWindowStore

--
[...truncated 2.53 MB...]

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingProcessor STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingProcessor PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddStoreWithSameName 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddStoreWithSameName 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingSourceWithPattern STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingSourceWithPattern PASSED

org.apache.kafka.streams.TopologyTest > 
multipleSourcesWithSinksShouldHaveDistinctSubtopologies STARTED

org.apache.kafka.streams.TopologyTest > 
multipleSourcesWithSinksShouldHaveDistinctSubtopologies PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled 
STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfNotAtLestOnceOrExactlyOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfNotAtLestOnceOrExactlyOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingConsumerIsolationLevelIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingConsumerIsolationLevelIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfRestoreConsumerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfRestoreConsumerConfig PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultProducerConfigs 

[GitHub] kafka pull request #3871: KAFKA-5909; Removed the source jars from classpath...

2017-09-15 Thread Kamal15
GitHub user Kamal15 opened a pull request:

https://github.com/apache/kafka/pull/3871

KAFKA-5909; Removed the source jars from the classpath while executing CLI 
tools.

- Redundant `for` loops removed.
- I made this change assuming there is no priority ordering among the jars 
when assigning them to the classpath.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Kamal15/kafka kafka-5909

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3871.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3871


commit 8ebca3259f79ce10e21372cb875ae800e9532865
Author: Kamal Chandraprakash 
Date:   2017-09-15T11:30:19Z

KAFKA-5909; Removed the source jars from the classpath while executing CLI 
tools.

- Removed redundant `for` loops.




---


[jira] [Created] (KAFKA-5909) Remove source jars from classpath while executing CLI tools

2017-09-15 Thread Kamal Chandraprakash (JIRA)
Kamal Chandraprakash created KAFKA-5909:
---

 Summary: Remove source jars from classpath while executing CLI 
tools
 Key: KAFKA-5909
 URL: https://issues.apache.org/jira/browse/KAFKA-5909
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 0.11.0.0
Reporter: Kamal Chandraprakash
Assignee: Kamal Chandraprakash
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3870: KAFKA-5856: Add AdminClient.createPartitions()

2017-09-15 Thread tombentley
GitHub user tombentley opened a pull request:

https://github.com/apache/kafka/pull/3870

KAFKA-5856: Add AdminClient.createPartitions()

See KIP-195.

The contribution is my original work and I license the work to the project 
under the project's open source license.

This patch adds AdminClient.createPartitions() and the network protocol it
uses. The broker-side algorithm is as follows:

1. KafkaApis makes some initial checks on the request, then delegates to the
   new AdminManager.createPartitions() method.
2. AdminManager.createPartitions() performs some validation then delegates 
to
   AdminUtils.addPartitions().

Aside: I felt it was safer to add the extra validation in
AdminManager.createPartitions() than in AdminUtils.addPartitions() since the
latter is used on other code paths which might fail differently with the
introduction of extra checks.

3. AdminUtils.addPartitions() does its own checks and adds the partitions.
4. AdminManager then uses the existing topic purgatory to wait for the
   PartitionInfo available from the metadata cache to become consistent with
   the new total number of partitions.

The messages of exceptions thrown in AdminUtils that affect this new API have
been made consistent: each starts with a capital letter and ends with a period.
A few have been reworded for clarity. I've also standardized on using
String.format().

cc @ijuma
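
For reviewers, a hedged usage sketch (assuming the API lands as described in KIP-195; the topic name and partition count are made up):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.CreatePartitionsResult;
import org.apache.kafka.clients.admin.NewPartitions;

public class CreatePartitionsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Grow "my-topic" to 6 partitions in total; with no explicit
            // assignments, the brokers pick replicas for the new partitions.
            CreatePartitionsResult result = admin.createPartitions(
                    Collections.singletonMap("my-topic", NewPartitions.increaseTo(6)));
            result.all().get(); // block until the brokers confirm the change
        }
    }
}
```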

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tombentley/kafka 
KAFKA-5856-AdminClient.createPartitions

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3870.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3870


commit ab07f15a794c385cbfdecd33a5a44c7725e8d103
Author: Tom Bentley 
Date:   2017-09-15T09:50:59Z

KAFKA-5856: Add AdminClient.createPartitions()

See KIP-195.

This patch adds AdminClient.createPartitions() and the network protocol it
uses. The broker-side algorithm is as follows:

1. KafkaApis makes some initial checks on the request, then delegates to the
   new AdminManager.createPartitions() method.
2. AdminManager.createPartitions() performs some validation then delegates 
to
   AdminUtils.addPartitions().

Aside: I felt it was safer to add the extra validation in
AdminManager.createPartitions() than in AdminUtils.addPartitions() since the
latter is used on other code paths which might fail differently with the
introduction of extra checks.

3. AdminUtils.addPartitions() does its own checks and adds the partitions.
4. AdminManager then uses the existing topic purgatory to wait for the
   PartitionInfo available from the metadata cache to become consistent with
   the new total number of partitions.

The messages of exceptions thrown in AdminUtils that affect this new API have
been made consistent: each starts with a capital letter and ends with a period.
A few have been reworded for clarity. I've also standardized on using
String.format().




---


[GitHub] kafka pull request #3609: MINOR: Make the state change log more consistent

2017-09-15 Thread ijuma
Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/3609


---


[GitHub] kafka pull request #3869: MINOR: Make the state change log more consistent

2017-09-15 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3869

MINOR: Make the state change log more consistent

Use logIdent to achieve this.

Also fixed an issue where we were logging about replicas going offline with
an empty set of replicas (i.e., no replicas had gone offline, so there was
nothing to log).
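
For context, a minimal sketch of the logIdent idea (the class and names below are hypothetical, not the code in this PR): every message from one component goes through a logger carrying a fixed identity prefix, so the state change log stays uniform without repeating the prefix at each call site.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch of a prefix-carrying logger: construct it once with a
// component's identity string and every message then shares that prefix.
public class PrefixedStateChangeLogger {
    private static final Logger LOG = LoggerFactory.getLogger("state.change.logger");
    private final String logIdent; // e.g. "[Controller id=1 epoch=5] "

    public PrefixedStateChangeLogger(String logIdent) {
        this.logIdent = logIdent;
    }

    public void info(String message) {
        LOG.info("{}{}", logIdent, message);
    }
}
```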

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka improve-state-change-log

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3869.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3869


commit a9e3c540a4bf480059678708701af3680da8a589
Author: Ismael Juma 
Date:   2017-08-02T10:39:59Z

MINOR: Only log about marking replicas as offline if there is a replica

I noticed a bunch of these log messages with an empty set for the replicas.

commit c3cee6f9cbb16d9173fde25e28b4c7afb6383803
Author: Ismael Juma 
Date:   2017-08-03T12:36:24Z

Use `logIdent` for more consistent state change logger

commit 3eaa96ed421f40a20a4a093a7e752c17de53d6f1
Author: Ismael Juma 
Date:   2017-08-03T12:38:52Z

Remove duplication in `KafkaController` log messages

We were repeating information already available due to the `logIdent`.

commit 01a7d243501fb949e4825318431beafd1c33b051
Author: Ismael Juma 
Date:   2017-08-03T12:43:07Z

Avoid unnecessary `toArray`

commit aa04ef55dbc0d1e362c042d605700fb222f6b065
Author: Ismael Juma 
Date:   2017-09-15T01:45:24Z

Make it possible to add the controller epoch to the log prefix

commit 7c99c85947f04036d544025fc2b3c873198334c3
Author: Ismael Juma 
Date:   2017-09-15T01:55:20Z

Share an instance of the underlying state change logger

commit d040105e629dcdf7770eb70e8b81bff02f0330d7
Author: Ismael Juma 
Date:   2017-09-15T10:49:35Z

Logging consistency improvements




---


[jira] [Resolved] (KAFKA-4454) Authorizer should also include the Principal generated by the PrincipalBuilder.

2017-09-15 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4454.
--
Resolution: Fixed

This is covered in KIP-189/KAFKA-5783

> Authorizer should also include the Principal generated by the 
> PrincipalBuilder.
> ---
>
> Key: KAFKA-4454
> URL: https://issues.apache.org/jira/browse/KAFKA-4454
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
>
> Currently Kafka allows users to plug in a custom PrincipalBuilder and a custom 
> Authorizer.
> The Authorizer.authorize() method takes in a Session object that wraps 
> KafkaPrincipal and InetAddress.
> The KafkaPrincipal currently has a PrincipalType and a Principal name, which is 
> the name of the Principal generated by the PrincipalBuilder.
> This Principal, generated by the plugged-in PrincipalBuilder, might have other 
> fields that the plugged-in Authorizer requires, but currently we lose this 
> information since we only extract the name of the Principal while creating 
> KafkaPrincipal in SocketServer.
> It would be great if KafkaPrincipal had an additional field, 
> "channelPrincipal", used to store the Principal generated by the plugged-in 
> PrincipalBuilder.
> The plugged-in Authorizer could then use this "channelPrincipal" to do 
> authorization.
>  
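
For illustration, a minimal sketch of the proposed shape (the field name follows the issue description; the class, constructor, and accessor are assumptions for this sketch, not the shipped API, which was ultimately superseded by KIP-189):

{code}
import java.security.Principal;

// Hypothetical sketch: a KafkaPrincipal that retains the Principal produced
// by the plugged-in PrincipalBuilder so a custom Authorizer can inspect it.
public class KafkaPrincipalSketch implements Principal {
    private final String principalType;
    private final String name;
    private final Principal channelPrincipal; // the builder's original Principal

    public KafkaPrincipalSketch(String principalType, String name,
                                Principal channelPrincipal) {
        this.principalType = principalType;
        this.name = name;
        this.channelPrincipal = channelPrincipal;
    }

    @Override
    public String getName() {
        return name;
    }

    public String principalType() {
        return principalType;
    }

    public Principal channelPrincipal() {
        return channelPrincipal;
    }
}
{code}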



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3868: KAFKA-5908: fix range query in CompositeReadOnlyWi...

2017-09-15 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3868


---


[jira] [Resolved] (KAFKA-5908) CompositeReadOnlyWindowStore range fetch doesn't return all values when fetching with different start and end times

2017-09-15 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy resolved KAFKA-5908.
---
Resolution: Fixed

Issue resolved by pull request 3868
[https://github.com/apache/kafka/pull/3868]

> CompositeReadOnlyWindowStore range fetch doesn't return all values when 
> fetching with different start and end times
> ---
>
> Key: KAFKA-5908
> URL: https://issues.apache.org/jira/browse/KAFKA-5908
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 1.0.0
>
>
> The {{NextIteratorFunction}} in {{CompositeReadOnlyWindowStore}} is 
> incorrectly using the {{timeFrom}} as the {{timeTo}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka-site issue #77: MINOR: Add streams child topics to left-hand nav

2017-09-15 Thread dguy
Github user dguy commented on the issue:

https://github.com/apache/kafka-site/pull/77
  
as per: https://github.com/apache/kafka-site/pull/78#issuecomment-329644776


---


[GitHub] kafka-site issue #78: MINOR: Add header items

2017-09-15 Thread dguy
Github user dguy commented on the issue:

https://github.com/apache/kafka-site/pull/78
  
@joel-hamill thanks for the PR. However... these changes should be made 
against the Apache Kafka project - otherwise we run the risk of losing them.


---


[GitHub] kafka pull request #3868: KAFKA-5908: fix range query in CompositeReadOnlyWi...

2017-09-15 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3868

KAFKA-5908: fix range query in CompositeReadOnlyWindowStore

The `NextIteratorFunction` in `CompositeReadOnlyWindowStore` was 
incorrectly using the `timeFrom` as the `timeTo`.
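
A minimal sketch of the corrected delegation (a simplified wrapper around a single underlying store; the real class fans out over all local stores, and the signatures here follow the 1.0.0 streams API as I understand it):

```java
import org.apache.kafka.streams.state.ReadOnlyWindowStore;
import org.apache.kafka.streams.state.WindowStoreIterator;

// Simplified sketch: forward the caller's full [timeFrom, timeTo] range.
// The bug passed timeFrom for both bounds, silently truncating the fetch.
public class WindowFetchSketch<K, V> {
    private final ReadOnlyWindowStore<K, V> underlying;

    public WindowFetchSketch(ReadOnlyWindowStore<K, V> underlying) {
        this.underlying = underlying;
    }

    public WindowStoreIterator<V> fetch(K key, long timeFrom, long timeTo) {
        // before the fix: underlying.fetch(key, timeFrom, timeFrom);
        return underlying.fetch(key, timeFrom, timeTo);
    }
}
```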

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka window-store-range-scan

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3868.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3868


commit 925f7aa400d876187032cfda191c7f120d9a141f
Author: Damian Guy 
Date:   2017-09-15T09:02:33Z

fix range query




---


[jira] [Created] (KAFKA-5908) CompositeReadOnlyWindowStore range fetch doesn't return all values when fetching with different start and end times

2017-09-15 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-5908:
-

 Summary: CompositeReadOnlyWindowStore range fetch doesn't return 
all values when fetching with different start and end times
 Key: KAFKA-5908
 URL: https://issues.apache.org/jira/browse/KAFKA-5908
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.0.0
Reporter: Damian Guy
Assignee: Damian Guy
 Fix For: 1.0.0


The {{NextIteratorFunction}} in {{CompositeReadOnlyWindowStore}} is incorrectly 
using the {{timeFrom}} as the {{timeTo}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5907) Support aggregatedJavadoc in Java 9

2017-09-15 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-5907:
--

 Summary: Support aggregatedJavadoc in Java 9
 Key: KAFKA-5907
 URL: https://issues.apache.org/jira/browse/KAFKA-5907
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
 Fix For: 1.1.0


The Java 9 Javadoc tool has some improvements, including a search bar. However, 
it currently fails with a number of errors like:

{code}
> Task :aggregatedJavadoc
/Users/ijuma/src/kafka/streams/src/main/java/org/apache/kafka/streams/Topology.java:29:
 error: package org.apache.kafka.streams.processor.internals does not exist
import org.apache.kafka.streams.processor.internals.InternalTopologyBuilder;
   ^
/Users/ijuma/src/kafka/streams/src/main/java/org/apache/kafka/streams/Topology.java:30:
 error: package org.apache.kafka.streams.processor.internals does not exist
import org.apache.kafka.streams.processor.internals.ProcessorNode;
   ^
/Users/ijuma/src/kafka/streams/src/main/java/org/apache/kafka/streams/Topology.java:31:
 error: package org.apache.kafka.streams.processor.internals does not exist
import org.apache.kafka.streams.processor.internals.ProcessorTopology;
   ^
/Users/ijuma/src/kafka/streams/src/main/java/org/apache/kafka/streams/Topology.java:32:
 error: package org.apache.kafka.streams.processor.internals does not exist
import org.apache.kafka.streams.processor.internals.SinkNode;
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)