Jenkins build is back to normal : kafka-1.0-jdk7 #4

2017-10-04 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #4022: KAFKA-6015: Fix NullPointerExceptionInRecordAccumu...

2017-10-04 Thread apurvam
GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/4022

KAFKA-6015: Fix NullPointerException in RecordAccumulator

It is possible for batches with sequence numbers to be in the `deque` while, 
at the same time, the in-flight batches in the `TransactionManager` are removed 
due to a producerId reset.

In this case, when the batches in the `deque` are drained, we will get a 
`NullPointerException` in the background thread due to this line: 

```java
if (first.hasSequence() && first.baseSequence() != transactionManager.nextBatchBySequence(first.topicPartition).baseSequence())
```

In particular, `transactionManager.nextBatchBySequence` will return null, 
because there are no in-flight batches being tracked.

In this patch, we simply allow the batches in the `deque` to be drained if 
there are no in-flight batches being tracked in the `TransactionManager`. If 
they succeed, well and good. If the responses come back with an error, the 
batches will ultimately be failed in the producer with an 
`OutOfOrderSequenceException`.
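The guard described above can be sketched with stand-in types. This is a hedged sketch only: the `Batch` class and `mayDrain` helper are invented here, and only the "null means the in-flight state was dropped, so drain anyway" behavior comes from the description above.

```java
public class DrainGuardSketch {

    /** Minimal stand-in for a producer batch with a base sequence number. */
    static class Batch {
        final int baseSequence;
        Batch(int baseSequence) { this.baseSequence = baseSequence; }
        int baseSequence() { return baseSequence; }
    }

    /**
     * Returns true when 'first' may be drained. 'next' models the result of a
     * nextBatchBySequence-style lookup; null models the in-flight state having
     * been dropped after a producerId reset.
     */
    static boolean mayDrain(Batch first, Batch next) {
        // The original expression dereferenced the lookup result unconditionally,
        // which NPEs when nothing is tracked. Draining in that case lets the
        // broker reject stale batches with OutOfOrderSequenceException instead.
        return next == null || first.baseSequence() == next.baseSequence();
    }

    public static void main(String[] args) {
        if (!mayDrain(new Batch(5), null)) throw new AssertionError("reset case should drain");
        if (!mayDrain(new Batch(5), new Batch(5))) throw new AssertionError("matching sequence should drain");
        if (mayDrain(new Batch(5), new Batch(3))) throw new AssertionError("mismatch should hold the batch");
        System.out.println("ok");
    }
}
```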

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka KAFKA-6015-npe-in-record-accumulator

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4022.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4022


commit 8d4ced9f42afb16113ed9464697b4d52394e1304
Author: Apurva Mehta 
Date:   2017-10-05T05:56:44Z

Fix NPE in Accumulator

commit 18053f749d8b80bb878f7781f95cb02b58933f9f
Author: Apurva Mehta 
Date:   2017-10-05T06:34:47Z

Add test case to ensure that we can send queued batches from a previous 
producer id after the producer state is reset




---


[GitHub] kafka pull request #4004: KAFKA-6003: Accept appends on replicas uncondition...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4004


---


[jira] [Created] (KAFKA-6016) Use the idempotent producer in the reassign_partitions_test

2017-10-04 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-6016:
---

 Summary: Use the idempotent producer in the 
reassign_partitions_test
 Key: KAFKA-6016
 URL: https://issues.apache.org/jira/browse/KAFKA-6016
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.1, 1.0.0
Reporter: Apurva Mehta
Assignee: Apurva Mehta
 Fix For: 1.1.0


Currently, the reassign partitions test doesn't use the idempotent producer, 
which means that bugs like KAFKA-6003 have gone unnoticed. We should update the 
test to use the idempotent producer and recreate the conditions for that bug on 
a regular basis so that we are fully testing all code paths.
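For reference, switching the test's producer to idempotent mode is mostly a config change; a minimal sketch using the standard producer client properties (shown for illustration, not taken from this ticket):

```properties
# Standard producer client settings for idempotence (illustrative):
enable.idempotence=true
# acks must be all (and is implied) when idempotence is enabled:
acks=all
```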



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6015) NPE in RecordAccumulator

2017-10-04 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-6015:
---

 Summary: NPE in RecordAccumulator
 Key: KAFKA-6015
 URL: https://issues.apache.org/jira/browse/KAFKA-6015
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Apurva Mehta
Assignee: Apurva Mehta
Priority: Blocker
 Fix For: 1.0.0


I found this inadvertently while trying to create a system test to reproduce 
KAFKA-6003.

{noformat}
java.lang.NullPointerException
    at org.apache.kafka.clients.producer.internals.RecordAccumulator.drain(RecordAccumulator.java:542)
    at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:270)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:238)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
    at java.lang.Thread.run(Thread.java:748)
{noformat}

The problem is with this line:

{code:java}
if (first.hasSequence() && first.baseSequence() != transactionManager.nextBatchBySequence(first.topicPartition).baseSequence())
{code}

It is possible for the producer state to be reset (for instance, if retries are 
expired), in which case the transaction manager will drop the in-flight batches 
it is tracking. However, these batches will remain in the accumulator with a 
sequence, causing an NPE in the background thread on this line.

It would be better to drain the batches with the old PID/sequence in this case: 
either they are accepted, or they are returned to the user with an 
{{OutOfOrderSequenceException}}, which is clearer.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP-158: Kafka Connect should allow source connectors to set topic-specific settings for new topics

2017-10-04 Thread Randall Hauch
Oops. Yes, I meant “replication factor”. 

> On Oct 4, 2017, at 7:18 PM, Ted Yu  wrote:
> 
> Randall:
> bq. AdminClient currently allows changing the replication factory.
> 
> By 'replication factory' did you mean 'replication factor' ?
> 
> Cheers
> 
>> On Wed, Oct 4, 2017 at 9:58 AM, Randall Hauch  wrote:
>> 
>> Currently the KIP's scope is only topics that don't yet exist, and we have
>> to be cognizant of race conditions between tasks with the same connector. I
>> think it is worthwhile to consider whether the KIP's scope should expand to
>> also address *existing* partitions, though it may not be appropriate to
>> have as much control when changing the topic settings for an existing
>> topic. For example, changing the number of partitions (which the KIP
>> considers a "topic-specific setting" even though technically it is not)
>> shouldn't be done blindly due to the partitioning impacts, and IIRC you
>> can't reduce them (which we could verify before applying). Also, I don't
>> think the AdminClient currently allows changing the replication factory. I
>> think changing the topic configs is less problematic both from what makes
>> sense for connectors to verify/change and from what the AdminClient
>> supports.
>> 
>> Even if we decide that it's not appropriate to change the settings on an
>> existing topic, I do think it's advantageous to at least notify the
>> connector (or task) prior to the first record sent to a given topic so that
>> the connector can fail or issue a warning if it doesn't meet its
>> requirements.
>> 
>> Best regards,
>> 
>> Randall
>> 
>> On Wed, Oct 4, 2017 at 12:52 AM, Stephane Maarek <
>> steph...@simplemachines.com.au> wrote:
>> 
>>> Hi Randall,
>>> 
>>> Thanks for the KIP. I like it
>>> What happens when the target topic is already created but the configs do
>>> not match?
>>> i.e. wrong RF, num partitions, or missing / additional configs? Will you
>>> attempt to apply the necessary changes or throw an error?
>>> 
>>> Thanks!
>>> Stephane
>>> 
>>> 
>>> On 24/5/17, 5:59 am, "Mathieu Fenniak" 
>>> wrote:
>>> 
>>>Ah, yes, I see you highlighted a part that should've made this clear
>>>to me on the first read. :-)  Much clearer now!
>>> 
>>>By the way, enjoyed your Debezium talk in NYC.
>>> 
>>>Looking forward to this Kafka Connect change; it will allow me to
>>>remove a post-deployment tool that I hacked together for the purpose
>>>of ensuring auto-created topics have the right config.
>>> 
>>>Mathieu
>>> 
>>> 
>>>On Tue, May 23, 2017 at 11:38 AM, Randall Hauch 
>>> wrote:
 Thanks for the quick feedback, Mathieu. Yes, the first
>> configuration
>>> rule
 whose regex matches will be applied, and no other rules will be
>>> used. I've
 updated the KIP to try to make this more clear, but let me know if
>>> it's
 still not clear.
 
 Best regards,
 
 Randall
 
 On Tue, May 23, 2017 at 10:07 AM, Mathieu Fenniak <
 mathieu.fenn...@replicon.com> wrote:
 
> Hi Randall,
> 
> Awesome, very much looking forward to this.
> 
> It isn't 100% clear from the KIP how multiple config-based rules
>>> would
> be applied; it looks like the first configuration rule whose regex
> matches the topic name will be used, and no other rules will be
> applied.  Is that correct?  (I wasn't sure if it might cascade
> together multiple matching rules...)
> 
> Looks great,
> 
> Mathieu
> 
> 
> On Mon, May 22, 2017 at 1:43 PM, Randall Hauch 
>>> wrote:
>> Hi, all.
>> 
>> We recently added the ability for Kafka Connect to create
>>> *internal*
> topics
>> using the new AdminClient, but it still would be great if Kafka
>>> Connect
>> could do this for new topics that result from source connector
>>> records.
>> I've outlined an approach to do this in "KIP-158 Kafka Connect
>>> should
> allow
>> source connectors to set topic-specific settings for new
>> topics".
>> 
>> *https://cwiki.apache.org/confluence/display/KAFKA/KIP-158%3A+Kafka+Connect+should+allow+source+connectors+to+set+topic-specific+settings+for+new+topics*
>> 
>> Please take a look and provide feedback. Thanks!
>> 
>> Best regards,
>> 
>> Randall
> 
>>> 
>>> 
>>> 
>>> 
>> 
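The first-match semantics reiterated in this thread (the first configuration rule whose regex matches the topic name is applied, and no other rules cascade) can be sketched as follows. The rule map, rule ordering, and the settings strings are invented for illustration and are not from the KIP:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class TopicRuleMatcher {

    /** Returns the settings of the first rule whose regex matches, or null if none does. */
    static String firstMatch(Map<String, String> rulesByRegex, String topic) {
        for (Map.Entry<String, String> rule : rulesByRegex.entrySet()) {
            if (Pattern.matches(rule.getKey(), topic)) {
                return rule.getValue();   // later matching rules are ignored, not cascaded
            }
        }
        return null;                      // no rule applies to this topic
    }

    public static void main(String[] args) {
        Map<String, String> rules = new LinkedHashMap<>();   // insertion order = rule order
        rules.put("orders-.*", "partitions=8");
        rules.put(".*", "partitions=1");                     // catch-all; never combined with earlier rules
        if (!"partitions=8".equals(firstMatch(rules, "orders-eu"))) throw new AssertionError();
        if (!"partitions=1".equals(firstMatch(rules, "audit"))) throw new AssertionError();
        System.out.println("ok");
    }
}
```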


[jira] [Created] (KAFKA-6014) new consumer mirror maker halts after committing offsets to a deleted topic

2017-10-04 Thread Onur Karaman (JIRA)
Onur Karaman created KAFKA-6014:
---

 Summary: new consumer mirror maker halts after committing offsets 
to a deleted topic
 Key: KAFKA-6014
 URL: https://issues.apache.org/jira/browse/KAFKA-6014
 Project: Kafka
  Issue Type: Bug
Reporter: Onur Karaman


The new consumer throws an unexpected KafkaException when trying to commit to a 
topic that has been deleted. MirrorMaker.commitOffsets doesn't attempt to catch 
the KafkaException and just kills the process. We didn't see this in the old 
consumer because the old consumer silently drops failed offset commits.
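A hedged sketch of the catch-and-continue behavior this implies. The `Committer` interface and `tryCommit` helper are invented stand-ins, not MirrorMaker's actual code, and `RuntimeException` stands in for KafkaException (which extends it) so the snippet is self-contained:

```java
public class CommitGuardSketch {

    /** Stand-in for the consumer's commitSync call. */
    interface Committer { void commitSync(); }

    /** Returns true if the commit succeeded, false if it failed non-fatally. */
    static boolean tryCommit(Committer committer) {
        try {
            committer.commitSync();
            return true;
        } catch (RuntimeException e) {   // KafkaException extends RuntimeException
            // log and continue instead of propagating and halting the whole process
            System.err.println("offset commit failed: " + e.getMessage());
            return false;
        }
    }

    public static void main(String[] args) {
        if (!tryCommit(() -> { })) throw new AssertionError("clean commit should succeed");
        if (tryCommit(() -> { throw new RuntimeException("deleted topic"); }))
            throw new AssertionError("failed commit should be reported, not thrown");
        System.out.println("ok");
    }
}
```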

I ran a quick experiment locally to prove the behavior. The experiment:
1. start up a single broker
2. create a single-partition topic t
3. create a new consumer that consumes topic t
4. make the consumer commit every few seconds
5. delete topic t
6. expect: KafkaException that kills the process.

Here's my script:
{code}
package org.apache.kafka.clients.consumer;

import org.apache.kafka.common.TopicPartition;

import java.util.Collections;
import java.util.List;
import java.util.Properties;

public class OffsetCommitTopicDeletionTest {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9090");
        props.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "g");
        props.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        KafkaConsumer<byte[], byte[]> kafkaConsumer = new KafkaConsumer<>(props);
        TopicPartition partition = new TopicPartition("t", 0);
        List<TopicPartition> partitions = Collections.singletonList(partition);
        kafkaConsumer.assign(partitions);
        while (true) {
            kafkaConsumer.commitSync(Collections.singletonMap(partition, new OffsetAndMetadata(0, "")));
            Thread.sleep(1000);
        }
    }
}
{code}

Here are the other commands:
{code}
> rm -rf /tmp/zookeeper/ /tmp/kafka-logs* logs*
> ./gradlew clean jar
> ./bin/zookeeper-server-start.sh config/zookeeper.properties
> export LOG_DIR=logs0 && ./bin/kafka-server-start.sh config/server0.properties
> ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic t --partitions 1 --replication-factor 1
> ./bin/kafka-run-class.sh org.apache.kafka.clients.consumer.OffsetCommitTopicDeletionTest
> ./bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic t
{code}

Here is the output:
{code}
[2017-10-04 20:00:14,451] ERROR [Consumer clientId=consumer-1, groupId=g] Offset commit failed on partition t-0 at offset 0: This server does not host this topic-partition. (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
Exception in thread "main" org.apache.kafka.common.KafkaException: Partition t-0 may not exist or user may not have Describe access to topic
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:789)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:734)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:808)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:788)
    at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:506)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:353)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:268)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:214)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:190)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:600)
    at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1231)
    at org.apache.kafka.clients.consumer.OffsetCommitTopicDeletionTest.main(OffsetCommitTopicDeletionTest.java:22)
{code}

A couple way

Re: want to join this mail list

2017-10-04 Thread Ted Yu
See https://kafka.apache.org/contact for instructions.

2017-10-04 19:31 GMT-07:00 吴晓菊 :

> --
>
> Chrysan Wu
> Contact: 17717640807
>


want to join this mail list

2017-10-04 Thread 吴晓菊
-- 

Chrysan Wu
Contact: 17717640807


Jenkins build is back to normal : kafka-trunk-jdk8 #2107

2017-10-04 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-6013) Controller getting stuck

2017-10-04 Thread Ivan Babrou (JIRA)
Ivan Babrou created KAFKA-6013:
--

 Summary: Controller getting stuck
 Key: KAFKA-6013
 URL: https://issues.apache.org/jira/browse/KAFKA-6013
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0, 0.11.0.1
Reporter: Ivan Babrou


This looks like a new issue introduced in 0.11.0.0, and 0.11.0.1 still has it.

We upgraded one of the clusters from 0.11.0.0 to 0.11.0.1 by shutting down 28 
machines at once (a single rack). When the nodes came back up, none of them 
progressed past these log lines:

{noformat}
Oct 05 02:17:42 mybroker14 kafka[32940]: INFO Kafka version : 0.11.0.1 
(org.apache.kafka.common.utils.AppInfoParser)
Oct 05 02:17:42 mybroker14 kafka[32940]: INFO Kafka commitId : c2a0d5f9b1f45bf5 
(org.apache.kafka.common.utils.AppInfoParser)
Oct 05 02:17:42 mybroker14 kafka[32940]: INFO [Kafka Server 10014], started 
(kafka.server.KafkaServer)
{noformat}

There was no indication in the controller node's logs that it picked up the 
rebooted nodes. This happened multiple times during the upgrade: once per rack, 
plus some on top of that.

Reboot took ~20m, all nodes in a single rack rebooted in parallel.

The fix was to restart the controller node, but that did not go cleanly either:

{noformat}
ivan@mybroker26:~$ sudo journalctl --since 01:00 -u kafka | fgrep 'Error during 
controlled shutdown' -A1
Oct 05 01:57:41 mybroker26 kafka[37409]: WARN [Kafka Server 10026], Error 
during controlled shutdown, possibly because leader movement took longer than 
the configured controller.socket.timeout.ms and/or request.timeout.ms: 
Connection to 10026 was disconnected before the response was read 
(kafka.server.KafkaServer)
Oct 05 01:57:46 mybroker26 kafka[37409]: WARN [Kafka Server 10026], Retrying 
controlled shutdown after the previous attempt failed... 
(kafka.server.KafkaServer)
--
Oct 05 01:58:16 mybroker26 kafka[37409]: WARN [Kafka Server 10026], Error 
during controlled shutdown, possibly because leader movement took longer than 
the configured controller.socket.timeout.ms and/or request.timeout.ms: 
Connection to 10026 was disconnected before the response was read 
(kafka.server.KafkaServer)
Oct 05 01:58:18 mybroker26 kafka[37409]: INFO Rolled new log segment for 
'requests-40' in 3 ms. (kafka.log.Log)
--
Oct 05 01:58:51 mybroker26 kafka[37409]: WARN [Kafka Server 10026], Error 
during controlled shutdown, possibly because leader movement took longer than 
the configured controller.socket.timeout.ms and/or request.timeout.ms: 
Connection to 10026 was disconnected before the response was read 
(kafka.server.KafkaServer)
Oct 05 01:58:56 mybroker26 kafka[37409]: WARN [Kafka Server 10026], Retrying 
controlled shutdown after the previous attempt failed... 
(kafka.server.KafkaServer)
{noformat}
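For context, the warning above names two broker settings; a sketch of the relevant broker properties with their shipped defaults (for illustration only, not a recommended fix for this issue):

```properties
# Broker settings referenced by the warning (default values):
controller.socket.timeout.ms=30000
request.timeout.ms=30000
# Related controlled-shutdown retry knobs:
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
```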

I'm unable to reproduce the issue by just restarting or even rebooting one 
broker; in that case the controller picks it up:

{noformat}
Oct 05 03:18:18 mybroker83 kafka[37402]: INFO [Controller 10083]: Newly added 
brokers: 10001, deleted brokers: , all live brokers: ...
{noformat}

KAFKA-5028 happened in 0.11.0.0, so it's likely related.

cc [~ijuma]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Jenkins build is back to normal : kafka-trunk-jdk7 #2856

2017-10-04 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-158: Kafka Connect should allow source connectors to set topic-specific settings for new topics

2017-10-04 Thread Ted Yu
Randall:
bq. AdminClient currently allows changing the replication factory.

By 'replication factory' did you mean 'replication factor' ?

Cheers

On Wed, Oct 4, 2017 at 9:58 AM, Randall Hauch  wrote:

> Currently the KIP's scope is only topics that don't yet exist, and we have
> to be cognizant of race conditions between tasks with the same connector. I
> think it is worthwhile to consider whether the KIP's scope should expand to
> also address *existing* partitions, though it may not be appropriate to
> have as much control when changing the topic settings for an existing
> topic. For example, changing the number of partitions (which the KIP
> considers a "topic-specific setting" even though technically it is not)
> shouldn't be done blindly due to the partitioning impacts, and IIRC you
> can't reduce them (which we could verify before applying). Also, I don't
> think the AdminClient currently allows changing the replication factory. I
> think changing the topic configs is less problematic both from what makes
> sense for connectors to verify/change and from what the AdminClient
> supports.
>
> Even if we decide that it's not appropriate to change the settings on an
> existing topic, I do think it's advantageous to at least notify the
> connector (or task) prior to the first record sent to a given topic so that
> the connector can fail or issue a warning if it doesn't meet its
> requirements.
>
> Best regards,
>
> Randall
>
> On Wed, Oct 4, 2017 at 12:52 AM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
>
> > Hi Randall,
> >
> > Thanks for the KIP. I like it
> > What happens when the target topic is already created but the configs do
> > not match?
> > i.e. wrong RF, num partitions, or missing / additional configs? Will you
> > attempt to apply the necessary changes or throw an error?
> >
> > Thanks!
> > Stephane
> >
> >
> > On 24/5/17, 5:59 am, "Mathieu Fenniak" 
> > wrote:
> >
> > Ah, yes, I see you highlighted a part that should've made this clear
> > to me on the first read. :-)  Much clearer now!
> >
> > By the way, enjoyed your Debezium talk in NYC.
> >
> > Looking forward to this Kafka Connect change; it will allow me to
> > remove a post-deployment tool that I hacked together for the purpose
> > of ensuring auto-created topics have the right config.
> >
> > Mathieu
> >
> >
> > On Tue, May 23, 2017 at 11:38 AM, Randall Hauch 
> > wrote:
> > > Thanks for the quick feedback, Mathieu. Yes, the first
> configuration
> > rule
> > > whose regex matches will be applied, and no other rules will be
> > used. I've
> > > updated the KIP to try to make this more clear, but let me know if
> > it's
> > > still not clear.
> > >
> > > Best regards,
> > >
> > > Randall
> > >
> > > On Tue, May 23, 2017 at 10:07 AM, Mathieu Fenniak <
> > > mathieu.fenn...@replicon.com> wrote:
> > >
> > >> Hi Randall,
> > >>
> > >> Awesome, very much looking forward to this.
> > >>
> > >> It isn't 100% clear from the KIP how multiple config-based rules
> > would
> > >> be applied; it looks like the first configuration rule whose regex
> > >> matches the topic name will be used, and no other rules will be
> > >> applied.  Is that correct?  (I wasn't sure if it might cascade
> > >> together multiple matching rules...)
> > >>
> > >> Looks great,
> > >>
> > >> Mathieu
> > >>
> > >>
> > >> On Mon, May 22, 2017 at 1:43 PM, Randall Hauch 
> > wrote:
> > >> > Hi, all.
> > >> >
> > >> > We recently added the ability for Kafka Connect to create
> > *internal*
> > >> topics
> > >> > using the new AdminClient, but it still would be great if Kafka
> > Connect
> > >> > could do this for new topics that result from source connector
> > records.
> > >> > I've outlined an approach to do this in "KIP-158 Kafka Connect
> > should
> > >> allow
> > >> > source connectors to set topic-specific settings for new
> topics".
> > >> >
> > >> > *https://cwiki.apache.org/confluence/display/KAFKA/KIP-158%3A+Kafka+Connect+should+allow+source+connectors+to+set+topic-specific+settings+for+new+topics*
> > >> >
> > >> > Please take a look and provide feedback. Thanks!
> > >> >
> > >> > Best regards,
> > >> >
> > >> > Randall
> > >>
> >
> >
> >
> >
>


Build failed in Jenkins: kafka-trunk-jdk7 #2855

2017-10-04 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-5767; Kafka server should halt if IBP < 1.0.0 and there is log

--
[...truncated 1.81 MB...]

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin PASSED

org.apache.kafka.streams.integration.FanoutIntegrationTest > 
shouldFanoutTheInput STARTED

org.apache.kafka.streams.integration.FanoutIntegrationTest > 
shouldFanoutTheInput PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithEosEnabled STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithEosEnabled PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToCommitToMultiplePartitions STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToCommitToMultiplePartitions PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFails STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFails PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToPerformMultipleTransactions STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToPerformMultipleTransactions PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToCommitMultiplePartitionOffsets STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToCommitMultiplePartitionOffsets PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologies STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologies PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskGetsFencedUsingIsolatedAppInstances STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskGetsFencedUsingIsolatedAppInstances PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFailsWithState STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFailsWithState PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologiesAndMultiplePartitions STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologiesAndMultiplePartitions PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRestartAfterClose STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRestartAfterClose PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegionWithZeroByteCache STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegionWithZeroByteCache PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegionWithNonZeroByteCache STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegionWithNonZeroByteCache PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAg

Re: [DISCUSS] KIP-158: Kafka Connect should allow source connectors to set topic-specific settings for new topics

2017-10-04 Thread Stephane Maarek
I agree. I'm personally against increasing the number of partitions, but 
changing the RF would make sense. 
Same for configs; I'm okay with them being overridden.
 
Maybe a "conflict" setting would make sense? Options: do nothing, throw 
exception, or apply? (default: do nothing - for safety)

It'd be worth including this in the scope of that KIP in my opinion
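The proposed "conflict" setting could look something like this sketch; the enum and `resolve` helper are invented for illustration and are not part of KIP-158:

```java
public class ConflictPolicySketch {

    /** The three options floated in the thread. */
    enum OnConflict { DO_NOTHING, THROW, APPLY }

    /** Decides what to do when an existing topic's config differs from the rule's. */
    static String resolve(OnConflict policy, boolean configsDiffer) {
        if (!configsDiffer) return "no-op";   // nothing to reconcile
        switch (policy) {
            case THROW:  throw new IllegalStateException("existing topic config conflicts");
            case APPLY:  return "alter-topic";
            default:     return "no-op";      // DO_NOTHING: the safe default
        }
    }

    public static void main(String[] args) {
        if (!"no-op".equals(resolve(OnConflict.DO_NOTHING, true))) throw new AssertionError();
        if (!"alter-topic".equals(resolve(OnConflict.APPLY, true))) throw new AssertionError();
        boolean threw = false;
        try { resolve(OnConflict.THROW, true); } catch (IllegalStateException e) { threw = true; }
        if (!threw) throw new AssertionError("THROW policy should raise on conflict");
        System.out.println("ok");
    }
}
```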

On 5/10/17, 3:58 am, "Randall Hauch"  wrote:

Currently the KIP's scope is only topics that don't yet exist, and we have
to be cognizant of race conditions between tasks with the same connector. I
think it is worthwhile to consider whether the KIP's scope should expand to
also address *existing* partitions, though it may not be appropriate to
have as much control when changing the topic settings for an existing
topic. For example, changing the number of partitions (which the KIP
considers a "topic-specific setting" even though technically it is not)
shouldn't be done blindly due to the partitioning impacts, and IIRC you
can't reduce them (which we could verify before applying). Also, I don't
think the AdminClient currently allows changing the replication factory. I
think changing the topic configs is less problematic both from what makes
sense for connectors to verify/change and from what the AdminClient
supports.

Even if we decide that it's not appropriate to change the settings on an
existing topic, I do think it's advantageous to at least notify the
connector (or task) prior to the first record sent to a given topic so that
the connector can fail or issue a warning if it doesn't meet its
requirements.

Best regards,

Randall

On Wed, Oct 4, 2017 at 12:52 AM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> Hi Randall,
>
> Thanks for the KIP. I like it
> What happens when the target topic is already created but the configs do
> not match?
> i.e. wrong RF, num partitions, or missing / additional configs? Will you
> attempt to apply the necessary changes or throw an error?
>
> Thanks!
> Stephane
>
>
> On 24/5/17, 5:59 am, "Mathieu Fenniak" 
> wrote:
>
> Ah, yes, I see you highlighted a part that should've made this clear
> to me on the first read. :-)  Much clearer now!
>
> By the way, enjoyed your Debezium talk in NYC.
>
> Looking forward to this Kafka Connect change; it will allow me to
> remove a post-deployment tool that I hacked together for the purpose
> of ensuring auto-created topics have the right config.
>
> Mathieu
>
>
> On Tue, May 23, 2017 at 11:38 AM, Randall Hauch 
> wrote:
> > Thanks for the quick feedback, Mathieu. Yes, the first configuration
> rule
> > whose regex matches will be applied, and no other rules will be
> used. I've
> > updated the KIP to try to make this more clear, but let me know if
> it's
> > still not clear.
> >
> > Best regards,
> >
> > Randall
> >
> > On Tue, May 23, 2017 at 10:07 AM, Mathieu Fenniak <
> > mathieu.fenn...@replicon.com> wrote:
> >
> >> Hi Randall,
> >>
> >> Awesome, very much looking forward to this.
> >>
> >> It isn't 100% clear from the KIP how multiple config-based rules
> would
> >> be applied; it looks like the first configuration rule whose regex
> >> matches the topic name will be used, and no other rules will be
> >> applied.  Is that correct?  (I wasn't sure if it might cascade
> >> together multiple matching rules...)
> >>
> >> Looks great,
> >>
> >> Mathieu
> >>
> >>
> >> On Mon, May 22, 2017 at 1:43 PM, Randall Hauch 
> wrote:
> >> > Hi, all.
> >> >
> >> > We recently added the ability for Kafka Connect to create
> *internal*
> >> topics
> >> > using the new AdminClient, but it still would be great if Kafka
> Connect
> >> > could do this for new topics that result from source connector
> records.
> >> > I've outlined an approach to do this in "KIP-158 Kafka Connect
> should
> >> allow
> >> > source connectors to set topic-specific settings for new topics".
> >> >
> >> > *https://cwiki.apache.org/confluence/display/KAFKA/KIP-158%3A+Kafka+Connect+should+allow+source+connectors+to+set+topic-specific+settings+for+new+topics*
> >> >
> >> > Please take a look and provide feedback. Thanks!
> >> >
> >> > Best regards,
> >> >
> >> > Randal

[jira] [Created] (KAFKA-6012) NoSuchElementException in markErrorMeter during TransactionsBounceTest

2017-10-04 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-6012:
--

 Summary: NoSuchElementException in markErrorMeter during 
TransactionsBounceTest
 Key: KAFKA-6012
 URL: https://issues.apache.org/jira/browse/KAFKA-6012
 Project: Kafka
  Issue Type: Bug
Reporter: Ismael Juma
Assignee: Rajini Sivaram
Priority: Blocker
 Fix For: 1.0.0


I think this is probably a test issue, but I'm setting it as "Blocker" until we 
can confirm that.

{code}
Error
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for 
output-topic-0: 10467 ms has passed since batch creation plus linger time
Stacktrace
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for 
output-topic-0: 10467 ms has passed since batch creation plus linger time
Standard Output
[2017-10-05 00:29:31,327] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-10-05 00:29:31,877] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=0] Error for partition input-topic-0 to broker 
%1:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition. (kafka.server.ReplicaFetcherThread:101)
[2017-10-05 00:29:31,877] ERROR [ReplicaFetcher replicaId=3, leaderId=1, 
fetcherId=0] Error for partition input-topic-0 to broker 
%1:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition. (kafka.server.ReplicaFetcherThread:101)
[2017-10-05 00:29:31,877] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
fetcherId=0] Error for partition input-topic-1 to broker 
%2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition. (kafka.server.ReplicaFetcherThread:101)
[2017-10-05 00:29:32,268] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=0] Error for partition output-topic-1 to broker 
%1:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition. (kafka.server.ReplicaFetcherThread:101)
[2017-10-05 00:29:32,284] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
fetcherId=0] Error for partition output-topic-1 to broker 
%1:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition. (kafka.server.ReplicaFetcherThread:101)
[2017-10-05 00:29:44,283] ERROR [KafkaApi-0] Error when handling request 
{controller_id=0,controller_epoch=1,delete_partitions=false,partitions=[{topic=input-topic,partition=1}]}
 (kafka.server.KafkaApis:107)
java.util.NoSuchElementException: key not found: NONE
at scala.collection.MapLike$class.default(MapLike.scala:228)
at scala.collection.AbstractMap.default(Map.scala:59)
at scala.collection.mutable.HashMap.apply(HashMap.scala:65)
at kafka.network.RequestMetrics.markErrorMeter(RequestChannel.scala:410)
at 
kafka.network.RequestChannel$$anonfun$updateErrorMetrics$1.apply(RequestChannel.scala:315)
at 
kafka.network.RequestChannel$$anonfun$updateErrorMetrics$1.apply(RequestChannel.scala:314)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
kafka.network.RequestChannel.updateErrorMetrics(RequestChannel.scala:314)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$sendResponse$1.apply(KafkaApis.scala:2092)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$sendResponse$1.apply(KafkaApis.scala:2092)
at scala.Option.foreach(Option.scala:257)
at 
kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponse(KafkaApis.scala:2092)
at 
kafka.server.KafkaApis.sendResponseExemptThrottle(KafkaApis.scala:2061)
at kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:202)
at kafka.server.KafkaApis.handle(KafkaApis.scala:104)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:65)
{code}

https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/2106/tests
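
For context on the stack trace above: Scala's strict `map(key)` lookup throws `NoSuchElementException` when the key is absent, which is what happens here when no meter was registered for the `NONE` error code. A defensive lookup avoids the crash. The sketch below is a hypothetical Java stand-in, not the actual `RequestMetrics.markErrorMeter` code:

```java
import java.util.EnumMap;
import java.util.Map;
import java.util.concurrent.atomic.LongAdder;

public class ErrorMeterSketch {
    // Simplified error-code enum; the real one is org.apache.kafka.common.protocol.Errors.
    enum ErrorCode { NONE, UNKNOWN_TOPIC_OR_PARTITION }

    private final Map<ErrorCode, LongAdder> meters = new EnumMap<>(ErrorCode.class);

    // Lazily create the meter instead of assuming every code was pre-registered;
    // a strict lookup would throw for an unregistered code such as NONE.
    void markErrorMeter(ErrorCode error, int count) {
        meters.computeIfAbsent(error, e -> new LongAdder()).add(count);
    }

    long count(ErrorCode error) {
        LongAdder meter = meters.get(error);
        return meter == null ? 0L : meter.sum();
    }

    public static void main(String[] args) {
        ErrorMeterSketch metrics = new ErrorMeterSketch();
        metrics.markErrorMeter(ErrorCode.NONE, 1); // no NoSuchElementException
        System.out.println(metrics.count(ErrorCode.NONE)); // 1
    }
}
```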



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-trunk-jdk8 #2106

2017-10-04 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: add suppress warnings annotations in Streams API

[wangguoz] KAFKA-5980: FailOnInvalidTimestamp does not log error

[wangguoz] Bump up version to 1.1.0-SNAPSHOT

--
[...truncated 369.11 KB...]

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

[GitHub] kafka pull request #3840: KAFKA-5879; Controller should read the latest IsrC...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3840


---


[GitHub] kafka pull request #3718: KAFKA-5767; Kafka server should halt if IBP < 1.0....

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3718


---


[jira] [Resolved] (KAFKA-5767) Kafka server should halt if IBP < 1.0.0 and there is log directory failure

2017-10-04 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-5767.

Resolution: Fixed

Issue resolved by pull request 3718
[https://github.com/apache/kafka/pull/3718]

> Kafka server should halt if IBP < 1.0.0 and there is log directory failure
> --
>
> Key: KAFKA-5767
> URL: https://issues.apache.org/jira/browse/KAFKA-5767
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
>Priority: Critical
> Fix For: 1.0.0
>
>






Build failed in Jenkins: kafka-1.0.0-jdk7 #1

2017-10-04 Thread Apache Jenkins Server
See 

--
[...truncated 368.16 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateSavedOffsetWhenOffsetToClearToIsBetweenEpochs PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcce

[GitHub] kafka-site pull request #89: Back out changes to index

2017-10-04 Thread joel-hamill
GitHub user joel-hamill opened a pull request:

https://github.com/apache/kafka-site/pull/89

Back out changes to index



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/joel-hamill/kafka-site 
joel-hamill/backout-changes2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/89.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #89


commit 7774f050ee5095fdec7a5b1b0e0a21dd1285d859
Author: Joel Hamill 
Date:   2017-10-05T00:05:42Z

Back out changes to index




---


[GitHub] kafka-site issue #89: Back out changes to index

2017-10-04 Thread joel-hamill
Github user joel-hamill commented on the issue:

https://github.com/apache/kafka-site/pull/89
  
@guozhangwang 


---


Jenkins build is back to normal : kafka-trunk-jdk8 #2105

2017-10-04 Thread Apache Jenkins Server
See 




[GitHub] kafka-site pull request #88: Remove last commit

2017-10-04 Thread joel-hamill
Github user joel-hamill closed the pull request at:

https://github.com/apache/kafka-site/pull/88


---


[GitHub] kafka-site pull request #88: Remove last commit

2017-10-04 Thread joel-hamill
GitHub user joel-hamill opened a pull request:

https://github.com/apache/kafka-site/pull/88

Remove last commit



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/joel-hamill/kafka-site 
joel-hamill/backout-changess

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/88.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #88


commit 59c24417ce79c99917c6c396e759ba152b8bccd1
Author: Joel Hamill 
Date:   2017-10-05T00:02:09Z

Remove last commit




---


1.0.0 Branch Update

2017-10-04 Thread Guozhang Wang
Hello Kafka developers,

This is another update on the 1.0.0 release progress.

We were planning to close the code freeze and cut the first RC by today,
but we are a bit behind schedule. After making a pass over all the
JIRAs to move non-blockers to the next release, we still have 20 tickets
ongoing and three KIPs not yet merged (namely KIP-91, KIP-180, and
KIP-196). Among these tickets, 5 are currently marked as blockers.

I have also created the release branch for 1.0.0, and trunk has been bumped
to 1.1.0-SNAPSHOT. From this point on, most changes should go to trunk, except
the above-mentioned JIRAs, which need to be double-committed. Newly created
tickets should only be marked as 1.0.0 issues if they are critical bugs.

* For contributors working on existing or new issues, please discuss with
your reviewer whether it is really a 1.0.0 blocker, and whether your PR could
be merged soon and make it into trunk and the release branch. Our goal is to
have zero open JIRAs by the end of this week in order to make the first RC.


Thanks!

-- 
-- Guozhang


[GitHub] kafka pull request #4021: KAFKA-5972 Flatten SMT does not work with null val...

2017-10-04 Thread shivsantham
GitHub user shivsantham opened a pull request:

https://github.com/apache/kafka/pull/4021

KAFKA-5972 Flatten SMT does not work with null values

Found a bug in the Flatten SMT while testing the different SMTs that are
provided out of the box: Flatten does not work as expected with schemaless JSON
that has properties with null values.

Example JSON:
  {A={D=dValue, B=null, C=cValue}}
The issue is in the if statement that checks for a null value.

CURRENT VERSION:
  for (Map.Entry<String, Object> entry : originalRecord.entrySet()) {
      final String fieldName = fieldName(fieldNamePrefix, entry.getKey());
      Object value = entry.getValue();
      if (value == null) {
          newRecord.put(fieldName(fieldNamePrefix, entry.getKey()), null);
          return;
      }

PROPOSED VERSION:
  for (Map.Entry<String, Object> entry : originalRecord.entrySet()) {
      final String fieldName = fieldName(fieldNamePrefix, entry.getKey());
      Object value = entry.getValue();
      if (value == null) {
          newRecord.put(fieldName(fieldNamePrefix, entry.getKey()), null);
          continue;
      }
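
To see the difference in behavior, here is a small self-contained sketch of a schemaless flatten loop. The helper names, the recursion, and the `.` delimiter are assumptions for illustration, not the actual `Flatten` implementation; the point is only the control flow, where `continue` keeps the null field and moves on, while the buggy `return` aborted the loop and dropped every remaining field.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FlattenNullSketch {
    // Hypothetical stand-in for the schemaless flatten logic.
    @SuppressWarnings("unchecked")
    static Map<String, Object> flatten(String prefix, Map<String, Object> original) {
        Map<String, Object> flattened = new LinkedHashMap<>();
        for (Map.Entry<String, Object> entry : original.entrySet()) {
            String fieldName = prefix.isEmpty() ? entry.getKey() : prefix + "." + entry.getKey();
            Object value = entry.getValue();
            if (value == null) {
                flattened.put(fieldName, null);
                continue; // keep the null field; 'return' here would drop the rest
            }
            if (value instanceof Map) {
                flattened.putAll(flatten(fieldName, (Map<String, Object>) value));
            } else {
                flattened.put(fieldName, value);
            }
        }
        return flattened;
    }

    public static void main(String[] args) {
        Map<String, Object> inner = new LinkedHashMap<>();
        inner.put("D", "dValue");
        inner.put("B", null);
        inner.put("C", "cValue");
        Map<String, Object> record = new LinkedHashMap<>();
        record.put("A", inner);
        // With 'return' instead of 'continue', A.C would be lost.
        System.out.println(flatten("", record));
    }
}
```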

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shivsantham/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4021.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4021


commit ff377759a943c7bfb89a56ad721e7ba1b3b0b24c
Author: siva santhalingam 
Date:   2017-09-28T23:37:47Z

KAFKA-5967 Ineffective check of negative value in 
CompositeReadOnlyKeyValueStore#approximateNumEntries()

long total = 0;
for (ReadOnlyKeyValueStore<K, V> store : stores) {
    total += store.approximateNumEntries();
}

return total < 0 ? Long.MAX_VALUE : total;

The check for a negative value is meant to account for overflow. However,
the sum can wrap within the for loop, so the check should be performed
inside the loop.
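
A self-contained sketch of the in-loop check described above. The store type is simplified to a plain list of counts; this is not the actual `CompositeReadOnlyKeyValueStore` code.

```java
import java.util.List;

public class OverflowSafeSum {
    // Stand-in for summing approximateNumEntries() across stores. The
    // overflow check must run inside the loop, because the sum can wrap on
    // any iteration, not only the last one; a post-loop check can miss a
    // wrap that a later addition brings back into positive territory.
    static long approximateTotal(List<Long> perStoreCounts) {
        long total = 0;
        for (long count : perStoreCounts) {
            total += count;
            if (total < 0) {          // wrapped past Long.MAX_VALUE
                return Long.MAX_VALUE;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // Three huge counts: the naive sum wraps twice and ends positive,
        // which the original post-loop check would not detect.
        long total = approximateTotal(
            List.of(Long.MAX_VALUE, Long.MAX_VALUE, Long.MAX_VALUE));
        System.out.println(total == Long.MAX_VALUE); // true
    }
}
```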

commit 3ea736ac17a4a8ce799b1214f6c0b167b44ee977
Author: siva santhalingam 
Date:   2017-09-29T03:16:29Z

 KAFKA-5967 Ineffective check of negative value in 
CompositeReadOnlyKeyValueStore

Adding a test for #KAFKA-5967

commit 921664384a7d6f53e2cc76cf5699021cdca73893
Author: siva santhalingam 
Date:   2017-09-30T08:22:41Z

 KAFKA-5967 Ineffective check of negative value in 
CompositeReadOnlyKeyValueStore

-Fixing test

commit 48e50992139c03099b3249195efc316a84d6bba1
Author: siva santhalingam 
Date:   2017-10-03T17:41:18Z

Flatten SMT does not work with null values

A bug in Flatten SMT while doing tests with different SMTs that are 
provided out-of-box. Flatten SMT does not work as expected with schemaless JSON 
that has properties with null values.

Example json:
  {A={D=dValue, B=null, C=cValue}}
The issue is in if statement that checks for null value.

CURRENT VERSION:
  for (Map.Entry<String, Object> entry : originalRecord.entrySet()) {
final String fieldName = fieldName(fieldNamePrefix, 
entry.getKey());
Object value = entry.getValue();
if (value == null) {
newRecord.put(fieldName(fieldNamePrefix, entry.getKey()), 
null);
return;
}

PROPOSED VERSION:
  for (Map.Entry<String, Object> entry : originalRecord.entrySet()) {
final String fieldName = fieldName(fieldNamePrefix, 
entry.getKey());
Object value = entry.getValue();
if (value == null) {
newRecord.put(fieldName(fieldNamePrefix, entry.getKey()), 
null);
continue;
}

commit 7abd3d5c76febc3e3f90b140aaf4c8bec7e08e8a
Author: siva santhalingam 
Date:   2017-10-03T18:07:07Z

Revert "Flatten SMT does not work with null values"

This reverts commit 48e50992139c03099b3249195efc316a84d6bba1.

commit 6f9c11726d1baf9eb2446867899218a4f46a77a4
Author: shivsantham 
Date:   2017-10-04T22:53:49Z

Merge branch 'trunk' of https://github.com/shivsantham/kafka into trunk

commit 084466bfd1f8bb9c9a07ec4f8255a42dfc6b8768
Author: shivsantham 
Date:   2017-10-04T22:59:27Z

KAFKA-5972 Flatten SMT does not work with null values

A bug in Flatten SMT while doing tests with different SMTs that are 
provided out-of-box. Flatten SMT does not work as expected with schemaless JSON 
that has properties with null values.

Example json:
  {A={D=dValue, B=null, C=cValue}}
The issue is in if statement that checks for null value.

CURRENT VERSION:
  for (Map.Entry<String, Object> entry : originalRecord.entrySet()) {
final String fieldName = fieldName(fieldNamePrefix, 
entry.getKey());
  

Jenkins build is back to normal : kafka-trunk-jdk7 #2853

2017-10-04 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-5977) Upgrade RocksDB dependency to legally acceptable version

2017-10-04 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-5977.
--
Resolution: Duplicate

Marking as duplicate of KAFKA-5576.

> Upgrade RocksDB dependency to legally acceptable version
> 
>
> Key: KAFKA-5977
> URL: https://issues.apache.org/jira/browse/KAFKA-5977
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Stevo Slavic
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 1.0.0
>
>
> RocksDB 5.5.5+ seems to be legally acceptable. For more info see
> - https://issues.apache.org/jira/browse/LEGAL-303 and
> - https://www.apache.org/legal/resolved.html#category-x
> Even the latest trunk of Apache Kafka depends on an older RocksDB:
> https://github.com/apache/kafka/blob/trunk/gradle/dependencies.gradle#L67
> If I'm not mistaken, this makes all current Apache Kafka 0.10+ releases not
> legally acceptable Apache products.
> Please consider upgrading the dependency. If possible, please include the
> change in the Apache Kafka 1.0.0 release, if not also in patch releases of
> older, still-supported 0.x Apache Kafka branches.





[GitHub] kafka pull request #3966: KAFKA-5980: FailOnInvalidTimestamp does not log er...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3966


---


[GitHub] kafka pull request #3740: MINOR: reduce logging to trace in NetworkClient wh...

2017-10-04 Thread dguy
Github user dguy closed the pull request at:

https://github.com/apache/kafka/pull/3740


---


[GitHub] kafka pull request #4003: MINOR: add suppress warnings annotations

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4003


---


[GitHub] kafka pull request #4018: KAFKA-6010: Relax record conversion time test to a...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4018


---


[jira] [Resolved] (KAFKA-6010) Transient failure: MemoryRecordsBuilderTest.convertToV1WithMixedV0AndV2Data

2017-10-04 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-6010.
---
   Resolution: Fixed
Fix Version/s: 1.0.0

Issue resolved by pull request 4018
[https://github.com/apache/kafka/pull/4018]

> Transient failure: MemoryRecordsBuilderTest.convertToV1WithMixedV0AndV2Data
> ---
>
> Key: KAFKA-6010
> URL: https://issues.apache.org/jira/browse/KAFKA-6010
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>  Labels: transient-unit-test-failure
> Fix For: 1.0.0
>
>
> The issue happens with various tests that call verifyRecordsProcessingStats. 
> One example:
> {code}
> Stacktrace
> java.lang.AssertionError: Processing time not recorded
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilderTest.verifyRecordsProcessingStats(MemoryRecordsBuilderTest.java:651)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilderTest.convertToV1WithMixedV0AndV2Data(MemoryRecordsBuilderTest.java:515)
> {code}
> https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/2102/tests
> cc [~rsivaram]





Re: integration between pull request and JIRA

2017-10-04 Thread Matthias J. Sax
A contributor should have the JIRA assigned to themselves and thus
should get notifications about JIRA comments too, since assigning a
JIRA also makes one a watcher of it.


-Matthias


On 10/4/17 1:32 PM, Ted Yu wrote:
> The only thing missing is that the contributor needs to watch the JIRA
> after sending out the PR; otherwise they may miss comments on the JIRA
> (but not on the PR).
> 
> On Wed, Sep 6, 2017 at 12:27 PM, Matthias J. Sax 
> wrote:
> 
>> You can subscribe to a single PR if you want, too. (That actually happens
>> when you get tagged or comment on one, i.e., you get auto-subscribed to
>> the PR.)
>>
>> There is a "Subscribe" button on the right hand side.
>>
>> -Matthias
>>
>>
>> On 9/5/17 8:57 PM, Ted Yu wrote:
>>> bq. I did get tagged or I did comment on etc.
>>>
>>> What if nobody tags me on the PR and I don't comment on it ?
>>>
>>> Cheers
>>>
>>> On Tue, Sep 5, 2017 at 8:55 PM, Matthias J. Sax 
>>> wrote:
>>>
>> If a person watches github PR, that person watches conversations on
>> all
>> PRs,

 One can just "not watch" Kafka's GitHub repo. I don't watch it either,
 and thus I get emails only for those PRs that I was tagged on or
 commented on.

 Would this not work for you?


 -Matthias

 On 9/5/17 7:31 PM, Ted Yu wrote:
> If a person watches GitHub PRs, that person watches conversations on all
> PRs, not just the ones they intend to pay attention to.
>
> Quite often this leads to a ton of emails in their inbox, which is
> distracting.
>
> If the conversation is posted from the PR to the JIRA, watchers are per
> JIRA. This is much more focused.
>
> Cheers
>
> On Tue, Sep 5, 2017 at 7:20 PM, Matthias J. Sax >>
> wrote:
>
>> This integration was never set up for Kafka.
>>
>> I personally don't see any advantage in this, as it just duplicates
>> everything and does not add value IMHO. The PRs are linked and one can
>> go to the PR to read the discussion if interested.
>>
>> Or what do you think the value would be?
>>
>>
>> -Matthias
>>
>>
>> On 9/5/17 6:16 PM, Ted Yu wrote:
>>> Hi,
>>> Currently the conversations on pull requests are not posted back to
>>> JIRA.
>>>
>>> Is there a technical hurdle preventing this from being done?
>>>
>>> Other Apache projects, such as Flink, establish an automatic post from
>>> the pull request to the JIRA.
>>>
>>> Cheers
>>>
>>
>>
>


>>>
>>
>>
> 



signature.asc
Description: OpenPGP digital signature


Build failed in Jenkins: kafka-trunk-jdk7 #2852

2017-10-04 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: streams dev guide fixup

[jason] KAFKA-5970; Use ReentrantLock for delayed operation lock to avoid

[me] KAFKA-6008: Sanitize the app id before creating app id metric

[wangguoz] MINOR: update streams quickstart for KIP-182

[wangguoz] MINOR: JavaDoc improvements for new state store API

--
[...truncated 1.81 MB...]
org.apache.kafka.streams.KafkaStreamsTest > testStateGlobalThreadClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingUncaughtExceptionHandlerNotInCreateState STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingUncaughtExceptionHandlerNotInCreateState PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingStateListenerNotInCreateState STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingStateListenerNotInCreateState PASSED

org.apache.kafka.streams.KafkaStreamsTest > testNumberDefaultMetrics STARTED

org.apache.kafka.streams.KafkaStreamsTest > testNumberDefaultMetrics PASSED

org.apache.kafka.streams.KafkaStreamsTest > shouldReturnThreadMetadata STARTED

org.apache.kafka.streams.KafkaStreamsTest > shouldReturnThreadMetadata PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStateThreadClose STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStateThreadClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStateChanges STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStateChanges PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled 
STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDef

Build failed in Jenkins: kafka-trunk-jdk8 #2104

2017-10-04 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-5990: Enable generation of metrics docs for Connect (KIP-196)

[wangguoz] MINOR: streams dev guide fixup

[jason] KAFKA-5970; Use ReentrantLock for delayed operation lock to avoid

[me] KAFKA-6008: Sanitize the app id before creating app id metric

[wangguoz] MINOR: update streams quickstart for KIP-182

[wangguoz] MINOR: JavaDoc improvements for new state store API

--
[...truncated 368.35 KB...]

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.TopicMetadataTest

[GitHub] kafka pull request #4001: KAFKA-6001: remove from Materializ...

2017-10-04 Thread dguy
Github user dguy closed the pull request at:

https://github.com/apache/kafka/pull/4001


---


[GitHub] kafka pull request #4020: KAFKA-6003: Accept appends on replicas and when re...

2017-10-04 Thread apurvam
GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/4020

KAFKA-6003: Accept appends on replicas and when rebuilding the log 
unconditionally

This is a port of #4004 for the 0.11.0 branch.

With this patch, we _only_ validate appends which originate from the
client. In general, once an append has been validated and written to the
leader the first time, revalidating it is undesirable: we cannot do
anything if validation fails, and it is hard to maintain the correct
assumptions during validation, leading to spurious validation failures.

For example, with compacted topics it is possible for batches to be
compacted on the follower but not on the leader. This case would lead to
an OutOfOrderSequenceException during replication. The same applies when
we rebuild state from compacted topics: we would see gaps in the sequence
numbers, causing the same OutOfOrderSequenceException.
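The origin check described above can be sketched in isolation. This is a
minimal illustration with hypothetical names (AppendOrigin,
SequenceValidator); the actual broker code is Scala and is structured
differently.

```java
// Minimal sketch of origin-gated validation. AppendOrigin and
// SequenceValidator are hypothetical names for illustration; the real
// Kafka broker code differs.
enum AppendOrigin { CLIENT, REPLICATION, LOG_RECOVERY }

class SequenceValidator {
    // Only client-originated appends are validated; replica fetches and
    // log recovery accept batches unconditionally.
    static boolean shouldValidate(AppendOrigin origin) {
        return origin == AppendOrigin.CLIENT;
    }

    static void maybeValidateSequence(AppendOrigin origin, int expectedSeq, int batchSeq) {
        if (shouldValidate(origin) && batchSeq != expectedSeq) {
            throw new IllegalStateException(
                "Out of order sequence: expected " + expectedSeq + " but got " + batchSeq);
        }
    }
}
```

With this gate, a batch whose predecessor was compacted away on the
follower is still appended during replication instead of failing with a
spurious out-of-order error.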

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka 
KAKFA-6003-0.11.0-handle-unknown-producer-on-replica

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4020.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4020


commit 0a6a0213c091c8e6b6a9c5ce7655b7e0d06c9db0
Author: Apurva Mehta 
Date:   2017-10-04T20:42:17Z

KAFKA-6003: Accept appends on replicas and when rebuilding state from
the log unconditionally.

With this patch, we _only_ validate appends which originate from the
client. In general, once an append has been validated and written to the
leader the first time, revalidating it is undesirable: we cannot do
anything if validation fails, and it is hard to maintain the correct
assumptions during validation, leading to spurious validation failures.

For example, with compacted topics it is possible for batches to be
compacted on the follower but not on the leader. This case would lead to
an OutOfOrderSequenceException during replication. The same applies when
we rebuild state from compacted topics: we would see gaps in the sequence
numbers, causing the same OutOfOrderSequenceException.




---


Re: integration between pull request and JIRA

2017-10-04 Thread Ted Yu
The only thing missing is that the contributor needs to watch the JIRA
after sending out the PR; otherwise he / she may miss comments made on the
JIRA (but not on the PR).

On Wed, Sep 6, 2017 at 12:27 PM, Matthias J. Sax 
wrote:

> You can subscribe to single PR if you want, too. (That actually happens,
> when you get tagged or comment on one, ie, you get auto subscribed to
> the PR.)
>
> There is a "Subscribe" button on the right hand side.
>
> -Matthias
>
>
> On 9/5/17 8:57 PM, Ted Yu wrote:
> > bq. I did get tagged or I did comment on etc.
> >
> > What if nobody tags me on the PR and I don't comment on it ?
> >
> > Cheers
> >
> > On Tue, Sep 5, 2017 at 8:55 PM, Matthias J. Sax 
> > wrote:
> >
>  If a person watches github PR, that person watches conversations on
> all
>  PRs,
> >>
> >> One can just "not watch" Kafka's Github repo. I don't watch it either
> >> and thus I get emails for only those PRs I did get tagged or I did
> >> comment on etc.
> >>
> >> Would this not work for you?
> >>
> >>
> >> -Matthias
> >>
> >> On 9/5/17 7:31 PM, Ted Yu wrote:
> >>> If a person watches github PR, that person watches conversations on all
> >>> PRs, not just the one he / she intends to pay attention to.
> >>>
> >>> Quite often this leads to a ton of emails in his / her inbox, which is
> >>> distracting.
> >>>
> >>> If the conversation is posted from the PR to the JIRA, the watcher is
> >>> per PR / JIRA. This is much more focused.
> >>>
> >>> Cheers
> >>>
> >>> On Tue, Sep 5, 2017 at 7:20 PM, Matthias J. Sax  >
> >>> wrote:
> >>>
>  This integration was never set up for Kafka.
> 
>  I personally don't see any advantage in this, as it just duplicates
>  everything and does not add value IMHO. The PRs are linked and one can
>  go to the PR to read the discussion if interested.
> 
>  Or what do you think the value would be?
> 
> 
>  -Matthias
> 
> 
>  On 9/5/17 6:16 PM, Ted Yu wrote:
> > Hi,
> > Currently the conversations on pull request are not posted back to
> >> JIRA.
> >
> > Is there technical hurdle preventing this from being done ?
> >
> > Other Apache projects, such as Flink, establish automatic post from
> >> pull
> > request to JIRA.
> >
> > Cheers
> >
> 
> 
> >>>
> >>
> >>
> >
>
>


Jenkins build is back to normal : kafka-trunk-jdk8 #2103

2017-10-04 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #4019: KAFKA-6011 AppInfoParser should only use metrics A...

2017-10-04 Thread tedyu
GitHub user tedyu opened a pull request:

https://github.com/apache/kafka/pull/4019

KAFKA-6011 AppInfoParser should only use metrics API and should not 
register JMX mbeans directly

Added the app ID to the metrics API.

The JMX registration can be dropped post 1.0.0.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tedyu/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4019.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4019


commit 28b6a5868327fe827d57328c03e086eb3bb5c19c
Author: tedyu 
Date:   2017-10-04T20:10:22Z

KAFKA-6011 AppInfoParser should only use metrics API and should not 
register JMX mbeans directly




---


Jenkins build is back to normal : kafka-trunk-jdk7 #2851

2017-10-04 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #4006: MINOR: JavaDoc improvements for new state store AP...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4006


---


Re: [DISCUSS] URIs on Producer and Consumer

2017-10-04 Thread Clebert Suconic
Ping???


Any thoughts?


Or can anyone help me with write access to the Wiki so I can start a
KIP discussion? My user ID is clebert.suco...@gmail.com

On Tue, Oct 3, 2017 at 2:45 PM, Clebert Suconic
 wrote:
> I believe I need write access to the WIKI Page:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
>
> As I don't see the KIP Template..
>
>
> Could someone please add me to the group?
>
>
>
>
> On Tue, Oct 3, 2017 at 9:23 AM, Christopher Shannon
>  wrote:
>> I think this would be useful as a secondary way to configure.  If others
>> agree then you can write up a KIP and it can be discussed in more detail.
>>
>> On Tue, Oct 3, 2017 at 8:56 AM, Clebert Suconic 
>> wrote:
>>
>>> Maybe I didn't make the message clear enough...
>>>
> >>> Would using a URI in the constructor (in addition to the properties)
> >>> help the API, or does anyone see a reason not to do it?
>>>
>>> KafkaConsumer consumer = new
>>> KafkaConsumer<>("tcp://localhost:?receive.buffer.bytes=-2", new
>>> ByteArrayDeserializer(), new ByteArrayDeserializer());
>>>
>>> I could send a Pull Request for that. The framework I would write
>>> would validate if the parameters are valid or not.
>>>
>>>
>>> Thanks in advance
>>>
>>>
>>> On Mon, Oct 2, 2017 at 9:14 AM, Clebert Suconic
>>>  wrote:
>>> > At ActiveMQ and ActiveMQ Artemis, ConnectionFactories have an
>>> > interesting feature where you can pass parameters through an URI.
>>> >
>>> > I was looking at Producer and Consumer APIs, and these two classes are
>>> > using a method that I considered old for Artemis resembling HornetQ:
>>> >
>>> > Instead of passing a Properties (aka HashMaps), users would be able to
>>> > create a Consumer or Producer by simply doing:
>>> >
> >>> > new Consumer("tcp://host:port?properties=values;properties=values...etc");
>>> >
>>> > Example:
>>> >
>>> >
>>> > Instead of the following:
>>> >
>>> > Map config = new HashMap<>();
>>> > config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:");
>>> > config.put(ConsumerConfig.RECEIVE_BUFFER_CONFIG, -2);
>>> > new KafkaConsumer<>(config, new ByteArrayDeserializer(), new
>>> > ByteArrayDeserializer());
>>> >
>>> >
>>> >
>>> > Someone could do
>>> >
>>> > new KafkaConsumer<>("tcp://localhost:?receive.buffer.bytes=-2",
>>> > new ByteArrayDeserializer(), new ByteArrayDeserializer());
>>> >
>>> >
>>> >
> >>> > I don't know whether that little API improvement would be welcome. I
> >>> > would be able to send a Pull Request, but I don't want to do it if it
> >>> > wouldn't be welcomed in the first place.
>>> >
>>> >
>>> > Just an idea...  let me know if that is welcomed or not.
>>> >
>>> > If so I can forward the discussion into how I would implement it.
>>>
>>>
>>>
>>> --
>>> Clebert Suconic
>>>
>
>
>
> --
> Clebert Suconic



-- 
Clebert Suconic
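The proposal in this thread can be prototyped outside of Kafka with a
small URI-to-Properties parser. The port 9092 and the query format of
`key=value` pairs joined by `&` are illustrative assumptions; this is not
part of any actual Kafka API.

```java
import java.net.URI;
import java.util.Properties;

// Sketch of the proposed URI-based configuration. The query-string
// format (key=value pairs joined by '&') is an assumption for
// illustration; this is not an actual Kafka API.
class UriConfigParser {
    static Properties parse(String uriString) {
        URI uri = URI.create(uriString);
        Properties props = new Properties();
        // Map the URI authority to the standard bootstrap setting.
        props.setProperty("bootstrap.servers", uri.getHost() + ":" + uri.getPort());
        String query = uri.getQuery();
        if (query != null) {
            for (String pair : query.split("&")) {
                String[] kv = pair.split("=", 2);
                props.setProperty(kv[0], kv.length > 1 ? kv[1] : "");
            }
        }
        return props;
    }
}
```

A constructor overload could then delegate to the existing
Properties-based constructor after parsing, leaving the current API
intact.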


[GitHub] kafka pull request #3984: MINOR: update streams quickstart for KIP-182

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3984


---


[GitHub] kafka pull request #4018: KAFKA-6010: Relax record conversion time test to a...

2017-10-04 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/4018

KAFKA-6010: Relax record conversion time test to avoid build failure

For the record conversion tests, check that time >= 0, since conversion
times may be too small to be measured accurately. Since the default value
is -1, the test is still useful. Also increase the message size in
SslTransportLayerTest#testNetworkThreadTimeRecorded to avoid failures when
the processing time is too small.
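The essence of the relaxed assertion can be shown in isolation. The
sentinel value -1 for "never recorded" is taken from the PR description;
the helper below is illustrative, not the actual test code.

```java
// Illustrative form of the relaxed check: a conversion time of -1 means
// "never recorded", while 0 means "recorded, but too fast to measure".
// Asserting >= 0 therefore still catches the unrecorded case without
// failing on fast machines.
class ConversionStatsCheck {
    static boolean timeWasRecorded(long conversionTimeNanos) {
        return conversionTimeNanos >= 0;
    }
}
```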

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka 
KAFKA-6010-MemoryRecordsBuilderTest

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4018.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4018


commit de88403f7a77e67c77f7ad36dabcccdfff661fe4
Author: Rajini Sivaram 
Date:   2017-10-04T18:44:31Z

KAFKA-6010: Relax record conversion time test to avoid build failure




---


[GitHub] kafka pull request #4012: KAFKA-6008: Sanitize the Kafka Connect workerId be...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4012


---


[jira] [Resolved] (KAFKA-6008) Kafka Connect: Unsanitized workerID causes exception during startup

2017-10-04 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6008.
--
Resolution: Fixed

Issue resolved by pull request 4012
[https://github.com/apache/kafka/pull/4012]

> Kafka Connect: Unsanitized workerID causes exception during startup
> ---
>
> Key: KAFKA-6008
> URL: https://issues.apache.org/jira/browse/KAFKA-6008
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0
> Environment: MacOS, Java 1.8.0_77-b03
>Reporter: Jakub Scholz
>Assignee: Jakub Scholz
> Fix For: 1.0.0
>
>
> When Kafka Connect starts, it seems to use the unsanitized workerId for
> creating metrics. As a result, it throws the following exception:
> {code}
> [2017-10-04 13:16:08,886] WARN Error registering AppInfo mbean 
> (org.apache.kafka.common.utils.AppInfoParser:66)
> javax.management.MalformedObjectNameException: Invalid character ':' in value 
> part of property
>   at javax.management.ObjectName.construct(ObjectName.java:618)
>   at javax.management.ObjectName.(ObjectName.java:1382)
>   at 
> org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:60)
>   at 
> org.apache.kafka.connect.runtime.ConnectMetrics.(ConnectMetrics.java:77)
>   at org.apache.kafka.connect.runtime.Worker.(Worker.java:88)
>   at 
> org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:81)
> {code}
> It looks like in my case the generated workerId is :. The 
> workerId should be sanitized before creating the metric.
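A sketch of the kind of sanitization the report asks for, using JDK
URL-encoding to escape the ':' that JMX rejects in property values. Kafka
has its own sanitization utilities; this standalone version only
illustrates the idea.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Sketch: escape characters such as ':' that are invalid in the value
// part of a JMX ObjectName. Kafka has its own sanitization helpers;
// URL-encoding here is just one way to illustrate the fix.
class WorkerIdSanitizer {
    static String sanitize(String workerId) {
        try {
            return URLEncoder.encode(workerId, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e);  // UTF-8 is always available
        }
    }
}
```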



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3956: KAFKA-5970: Use ReentrantLock for delayed operatio...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3956


---


[GitHub] kafka pull request #3862: MINOR: Dev guide fixup

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3862


---


Jenkins build is back to normal : kafka-0.11.0-jdk7 #315

2017-10-04 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #4017: Rename streams tutorial and quickstart

2017-10-04 Thread joel-hamill
GitHub user joel-hamill opened a pull request:

https://github.com/apache/kafka/pull/4017

Rename streams tutorial and quickstart

Write your own Streams Applications -> Tutorial: Write a Streams Application
Play with a Streams Application -> Run the Streams Demo Application


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/joel-hamill/kafka joel-hamill/streams-titles

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4017.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4017






---


[jira] [Created] (KAFKA-6011) AppInfoParser should only use metrics API and should not register JMX mbeans directly

2017-10-04 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-6011:


 Summary: AppInfoParser should only use metrics API and should not 
register JMX mbeans directly
 Key: KAFKA-6011
 URL: https://issues.apache.org/jira/browse/KAFKA-6011
 Project: Kafka
  Issue Type: Bug
  Components: metrics
Reporter: Ewen Cheslack-Postava
Priority: Minor


AppInfoParser collects info about the app ID, version, and commit ID, logs
them, and exposes corresponding metrics. For some reason we ended up with
the app ID metric being registered directly with JMX, while the version and
commit ID use the metrics API. This means the app ID is not accessible to
custom metrics reporters.

This isn't a huge loss, as this is probably a rarely used metric, but we
should really only be using the metrics API. Using only the metrics API
would also reduce and centralize the places where we need to do name
mangling to handle characters that might not be valid in metric names.
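The difference can be illustrated with a toy registry: a metric registered
through a shared registry is visible to any pluggable reporter, while a
bean registered directly with JMX is invisible to it. The class below is a
simplification for illustration, not Kafka's actual Metrics API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of a metrics registry: everything added here is visible to
// pluggable reporters. An mbean registered directly with JMX would
// bypass this map entirely, which is the problem described above.
class ToyMetricsRegistry {
    private final Map<String, String> metrics = new LinkedHashMap<>();

    void addMetric(String name, String value) {
        metrics.put(name, value);
    }

    // What a custom reporter would be handed.
    Map<String, String> reportedMetrics() {
        return metrics;
    }
}
```

Routing the app ID through the same registry as the version and commit ID
would make all three visible to reporters with one code path.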



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-trunk-jdk7 #2850

2017-10-04 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-4416; Add a `--group` option to console consumer

[jason] MINOR: Use SecurityProtocol in AuthenticationContext

--
[...truncated 1.81 MB...]

org.apache.kafka.streams.KafkaStreamsTest > testStateGlobalThreadClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingUncaughtExceptionHandlerNotInCreateState STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingUncaughtExceptionHandlerNotInCreateState PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingStateListenerNotInCreateState STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingStateListenerNotInCreateState PASSED

org.apache.kafka.streams.KafkaStreamsTest > testNumberDefaultMetrics STARTED

org.apache.kafka.streams.KafkaStreamsTest > testNumberDefaultMetrics PASSED

org.apache.kafka.streams.KafkaStreamsTest > shouldReturnThreadMetadata STARTED

org.apache.kafka.streams.KafkaStreamsTest > shouldReturnThreadMetadata PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStateThreadClose STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStateThreadClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStateChanges STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStateChanges PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled 
STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden PASSED

org.apache.kafka.streams.St

[jira] [Created] (KAFKA-6010) Transient failure: MemoryRecordsBuilderTest.convertToV1WithMixedV0AndV2Data

2017-10-04 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-6010:
--

 Summary: Transient failure: 
MemoryRecordsBuilderTest.convertToV1WithMixedV0AndV2Data
 Key: KAFKA-6010
 URL: https://issues.apache.org/jira/browse/KAFKA-6010
 Project: Kafka
  Issue Type: Bug
Reporter: Ismael Juma


The issue happens with various tests that call verifyRecordsProcessingStats. 
One example:

{code}
Stacktrace
java.lang.AssertionError: Processing time not recorded
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.kafka.common.record.MemoryRecordsBuilderTest.verifyRecordsProcessingStats(MemoryRecordsBuilderTest.java:651)
at 
org.apache.kafka.common.record.MemoryRecordsBuilderTest.convertToV1WithMixedV0AndV2Data(MemoryRecordsBuilderTest.java:515)
{code}

https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/2102/tests

cc [~rsivaram]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3987: KAFKA-5990: Enable generation of metrics docs for ...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3987


---


[jira] [Resolved] (KAFKA-4890) State directory being deleted when another thread holds the lock

2017-10-04 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy resolved KAFKA-4890.
---
Resolution: Duplicate

> State directory being deleted when another thread holds the lock
> 
>
> Key: KAFKA-4890
> URL: https://issues.apache.org/jira/browse/KAFKA-4890
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
> Attachments: logs2.tar.gz, logs3.tar.gz, logs.tar.gz
>
>
> Looks like a state directory is being cleaned up when another thread already 
> has the lock:
> {code}
> 2017-03-12 20:39:17 [StreamThread-1] DEBUG o.a.k.s.p.i.ProcessorStateManager 
> - task [0_6] Registering state store perGameScoreStore to its state manager
> 2017-03-12 20:40:21 [StreamThread-3] INFO  o.a.k.s.p.i.StateDirectory - 
> Deleting obsolete state directory 0_6 for task 0_6
> 2017-03-12 20:40:22 [StreamThread-1] ERROR o.a.k.c.c.i.ConsumerCoordinator - 
> User provided listener 
> org.apache.kafka.streams.processor.internals.StreamThread$1 for group 
> fireflyProd failed on partition assignment
> org.apache.kafka.streams.errors.ProcessorStateException: Error while 
> executing put key 
> \x00\x00\x00\x00}\xA2\x9E\x9D\x05\xF6\x95\xAB\x01\x12dayOfGame and value 
> \x00\x00\x00\x00z\x00\x00\x00\x00\x00\x80G@ from store perGameScoreStore
> at 
> org.apache.kafka.streams.state.internals.RocksDBStore.putInternal(RocksDBStore.java:248)
> at 
> org.apache.kafka.streams.state.internals.RocksDBStore.access$000(RocksDBStore.java:65)
> at 
> org.apache.kafka.streams.state.internals.RocksDBStore$1.restore(RocksDBStore.java:156)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.restoreActiveState(ProcessorStateManager.java:230)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.register(ProcessorStateManager.java:193)
> at 
> org.apache.kafka.streams.processor.internals.AbstractProcessorContext.register(AbstractProcessorContext.java:99)
> at 
> org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:152)
> at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.init(ChangeLoggingKeyValueBytesStore.java:39)
> at org.apache.kafka.streams.state.internals.MeteredKeyValueStore$7.run(MeteredKeyValueStore.java:100)
> at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:188)
> at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:131)
> at org.apache.kafka.streams.state.internals.CachingKeyValueStore.init(CachingKeyValueStore.java:62)
> at org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:86)
> at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:141)
> at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:834)
> at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1207)
> at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1180)
> at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:937)
> at org.apache.kafka.streams.processor.internals.StreamThread.access$500(StreamThread.java:69)
> at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:236)
> at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:255)
> at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:339)
> at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
> at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
> at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
> at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
> at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:582)
> at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368)
> Caused by: org.rocksdb.RocksDBException: `
> at org.rocksdb.RocksDB.put(Native Method)
> at org.rocksdb.RocksDB.put(RocksDB.java:488)
> at org.apache.kafka.streams.state.internals.RocksDBStore.putInternal(RocksDBStore.java:2

[jira] [Resolved] (KAFKA-5990) Add generated documentation for Connect metrics

2017-10-04 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5990.
--
Resolution: Fixed

Issue resolved by pull request 3987
[https://github.com/apache/kafka/pull/3987]

> Add generated documentation for Connect metrics
> ---
>
> Key: KAFKA-5990
> URL: https://issues.apache.org/jira/browse/KAFKA-5990
> Project: Kafka
>  Issue Type: Sub-task
>  Components: documentation, KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 1.0.0
>
>
> KAFKA-5191 recently added a new {{MetricNameTemplate}} that is used to create 
> the {{MetricName}} objects in the producer and consumer, as well as in the 
> newly-added generation of metric documentation. The {{Metrics.toHtmlTable}} 
> method then takes these templates and generates HTML documentation for the 
> metrics.
> Change the Connect metrics to use these templates and update the build to 
> generate these metrics and include them in the Kafka documentation.
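The template-driven generation described above can be sketched roughly as follows. This is a simplified, self-contained illustration: the class and method below are stand-ins for Kafka's actual {{MetricNameTemplate}} and {{Metrics.toHtmlTable}}, whose real signatures differ.

```java
import java.util.List;

// Hypothetical, simplified stand-ins for the MetricNameTemplate mechanism
// described above; Kafka's real classes live in org.apache.kafka.common.
public class MetricsDocSketch {

    // A template pairs a metric name with its description, so the same
    // definition can back both MetricName creation and doc generation.
    static final class MetricTemplate {
        final String name;
        final String description;
        MetricTemplate(String name, String description) {
            this.name = name;
            this.description = description;
        }
    }

    // Generates an HTML table from the templates, roughly what a
    // toHtmlTable-style method would do during the docs build.
    static String toHtmlTable(String mbeanGroup, List<MetricTemplate> templates) {
        StringBuilder sb = new StringBuilder();
        sb.append("<table>\n");
        // Header row carrying the general mbean group name/format.
        sb.append("<tr><th colspan=2>").append(mbeanGroup).append("</th></tr>\n");
        sb.append("<tr><th>Metric</th><th>Description</th></tr>\n");
        for (MetricTemplate t : templates) {
            sb.append("<tr><td>").append(t.name).append("</td><td>")
              .append(t.description).append("</td></tr>\n");
        }
        sb.append("</table>");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toHtmlTable("kafka.connect:type=connect-worker-metrics",
                List.of(new MetricTemplate("task-count",
                        "The number of tasks run in this worker."))));
    }
}
```

Defining each metric once as a template is what lets the producer/consumer code and the generated documentation stay in sync.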



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-trunk-jdk8 #2102

2017-10-04 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-5738; Upgrade note for cumulative count metric (KIP-187)

[jason] KAFKA-4416; Add a `--group` option to console consumer

[jason] MINOR: Use SecurityProtocol in AuthenticationContext

--
[...truncated 2.96 MB...]
org.apache.kafka.common.security.JaasContextTest > testControlFlag PASSED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
noAuthorizationIdSpecified STARTED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
noAuthorizationIdSpecified PASSED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdEqualsAuthenticationId STARTED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdEqualsAuthenticationId PASSED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdNotEqualsAuthenticationId STARTED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdNotEqualsAuthenticationId PASSED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testProducerWithInvalidCredentials STARTED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testProducerWithInvalidCredentials PASSED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testTransactionalProducerWithInvalidCredentials STARTED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testTransactionalProducerWithInvalidCredentials PASSED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testConsumerWithInvalidCredentials STARTED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testConsumerWithInvalidCredentials PASSED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testAdminClientWithInvalidCredentials STARTED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testAdminClientWithInvalidCredentials PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingUsernameSaslPlain STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingUsernameSaslPlain PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testValidSaslScramMechanisms STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testValidSaslScramMechanisms PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramSslServerWithoutSaslAuthenticateHeaderFailure STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramSslServerWithoutSaslAuthenticateHeaderFailure PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramPlaintextServerWithoutSaslAuthenticateHeaderFailure STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramPlaintextServerWithoutSaslAuthenticateHeaderFailure PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramPlaintextServerWithoutSaslAuthenticateHeader STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramPlaintextServerWithoutSaslAuthenticateHeader PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMechanismPluggability STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMechanismPluggability PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testScramUsernameWithSpecialCharacters STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testScramUsernameWithSpecialCharacters PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testApiVersionsRequestWithUnsupportedVersion STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testApiVersionsRequestWithUnsupportedVersion PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingPasswordSaslPlain STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingPasswordSaslPlain PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testInvalidLoginModule STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testInvalidLoginModule PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainPlaintextClientWithoutSaslAuthenticateHeader STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainPlaintextClientWithoutSaslAuthenticateHeader PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainSslClientWithoutSaslAuthenticateHeader STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTes

[GitHub] kafka pull request #3937: KAFKA-5856 AdminClient.createPartitions() follow u...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3937


---


[jira] [Created] (KAFKA-6009) Fix formatting of autogenerated docs tables

2017-10-04 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-6009:


 Summary: Fix formatting of autogenerated docs tables
 Key: KAFKA-6009
 URL: https://issues.apache.org/jira/browse/KAFKA-6009
 Project: Kafka
  Issue Type: Sub-task
  Components: documentation
Reporter: Ewen Cheslack-Postava


In reviewing https://github.com/apache/kafka/pull/3987 I noticed that the 
autogenerated tables currently differ from the manually created ones. The 
manual ones have 3 columns -- metric/attribute name, description, and mbean 
name with a regex. The new ones have 3 columns, but for some reason the first 
one is just empty, the second is the metric/attribute name, the last one is the 
description, and there is no regex.

We could potentially just drop to two columns, since the regex column is 
generally very repetitive and is now handled by a header row giving the general 
group mbean name info/format. The one thing that then seems to be missing is 
the regex that would restrict the format of these names (although the regexes 
weren't really technically enforced, and some of the restrictions are being 
removed; e.g. see some of the follow-up discussion to 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-190%3A+Handle+client-ids+consistently+between+clients+and+brokers).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4016: MINOR: Simplify log cleaner and fix compiler warni...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4016


---


[GitHub] kafka pull request #4015: KAFKA-6004: Allow authentication providers to over...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4015


---


[jira] [Resolved] (KAFKA-6004) Enable custom authentication plugins to return error messages to clients

2017-10-04 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-6004.
---
Resolution: Fixed

Issue resolved by pull request 4015
[https://github.com/apache/kafka/pull/4015]

> Enable custom authentication plugins to return error messages to clients
> 
>
> Key: KAFKA-6004
> URL: https://issues.apache.org/jira/browse/KAFKA-6004
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 1.0.0
>
>
> KIP-152 enables authentication failures to be returned to clients to simplify 
> diagnosis of security configuration issues. At the moment, a fixed message is 
> returned to clients by SaslServerAuthenticator which says "Authentication 
> failed due to invalid credentials with SASL mechanism $mechanism".
> We have added an error message string to SaslAuthenticateResponse to return 
> custom messages from the broker to clients. Custom SASL server 
> implementations may want to return more specific error messages in some 
> cases. We should allow this by returning error messages from specific 
> exceptions (e.g. org.apache.kafka.common.errors.SaslAuthenticationException) 
> in SaslAuthenticateResponse. It would be better not to return the error 
> message from SaslException since it may contain information that we do not 
> want to leak to clients.
> We should do this for 1.0.0 to avoid compatibility issues later, since 
> third-party implementors of SASL servers may assume that 
> SaslAuthenticationException is only logged on the server and not sent to 
> clients, making it a security risk to update later.
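The pattern being proposed — a custom SASL server surfacing a client-safe message through a specific exception type — might look like the sketch below. The exception class here is a local stand-in for {{org.apache.kafka.common.errors.SaslAuthenticationException}}, and the credential checks are invented for illustration.

```java
// Sketch of the pattern KAFKA-6004 describes: a custom SASL server throws an
// exception whose message is explicitly safe to return to clients, instead of
// the fixed "Authentication failed due to invalid credentials..." text.
public class SaslErrorSketch {

    // Local stand-in for org.apache.kafka.common.errors.SaslAuthenticationException.
    static class SaslAuthenticationException extends RuntimeException {
        SaslAuthenticationException(String message) { super(message); }
    }

    // A custom credential check; both messages below are deliberately written
    // to be safe to leak to clients (unlike a raw SaslException message).
    static void checkCredentials(String user, char[] password) {
        if (user == null || user.isEmpty()) {
            throw new SaslAuthenticationException("Username must not be empty");
        }
        if (password == null || password.length == 0) {
            throw new SaslAuthenticationException(
                "Password expired for user " + user + "; please reset it");
        }
    }

    public static void main(String[] args) {
        try {
            checkCredentials("alice", new char[0]);
        } catch (SaslAuthenticationException e) {
            // This message is what would travel back in SaslAuthenticateResponse.
            System.out.println(e.getMessage());
        }
    }
}
```

The key design point is that only messages from this one exception type are forwarded; anything else stays server-side.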



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5569) Document any changes from this task

2017-10-04 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-5569.

Resolution: Not A Problem

There is nothing to document for the 1.0 release besides KIP-161, which is 
done already.

> Document any changes from this task
> ---
>
> Key: KAFKA-5569
> URL: https://issues.apache.org/jira/browse/KAFKA-5569
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 1.0.0
>
>
> After fixing the exceptions, document what was done, e.g., KIP-161 at a 
> minimum.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3988: KAFKA-5967 Ineffective check of negative value in ...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3988


---


Re: [DISCUSS] KIP-158: Kafka Connect should allow source connectors to set topic-specific settings for new topics

2017-10-04 Thread Randall Hauch
Currently the KIP's scope is only topics that don't yet exist, and we have
to be cognizant of race conditions between tasks of the same connector. I
think it is worthwhile to consider whether the KIP's scope should expand to
also address *existing* topics, though it may not be appropriate to
have as much control when changing the topic settings for an existing
topic. For example, changing the number of partitions (which the KIP
considers a "topic-specific setting" even though technically it is not)
shouldn't be done blindly due to the partitioning impacts, and IIRC you
can't reduce them (which we could verify before applying). Also, I don't
think the AdminClient currently allows changing the replication factor. I
think changing the topic configs is less problematic, both in terms of what
makes sense for connectors to verify/change and what the AdminClient
supports.

Even if we decide that it's not appropriate to change the settings on an
existing topic, I do think it's advantageous to at least notify the
connector (or task) prior to the first record sent to a given topic so that
the connector can fail or issue a warning if it doesn't meet its
requirements.
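The "verify before applying" idea for partition counts could be sketched as below; the helper is hypothetical and not part of the KIP.

```java
// Hypothetical pre-flight check for an existing topic, illustrating the point
// above: partition counts can only grow, so a reduction should be rejected
// before any change is attempted.
public class TopicSettingsCheck {

    static void validatePartitionChange(String topic, int current, int desired) {
        if (desired < current) {
            // Kafka does not support reducing partition counts.
            throw new IllegalArgumentException(
                "Topic " + topic + " has " + current + " partitions; "
                + "cannot reduce to " + desired);
        }
        if (desired > current) {
            // Growing is allowed, but changes key-to-partition mapping.
            System.out.println("Topic " + topic + " would grow from "
                + current + " to " + desired + " partitions; "
                + "note the repartitioning impact on keyed data.");
        }
    }

    public static void main(String[] args) {
        validatePartitionChange("db-history", 3, 6);  // growing: allowed, with a warning
    }
}
```

A connector could run a check like this (or simply fail with a warning, per the paragraph above) before its first record is sent to the topic.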

Best regards,

Randall

On Wed, Oct 4, 2017 at 12:52 AM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> Hi Randall,
>
> Thanks for the KIP. I like it
> What happens when the target topic is already created but the configs do
> not match?
> i.e. wrong RF, num partitions, or missing / additional configs? Will you
> attempt to apply the necessary changes or throw an error?
>
> Thanks!
> Stephane
>
>
> On 24/5/17, 5:59 am, "Mathieu Fenniak" 
> wrote:
>
> Ah, yes, I see you highlighted a part that should've made this clear
> to me on the first read. :-)  Much clearer now!
>
> By the way, enjoyed your Debezium talk in NYC.
>
> Looking forward to this Kafka Connect change; it will allow me to
> remove a post-deployment tool that I hacked together for the purpose
> of ensuring auto-created topics have the right config.
>
> Mathieu
>
>
> On Tue, May 23, 2017 at 11:38 AM, Randall Hauch 
> wrote:
> > Thanks for the quick feedback, Mathieu. Yes, the first configuration
> rule
> > whose regex matches will be applied, and no other rules will be
> used. I've
> > updated the KIP to try to make this more clear, but let me know if
> it's
> > still not clear.
> >
> > Best regards,
> >
> > Randall
> >
> > On Tue, May 23, 2017 at 10:07 AM, Mathieu Fenniak <
> > mathieu.fenn...@replicon.com> wrote:
> >
> >> Hi Randall,
> >>
> >> Awesome, very much looking forward to this.
> >>
> >> It isn't 100% clear from the KIP how multiple config-based rules
> would
> >> be applied; it looks like the first configuration rule whose regex
> >> matches the topic name will be used, and no other rules will be
> >> applied.  Is that correct?  (I wasn't sure if it might cascade
> >> together multiple matching rules...)
> >>
> >> Looks great,
> >>
> >> Mathieu
> >>
> >>
> >> On Mon, May 22, 2017 at 1:43 PM, Randall Hauch 
> wrote:
> >> > Hi, all.
> >> >
> >> > We recently added the ability for Kafka Connect to create
> *internal*
> >> topics
> >> > using the new AdminClient, but it still would be great if Kafka
> Connect
> >> > could do this for new topics that result from source connector
> records.
> >> > I've outlined an approach to do this in "KIP-158 Kafka Connect
> should
> >> allow
> >> > source connectors to set topic-specific settings for new topics".
> >> >
> >> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-158%3A+Kafka+Connect+should+allow+source+connectors+to+set+topic-specific+settings+for+new+topics
> >> >
> >> > Please take a look and provide feedback. Thanks!
> >> >
> >> > Best regards,
> >> >
> >> > Randall
> >>
>
>
>
>


Jenkins build is back to normal : kafka-trunk-jdk8 #2101

2017-10-04 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #3863: MINOR: Use SecurityProtocol in AuthenticationConte...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3863


---


[GitHub] kafka pull request #2150: KAFKA-4416: Add a `--group` option to console cons...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2150


---


[jira] [Resolved] (KAFKA-5842) QueryableStateIntegrationTest may fail with JDK 7

2017-10-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved KAFKA-5842.
---
Resolution: Cannot Reproduce

> QueryableStateIntegrationTest may fail with JDK 7
> -
>
> Key: KAFKA-5842
> URL: https://issues.apache.org/jira/browse/KAFKA-5842
> Project: Kafka
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>
> Found the following when running test suite for 0.11.0.1 RC0 :
> {code}
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> concurrentAccesses FAILED
> java.lang.AssertionError: Key not found one
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyGreaterOrEqual(QueryableStateIntegrationTest.java:893)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.concurrentAccesses(QueryableStateIntegrationTest.java:399)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4014: KAFKA-5738: Upgrade note for cumulative count metr...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4014


---


[GitHub] kafka pull request #4013: KAFKA-4764: Upgrade notes for authentication failu...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4013


---


[GitHub] kafka pull request #4016: MINOR: Simplify log cleaner and fix compiler warni...

2017-10-04 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/4016

MINOR: Simplify log cleaner and fix compiler warnings

- Simplify LogCleaner.cleanSegments and add comment regarding thread
unsafe usage of `LogSegment.append`. This was a result of investigating
KAFKA-4972.
- Fix compiler warnings.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
simplify-log-cleaner-and-fix-warnings

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4016.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4016


commit 3b26b21c4a41b9857d48a09a63a560228924df4f
Author: Ismael Juma 
Date:   2017-10-04T13:57:03Z

Simplify LogCleaner.cleanSegments and add comment regarding thread unsafe 
usage of `LogSegment.append`

commit a1e50d8fbffc977646397f0446efeaa798816d87
Author: Ismael Juma 
Date:   2017-10-04T13:57:20Z

Fix compiler warnings




---


[GitHub] kafka pull request #4015: KAFKA-6004: Allow authentication providers to over...

2017-10-04 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/4015

KAFKA-6004: Allow authentication providers to override error message



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-6004-auth-exception

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4015.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4015


commit 8cbbbae3f3da625856db27b2fdf4004c9980931e
Author: Rajini Sivaram 
Date:   2017-10-04T13:27:30Z

KAFKA-6004: Allow authentication providers to override error message




---


[GitHub] kafka pull request #4014: KAFKA-5738: Upgrade note for cumulative count metr...

2017-10-04 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/4014

KAFKA-5738: Upgrade note for cumulative count metric (KIP-187)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka MINOR-upgrade-KIP-187

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4014.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4014


commit 146a05fce547c559602799b81188793ed749d7a8
Author: Rajini Sivaram 
Date:   2017-10-04T12:54:46Z

KAFKA-5738: Upgrade note for cumulative count metric (KIP-187)




---


[GitHub] kafka pull request #4013: KAFKA-4764: Upgrade notes for authentication failu...

2017-10-04 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/4013

KAFKA-4764: Upgrade notes for authentication failure handling (KIP-152)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka MINOR-upgrade-auth-failure

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4013.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4013


commit 6e4630515162aa058a2856f1efd4118a0f834c3f
Author: Rajini Sivaram 
Date:   2017-10-04T11:51:25Z

KAFKA-4764: Upgrade notes for authentication failure handling (KIP-152)




---


[GitHub] kafka pull request #4012: KAFKA-6008: Sanitize the Kafka Connect workerId be...

2017-10-04 Thread scholzj
GitHub user scholzj opened a pull request:

https://github.com/apache/kafka/pull/4012

KAFKA-6008: Sanitize the Kafka Connect workerId before passing it to 
AppInfoParser



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/scholzj/kafka KAFKA-6008

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4012.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4012


commit 8aac02d76ca9bc723a58bd54965977c7a2a6dfec
Author: Jakub Scholz 
Date:   2017-10-04T11:26:21Z

Sanitize the workerId before passing it to AppInfoParser




---


[jira] [Created] (KAFKA-6008) Kafka Connect: Unsanitized workerID causes exception during startup

2017-10-04 Thread Jakub Scholz (JIRA)
Jakub Scholz created KAFKA-6008:
---

 Summary: Kafka Connect: Unsanitized workerID causes exception 
during startup
 Key: KAFKA-6008
 URL: https://issues.apache.org/jira/browse/KAFKA-6008
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 1.0.0
 Environment: MacOS, Java 1.8.0_77-b03
Reporter: Jakub Scholz
 Fix For: 1.0.0


When Kafka Connect starts, it seems to use an unsanitized workerId when 
creating metrics. As a result it throws the following exception:
{code}
[2017-10-04 13:16:08,886] WARN Error registering AppInfo mbean 
(org.apache.kafka.common.utils.AppInfoParser:66)
javax.management.MalformedObjectNameException: Invalid character ':' in value 
part of property
at javax.management.ObjectName.construct(ObjectName.java:618)
at javax.management.ObjectName.<init>(ObjectName.java:1382)
at org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:60)
at org.apache.kafka.connect.runtime.ConnectMetrics.<init>(ConnectMetrics.java:77)
at org.apache.kafka.connect.runtime.Worker.<init>(Worker.java:88)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:81)
{code}

It looks like in my case the generated workerId is :. The 
workerId should be sanitized before creating the metric.
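A sanitization step along these lines would avoid the MalformedObjectNameException. This is a sketch only: Kafka has its own sanitizer utility, and the exact escaping scheme it uses may differ from the simple replacement below.

```java
import javax.management.ObjectName;

// Sketch of sanitizing a workerId such as "127.0.0.1:8083" before it is used
// as a JMX ObjectName property value. The replacement scheme below is just
// one plausible approach, not Kafka's actual one.
public class WorkerIdSanitizer {

    static String sanitize(String workerId) {
        // Keep alphanumerics plus a few safe chars; replace everything else,
        // including ':' and ',' and '=', which are special in ObjectNames.
        return workerId.replaceAll("[^a-zA-Z0-9._-]", "_");
    }

    public static void main(String[] args) throws Exception {
        String raw = "127.0.0.1:8083";
        // With the raw id this constructor throws MalformedObjectNameException;
        // with the sanitized id it succeeds.
        ObjectName name = new ObjectName(
            "kafka.connect:type=app-info,id=" + sanitize(raw));
        System.out.println(sanitize(raw));  // prints 127.0.0.1_8083
    }
}
```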



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP-201: Rationalising Policy interfaces

2017-10-04 Thread Tom Bentley
Good point. Then I guess I can do those items too. I would also need to do
the same changes for DeleteRecordsRequest and Response.

On 4 October 2017 at 10:37, Ismael Juma  wrote:

> Those two points are related to policies in the following sense:
>
> 1. A policy that can't send errors to clients is much less useful
> 2. Testing policies is much easier with `validateOnly`
>
> Ismael
>
> On Wed, Oct 4, 2017 at 9:20 AM, Tom Bentley  wrote:
>
> > Thanks Edoardo,
> >
> > I've added that motivation to the KIP.
> >
> > KIP-201 doesn't address two points raised in KIP-170: Adding a
> > validationOnly flag to
> > DeleteTopicRequest and adding an error message to DeleteTopicResponse.
> > Since those are not policy-related I think they're best left out of
> > KIP-201. I suppose it is up to you and Mickael whether to narrow the
> scope
> > of KIP-170 to address those points.
> >
> > Thanks again,
> >
> > Tom
> >
> > On 4 October 2017 at 08:20, Edoardo Comar  wrote:
> >
> > > Thanks Tom,
> > > looks good to me and KIP-201 could supersede KIP-170
> > > but could you please add a missing motivation bullet that was behind
> > > KIP-170:
> > >
> > > introducing ClusterState to allow validation of create/alter topic
> > request
> > >
> > > not just against the request metadata but also
> > > against the current amount of resources already used in the cluster (eg
> > > number of partitions).
> > >
> > > thanks
> > > Edo
> > > --
> > >
> > > Edoardo Comar
> > >
> > > IBM Message Hub
> > >
> > > IBM UK Ltd, Hursley Park, SO21 2JN
> > >
> > >
> > >
> > > From:   Tom Bentley 
> > > To: dev@kafka.apache.org
> > > Date:   02/10/2017 15:15
> > > Subject:Re: [DISCUSS] KIP-201: Rationalising Policy interfaces
> > >
> > >
> > >
> > > Hi All,
> > >
> > > I've updated KIP-201 again so there is now a single policy interface
> (and
> > > thus a single key by which to configure it) for topic creation,
> > > modification, deletion and record deletion, which each have their own
> > > validation method.
> > >
> > > There are still a few loose ends:
> > >
> > > 1. I currently propose validateAlterTopic(), but it would be possible
> to
> > > be
> > > more fine grained about this: validateAlterConfig(),
> validAddPartitions()
> > > and validateReassignPartitions(), for example. Obviously this results
> in
> > a
> > > policy method per operation, and makes it more clear what is being
> > > changed.
> > > I guess the downside is it's more work for the implementer, and potentially
> > > makes it harder to change the interface in the future.
> > >
> > > 2. A couple of TODOs about what the TopicState interface should return
> > > when
> > > a topic's partitions are being reassigned.
> > >
> > > Your thoughts on these or any other points are welcome.
> > >
> > > Thanks,
> > >
> > > Tom
> > >
> > > On 27 September 2017 at 11:45, Paolo Patierno 
> > wrote:
> > >
> > > > Hi Ismael,
> > > >
> > > >
> > > >   1.  I don't have a real requirement now but "deleting" is an
> > operation
> > > > that could be really dangerous so it's always better having a way for
> > > > having more control on that. I know that we have the authorizer used
> > for
> > > > that (delete on topic) but fine grained control could be better (even
> > > > already happens for topic deletion).
> > > >   2.  I know about the problem of restarting broker due to changes on
> > > > policies but what do you mean by doing that on the clients ?
> > > >
> > > >
> > > > Paolo Patierno
> > > > Senior Software Engineer (IoT) @ Red Hat
> > > > Microsoft MVP on Azure & IoT
> > > > Microsoft Azure Advisor
> > > >
> > > > Twitter : @ppatierno <http://twitter.com/ppatierno>
> > > >
> > > > Linkedin : paolopatierno <http://it.linkedin.com/in/paolopatierno>
> > > >
> > > > Blog : DevExperience <http://paolopatierno.wordpress.com/>
> > > >
> > > >
> > > >
> > > > 
> > > > From: isma...@gmail.com  on behalf of Ismael
> Juma <
> > > > ism...@juma.me.uk>
> > > > Sent: Wednesday, September 27, 2017 10:30 AM
> > > > To: dev@kafka.apache.org
> > > > Subject: Re: [DISCUSS] KIP-201: Rationalising Policy interfaces
> > > >
> > > > A couple of questions:
> > > >
> > > > 1. Is this a concret
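For concreteness, the single policy interface with per-operation validation methods that Tom describes might take roughly this shape. All type and method names here are guesses based on the discussion, not KIP-201's actual API.

```java
// Rough sketch of a unified topic-management policy per the discussion above.
// All names are illustrative; the actual KIP-201 proposal may differ.
public class PolicySketch {

    // Minimal view of a topic's state, per the TopicState idea in the thread.
    interface TopicState {
        int numPartitions();
        short replicationFactor();
    }

    // One interface (one config key) covering the operations discussed,
    // each with its own validation method.
    interface TopicManagementPolicy {
        void validateCreateTopic(String topic, TopicState requested);
        void validateAlterTopic(String topic, TopicState current, TopicState requested);
        void validateDeleteTopic(String topic, TopicState current);
    }

    // Example policy: cap partition counts, e.g. against cluster resources.
    static class MaxPartitionsPolicy implements TopicManagementPolicy {
        private final int maxPartitions;
        MaxPartitionsPolicy(int maxPartitions) { this.maxPartitions = maxPartitions; }

        public void validateCreateTopic(String topic, TopicState requested) {
            if (requested.numPartitions() > maxPartitions)
                throw new IllegalStateException("Too many partitions for " + topic);
        }
        public void validateAlterTopic(String topic, TopicState current, TopicState requested) {
            validateCreateTopic(topic, requested);  // same cap applies on alter
        }
        public void validateDeleteTopic(String topic, TopicState current) { }
    }

    public static void main(String[] args) {
        TopicManagementPolicy policy = new MaxPartitionsPolicy(100);
        policy.validateCreateTopic("events", new TopicState() {
            public int numPartitions() { return 12; }
            public short replicationFactor() { return 3; }
        });
        System.out.println("create validated");
    }
}
```

Splitting validateAlterTopic into finer-grained methods (validateAlterConfig, validAddPartitions, validateReassignPartitions) would change only the second interface above; the trade-off is exactly the one Tom notes: clearer intent per operation versus more work for implementers.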

Re: [DISCUSS] KIP-201: Rationalising Policy interfaces

2017-10-04 Thread Ismael Juma
Those two points are related to policies in the following sense:

1. A policy that can't send errors to clients is much less useful
2. Testing policies is much easier with `validateOnly`

Ismael

On Wed, Oct 4, 2017 at 9:20 AM, Tom Bentley  wrote:

> Thanks Edoardo,
>
> I've added that motivation to the KIP.
>
> KIP-201 doesn't address two points raised in KIP-170: Adding a
> validationOnly flag to
> DeleteTopicRequest and adding an error message to DeleteTopicResponse.
> Since those are not policy-related I think they're best left out of
> KIP-201. I suppose it is up to you and Mickael whether to narrow the scope
> of KIP-170 to address those points.
>
> Thanks again,
>
> Tom
>
> On 4 October 2017 at 08:20, Edoardo Comar  wrote:
>
> > Thanks Tom,
> > looks good to me and KIP-201 could supersede KIP-170
> > but could you please add a missing motivation bullet that was behind
> > KIP-170:
> >
> > introducing ClusterState to allow validation of create/alter topic
> request
> >
> > not just against the request metadata but also
> > against the current amount of resources already used in the cluster (eg
> > number of partitions).
> >
> > thanks
> > Edo
> > --
> >
> > Edoardo Comar
> >
> > IBM Message Hub
> >
> > IBM UK Ltd, Hursley Park, SO21 2JN
> >
> >
> >
> > From:   Tom Bentley 
> > To: dev@kafka.apache.org
> > Date:   02/10/2017 15:15
> > Subject:Re: [DISCUSS] KIP-201: Rationalising Policy interfaces
> >
> >
> >
> > Hi All,
> >
> > I've updated KIP-201 again so there is now a single policy interface (and
> > thus a single key by which to configure it) for topic creation,
> > modification, deletion and record deletion, which each have their own
> > validation method.
> >
> > There are still a few loose ends:
> >
> > 1. I currently propose validateAlterTopic(), but it would be possible to
> > be
> > more fine grained about this: validateAlterConfig(), validAddPartitions()
> > and validateReassignPartitions(), for example. Obviously this results in
> a
> > policy method per operation, and makes it more clear what is being
> > changed.
> > I guess the downside is it's more work for the implementer, and potentially
> > makes it harder to change the interface in the future.
> >
> > 2. A couple of TODOs about what the TopicState interface should return
> > when
> > a topic's partitions are being reassigned.
> >
> > Your thoughts on these or any other points are welcome.
> >
> > Thanks,
> >
> > Tom
> >
> > On 27 September 2017 at 11:45, Paolo Patierno 
> wrote:
> >
> > > Hi Ismael,
> > >
> > >
> > >   1.  I don't have a real requirement now but "deleting" is an
> operation
> > > that could be really dangerous so it's always better having a way for
> > > having more control on that. I know that we have the authorizer used
> for
> > > that (delete on topic) but fine grained control could be better (even
> > > already happens for topic deletion).
> > >   2.  I know about the problem of restarting broker due to changes on
> > > policies but what do you mean by doing that on the clients ?
> > >
> > >
> > > Paolo Patierno
> > > Senior Software Engineer (IoT) @ Red Hat
> > > Microsoft MVP on Azure & IoT
> > > Microsoft Azure Advisor
> > >
> > > Twitter : @ppatierno<
> > https://urldefense.proofpoint.com/v2/url?u=http-3A__twitter.
> > com_ppatierno&d=DwIBaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=
> > EzRhmSah4IHsUZVekRUIINhltZK7U0OaeRo7hgW4_tQ&m=h-D-nA7uiy1Z-jta5y-
> > yh7dKgV77XtsUnJ9Rab1gheY&s=43hzTLEDKw2v5Vh0zwkMTaaKD-
> HdJD8d_F4-Bsw25-Y&e=
> > >
> > > Linkedin : paolopatierno<
> > https://urldefense.proofpoint.com/v2/url?u=http-3A__it.
> > linkedin.com_in_paolopatierno&d=DwIBaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=
> > EzRhmSah4IHsUZVekRUIINhltZK7U0OaeRo7hgW4_tQ&m=h-D-nA7uiy1Z-jta5y-
> > yh7dKgV77XtsUnJ9Rab1gheY&s=Ig0N7Nwf9EHfTJ2pH3jRM1JIdlzXw6
> R5Drocu0TMRLk&e=
> > >
> > > Blog : DevExperience<
> > https://urldefense.proofpoint.com/v2/url?u=http-3A__
> > paolopatierno.wordpress.com_&d=DwIBaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=
> > EzRhmSah4IHsUZVekRUIINhltZK7U0OaeRo7hgW4_tQ&m=h-D-nA7uiy1Z-jta5y-
> > yh7dKgV77XtsUnJ9Rab1gheY&s=Tc9NrTtG2GP7-zRjOHkXHfYI0rncO8_
> jKpedna692z4&e=
> > >
> > >
> > >
> > > 
> > > From: isma...@gmail.com  on behalf of Ismael Juma <
> > > ism...@juma.me.uk>
> > > Sent: Wednesday, September 27, 2017 10:30 AM
> > > To: dev@kafka.apache.org
> > > Subject: Re: [DISCUSS] KIP-201: Rationalising Policy interfaces
> > >
> > > A couple of questions:
> > >
> > > 1. Is this a concrete requirement from a user or is it hypothetical?
> > > 2. You sure you would want to do this in the broker instead of the
> > clients?
> > > It's worth remembering that updating broker policies involves a rolling
> > > restart of the cluster, so it's not the right place for things that
> > change
> > > frequently.
> > >
> > > Ismael
> > >
> > > On Wed, Sep 27, 2017 at 11:26 AM, Paolo Patierno 
> > > wrote:
> > >
> > > > Hi Ismael,
> > > >
> > > > regarding motivation

Re: [DISCUSS] KIP-201: Rationalising Policy interfaces

2017-10-04 Thread Tom Bentley
Thanks Edoardo,

I've added that motivation to the KIP.

KIP-201 doesn't address two points raised in KIP-170: Adding a
validationOnly flag to
DeleteTopicRequest and adding an error message to DeleteTopicResponse.
Since those are not policy-related I think they're best left out of
KIP-201. I suppose it is up to you and Mickael whether to narrow the scope
of KIP-170 to address those points.
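
For concreteness, a rough sketch of the single-interface shape described
above (all names, signatures, and the PolicyViolationException stand-in are
illustrative guesses, not the final KIP-201 API):

```java
// Rough sketch of the single policy interface proposed in KIP-201.
// All names and signatures are illustrative, not the final Kafka API.
import java.util.Map;

public class TopicPolicySketch {

    /** Stand-in for Kafka's PolicyViolationException. */
    static class PolicyViolationException extends RuntimeException {
        PolicyViolationException(String msg) { super(msg); }
    }

    /** One interface (and thus one config key) with a validate method per operation. */
    interface TopicManagementPolicy {
        void validateCreateTopic(String topic, int numPartitions) throws PolicyViolationException;
        void validateAlterTopic(String topic, Map<String, String> changedConfigs) throws PolicyViolationException;
        void validateDeleteTopic(String topic) throws PolicyViolationException;
        void validateDeleteRecords(String topic, long beforeOffset) throws PolicyViolationException;
    }

    /** Example policy: forbid deleting topics under the "audit." prefix. */
    static class NoAuditDeletionPolicy implements TopicManagementPolicy {
        @Override public void validateCreateTopic(String topic, int numPartitions) { }
        @Override public void validateAlterTopic(String topic, Map<String, String> changedConfigs) { }
        @Override public void validateDeleteTopic(String topic) {
            if (topic.startsWith("audit."))
                throw new PolicyViolationException("May not delete audit topic: " + topic);
        }
        @Override public void validateDeleteRecords(String topic, long beforeOffset) { }
    }

    /** Helper used below: true if the policy permits deleting the topic. */
    static boolean deletionAllowed(TopicManagementPolicy policy, String topic) {
        try {
            policy.validateDeleteTopic(topic);
            return true;
        } catch (PolicyViolationException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        TopicManagementPolicy policy = new NoAuditDeletionPolicy();
        System.out.println(deletionAllowed(policy, "clickstream"));    // true
        System.out.println(deletionAllowed(policy, "audit.payments")); // false
    }
}
```

The point of the single interface is that it implies a single policy config
key, rather than one key per operation.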

Thanks again,

Tom

On 4 October 2017 at 08:20, Edoardo Comar  wrote:

> Thanks Tom,
> looks good to me and KIP-201 could supersede KIP-170
> but could you please add a missing motivation bullet that was behind
> KIP-170:
>
> introducing ClusterState to allow validation of create/alter topic request
>
> not just against the request metadata but also
> against the current amount of resources already used in the cluster (eg
> number of partitions).
>
> thanks
> Edo
> --
>
> Edoardo Comar
>
> IBM Message Hub
>
> IBM UK Ltd, Hursley Park, SO21 2JN
>
>
>
> From:   Tom Bentley 
> To: dev@kafka.apache.org
> Date:   02/10/2017 15:15
> Subject:Re: [DISCUSS] KIP-201: Rationalising Policy interfaces
>
>
>
> Hi All,
>
> I've updated KIP-201 again so there is now a single policy interface (and
> thus a single key by which to configure it) for topic creation,
> modification, deletion and record deletion, which each have their own
> validation method.
>
> There are still a few loose ends:
>
> 1. I currently propose validateAlterTopic(), but it would be possible to
> be
> more fine grained about this: validateAlterConfig(), validAddPartitions()
> and validateReassignPartitions(), for example. Obviously this results in a
> policy method per operation, and makes it clearer what is being changed.
> I guess the downside is it's more work for the implementer, and it
> potentially makes it harder to change the interface in the future.
>
> 2. A couple of TODOs about what the TopicState interface should return
> when
> a topic's partitions are being reassigned.
>
> Your thoughts on these or any other points are welcome.
>
> Thanks,
>
> Tom
>
> On 27 September 2017 at 11:45, Paolo Patierno  wrote:
>
> > Hi Ismael,
> >
> >
> >   1.  I don't have a real requirement now, but "deleting" is an
> > operation that could be really dangerous, so it's always better to have
> > a way to exert more control over it. I know that we have the authorizer
> > for that (delete on topic), but fine-grained control could be better
> > (as already happens for topic deletion).
> >   2.  I know about the problem of restarting brokers due to changes in
> > policies, but what do you mean by doing that on the clients?
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno<http://twitter.com/ppatierno>
> > Linkedin : paolopatierno<http://it.linkedin.com/in/paolopatierno>
> > Blog : DevExperience<http://paolopatierno.wordpress.com/>
> >
> >
> > 
> > From: isma...@gmail.com  on behalf of Ismael Juma <
> > ism...@juma.me.uk>
> > Sent: Wednesday, September 27, 2017 10:30 AM
> > To: dev@kafka.apache.org
> > Subject: Re: [DISCUSS] KIP-201: Rationalising Policy interfaces
> >
> > A couple of questions:
> >
> > 1. Is this a concrete requirement from a user or is it hypothetical?
> > 2. You sure you would want to do this in the broker instead of the
> clients?
> > It's worth remembering that updating broker policies involves a rolling
> > restart of the cluster, so it's not the right place for things that
> change
> > frequently.
> >
> > Ismael
> >
> > On Wed, Sep 27, 2017 at 11:26 AM, Paolo Patierno 
> > wrote:
> >
> > > Hi Ismael,
> > >
> > > regarding motivations for delete records, as I said during the
> discussion
> > > on KIP-204, it gives the possibility to avoid deleting messages for
> > > specific partitions (inside the topic) and starting from a specific
> > offset.
> > > I could think on some users solutions where they know exactly what the
> > > partitions means in a specific topic (because they are using a custom
> > > partitioner on the producer side) so they know what kind of messages
> are
> > > inside a partition allowing to delete them but not the others.  In
> such a
> > > po

Build failed in Jenkins: kafka-trunk-jdk8 #2100

2017-10-04 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Java 9 version handling improvements

--
[...truncated 1.39 MB...]

org.apache.kafka.common.security.plain.PlainSaslServerTest > noAuthorizationIdSpecified STARTED
org.apache.kafka.common.security.plain.PlainSaslServerTest > noAuthorizationIdSpecified PASSED
org.apache.kafka.common.security.plain.PlainSaslServerTest > authorizatonIdEqualsAuthenticationId STARTED
org.apache.kafka.common.security.plain.PlainSaslServerTest > authorizatonIdEqualsAuthenticationId PASSED
org.apache.kafka.common.security.plain.PlainSaslServerTest > authorizatonIdNotEqualsAuthenticationId STARTED
org.apache.kafka.common.security.plain.PlainSaslServerTest > authorizatonIdNotEqualsAuthenticationId PASSED
org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest > testProducerWithInvalidCredentials STARTED
org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest > testProducerWithInvalidCredentials PASSED
org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest > testTransactionalProducerWithInvalidCredentials STARTED
org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest > testTransactionalProducerWithInvalidCredentials PASSED
org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest > testConsumerWithInvalidCredentials STARTED
org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest > testConsumerWithInvalidCredentials PASSED
org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest > testAdminClientWithInvalidCredentials STARTED
org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest > testAdminClientWithInvalidCredentials PASSED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > testMissingUsernameSaslPlain STARTED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > testMissingUsernameSaslPlain PASSED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > testValidSaslScramMechanisms STARTED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > testValidSaslScramMechanisms PASSED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > oldSaslScramSslServerWithoutSaslAuthenticateHeaderFailure STARTED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > oldSaslScramSslServerWithoutSaslAuthenticateHeaderFailure PASSED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > oldSaslScramPlaintextServerWithoutSaslAuthenticateHeaderFailure STARTED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > oldSaslScramPlaintextServerWithoutSaslAuthenticateHeaderFailure PASSED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > oldSaslScramPlaintextServerWithoutSaslAuthenticateHeader STARTED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > oldSaslScramPlaintextServerWithoutSaslAuthenticateHeader PASSED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > testMechanismPluggability STARTED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > testMechanismPluggability PASSED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > testScramUsernameWithSpecialCharacters STARTED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > testScramUsernameWithSpecialCharacters PASSED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > testApiVersionsRequestWithUnsupportedVersion STARTED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > testApiVersionsRequestWithUnsupportedVersion PASSED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > testMissingPasswordSaslPlain STARTED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > testMissingPasswordSaslPlain PASSED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > testInvalidLoginModule STARTED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > testInvalidLoginModule PASSED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > oldSaslPlainPlaintextClientWithoutSaslAuthenticateHeader STARTED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > oldSaslPlainPlaintextClientWithoutSaslAuthenticateHeader PASSED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > oldSaslPlainSslClientWithoutSaslAuthenticateHeader STARTED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > oldSaslPlainSslClientWithoutSaslAuthenticateHeader PASSED
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > oldSaslPlainSslClientWithoutSaslAuthenticateHeaderFailure STARTED
org.apache.kafka.common

Re: [DISCUSS] KIP-201: Rationalising Policy interfaces

2017-10-04 Thread Edoardo Comar
Thanks Tom,
looks good to me and KIP-201 could supersede KIP-170
but could you please add a missing motivation bullet that was behind 
KIP-170:

introducing ClusterState to allow validation of create/alter topic request 

not just against the request metadata but also
against the current amount of resources already used in the cluster (eg 
number of partitions).
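
The idea might look roughly like this; ClusterState and all other names here
are a hypothetical sketch of the proposal, not an existing Kafka API:

```java
// Hedged sketch of the ClusterState idea from KIP-170: a policy that checks
// a create-topic request against cluster-wide resource usage, here a cap on
// the total number of partitions. Everything here is hypothetical.
public class ClusterStateSketch {

    /** Hypothetical read-only view of current cluster resources. */
    interface ClusterState {
        int totalPartitionCount();
    }

    static class PolicyViolationException extends RuntimeException {
        PolicyViolationException(String msg) { super(msg); }
    }

    /** Reject the request if it would push the cluster over the partition cap. */
    static void validateCreateTopic(ClusterState state, int requestedPartitions, int maxPartitions) {
        int resulting = state.totalPartitionCount() + requestedPartitions;
        if (resulting > maxPartitions)
            throw new PolicyViolationException(
                    "Creating " + requestedPartitions + " partitions would give "
                            + resulting + " in total, above the cap of " + maxPartitions);
    }

    public static void main(String[] args) {
        ClusterState state = () -> 950;            // cluster currently holds 950 partitions
        validateCreateTopic(state, 40, 1000);      // 990 <= 1000: accepted
        try {
            validateCreateTopic(state, 100, 1000); // 1050 > 1000: rejected
        } catch (PolicyViolationException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```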

thanks
Edo
--

Edoardo Comar

IBM Message Hub

IBM UK Ltd, Hursley Park, SO21 2JN



From:   Tom Bentley 
To: dev@kafka.apache.org
Date:   02/10/2017 15:15
Subject:Re: [DISCUSS] KIP-201: Rationalising Policy interfaces



Hi All,

I've updated KIP-201 again so there is now a single policy interface (and
thus a single key by which to configure it) for topic creation,
modification, deletion and record deletion, which each have their own
validation method.

There are still a few loose ends:

1. I currently propose validateAlterTopic(), but it would be possible to 
be
more fine grained about this: validateAlterConfig(), validAddPartitions()
and validateReassignPartitions(), for example. Obviously this results in a
policy method per operation, and makes it clearer what is being changed.
I guess the downside is it's more work for the implementer, and it
potentially makes it harder to change the interface in the future.

2. A couple of TODOs about what the TopicState interface should return 
when
a topic's partitions are being reassigned.

Your thoughts on these or any other points are welcome.

Thanks,

Tom

On 27 September 2017 at 11:45, Paolo Patierno  wrote:

> Hi Ismael,
>
>
>   1.  I don't have a real requirement now, but "deleting" is an operation
> that could be really dangerous, so it's always better to have a way to
> exert more control over it. I know that we have the authorizer for that
> (delete on topic), but fine-grained control could be better (as already
> happens for topic deletion).
>   2.  I know about the problem of restarting brokers due to changes in
> policies, but what do you mean by doing that on the clients?
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno<http://twitter.com/ppatierno>
> Linkedin : paolopatierno<http://it.linkedin.com/in/paolopatierno>
> Blog : DevExperience<http://paolopatierno.wordpress.com/>
>
>
> 
> From: isma...@gmail.com  on behalf of Ismael Juma <
> ism...@juma.me.uk>
> Sent: Wednesday, September 27, 2017 10:30 AM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-201: Rationalising Policy interfaces
>
> A couple of questions:
>
> 1. Is this a concrete requirement from a user or is it hypothetical?
> 2. You sure you would want to do this in the broker instead of the 
clients?
> It's worth remembering that updating broker policies involves a rolling
> restart of the cluster, so it's not the right place for things that 
change
> frequently.
>
> Ismael
>
> On Wed, Sep 27, 2017 at 11:26 AM, Paolo Patierno 
> wrote:
>
> > Hi Ismael,
> >
> > regarding the motivation for delete records: as I said during the
> > discussion on KIP-204, it makes it possible to delete messages only for
> > specific partitions (inside the topic) and starting from a specific
> > offset. I can imagine user solutions where they know exactly what the
> > partitions mean in a specific topic (because they are using a custom
> > partitioner on the producer side), so they know what kind of messages
> > are inside a partition, allowing them to delete those but not the
> > others. In such a policy a user could also check the timestamp related
> > to the offset to allow or deny deletion on a time basis.
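
The per-partition, time-based check Paolo describes might be sketched like
this (the partition-to-content mapping and the seven-day window are invented
purely for illustration):

```java
// Hypothetical delete-records policy check in the spirit of the example
// above: the user knows the producer-side partitioning scheme, so only
// some partitions are deletable, and only for sufficiently old records.
public class DeleteRecordsPolicySketch {

    /** Invented rule: even partitions hold deletable data, odd ones do not. */
    static boolean partitionDeletable(int partition) {
        return partition % 2 == 0;
    }

    /** Invented rule: only records older than seven days may be deleted. */
    static boolean oldEnough(long recordTimestampMs, long nowMs) {
        return nowMs - recordTimestampMs > 7L * 24 * 60 * 60 * 1000;
    }

    static boolean mayDeleteRecords(int partition, long recordTimestampMs, long nowMs) {
        return partitionDeletable(partition) && oldEnough(recordTimestampMs, nowMs);
    }

    public static void main(String[] args) {
        long now = 1_507_000_000_000L;                       // fixed "now" for the example
        long eightDaysAgo = now - 8L * 24 * 60 * 60 * 1000;
        System.out.println(mayDeleteRecords(0, eightDaysAgo, now)); // true
        System.out.println(mayDeleteRecords(1, eightDaysAgo, now)); // false: odd partition
        System.out.println(mayDeleteRecords(0, now, now));          // false: too recent
    }
}
```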
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno<http://twitter.com/ppatierno>
> > Linkedin : paolopatierno<
https://urldefense.proofpoint.com/v2/url?u=http-3A__it.linkedin.com_in_paolopatierno&d=DwIBaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=EzRhmSah4IHsUZVekRUIIN

[GitHub] kafka pull request #4007: MINOR: Java 9 version handling improvements

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4007


---