Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Andras Beni
Congratulations, Manikumar!

Srinivas Reddy  wrote (time: 12 Oct 2018, Fri 3:00):

> Congratulations Mani. Well deserved!
>
> -
> Srinivas
>
> - Typed on tiny keys. pls ignore typos.{mobile app}
>
> On Fri 12 Oct, 2018, 01:39 Jason Gustafson,  wrote:
>
> > Hi all,
> >
> > The PMC for Apache Kafka has invited Manikumar Reddy as a committer and
> we
> > are
> > pleased to announce that he has accepted!
> >
> > Manikumar has contributed 134 commits including significant work to add
> > support for delegation tokens in Kafka:
> >
> > KIP-48:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> > KIP-249
> > <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+KafkaKIP-249
> >
> > :
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
> >
> > He has broad experience working with many of the core components in Kafka
> > and he has reviewed over 80 PRs. He has also made huge progress
> addressing
> > some of our technical debt.
> >
> > We appreciate the contributions and we are looking forward to more.
> > Congrats Manikumar!
> >
> > Jason, on behalf of the Apache Kafka PMC
> >
>


[jira] [Created] (KAFKA-7500) MirrorMaker 2.0 (KIP-382)

2018-10-11 Thread Ryanne Dolan (JIRA)
Ryanne Dolan created KAFKA-7500:
---

 Summary: MirrorMaker 2.0 (KIP-382)
 Key: KAFKA-7500
 URL: https://issues.apache.org/jira/browse/KAFKA-7500
 Project: Kafka
  Issue Type: New Feature
  Components: KafkaConnect, mirrormaker
Reporter: Ryanne Dolan
Assignee: Ryanne Dolan


Implement a drop-in replacement for MirrorMaker leveraging the Connect 
framework.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-380: Detect outdated control requests and bounced brokers using broker generation

2018-10-11 Thread Patrick Huang
Hi Jun,

Thanks a lot for the comments.

1. czxid is globally unique and monotonically increasing, according to the
ZooKeeper doc.
References (from
https://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html):
"Every change to the ZooKeeper state receives a stamp in the form of a
*zxid* (ZooKeeper Transaction Id). This exposes the total ordering of all
changes to ZooKeeper. Each change will have a unique zxid and if zxid1 is
smaller than zxid2 then zxid1 happened before zxid2."
"czxid: The zxid of the change that caused this znode to be created."

2. You are right. There will be only one broker change event fired in the
case I mentioned because we will not register the watcher before the read.

3. Let's say we want to initialize the states of broker set A and we want
the cluster to be aware of the absence of broker set B. The currently live
broker set in the cluster is C.

From the design point of view, here are the rules for broker state
transition:
- Pass in broker ids of A for onBrokerStartup() and pass in broker ids
of B for onBrokerFailure().
    - When processing onBrokerStartup(), we use the broker generation the
controller read from ZooKeeper to send requests to broker set A and use the
previously cached broker generation to send requests to (C-A).
- When processing onBrokerFailure(), we use the previously cached
broker generation to send requests to C.

From the implementation point of view, here are the steps we need to
follow when processing BrokerChangeEvent:
    -  Read all child nodes under /brokers/ids/ to get the current brokers with
their broker generations
-  Detect new brokers, dead brokers and bounced brokers
-  Update the live broker ids in controller context
-  Update broker generations for new brokers in controller context
-  Invoke onBrokerStartup(new brokers)
-  Invoke onBrokerFailure(bounced brokers)
    -  Update broker generations for bounced brokers in controller context
-  Invoke onBrokerStartup(bounced brokers)
-  Invoke onBrokerFailure(dead brokers)
We can further optimize the flow by avoiding sending requests to a
broker if its broker generation is larger than the one in the controller
context.
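
To make the flow above concrete, here is a self-contained toy sketch in Java
(all names and the inlined data are illustrative, not the actual controller
code):

import java.util.*;

public class BrokerChangeSketch {
    public static void main(String[] args) {
        // Cached state in the controller context: broker id -> generation (czxid).
        Map<Integer, Long> cached = new HashMap<>(Map.of(1, 100L, 2, 200L, 3, 300L));
        // Step 1: read current state from /brokers/ids: 1 died, 3 bounced, 4 is new.
        Map<Integer, Long> fromZk = new HashMap<>(Map.of(2, 200L, 3, 999L, 4, 400L));

        // Step 2: detect new, dead and bounced brokers.
        Set<Integer> newBrokers = new HashSet<>(fromZk.keySet());
        newBrokers.removeAll(cached.keySet());                        // {4}
        Set<Integer> deadBrokers = new HashSet<>(cached.keySet());
        deadBrokers.removeAll(fromZk.keySet());                       // {1}
        Set<Integer> bounced = new HashSet<>();
        for (Map.Entry<Integer, Long> e : fromZk.entrySet())
            if (cached.containsKey(e.getKey()) && !cached.get(e.getKey()).equals(e.getValue()))
                bounced.add(e.getKey());                              // {3}

        // Steps 3-4: update live brokers and generations for new brokers, then:
        System.out.println("onBrokerStartup(" + newBrokers + ")");    // step 5
        System.out.println("onBrokerFailure(" + bounced + ")");      // step 6: old incarnation gone
        // Step 7: update generations for bounced brokers, then:
        System.out.println("onBrokerStartup(" + bounced + ")");      // step 8: new incarnation
        System.out.println("onBrokerFailure(" + deadBrokers + ")");  // step 9
    }
}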

I will also update the KIP to clarify how it works for BrokerChangeEvent
processing in more detail.

Thanks,
Patrick



On Thu, Oct 11, 2018 at 12:12 PM Jun Rao  wrote:

> Hi, Patrick,
>
> Thanks for the KIP. Looks good to me overall and very useful. A few
> comments below.
>
> 1. "will reject the requests with smaller broker generation than its
> current generation." Is czxid monotonically increasing?
>
> 2. To clarify on the issue of the controller missing a ZK watcher. ZK
> watchers are one-time watchers. Once a watcher is fired, one needs to
> register it again before the watcher can be triggered. So, when the
> controller is busy and a broker goes down and comes up, the first event
> will trigger the ZK watcher. Since the controller is busy and hasn't
> registered the watcher again, the second event actually won't fire. By the
> time the controller reads from ZK, it sees that the broker is still
> registered and thus thinks that nothing has happened to the broker, which
> is causing the problem.
>
> 3. "Handle broker state change: invoke onBrokerFailure(...) first, then
> invoke onBrokerStartUp(...)". We probably want to be a bit careful here.
> Could you clarify the broker list and the broker epoch used when making
> these calls? We want to prevent the restarted broker from receiving a
> partial replica list on the first LeaderAndIsr request because of this.
>
> Thanks,
>
> Jun
>
> On Wed, Oct 10, 2018 at 12:51 PM, Patrick Huang 
> wrote:
>
> > Hey Stanislav,
> >
> > Sure. Thanks for your interest in this KIP. I am glad to provide more
> > detail.
> >
> > broker A is initiating a controlled shutdown (restart). The Controller
> > sends a StopReplicaRequest but it reaches broker A after it has started
> up
> > again. He therefore stops replicating those partitions even though he
> > should just be starting to
> > This is right.
> >
> > Controller sends a LeaderAndIsrRequest before broker A initiates a
> restart.
> > Broker A restarts and receives the LeaderAndIsrRequest then. It therefore
> > starts leading for the partitions sent by that request and might stop
> > leading partitions that it was leading previously.
> > This was well explained in the linked JIRA, but I cannot understand why
> > that would happen due to my limited experience. If Broker A leads p1 and
> > p2, when would a Controller send a LeaderAndIsrRequest with p1 only and
> not
> > want Broker A to drop leadership for p2?
> > The root cause of the issue is that after a broker just restarts, it
> > relies on the first LeaderAndIsrRequest to populate the partition state
> and
> > initializes the highwater mark checkpoint thread. The highwater mark
> > checkpoint thread will overwrite the highwater mark checkpoint file based
> > on the broker's in-memory partition states. In other words, If 


Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Srinivas Reddy
Congratulations Mani. Well deserved!

-
Srinivas

- Typed on tiny keys. pls ignore typos.{mobile app}

On Fri 12 Oct, 2018, 01:39 Jason Gustafson,  wrote:

> Hi all,
>
> The PMC for Apache Kafka has invited Manikumar Reddy as a committer and we
> are
> pleased to announce that he has accepted!
>
> Manikumar has contributed 134 commits including significant work to add
> support for delegation tokens in Kafka:
>
> KIP-48:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> KIP-249
> 
> :
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
>
> He has broad experience working with many of the core components in Kafka
> and he has reviewed over 80 PRs. He has also made huge progress addressing
> some of our technical debt.
>
> We appreciate the contributions and we are looking forward to more.
> Congrats Manikumar!
>
> Jason, on behalf of the Apache Kafka PMC
>


Jenkins build is back to normal : kafka-trunk-jdk8 #3103

2018-10-11 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-7499) Extend ProductionExceptionHandler to cover serialization exceptions

2018-10-11 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-7499:
--

 Summary: Extend ProductionExceptionHandler to cover serialization 
exceptions
 Key: KAFKA-7499
 URL: https://issues.apache.org/jira/browse/KAFKA-7499
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


In 
[KIP-210|https://cwiki.apache.org/confluence/display/KAFKA/KIP-210+-+Provide+for+custom+error+handling++when+Kafka+Streams+fails+to+produce],
 an exception handler for the write path was introduced. This exception handler 
covers exceptions that are raised in the producer callback.

However, serialization happens within Kafka Streams itself, before the data is
handed to the producer, and the producer only sees `byte[]/byte[]` key-value pairs.

Thus, we might want to extend the ProductionExceptionHandler to cover
serialization exceptions, too, so that corrupted output messages can be skipped.
An example could be a String message that contains invalid JSON but should be
serialized as JSON.

This ticket might require a KIP.
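
For reference, a minimal sketch of a handler against the existing KIP-210
interface (illustrative only; the extension proposed here would additionally
route serialization failures through such a handler):

import java.util.Map;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.streams.errors.ProductionExceptionHandler;

public class SkipCorruptedRecordsHandler implements ProductionExceptionHandler {
    @Override
    public ProductionExceptionHandlerResponse handle(ProducerRecord<byte[], byte[]> record,
                                                     Exception exception) {
        // Today this is only invoked for errors from the producer callback;
        // serialization has already happened by the time we get here.
        System.err.println("Dropping record for topic " + record.topic() + ": " + exception);
        return ProductionExceptionHandlerResponse.CONTINUE; // skip and keep processing
    }

    @Override
    public void configure(Map<String, ?> configs) { }
}

Such a handler is plugged in via the `default.production.exception.handler`
Streams config.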



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Matthias J. Sax
Congrats!


On 10/11/18 2:31 PM, Yishun Guan wrote:
> Congrats Manikumar!
> On Thu, Oct 11, 2018 at 1:20 PM Sönke Liebau
>  wrote:
>>
>> Great news, congratulations Manikumar!!
>>
>> On Thu, Oct 11, 2018 at 9:08 PM Vahid Hashemian 
>> wrote:
>>
>>> Congrats Manikumar!
>>>
>>> On Thu, Oct 11, 2018 at 11:49 AM Ryanne Dolan 
>>> wrote:
>>>
 Bravo!

 On Thu, Oct 11, 2018 at 1:48 PM Ismael Juma  wrote:

> Congratulations Manikumar! Thanks for your continued contributions.
>
> Ismael
>
> On Thu, Oct 11, 2018 at 10:39 AM Jason Gustafson 
> wrote:
>
>> Hi all,
>>
>> The PMC for Apache Kafka has invited Manikumar Reddy as a committer
>>> and
> we
>> are
>> pleased to announce that he has accepted!
>>
>> Manikumar has contributed 134 commits including significant work to
>>> add
>> support for delegation tokens in Kafka:
>>
>> KIP-48:
>>
>>
>

>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
>> KIP-249
>> <
>

>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+KafkaKIP-249
>>
>> :
>>
>>
>

>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
>>
>> He has broad experience working with many of the core components in
 Kafka
>> and he has reviewed over 80 PRs. He has also made huge progress
> addressing
>> some of our technical debt.
>>
>> We appreciate the contributions and we are looking forward to more.
>> Congrats Manikumar!
>>
>> Jason, on behalf of the Apache Kafka PMC
>>
>

>>>
>>
>>
>> --
>> Sönke Liebau
>> Partner
>> Tel. +49 179 7940878
>> OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany





Inconsistent Replica list for a partition

2018-10-11 Thread Koushik Chitta
Hi,

The number of replicas for partition 1 is lower than for the rest of the
partitions. In which scenarios can this happen?

Kafka version - 0.10.2
[inline image attachment omitted]

Thanks,
Koushik


Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Yishun Guan
Congrats Manikumar!
On Thu, Oct 11, 2018 at 1:20 PM Sönke Liebau
 wrote:
>
> Great news, congratulations Manikumar!!
>
> On Thu, Oct 11, 2018 at 9:08 PM Vahid Hashemian 
> wrote:
>
> > Congrats Manikumar!
> >
> > On Thu, Oct 11, 2018 at 11:49 AM Ryanne Dolan 
> > wrote:
> >
> > > Bravo!
> > >
> > > On Thu, Oct 11, 2018 at 1:48 PM Ismael Juma  wrote:
> > >
> > > > Congratulations Manikumar! Thanks for your continued contributions.
> > > >
> > > > Ismael
> > > >
> > > > On Thu, Oct 11, 2018 at 10:39 AM Jason Gustafson 
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > The PMC for Apache Kafka has invited Manikumar Reddy as a committer
> > and
> > > > we
> > > > > are
> > > > > pleased to announce that he has accepted!
> > > > >
> > > > > Manikumar has contributed 134 commits including significant work to
> > add
> > > > > support for delegation tokens in Kafka:
> > > > >
> > > > > KIP-48:
> > > > >
> > > > >
> > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> > > > > KIP-249
> > > > > <
> > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+KafkaKIP-249
> > > > >
> > > > > :
> > > > >
> > > > >
> > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
> > > > >
> > > > > He has broad experience working with many of the core components in
> > > Kafka
> > > > > and he has reviewed over 80 PRs. He has also made huge progress
> > > > addressing
> > > > > some of our technical debt.
> > > > >
> > > > > We appreciate the contributions and we are looking forward to more.
> > > > > Congrats Manikumar!
> > > > >
> > > > > Jason, on behalf of the Apache Kafka PMC
> > > > >
> > > >
> > >
> >
>
>
> --
> Sönke Liebau
> Partner
> Tel. +49 179 7940878
> OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany


[jira] [Resolved] (KAFKA-7264) Initial Kafka support for Java 11

2018-10-11 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-7264.

Resolution: Fixed
  Assignee: Ismael Juma

> Initial Kafka support for Java 11
> -
>
> Key: KAFKA-7264
> URL: https://issues.apache.org/jira/browse/KAFKA-7264
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Major
> Fix For: 2.1.0
>
>
> Java 11 is the next LTS release and it should be released by the end of 
> September. Kafka should ideally support that in the 2.1.0 release. A few 
> known issues/requirements have been captured via subtasks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-376: Implement AutoClosable on appropriate classes that want to be used in a try-with-resource statement

2018-10-11 Thread Yishun Guan
Hi,

Just to bump this voting thread up again. Thanks!

Best,
Yishun
On Fri, Oct 5, 2018 at 12:58 PM Yishun Guan  wrote:
>
> Hi,
>
> I think we have discussed this well enough to put this into a vote.
>
> Suggestions are welcome!
>
> Best,
> Yishun
>
> On Wed, Oct 3, 2018, 2:30 PM Yishun Guan  wrote:
>>
>> Hi All,
>>
>> I want to start a voting on this KIP:
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=93325308
>>
>> Here is the discussion thread:
>> https://lists.apache.org/thread.html/9f6394c28d3d11a67600d5d7001e8aaa318f1ad497b50645654bbe3f@%3Cdev.kafka.apache.org%3E
>>
>> Thanks,
>> Yishun
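
For context, the pattern KIP-376 enables looks like the following minimal
sketch, shown here with KafkaProducer (which already implements Closeable);
the KIP proposes the same convenience for other classes:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TryWithResourcesExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // The producer is closed automatically when the block exits,
        // even if send() throws; no explicit finally/close() is needed.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value"));
        }
    }
}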


Build failed in Jenkins: kafka-trunk-jdk11 #28

2018-10-11 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-6863 Kafka clients should try to use multiple DNS 
resolved IP

--
[...truncated 2.35 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 

Build failed in Jenkins: kafka-trunk-jdk8 #3102

2018-10-11 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-7475 - capture remote address on connection authetication 
errors,

--
[...truncated 2.88 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest 

Build failed in Jenkins: kafka-2.1-jdk8 #18

2018-10-11 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-6863 Kafka clients should try to use multiple DNS 
resolved IP

--
[...truncated 456.92 KB...]
  found:StoreBuilder
  where S is a type-variable:
S extends StateStore declared in class TableSourceNode
:133:
 warning: [unchecked] unchecked conversion
this.consumedInternal = consumedInternal;
^
  required: ConsumedInternal
  found:ConsumedInternal
  where K,V are type-variables:
K extends Object declared in class TableSourceNodeBuilder
V extends Object declared in class TableSourceNodeBuilder
37 warnings

> Task :kafka-2.1-jdk8:streams:processResources NO-SOURCE
> Task :kafka-2.1-jdk8:streams:classes
> Task :kafka-2.1-jdk8:streams:copyDependantLibs
> Task :kafka-2.1-jdk8:streams:jar
> Task :kafka-2.1-jdk8:streams:test-utils:compileJava
> Task :kafka-2.1-jdk8:streams:test-utils:processResources NO-SOURCE
> Task :kafka-2.1-jdk8:streams:test-utils:classes
> Task :kafka-2.1-jdk8:streams:test-utils:copyDependantLibs
> Task :kafka-2.1-jdk8:streams:test-utils:jar

> Task :kafka-2.1-jdk8:streams:compileTestJava
:173:
 warning: non-varargs call of varargs method with inexact argument type for 
last parameter;
builder.addProcessor("processor", new MockProcessorSupplier(), null);
   ^
  cast to String for a varargs call
  cast to String[] for a non-varargs call and to suppress this warning
:204:
 warning: non-varargs call of varargs method with inexact argument type for 
last parameter;
builder.addSink("sink", "topic", null, null, null, null);
   ^
  cast to String for a varargs call
  cast to String[] for a non-varargs call and to suppress this warning
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
2 warnings

> Task :kafka-2.1-jdk8:streams:processTestResources
> Task :kafka-2.1-jdk8:streams:testClasses
> Task :kafka-2.1-jdk8:streams:streams-scala:compileJava NO-SOURCE

> Task :kafka-2.1-jdk8:streams:streams-scala:compileScala
Pruning sources from previous analysis, due to incompatible CompileSetup.
:382:
 method groupByKey in trait KStream is deprecated: see corresponding Javadoc 
for more information.
inner.groupByKey(serialized)
  ^
:416:
 method groupBy in trait KStream is deprecated: see corresponding Javadoc for 
more information.
inner.groupBy(selector.asKeyValueMapper, serialized)
  ^
:224:
 method groupBy in trait KTable is deprecated: see corresponding Javadoc for 
more information.
inner.groupBy(selector.asKeyValueMapper, serialized)
  ^
:34:
 class Serialized in package kstream is deprecated: see corresponding Javadoc 
for more information.
  def `with`[K, V](implicit keySerde: Serde[K], valueSerde: Serde[V]): 
SerializedJ[K, V] =
   ^
:23:
 class Serialized in package kstream is deprecated: see corresponding Javadoc 
for more information.
  type Serialized[K, V] = org.apache.kafka.streams.kstream.Serialized[K, V]
   ^
5 warnings found

> Task :kafka-2.1-jdk8:streams:streams-scala:processResources NO-SOURCE
> Task :kafka-2.1-jdk8:streams:streams-scala:classes
> Task :kafka-2.1-jdk8:streams:streams-scala:checkstyleMain NO-SOURCE
> Task :kafka-2.1-jdk8:streams:streams-scala:compileTestJava NO-SOURCE

> Task :kafka-2.1-jdk8:streams:streams-scala:compileTestScala
Pruning sources from previous analysis, due to 

Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Sönke Liebau
Great news, congratulations Manikumar!!

On Thu, Oct 11, 2018 at 9:08 PM Vahid Hashemian 
wrote:

> Congrats Manikumar!
>
> On Thu, Oct 11, 2018 at 11:49 AM Ryanne Dolan 
> wrote:
>
> > Bravo!
> >
> > On Thu, Oct 11, 2018 at 1:48 PM Ismael Juma  wrote:
> >
> > > Congratulations Manikumar! Thanks for your continued contributions.
> > >
> > > Ismael
> > >
> > > On Thu, Oct 11, 2018 at 10:39 AM Jason Gustafson 
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > The PMC for Apache Kafka has invited Manikumar Reddy as a committer
> and
> > > we
> > > > are
> > > > pleased to announce that he has accepted!
> > > >
> > > > Manikumar has contributed 134 commits including significant work to
> add
> > > > support for delegation tokens in Kafka:
> > > >
> > > > KIP-48:
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> > > > KIP-249
> > > > <
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+KafkaKIP-249
> > > >
> > > > :
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
> > > >
> > > > He has broad experience working with many of the core components in
> > Kafka
> > > > and he has reviewed over 80 PRs. He has also made huge progress
> > > addressing
> > > > some of our technical debt.
> > > >
> > > > We appreciate the contributions and we are looking forward to more.
> > > > Congrats Manikumar!
> > > >
> > > > Jason, on behalf of the Apache Kafka PMC
> > > >
> > >
> >
>


-- 
Sönke Liebau
Partner
Tel. +49 179 7940878
OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany


Re: [DISCUSS] KIP-380: Detect outdated control requests and bounced brokers using broker generation

2018-10-11 Thread Jun Rao
Hi, Patrick,

Thanks for the KIP. Looks good to me overall and very useful. A few
comments below.

1. "will reject the requests with smaller broker generation than its
current generation." Is czxid monotonically increasing?

2. To clarify on the issue of the controller missing a ZK watcher. ZK
watchers are one-time watchers. Once a watcher is fired, one needs to
register it again before the watcher can be triggered. So, when the
controller is busy and a broker goes down and comes up, the first event
will trigger the ZK watcher. Since the controller is busy and hasn't
registered the watcher again, the second event actually won't fire. By the
time the controller reads from ZK, it sees that the broker is still
registered and thus thinks that nothing has happened to the broker, which
is causing the problem.
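
For illustration, a minimal sketch (mine, not controller code) of the
re-registration dance for a one-shot watch on /brokers/ids; changes that occur
between the fire and the re-read are only visible as a single combined diff,
which is exactly the situation described above:

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class BrokerWatcherSketch implements Watcher {
    private final ZooKeeper zk;

    public BrokerWatcherSketch(ZooKeeper zk) { this.zk = zk; }

    public void watchBrokers() throws Exception {
        // Registers this watcher; it fires at most once, on the next change
        // to the children of /brokers/ids.
        zk.getChildren("/brokers/ids", this);
    }

    @Override
    public void process(WatchedEvent event) {
        try {
            watchBrokers(); // re-register first, then read the current state
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}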

3. "Handle broker state change: invoke onBrokerFailure(...) first, then
invoke onBrokerStartUp(...)". We probably want to be a bit careful here.
Could you clarify the broker list and the broker epoch used when making
these calls? We want to prevent the restarted broker from receiving a
partial replica list on the first LeaderAndIsr request because of this.

Thanks,

Jun

On Wed, Oct 10, 2018 at 12:51 PM, Patrick Huang  wrote:

> Hey Stanislav,
>
> Sure. Thanks for your interest in this KIP. I am glad to provide more
> detail.
>
> broker A is initiating a controlled shutdown (restart). The Controller
> sends a StopReplicaRequest but it reaches broker A after it has started up
> again. He therefore stops replicating those partitions even though he
> should just be starting to
> This is right.
>
> Controller sends a LeaderAndIsrRequest before broker A initiates a restart.
> Broker A restarts and receives the LeaderAndIsrRequest then. It therefore
> starts leading for the partitions sent by that request and might stop
> leading partitions that it was leading previously.
> This was well explained in the linked JIRA, but I cannot understand why
> that would happen due to my limited experience. If Broker A leads p1 and
> p2, when would a Controller send a LeaderAndIsrRequest with p1 only and not
> want Broker A to drop leadership for p2?
> The root cause of the issue is that after a broker just restarts, it
> relies on the first LeaderAndIsrRequest to populate the partition state and
> initializes the highwater mark checkpoint thread. The highwater mark
> checkpoint thread will overwrite the highwater mark checkpoint file based
> on the broker's in-memory partition states. In other words, if a partition
> that is physically hosted by the broker is missing in the in-memory
> partition states map, its highwater mark will be lost after the highwater
> mark checkpoint thread overwrites the file. (Related code:
> https://github.com/apache/kafka/blob/ed3bd79633ae227ad995dafc3d9f384a5534d4e9/core/src/main/scala/kafka/server/ReplicaManager.scala#L1091)
>
>
> In your example, assume the first LeaderAndIsrRequest broker A receives is
> the one initiated in the controlled shutdown logic in Controller to move
> leadership away from broker A. This LeaderAndIsrRequest only contains
> partitions that broker A leads, not all the partitions that broker A hosts
> (i.e. no follower partitions), so the highwater mark for the follower
> partitions will be lost. Also, the first LeaderAndIsrRequst broker A
> receives may not necessarily be the one initiated in controlled shutdown
> logic (e.g. there can be an ongoing preferred leader election), although I
> think this may not be very common.
>
> Here the controller will start processing the BrokerChange event (that says
> that broker A shutdown) after the broker has come back up and re-registered
> himself in ZK?
> How will the Controller miss the restart, won't he subsequently receive
> another ZK event saying that broker A has come back up?
> Controller will not miss the BrokerChange event and actually there will be
> two BrokerChange events fired in this case (one for broker deregistration
> in zk and one for registration). However, when processing the
> BrokerChangeEvent, controller needs to do a read from zookeeper to get back
> the current brokers in the cluster and if the bounced broker already joined
> the cluster by this time, controller will not know this broker has been
> bounced because it sees no diff between zk and its in-memory cache. So
> basically both BrokerChange events become no-ops when processed.
>
>
> Hope that I answer your questions. Feel free to follow up if I am missing
> something.
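
To make the highwater-mark loss described above concrete, a toy illustration
(mine, not broker code) of how rewriting the checkpoint file from the
in-memory map drops entries for hosted partitions that are absent from it:

import java.util.HashMap;
import java.util.Map;

public class CheckpointSketch {
    public static void main(String[] args) {
        Map<String, Long> onDisk = new HashMap<>();
        onDisk.put("topicA-0", 42L);   // leader partition
        onDisk.put("topicA-1", 17L);   // follower partition, hosted but not yet
                                       // in the in-memory state after restart

        Map<String, Long> inMemory = new HashMap<>();
        inMemory.put("topicA-0", 42L); // only what the first LeaderAndIsrRequest carried

        onDisk.clear();                // the periodic checkpoint overwrites the file...
        onDisk.putAll(inMemory);       // ...from the in-memory state only

        System.out.println(onDisk);    // {topicA-0=42} -- topicA-1's highwater mark is gone
    }
}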

Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Vahid Hashemian
Congrats Manikumar!

On Thu, Oct 11, 2018 at 11:49 AM Ryanne Dolan  wrote:

> Bravo!
>
> On Thu, Oct 11, 2018 at 1:48 PM Ismael Juma  wrote:
>
> > Congratulations Manikumar! Thanks for your continued contributions.
> >
> > Ismael
> >
> > On Thu, Oct 11, 2018 at 10:39 AM Jason Gustafson 
> > wrote:
> >
> > > Hi all,
> > >
> > > The PMC for Apache Kafka has invited Manikumar Reddy as a committer and
> > we
> > > are
> > > pleased to announce that he has accepted!
> > >
> > > Manikumar has contributed 134 commits including significant work to add
> > > support for delegation tokens in Kafka:
> > >
> > > KIP-48:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> > > KIP-249
> > > <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+KafkaKIP-249
> > >
> > > :
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
> > >
> > > He has broad experience working with many of the core components in
> Kafka
> > > and he has reviewed over 80 PRs. He has also made huge progress
> > addressing
> > > some of our technical debt.
> > >
> > > We appreciate the contributions and we are looking forward to more.
> > > Congrats Manikumar!
> > >
> > > Jason, on behalf of the Apache Kafka PMC
> > >
> >
>


[jira] [Created] (KAFKA-7498) common.requests.CreatePartitionsRequest uses clients.admin.NewPartitions

2018-10-11 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-7498:
-

 Summary: common.requests.CreatePartitionsRequest uses 
clients.admin.NewPartitions
 Key: KAFKA-7498
 URL: https://issues.apache.org/jira/browse/KAFKA-7498
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 2.1.0


`org.apache.kafka.common.requests.CreatePartitionsRequest` currently uses 
`org.apache.kafka.clients.admin.NewPartitions`. We shouldn't have references 
from `common` to `clients`. Since `org.apache.kafka.clients.admin` is a public 
package, we cannot use a common class for both the admin API and the request. So
we should do something similar to CreateTopicsRequest, for which we have the
`org.apache.kafka.clients.admin.NewTopic` class used for the admin API and an
equivalent `org.apache.kafka.common.requests.CreateTopicsRequest.TopicDetails`
class that doesn't refer to `clients.admin`.
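
A sketch of the kind of request-side class this implies (names are
illustrative, not necessarily what the patch will use):

import java.util.List;

// Lives in common.requests and mirrors the public clients.admin.NewPartitions
// without referencing it, keeping `common` free of `clients` dependencies.
public final class PartitionDetails {
    private final int totalCount;
    private final List<List<Integer>> newAssignments; // may be null

    public PartitionDetails(int totalCount, List<List<Integer>> newAssignments) {
        this.totalCount = totalCount;
        this.newAssignments = newAssignments;
    }

    public int totalCount() { return totalCount; }
    public List<List<Integer>> newAssignments() { return newAssignments; }
}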



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Ryanne Dolan
Bravo!

On Thu, Oct 11, 2018 at 1:48 PM Ismael Juma  wrote:

> Congratulations Manikumar! Thanks for your continued contributions.
>
> Ismael
>
> On Thu, Oct 11, 2018 at 10:39 AM Jason Gustafson 
> wrote:
>
> > Hi all,
> >
> > The PMC for Apache Kafka has invited Manikumar Reddy as a committer and
> we
> > are
> > pleased to announce that he has accepted!
> >
> > Manikumar has contributed 134 commits including significant work to add
> > support for delegation tokens in Kafka:
> >
> > KIP-48:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> > KIP-249
> > <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+KafkaKIP-249
> >
> > :
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
> >
> > He has broad experience working with many of the core components in Kafka
> > and he has reviewed over 80 PRs. He has also made huge progress
> addressing
> > some of our technical debt.
> >
> > We appreciate the contributions and we are looking forward to more.
> > Congrats Manikumar!
> >
> > Jason, on behalf of the Apache Kafka PMC
> >
>


Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Ismael Juma
Congratulations Manikumar! Thanks for your continued contributions.

Ismael

On Thu, Oct 11, 2018 at 10:39 AM Jason Gustafson  wrote:

> Hi all,
>
> The PMC for Apache Kafka has invited Manikumar Reddy as a committer and we
> are
> pleased to announce that he has accepted!
>
> Manikumar has contributed 134 commits including significant work to add
> support for delegation tokens in Kafka:
>
> KIP-48:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> KIP-249
> 
> :
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
>
> He has broad experience working with many of the core components in Kafka
> and he has reviewed over 80 PRs. He has also made huge progress addressing
> some of our technical debt.
>
> We appreciate the contributions and we are looking forward to more.
> Congrats Manikumar!
>
> Jason, on behalf of the Apache Kafka PMC
>


Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Mickael Maison
Congrats Manikumar!
On Thu, Oct 11, 2018 at 7:17 PM Dong Lin  wrote:
>
> Congratulations Manikumar!!
>
> On Thu, Oct 11, 2018 at 10:39 AM Jason Gustafson  wrote:
>
> > Hi all,
> >
> > The PMC for Apache Kafka has invited Manikumar Reddy as a committer and we
> > are
> > pleased to announce that he has accepted!
> >
> > Manikumar has contributed 134 commits including significant work to add
> > support for delegation tokens in Kafka:
> >
> > KIP-48:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> > KIP-249
> > 
> > :
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
> >
> > He has broad experience working with many of the core components in Kafka
> > and he has reviewed over 80 PRs. He has also made huge progress addressing
> > some of our technical debt.
> >
> > We appreciate the contributions and we are looking forward to more.
> > Congrats Manikumar!
> >
> > Jason, on behalf of the Apache Kafka PMC
> >


Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Dong Lin
Congratulations Manikumar!!

On Thu, Oct 11, 2018 at 10:39 AM Jason Gustafson  wrote:

> Hi all,
>
> The PMC for Apache Kafka has invited Manikumar Reddy as a committer and we
> are
> pleased to announce that he has accepted!
>
> Manikumar has contributed 134 commits including significant work to add
> support for delegation tokens in Kafka:
>
> KIP-48:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> KIP-249
> 
> :
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
>
> He has broad experience working with many of the core components in Kafka
> and he has reviewed over 80 PRs. He has also made huge progress addressing
> some of our technical debt.
>
> We appreciate the contributions and we are looking forward to more.
> Congrats Manikumar!
>
> Jason, on behalf of the Apache Kafka PMC
>


Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Jun Rao
Congratulations, Mani!

Jun

On Thu, Oct 11, 2018 at 10:39 AM, Jason Gustafson 
wrote:

> Hi all,
>
> The PMC for Apache Kafka has invited Manikumar Reddy as a committer and we
> are
> pleased to announce that he has accepted!
>
> Manikumar has contributed 134 commits including significant work to add
> support for delegation tokens in Kafka:
>
> KIP-48:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 48+Delegation+token+support+for+Kafka
> KIP-249:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
>
> He has broad experience working with many of the core components in Kafka
> and he has reviewed over 80 PRs. He has also made huge progress addressing
> some of our technical debt.
>
> We appreciate the contributions and we are looking forward to more.
> Congrats Manikumar!
>
> Jason, on behalf of the Apache Kafka PMC
>


Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Hugo Louro
Congrats Manikumar!
Hugo

> On Oct 11, 2018, at 11:04 AM, Kamal Chandraprakash 
>  wrote:
> 
> Congratulations, Manikumar!
> 
>> On Thu, 11 Oct 2018, 23:29 Rajini Sivaram,  wrote:
>> 
>> Congratulations, Manikumar!
>> 
>>> On Thu, Oct 11, 2018 at 6:57 PM Suman B N  wrote:
>>> 
>>> Congratulations Manikumar!
>>> 
>>> On Thu, Oct 11, 2018 at 11:09 PM Jason Gustafson 
>>> wrote:
>>> 
 Hi all,
 
 The PMC for Apache Kafka has invited Manikumar Reddy as a committer and
>>> we
 are
 pleased to announce that he has accepted!
 
 Manikumar has contributed 134 commits including significant work to add
 support for delegation tokens in Kafka:
 
 KIP-48:
 
 
>>> 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
 KIP-249
 <
>>> 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+KafkaKIP-249
 
 :
 
 
>>> 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
 
 He has broad experience working with many of the core components in
>> Kafka
 and he has reviewed over 80 PRs. He has also made huge progress
>>> addressing
 some of our technical debt.
 
 We appreciate the contributions and we are looking forward to more.
 Congrats Manikumar!
 
 Jason, on behalf of the Apache Kafka PMC
 
>>> 
>>> 
>>> --
>>> *Suman*
>>> *OlaCabs*
>>> 
>> 


Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Bill Bejeck
Congrats Manikumar!

On Thu, Oct 11, 2018 at 2:07 PM Hugo Louro  wrote:

> Congrats Manikumar!
> Hugo
>
> > On Oct 11, 2018, at 11:04 AM, Kamal Chandraprakash <
> kamal.chandraprak...@gmail.com> wrote:
> >
> > Congratulations, Manikumar!
> >
> >> On Thu, 11 Oct 2018, 23:29 Rajini Sivaram, 
> wrote:
> >>
> >> Congratulations, Manikumar!
> >>
> >>> On Thu, Oct 11, 2018 at 6:57 PM Suman B N 
> wrote:
> >>>
> >>> Congratulations Manikumar!
> >>>
> >>> On Thu, Oct 11, 2018 at 11:09 PM Jason Gustafson 
> >>> wrote:
> >>>
>  Hi all,
> 
>  The PMC for Apache Kafka has invited Manikumar Reddy as a committer
> and
> >>> we
>  are
>  pleased to announce that he has accepted!
> 
>  Manikumar has contributed 134 commits including significant work to
> add
>  support for delegation tokens in Kafka:
> 
>  KIP-48:
> 
> 
> >>>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
>  KIP-249
>  <
> >>>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+KafkaKIP-249
> 
>  :
> 
> 
> >>>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
> 
>  He has broad experience working with many of the core components in
> >> Kafka
>  and he has reviewed over 80 PRs. He has also made huge progress
> >>> addressing
>  some of our technical debt.
> 
>  We appreciate the contributions and we are looking forward to more.
>  Congrats Manikumar!
> 
>  Jason, on behalf of the Apache Kafka PMC
> 
> >>>
> >>>
> >>> --
> >>> *Suman*
> >>> *OlaCabs*
> >>>
> >>
>


Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Kamal Chandraprakash
Congratulations, Manikumar!

On Thu, 11 Oct 2018, 23:29 Rajini Sivaram,  wrote:

> Congratulations, Manikumar!
>
> On Thu, Oct 11, 2018 at 6:57 PM Suman B N  wrote:
>
> > Congratulations Manikumar!
> >
> > On Thu, Oct 11, 2018 at 11:09 PM Jason Gustafson 
> > wrote:
> >
> > > Hi all,
> > >
> > > The PMC for Apache Kafka has invited Manikumar Reddy as a committer and
> > we
> > > are
> > > pleased to announce that he has accepted!
> > >
> > > Manikumar has contributed 134 commits including significant work to add
> > > support for delegation tokens in Kafka:
> > >
> > > KIP-48:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> > > KIP-249
> > > <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+KafkaKIP-249
> > >
> > > :
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
> > >
> > > He has broad experience working with many of the core components in
> Kafka
> > > and he has reviewed over 80 PRs. He has also made huge progress
> > addressing
> > > some of our technical debt.
> > >
> > > We appreciate the contributions and we are looking forward to more.
> > > Congrats Manikumar!
> > >
> > > Jason, on behalf of the Apache Kafka PMC
> > >
> >
> >
> > --
> > *Suman*
> > *OlaCabs*
> >
>


Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Rajini Sivaram
Congratulations, Manikumar!

On Thu, Oct 11, 2018 at 6:57 PM Suman B N  wrote:

> Congratulations Manikumar!
>
> On Thu, Oct 11, 2018 at 11:09 PM Jason Gustafson 
> wrote:
>
> > Hi all,
> >
> > The PMC for Apache Kafka has invited Manikumar Reddy as a committer and
> we
> > are
> > pleased to announce that he has accepted!
> >
> > Manikumar has contributed 134 commits including significant work to add
> > support for delegation tokens in Kafka:
> >
> > KIP-48:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> > KIP-249
> > <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+KafkaKIP-249
> >
> > :
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
> >
> > He has broad experience working with many of the core components in Kafka
> > and he has reviewed over 80 PRs. He has also made huge progress
> addressing
> > some of our technical debt.
> >
> > We appreciate the contributions and we are looking forward to more.
> > Congrats Manikumar!
> >
> > Jason, on behalf of the Apache Kafka PMC
> >
>
>
> --
> *Suman*
> *OlaCabs*
>


Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Suman B N
Congratulations Manikumar!

On Thu, Oct 11, 2018 at 11:09 PM Jason Gustafson  wrote:

> Hi all,
>
> The PMC for Apache Kafka has invited Manikumar Reddy as a committer and we
> are
> pleased to announce that he has accepted!
>
> Manikumar has contributed 134 commits including significant work to add
> support for delegation tokens in Kafka:
>
> KIP-48:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> KIP-249
> 
> :
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
>
> He has broad experience working with many of the core components in Kafka
> and he has reviewed over 80 PRs. He has also made huge progress addressing
> some of our technical debt.
>
> We appreciate the contributions and we are looking forward to more.
> Congrats Manikumar!
>
> Jason, on behalf of the Apache Kafka PMC
>


-- 
*Suman*
*OlaCabs*


[jira] [Resolved] (KAFKA-6863) Kafka clients should try to use multiple DNS resolved IP addresses if the first one fails

2018-10-11 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-6863.
---
Resolution: Fixed
  Reviewer: Rajini Sivaram

> Kafka clients should try to use multiple DNS resolved IP addresses if the 
> first one fails
> -
>
> Key: KAFKA-6863
> URL: https://issues.apache.org/jira/browse/KAFKA-6863
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Edoardo Comar
>Assignee: Edoardo Comar
>Priority: Major
> Fix For: 2.1.0
>
>
> Currently Kafka clients resolve a symbolic hostname using
>   {{new InetSocketAddress(String hostname, int port)}}
> which only picks one IP address even if the DNS has multiple records for the 
> hostname, as it calls
>  {{InetAddress.getAllByName(host)[0]}}
> For some environments where the hostnames are mapped by the DNS to multiple 
> IPs, e.g. in clouds where the IPs point to the external load balancers, it 
> would be preferable that the client, on failing to connect to one of the IPs, 
> would try the other ones before giving up the connection.
>  
>  
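
A minimal sketch of the retry-all-addresses behaviour described here
(illustrative, not the actual client patch):

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class MultiIpConnect {
    public static Socket connect(String host, int port, int timeoutMs) throws Exception {
        Exception last = null;
        // Iterate over every DNS record for the host instead of connecting
        // only to getAllByName(host)[0].
        for (InetAddress addr : InetAddress.getAllByName(host)) {
            try {
                Socket socket = new Socket();
                socket.connect(new InetSocketAddress(addr, port), timeoutMs);
                return socket; // first address that accepts the connection wins
            } catch (Exception e) {
                last = e;      // remember the failure and try the next resolved IP
            }
        }
        throw last != null ? last : new IllegalStateException("no addresses for " + host);
    }
}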



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Jason Gustafson
Hi all,

The PMC for Apache Kafka has invited Manikumar Reddy as a committer and we
are
pleased to announce that he has accepted!

Manikumar has contributed 134 commits including significant work to add
support for delegation tokens in Kafka:

KIP-48:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
KIP-249:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient

He has broad experience working with many of the core components in Kafka
and he has reviewed over 80 PRs. He has also made huge progress addressing
some of our technical debt.

We appreciate the contributions and we are looking forward to more.
Congrats Manikumar!

Jason, on behalf of the Apache Kafka PMC


Build failed in Jenkins: kafka-2.1-jdk8 #17

2018-10-11 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-7475 - capture remote address on connection authetication 
errors,

--
[...truncated 2.80 MB...]
:38:
 warning: [deprecation] Serialized in org.apache.kafka.streams.kstream has been 
deprecated
import org.apache.kafka.streams.kstream.Serialized;
   ^
:36:
 warning: [deprecation] Serialized in org.apache.kafka.streams.kstream has been 
deprecated
import org.apache.kafka.streams.kstream.Serialized;
   ^
:90:
 warning: [deprecation] Serialized in org.apache.kafka.streams.kstream has been 
deprecated
.groupByKey(Serialized.with(Serdes.String(), jsonSerde))
^
:90:
 warning: [deprecation] groupByKey(Serialized) in KStream has been 
deprecated
.groupByKey(Serialized.with(Serdes.String(), jsonSerde))
^
  where K,V are type-variables:
K extends Object declared in interface KStream
V extends Object declared in interface KStream
:209:
 warning: [deprecation] Serialized in org.apache.kafka.streams.kstream has been 
deprecated
.groupByKey(Serialized.with(Serdes.String(), new JSONSerde<>()))
^
:209:
 warning: [deprecation] groupByKey(Serialized) in KStream has been 
deprecated
.groupByKey(Serialized.with(Serdes.String(), new JSONSerde<>()))
^
  where K,V are type-variables:
K extends Object declared in interface KStream
V extends Object declared in interface KStream
6 warnings

> Task :streams:examples:processResources NO-SOURCE
> Task :streams:examples:classes
> Task :streams:examples:checkstyleMain
> Task :streams:examples:compileTestJava
> Task :streams:examples:processTestResources NO-SOURCE
> Task :streams:examples:testClasses
> Task :streams:examples:checkstyleTest
> Task :streams:examples:spotbugsMain

> Task :streams:examples:test

org.apache.kafka.streams.examples.wordcount.WordCountProcessorTest > test 
STARTED

org.apache.kafka.streams.examples.wordcount.WordCountProcessorTest > test PASSED

> Task :spotlessScala UP-TO-DATE
> Task :spotlessScalaCheck UP-TO-DATE
> Task :streams:streams-scala:compileJava NO-SOURCE

> Task :streams:streams-scala:compileScala
Pruning sources from previous analysis, due to incompatible CompileSetup.
:382:
 method groupByKey in trait KStream is deprecated: see corresponding Javadoc 
for more information.
inner.groupByKey(serialized)
  ^
:416:
 method groupBy in trait KStream is deprecated: see corresponding Javadoc for 
more information.
inner.groupBy(selector.asKeyValueMapper, serialized)
  ^
:224:
 method groupBy in trait KTable is deprecated: see corresponding Javadoc for 
more information.
inner.groupBy(selector.asKeyValueMapper, serialized)
  ^
:34:
 class Serialized in package kstream is deprecated: see corresponding Javadoc 
for more information.
  def `with`[K, V](implicit keySerde: Serde[K], valueSerde: Serde[V]): 
SerializedJ[K, V] =
   ^
:23:
 class Serialized in package kstream is deprecated: see corresponding Javadoc 
for more information.
  type Serialized[K, V] = org.apache.kafka.streams.kstream.Serialized[K, V]
  

Re: Transitioning from Java 10 to Java 11 in Jenkins

2018-10-11 Thread Ismael Juma
Updated PR jobs, they are now configured in the following way:

https://builds.apache.org/job/kafka-pr-jdk7-scala2.10: Only enabled for
0.10.0, 0.10.1, 0.10.2 branches as Scala 2.10 support was dropped in 0.11.0.
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11: Only enabled for
0.11.0, 1.0, 1.1 branches as Java 7 support was dropped in 2.0 and we have
kafka-pr-jdk7-scala2.10 for 0.10.x branches.
https://builds.apache.org/job/kafka-pr-jdk8-scala2.11: Enabled for all
branches.
https://builds.apache.org/job/kafka-pr-jdk10-scala2.12: Only enabled for
1.0, 1.1 and 2.0 branches. Java 9/10 support was added in 1.0 and
kafka-pr-jdk11-scala2.12 replaced this in 2.1 and newer.
https://builds.apache.org/job/kafka-pr-jdk11-scala2.12: Enabled for trunk,
2.1 and newer release branches. Java 11 support was added in 2.1.

Ismael

On Wed, Oct 10, 2018 at 1:47 PM Ismael Juma  wrote:

> Hi all,
>
> Java 11 was released recently and Java 10 is no longer supported. Java 11
> is the first LTS release since the new Java release cadence was announced.
> As of today, all of our tests pass with Java 11 so it's time to transition
> Jenkins builds to use Java 11 instead of Java 10. I have updated the trunk
> job[1] and will update the PR job in a couple of days to give time for PRs
> to be rebased to include the required commits.
>
> Let me know if you have any questions.
>
> Ismael
>
> [1] https://builds.apache.org/job/kafka-trunk-jdk11/
>


[jira] [Resolved] (KAFKA-7494) Update Jenkins jobs to use Java 11 instead of Java 10

2018-10-11 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-7494.

Resolution: Fixed

Updated PR jobs, they are now configured in the following way:

* https://builds.apache.org/job/kafka-pr-jdk7-scala2.10: Only enabled for 
0.10.0, 0.10.1, 0.10.2 branches as Scala 2.10 support was dropped in 0.11.0.
* https://builds.apache.org/job/kafka-pr-jdk7-scala2.11: Only enabled for 
0.11.0, 1.0, 1.1 branches as Java 7 support was dropped in 2.0 and we have 
kafka-pr-jdk7-scala2.10 for 0.10.x branches.
* https://builds.apache.org/job/kafka-pr-jdk8-scala2.11: Enabled for all 
branches.
* https://builds.apache.org/job/kafka-pr-jdk10-scala2.12: Only enabled for 1.0, 
1.1 and 2.0 branches. Java 9/10 support was added in 1.0 and 
kafka-pr-jdk11-scala2.12 replaced this in 2.1 and newer.
* https://builds.apache.org/job/kafka-pr-jdk11-scala2.12: Enabled for trunk, 
2.1 and newer release branches. Java 11 support was added in 2.1.

> Update Jenkins jobs to use Java 11 instead of Java 10
> -
>
> Key: KAFKA-7494
> URL: https://issues.apache.org/jira/browse/KAFKA-7494
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Major
>
> Will update kafka-trunk first and the PR job in a few days to allow people to 
> include the commits needed for it to work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7475) print the actual cluster bootstrap address on authentication failures

2018-10-11 Thread radai rosenblatt (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

radai rosenblatt resolved KAFKA-7475.
-
   Resolution: Fixed
 Reviewer: Rajini Sivaram
Fix Version/s: 2.1.0

> print the actual cluster bootstrap address on authentication failures
> -
>
> Key: KAFKA-7475
> URL: https://issues.apache.org/jira/browse/KAFKA-7475
> Project: Kafka
>  Issue Type: Improvement
>Reporter: radai rosenblatt
>Assignee: radai rosenblatt
>Priority: Major
> Fix For: 2.1.0
>
>
> Currently, when a Kafka client fails to connect to a cluster, users see 
> something like this:
> {code}
> Connection to node -1 terminated during authentication. This may indicate 
> that authentication failed due to invalid credentials. 
> {code}
> That log line is mostly useless for identifying which (of potentially many) 
> Kafka clients is having issues and which Kafka cluster it is having issues 
> with. It would be nice to record the remote host/port.
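A minimal sketch of the enriched log line (logger wiring and method names are 
illustrative):
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AuthFailureLogging {
    private static final Logger log = LoggerFactory.getLogger(AuthFailureLogging.class);

    // Carry the remote host/port into the message so the failing cluster
    // (and not just the node id) is identifiable from the log.
    static void logAuthTermination(int nodeId, String remoteAddress) {
        log.info("Connection to node {} ({}) terminated during authentication. "
                + "This may indicate that authentication failed due to invalid credentials.",
                nodeId, remoteAddress);
    }
}
{code}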



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: New release branch 2.1.0

2018-10-11 Thread Edoardo Comar
Thanks Rajini for the PR review,

as agreed we have updated KIP-302 to match the code 
so the new config value for the entry client.dns.lookup is 
"use_all_dns_ips"

(underscores instead of dot separators)
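
For anyone updating client code to the agreed spelling, a minimal sketch 
(bootstrap address illustrative):

import java.util.Properties;

public class DnsLookupConfig {
    public static Properties clientProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9092");
        props.put("client.dns.lookup", "use_all_dns_ips"); // underscores, not dots
        return props;
    }
}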
--

Edoardo Comar

IBM Event Streams
IBM UK Ltd, Hursley Park, SO21 2JN




From:   Rajini Sivaram 
To: Edoardo Comar , dev 
Cc: "Skrzypek, Jonathan" , Dong Lin 
, Mickael Maison 
Date:   11/10/2018 15:03
Subject:Re: New release branch 2.1.0



Copying this discussion on the dev mailing list as well.

So the consensus was to use "_" instead of "." in the config values. Can 
we update both the KIPs to reflect this?

As Dong has agreed for these two PRs to go into 2.1.0, I will review and 
merge them to trunk and 2.1 once the PR builds complete.


On Thu, Oct 11, 2018 at 12:49 PM Edoardo Comar  wrote:
Thanks Rajini! 

Going to fix / rebase asap. 

Commented on the Config in https://github.com/apache/kafka/pull/4485/files 
and waiting for consensus on '.' vs '_' - 

Maybe if you cast your vote, Rajini, you can break the tie , and gain one 
friend and one foe :-) 
... 
cheers 
-- 
Edoardo Comar 
IBM Event Streams 
IBM UK Ltd, Hursley Park, SO21 2JN 




From:Rajini Sivaram  
To:Dong Lin  
Cc:"Skrzypek, Jonathan" , Edoardo Comar 
, Mickael Maison  
Date:11/10/2018 12:11 
Subject:Re: New release branch 2.1.0 



Hi all, 

I have reviewed both the DNS related PRs. They are both minor KIPs and not 
particularly risky. There is a config naming inconsistency which needs to 
be sorted out and build conflicts to be resolved (and a few minor comments 
to be addressed). If all that is sorted out by tomorrow, then I can do 
another review and merge both to 2.1.0 and trunk, ideally by tomorrow. 

Regards, 

Rajini 



On Thu, Oct 11, 2018 at 10:22 AM Rajini Sivaram  
wrote: 
Hi Jonathan/Dong, 

I will review both the PRs today to see where we are. 

Thanks, 

Rajini 

On Wed, Oct 10, 2018 at 7:12 PM Dong Lin  wrote: 
Hey Jonathan, 

If these two PRs can be completed before Oct 15 (the Code Freeze date) and 
Rajini thinks these two patches are safe to be committed, then we can 
include them in the 2.1.0 release. 

Thanks, 
Dong 

On Wed, Oct 10, 2018 at 2:53 AM Skrzypek, Jonathan <
jonathan.skrzy...@gs.com> wrote: 
Hi,

Could https://github.com/apache/kafka/pull/4485 be included in this 
release ?
I see that https://github.com/apache/kafka/pull/4987 is in the list, but 
the two KIPs are kind of linked to each other.

I also see that the client parameters in both jiras don't follow same 
convention (KIP-235 goes with "." like most settings in 
CommonClientConfig, whereas KIP-302 went for "_" as Rajini pointed out in 
the PR).

I think both PRs should be merged or none, to avoid inconsistency issues 
in a future release.

Jonathan Skrzypek


-Original Message-
From: Dong Lin [mailto:lindon...@gmail.com]
Sent: 05 October 2018 01:22
To: dev; Users; kafka-clients
Subject: New release branch 2.1.0

Hello Kafka developers and users,

As promised in the release plan
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044,
we now have a release branch for 2.1.0 release. Trunk will soon be bumped 
to 2.2.0-SNAPSHOT.

I'll be going over the JIRAs to move every non-blocker from this release 
to the next release.

From this point, most changes should go to trunk. Blockers (existing and 
new that we discover while testing the release) will be double-committed.
Please discuss with your reviewer whether your PR should go to trunk or to
trunk+release so they can merge accordingly.

Please help us test the release!

Thanks!
Dong



Your Personal Data: We may collect and process information about you that 
may be subject to data protection laws. For more information about how we 
use and disclose your personal data, how we protect your information, our 
legal basis to use your information, your rights and who you can contact, 
please refer to: www.gs.com/privacy-notices<
http://www.gs.com/privacy-notices> 


Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU




Re: [VOTE] KIP-302 - Enable Kafka clients to use all DNS resolved IP addresses

2018-10-11 Thread Edoardo Comar
Thanks Rajini for the PR review,

as agreed we have updated KIP-302 to match the code 
so the new config value for the entry client.dns.lookup is 
"use_all_dns_ips"

(underscores instead of dot separators)
cheers
Edo
--

Edoardo Comar

IBM Event Streams
IBM UK Ltd, Hursley Park, SO21 2JN




From:   Edoardo Comar 
To: dev@kafka.apache.org
Date:   25/09/2018 09:41
Subject:Re: [VOTE] KIP-302 - Enable Kafka clients to use all DNS 
resolved IP addresses



Many thanks to all that voted,

This KIP has passed the vote - within the COB of 24/09/2018 - with 3 
binding votes (Rajini, Damian, Gwen) and 4 non-binding votes (Eno, 
Jonathan and implicitly :-) Mickael and me)

Edo
--

Edoardo Comar

IBM Event Streams
IBM UK Ltd, Hursley Park, SO21 2JN




From:   Gwen Shapira 
To: dev 
Date:   24/09/2018 19:13
Subject:Re: [VOTE] KIP-302 - Enable Kafka clients to use all DNS 
resolved IP addresses



+1 (binding)

On Tue, Sep 18, 2018 at 7:51 AM Edoardo Comar  wrote:

> Hi All,
>
> I'd like to start the vote on KIP-302:
>
>
> 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-302+-+Enable+Kafka+clients+to+use+all+DNS+resolved+IP+addresses


>
> We'd love to get this in 2.1.0
> Kip freeze is just a few days away ... please cast your votes  :-):-)
>
> Thanks!!
> Edo
>
> --
>
> Edoardo Comar
>
> IBM Message Hub
>
> IBM UK Ltd, Hursley Park, SO21 2JN
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 
3AU
>


-- 
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter <
https://twitter.com/ConfluentInc

> | blog
<
http://www.confluent.io/blog

>



Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU





Build failed in Jenkins: kafka-trunk-jdk8 #3101

2018-10-11 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 379211134740268b570fc8edd59ae78df0dffee9
remote: Enumerating objects: 4736, done.
remote: Counting objects: [...progress output truncated...]

Re: New release branch 2.1.0

2018-10-11 Thread Rajini Sivaram
Copying this discussion on the dev mailing list as well.

So the consensus was to use "_" instead of "." in the config values. Can we
update both the KIPs to reflect this?

As Dong has agreed for these two PRs to go into 2.1.0, I will review and
merge them to trunk and 2.1 once the PR builds complete.


On Thu, Oct 11, 2018 at 12:49 PM Edoardo Comar  wrote:

> Thanks Rajini!
>
> Going to fix / rebase asap.
>
> Commented on the Config in https://github.com/apache/kafka/pull/4485/files
> and waiting for consensus on '.' vs '_' -
>
> Maybe if you cast your vote, Rajini, you can break the tie , and gain one
> friend and one foe :-)
> ...
> cheers
> --
> Edoardo Comar
> IBM Event Streams
> IBM UK Ltd, Hursley Park, SO21 2JN
>
>
>
>
> From:Rajini Sivaram 
> To:Dong Lin 
> Cc:"Skrzypek, Jonathan" , Edoardo Comar
> , Mickael Maison 
> Date:11/10/2018 12:11
> Subject:Re: New release branch 2.1.0
> --
>
>
>
> Hi all,
>
> I have reviewed both the DNS related PRs. They are both minor KIPs and not
> particularly risky. There is a config naming inconsistency which needs to
> be sorted out and build conflicts to be resolved (and a few minor comments
> to be addressed). If all that is sorted out by tomorrow, then I can do
> another review and merge both to 2.1.0 and trunk, ideally by tomorrow.
>
> Regards,
>
> Rajini
>
>
>
> On Thu, Oct 11, 2018 at 10:22 AM Rajini Sivaram <*rajinisiva...@gmail.com*
> > wrote:
> Hi Jonathan/Dong,
>
> I will review both the PRs today to see where we are.
>
> Thanks,
>
> Rajini
>
> On Wed, Oct 10, 2018 at 7:12 PM Dong Lin <*lindon...@gmail.com*
> > wrote:
> Hey Jonathan,
>
> If these two PRs can be completed before Oct 15 (the Code Freeze date) and
> Rajini thinks these two patches are safe to be committed, then we can
> include them in the 2.1.0 release.
>
> Thanks,
> Dong
>
> On Wed, Oct 10, 2018 at 2:53 AM Skrzypek, Jonathan <
> *jonathan.skrzy...@gs.com* > wrote:
> Hi,
>
> Could *https://github.com/apache/kafka/pull/4485*
>  be included in this release ?
> I see that *https://github.com/apache/kafka/pull/4987*
>  is in the list, but the two
> KIPs are kind of linked to each other.
>
> I also see that the client parameters in both jiras don't follow same
> convention (KIP-235 goes with "." like most settings in CommonClientConfig,
> whereas KIP-302 went for "_" as Rajini pointed out in the PR).
>
> I think both PRs should be merged or none, to avoid inconsistency issues
> in a future release.
>
> Jonathan Skrzypek
>
>
> -Original Message-
> From: Dong Lin [mailto:*lindon...@gmail.com* ]
> Sent: 05 October 2018 01:22
> To: dev; Users; kafka-clients
> Subject: New release branch 2.1.0
>
> Hello Kafka developers and users,
>
> As promised in the release plan
> *https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044*
> 
> ,
> we now have a release branch for 2.1.0 release. Trunk will soon be bumped
> to 2.2.0-SNAPSHOT.
>
> I'll be going over the JIRAs to move every non-blocker from this release
> to the next release.
>
> From this point, most changes should go to trunk. Blockers (existing and
> new that we discover while testing the release) will be double-committed.
> Please discuss with your reviewer whether your PR should go to trunk or to
> trunk+release so they can merge accordingly.
>
> Please help us test the release!
>
> Thanks!
> Dong
>
> 
>
> Your Personal Data: We may collect and process information about you that
> may be subject to data protection laws. For more information about how we
> use and disclose your personal data, how we protect your information, our
> legal basis to use your information, your rights and who you can contact,
> please refer to: *www.gs.com/privacy-notices*
> <*http://www.gs.com/privacy-notices*
> >
>
>
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>


Build failed in Jenkins: kafka-trunk-jdk8 #3100

2018-10-11 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 379211134740268b570fc8edd59ae78df0dffee9
remote: Enumerating objects: 4736, done.
remote: Counting objects: [...progress output truncated...]

Build failed in Jenkins: kafka-trunk-jdk8 #3099

2018-10-11 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 3743, done.
remote: Counting objects: [...progress output truncated...]

[jira] [Created] (KAFKA-7497) Kafka Streams should support self-join on streams

2018-10-11 Thread Robin Moffatt (JIRA)
Robin Moffatt created KAFKA-7497:


 Summary: Kafka Streams should support self-join on streams
 Key: KAFKA-7497
 URL: https://issues.apache.org/jira/browse/KAFKA-7497
 Project: Kafka
  Issue Type: Bug
Reporter: Robin Moffatt


ref [https://github.com/confluentinc/ksql/issues/2030]

 

There are valid reasons to want to join a stream to itself, but Kafka Streams 
does not currently support this ({{Invalid topology: Topic foo has already been 
registered by another source.}}). Performing the join requires creating a 
second stream as a clone of the first and then joining the two. This is a 
clunky workaround and results in unnecessary duplication of data.
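
A minimal sketch of the workaround described above (topic names, value types, 
and the join window are illustrative):
{code}
import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;

public class SelfJoinWorkaround {
    public static void build() {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> stream = builder.stream("foo");
        // builder.stream("foo") cannot be registered a second time, so the
        // data is copied to another topic and the original is joined with the copy.
        stream.to("foo-copy");
        KStream<String, String> copy = builder.stream("foo-copy");
        stream.join(copy,
                (left, right) -> left + "|" + right, // combine matching records
                JoinWindows.of(Duration.ofMinutes(5)));
    }
}
{code}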



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-2.1-jdk8 #16

2018-10-11 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-280: Enhanced log compaction

2018-10-11 Thread Luís Cabral
 Hi Matthias,

How can this be done?

Kind Regards,
Luis
 
On Sunday, September 30, 2018, 9:10:01 PM GMT+2, Matthias J. Sax 
 wrote:  
 
 Luis,

What is the status of this KIP?

I tend to agree that introducing the feature only globally might be
less useful (I would assume that people want to turn it on on a
per-topic basis). As I am not familiar with the corresponding code, I
cannot judge the complexity of adding topic-level configs; however, it
seems worth including in the KIP.


-Matthias



On 9/21/18 1:59 PM, Bertus Greeff wrote:
> Someone pointed out to me that my scenario is also resolved by using Kafka 
> transactions.  Zombie fencing, which is essentially my scenario, was one of 
> the scenarios that transactions were designed to solve.  I was going to use 
> the ideas of this KIP to solve it, but using transactions seems even better 
> because out-of-order messages never even make it into the topic.  They are 
> blocked by the broker.
> 
> -Original Message-
> From: Guozhang Wang  
> Sent: Saturday, September 1, 2018 11:33 AM
> To: dev 
> Subject: Re: [DISCUSS] KIP-280: Enhanced log compaction
> 
> Hello Luis,
> 
> Thanks for your thoughtful responses, here are my two cents:
> 
> 1) I think having the new configs with per-topic granularity would not 
> introduce much memory overhead or logic complexity, as all you need is to 
> remember this at the topic metadata cache. If I've missed some details about 
> the complexity, could you elaborate a bit more?
> 
> 2) I agree with you: the current `ConfigDef.Validator` only scopes over the 
> validated config value itself, and hence cannot depend on another config.
> 
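A minimal sketch of that single-value scope (strategy names illustrative): 
ensureValid receives only the one name/value pair being validated, with no 
handle to the rest of the configuration.

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigException;

public class CompactionStrategyValidator implements ConfigDef.Validator {
    // Only this config's name and value are visible here, so a rule like
    // "X is invalid if Y is so-and-such" cannot be expressed.
    @Override
    public void ensureValid(String name, Object value) {
        String strategy = (String) value;
        if (!"offset".equals(strategy) && !"timestamp".equals(strategy)) {
            throw new ConfigException(name, value, "must be 'offset' or 'timestamp'");
        }
    }
}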
> 4) I think Jun's point is that since we need the latest message in the log 
> segment for the timestamp tracking, we cannot actually delete it: with the 
> offset-based-only policy this is naturally guaranteed, but with another 
> policy it is not guaranteed to never be compacted away, and hence we need 
> to special-handle this case and not delete it.
> 
> 
> 
> Guozhang
> 
> 
> 
> Guozhang
> 
> 
> On Wed, Aug 29, 2018 at 9:25 AM, Luís Cabral 
> wrote:
> 
>> Hi all,
>>
>> Since there has been a rejuvenated interest in this KIP, it felt 
>> better to downgrade it back down to [DISCUSSION], as we aren't really 
>> voting on it anymore.
>> I'll try to address the currently pending questions on the following 
>> points, so please bear with me while we go through them all:
>>
>> 1) Configuration: Topic vs Global
>>
>> Here we all agree that having a configuration per-topic is the best 
>> option. However, this is not possible with the current design of the 
>> compaction solution. Yes, it is true that "some" properties that 
>> relate to compaction are configurable per-topic, but those are only 
>> the properties that act outside(!) of the actual compaction logic, 
>> such as configuring the start-compaction trigger with "ratio" or 
>> compaction duration with "lag.ms".
>> This logic can, of course, be re-designed to suit our wishes, but this 
>> is not a direct effort, and if we have spent months arguing about the 
>> extra 8 bytes per record, for sure we would spend another few dozen 
>> months discussing the memory impact that doing this change to the 
>> properties will invariably have.
>> As such, I will limit this KIP to ONLY have these configurations globally.
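
For reference, a minimal sketch of setting the per-topic knobs mentioned 
above via AdminClient (topic name and values are illustrative):

import java.util.Arrays;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class TopicCompactionConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "foo");
            Config config = new Config(Arrays.asList(
                    new ConfigEntry("min.cleanable.dirty.ratio", "0.5"), // start-compaction ratio
                    new ConfigEntry("min.compaction.lag.ms", "60000"))); // compaction lag
            admin.alterConfigs(Collections.singletonMap(topic, config)).all().get();
        }
    }
}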
>>
>> 2) Configuration: Fail-fast vs Fallback
>>
>>
>> Ideally, I would also like to prevent the application to start if the 
>> configuration is somehow invalid.
>> However (another 'however'), the way the configuration is handled 
>> prevents adding dependencies between them, so we can't add logic that 
>> says "configuration X is invalid if configuration Y is so-and-such".
>> Again, this can be re-designed to add this feature to the 
>> configuration logic, but it would again be a big change just by 
>> itself, so this KIP is again limited to use ONLY what is already in place.
>>
>> 3) Documenting the memory impact on the KIP
>>
>> This is now added to the KIP, though the topic is more complicated 
>> than 'memory impact'. E.g., this change doesn't translate into an actual 
>> memory impact; it just means that each compaction execution will 
>> potentially encompass fewer records.
>>
>> 4) Documenting how we deal with the last message in the log
>>
>> I have 2 interpretations for this request: "the last message in the log" 
>> or "the last message with a shared key in the log".
>> For the former: there is no change to the logic for how the last 
>> message is handled. Only the "tail" gets compacted, so the "head" 
>> (which includes the last message) still retains it.
>>
>> 5) Documenting how the key deletion will be handled
>>
>> I'm having some trouble understanding this one; do you mean how keys 
>> are deleted in general, or?
>>
>> Cheers,
>> Luis Cabral
>>
>>    On Friday, August 24, 2018, 1:54:54 AM GMT+2, Jun Rao 
>> 
>> wrote:
>>
>>  Hi, Luis,
>>
>> Thanks for the