Build failed in Jenkins: kafka-trunk-jdk11 #710

2019-07-22 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8454; Add Java AdminClient Interface (KIP-476) (#7087)

[jason] KAFKA-8678; Fix leave group protocol bug in throttling and error

--
[...truncated 2.57 MB...]

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
timestampToConnectWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
timestampToConnectWithDefaultValue PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnectOptional STARTED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnectOptional PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnectWithDefaultValue 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnectWithDefaultValue 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullValueToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullValueToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > decimalToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > stringHeaderToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > stringHeaderToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonNonStringKeys STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonNonStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > longToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > longToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > mismatchSchemaJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > mismatchSchemaJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectNonStringKeys 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectNonStringKeys 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > bytesToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue PASSED

org.apache.kafka.connect.json.JsonConverterTest > 

Re: JIRA and KIP contributor permissions

2019-07-22 Thread Matthias J. Sax
Hi Alexandre,

I added you to the list of contributors in JIRA, so you can self-assign
tickets. However, I did not find any corresponding wiki account. Note that both
are independent accounts and you might need to create a wiki account
first (and share your ID so we can grant write permission).


-Matthias

On 7/22/19 1:16 PM, Alexandre Dupriez wrote:
> Hello Community,
> 
> In order to start contributing to Apache Kafka project, could I please
> request contributor access to JIRA and be granted write permissions to the
> Kafka wiki?
> 
> JIRA username: adupriez
> Committer email: alexandre.dupr...@amazon.com 
> 
> Thank you in advance,
> Alexandre
> 





Jenkins build is back to normal : kafka-2.2-jdk8 #154

2019-07-22 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #3807

2019-07-22 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8454; Add Java AdminClient Interface (KIP-476) (#7087)

[jason] KAFKA-8678; Fix leave group protocol bug in throttling and error

--
[...truncated 2.56 MB...]
org.apache.kafka.connect.transforms.RegexRouterTest > identity PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > addPrefix STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > addPrefix PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > addSuffix STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > addSuffix PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > slice STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > slice PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > staticReplacement STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > staticReplacement PASSED

org.apache.kafka.connect.transforms.FlattenTest > testKey STARTED

org.apache.kafka.connect.transforms.FlattenTest > testKey PASSED

org.apache.kafka.connect.transforms.FlattenTest > 
testOptionalAndDefaultValuesNested STARTED

org.apache.kafka.connect.transforms.FlattenTest > 
testOptionalAndDefaultValuesNested PASSED

org.apache.kafka.connect.transforms.FlattenTest > topLevelMapRequired STARTED

org.apache.kafka.connect.transforms.FlattenTest > topLevelMapRequired PASSED

org.apache.kafka.connect.transforms.FlattenTest > topLevelStructRequired STARTED

org.apache.kafka.connect.transforms.FlattenTest > topLevelStructRequired PASSED

org.apache.kafka.connect.transforms.FlattenTest > testOptionalFieldStruct 
STARTED

org.apache.kafka.connect.transforms.FlattenTest > testOptionalFieldStruct PASSED

org.apache.kafka.connect.transforms.FlattenTest > testOptionalNestedStruct 
STARTED

org.apache.kafka.connect.transforms.FlattenTest > testOptionalNestedStruct 
PASSED

org.apache.kafka.connect.transforms.FlattenTest > testNestedMapWithDelimiter 
STARTED

org.apache.kafka.connect.transforms.FlattenTest > testNestedMapWithDelimiter 
PASSED

org.apache.kafka.connect.transforms.FlattenTest > testOptionalFieldMap STARTED

org.apache.kafka.connect.transforms.FlattenTest > testOptionalFieldMap PASSED

org.apache.kafka.connect.transforms.FlattenTest > testUnsupportedTypeInMap 
STARTED

org.apache.kafka.connect.transforms.FlattenTest > testUnsupportedTypeInMap 
PASSED

org.apache.kafka.connect.transforms.FlattenTest > testOptionalStruct STARTED

org.apache.kafka.connect.transforms.FlattenTest > testOptionalStruct PASSED

org.apache.kafka.connect.transforms.FlattenTest > testNestedStruct STARTED

org.apache.kafka.connect.transforms.FlattenTest > testNestedStruct PASSED

org.apache.kafka.connect.transforms.HoistFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.HoistFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.HoistFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.HoistFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeDateRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeDateRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanTrue STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanTrue PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidTargetType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidTargetType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
testConfigMixWholeAndFieldTransformation STARTED

org.apache.kafka.connect.transforms.CastTest > 
testConfigMixWholeAndFieldTransformation PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessUnsupportedType STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessUnsupportedType PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeySchemaless 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeySchemaless 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 PASSED

org.apache.kafka.connect.transforms.CastTest > 

Re: Stopping All Tasks When a New Connector Added

2019-07-22 Thread Liu Luying
Hi Konstantine,
My username is Echolly, with the full name of Luying Liu.

Best,
Luying Liu

From: Konstantine Karantasis 
Sent: Monday, July 22, 2019 9:30
To: dev@kafka.apache.org 
Subject: Re: Stopping All Tasks When a New Connector Added

Liu feel free to share your jira account id on a separate email, so one of
the committers can add you to the project.
Then you or someone else will be able to assign this ticket to you.

I'll review the fix some time this week.

Thanks!
Konstantine

On Mon, Jul 22, 2019 at 5:13 AM Liu Luying  wrote:

> Hi Adam,
> I have already opened a JIRA ticket(KAFKA-8676<
> https://issues.apache.org/jira/browse/KAFKA-8676>) and a PR(with the
> title of KAFKA-8676)
> for this.
>
> Best,
> Luying
> 
> From: Adam Bellemare 
> Sent: Friday, July 19, 2019 10:36
> To: dev@kafka.apache.org 
> Subject: Re: Stopping All Tasks When a New Connector Added
>
> Hi Luying
>
> Would you be willing to make a PR to address this? It seems that you have
> already done most of the work.
>
> Thanks
> Adam
>
> On Thu, Jul 18, 2019 at 11:00 PM Liu Luying  wrote:
>
> > Hi all,
> > I have noticed that Kafka Connect 2.3.0 will stop all existing tasks and
> > then start all the tasks, including the new tasks and the existing ones,
> > when adding a new connector or changing a connector configuration. However,
> > I do not think this is necessary. Only the new connector and its tasks need
> > to be started, since rebalancing can be applied to both running and
> > suspended tasks.
> >
> > The problem lies in the
> > KafkaConfigBackingStore.ConsumeCallback.onCompletion() function (line 623
> > in KafkaConfigBackingStore.java). When record.key() startsWith "commit-",
> > the tasks are being committed and the deferred tasks are processed. Some
> > new tasks are added to the 'updatedTasks' (line 623 in
> > KafkaConfigBackingStore.java), and the 'updatedTasks' are sent to the
> > updateListener to complete the task configuration update (line 638 in
> > KafkaConfigBackingStore.java). In the updateListener.onTaskConfigUpdate()
> > function, the 'updatedTasks' are added to the member variable
> > 'taskConfigUpdates' of class DistributedHerder (line 1295 in
> > DistributedHerder.java).
> >
> > In another thread, 'taskConfigUpdates' is copied to
> > 'taskConfigUpdatesCopy' in updateConfigsWithIncrementalCooperative() (line
> > 445 in DistributedHerder.java). The 'taskConfigUpdatesCopy' is subsequently
> > used in processTaskConfigUpdatesWithIncrementalCooperative() (line 345 in
> > DistributedHerder.java). This function then uses 'taskConfigUpdatesCopy'
> > to find the connectors to stop (line 492 in DistributedHerder.java), and
> > finally gets the tasks to stop, which are all the tasks. The worker thread
> > does the actual job of stopping them (line 499 in DistributedHerder.java).
> >
> > In the original code, all the tasks are added to the 'updatedTasks' (line
> > 623 in KafkaConfigBackingStore.java), which means all the active connectors
> > are in the 'connectorsWhoseTasksToStop' set, and all the tasks are in the
> > 'tasksToStop' list. This causes the stops, and of course the subsequent
> > restarts, of all the tasks.
> >
> > So, adding only the 'deferred' tasks to the 'updatedTasks' can avoid the
> > stops and restarts of unnecessary tasks.
> >
> > Best,
> > Luying
> >
> >
>


Re: Jira Cleanup

2019-07-22 Thread Guozhang Wang
Thanks Sönke! I just made a quick pass on those tickets as well, and I think
your assessments are right.


Guozhang


On Fri, Jul 19, 2019 at 4:09 AM Sönke Liebau
 wrote:

> All,
>
> I left a few comments on some old but still open jiras in an attempt to
> clean up a little bit.
>
> Since probably no one would notice these comments I thought I'd quickly
> list them here to give people a chance to check on them:
>
> KAFKA-822 : Reassignment
> of partitions needs a cleanup
> KAFKA-1016 : Broker
> should limit purgatory size
> KAFKA-1099 :
> StopReplicaRequest
> and StopReplicaResponse should also carry the replica ids
> KAFKA- : Broker
> prematurely accepts TopicMetadataRequests on startup
> KAFKA-1234:  All
> kafka-run-class.sh to source in user config file (to set env vars like
> KAFKA_OPTS)
>
>  I'll wait a few days for objections and then close these issues if none
> are forthcoming.
>
> Best regards,
> Sönke
>


-- 
-- Guozhang


Re: [DISCUSS] KIP-444: Augment Metrics for Kafka Streams

2019-07-22 Thread Guozhang Wang
Thanks everyone for your inputs, I've updated the wiki page accordingly.

@Bruno: please let me know if you have any further thoughts per my replies
above.


Guozhang


On Mon, Jul 22, 2019 at 6:30 PM Guozhang Wang  wrote:

> Thanks Boyang,
>
> I've thought about exposing time via metrics in Streams. The tricky part
> though is which layer of time we should expose: right now we have
> task-level and partition-level stream time (what you suggested), and also
> some processor internally maintain their own observed time. Today we are
> still trying to get a clear and simple way of exposing a single time
> concept for users to reason about their application's progress. So before
> we come up with a good solution I'd postpone adding it in a future KIP.
>
>
> Guozhang
>
>
> On Thu, Jul 18, 2019 at 1:21 PM Boyang Chen 
> wrote:
>
>> I mean the partition time.
>>
>> On Thu, Jul 18, 2019 at 11:29 AM Guozhang Wang 
>> wrote:
>>
>> > Hi Boyang,
>> >
>> > What do you mean by `per partition latency`?
>> >
>> > Guozhang
>> >
>> > On Mon, Jul 1, 2019 at 9:28 AM Boyang Chen 
>> > wrote:
>> >
>> > > Hey Guozhang,
>> > >
>> > > do we plan to add per partition latency in this KIP?
>> > >
>> > > On Mon, Jul 1, 2019 at 7:08 AM Bruno Cadonna 
>> wrote:
>> > >
>> > > > Hi Guozhang,
>> > > >
>> > > > Thank you for the KIP.
>> > > >
>> > > > 1) As far as I understand, the StreamsMetrics interface is there for
>> > > > user-defined processors. Would it make sense to also add a method to
>> > > > the interface to specify a sensor that records skipped records?
>> > > >
>> > > > 2) What are the semantics of active-task-process and
>> > standby-task-process
>> > > >
>> > > > 3) How do dropped-late-records and expired-window-record-drop relate
>> > > > to each other? I guess the former is for records that fall outside
>> the
>> > > > grace period and the latter is for records that are processed after
>> > > > the retention period of the window. Is this correct?
>> > > >
>> > > > 4) Is there an actual difference between skipped and dropped
>> records?
>> > > > If not, shall we unify the terminology?
>> > > >
>> > > > 5) What happens with removed metrics when the user sets the version
>> of
>> > > > "built.in.metrics.version" to 2.2-
>> > > >
>> > > > Best,
>> > > > Bruno
>> > > >
>> > > > On Thu, Jun 27, 2019 at 6:11 PM Guozhang Wang 
>> > > wrote:
>> > > > >
>> > > > > Hello folks,
>> > > > >
>> > > > > As 2.3 is released now, I'd like to bump up this KIP discussion
>> again
>> > > for
>> > > > > your reviews.
>> > > > >
>> > > > >
>> > > > > Guozhang
>> > > > >
>> > > > >
>> > > > > On Thu, May 23, 2019 at 4:44 PM Guozhang Wang > >
>> > > > wrote:
>> > > > >
>> > > > > > Hello Patrik,
>> > > > > >
>> > > > > > Since we are rolling out 2.3 and everyone is busy with the
>> release
>> > > now
>> > > > > > this KIP does not have much discussion involved yet and will
>> slip
>> > > into
>> > > > the
>> > > > > > next release cadence.
>> > > > > >
>> > > > > > This KIP itself contains several parts itself: 1. refactoring
>> the
>> > > > existing
>> > > > > > metrics hierarchy to cleanup some redundancy and also get more
>> > > > clarity; 2.
>> > > > > > add instance-level metrics like rebalance and state metrics, as
>> > well
>> > > as
>> > > > > > other static metrics.
>> > > > > >
>> > > > > >
>> > > > > > Guozhang
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > > On Thu, May 23, 2019 at 5:34 AM Patrik Kleindl <
>> pklei...@gmail.com
>> > >
>> > > > wrote:
>> > > > > >
>> > > > > >> Hi Guozhang
>> > > > > >> Thanks for the KIP, this looks very helpful.
>> > > > > >> Could you please provide more detail on the metrics planned for
>> > the
>> > > > state?
>> > > > > >> We were just considering how to implement this ourselves
>> because
>> > we
>> > > > need
>> > > > > >> to
>> > > > > >> track the history of stage changes.
>> > > > > >> The idea was to have an accumulated "seconds in state x" metric
>> > for
>> > > > every
>> > > > > >> state.
>> > > > > >> The new rebalance metric might solve part of our use case, but
>> it
>> > is
>> > > > > >> interesting what you have planned for the state metric.
>> > > > > >> best regards
>> > > > > >> Patrik
>> > > > > >>
>> > > > > >> On Fri, 29 Mar 2019 at 18:56, Guozhang Wang <
>> wangg...@gmail.com>
>> > > > wrote:
>> > > > > >>
>> > > > > >> > Hello folks,
>> > > > > >> >
>> > > > > >> > I'd like to propose the following KIP to improve the Kafka
>> > Streams
>> > > > > >> metrics
>> > > > > >> > mechanism to users. This includes 1) a minor change in the
>> > public
>> > > > > >> > StreamsMetrics API, and 2) a major cleanup on the Streams'
>> own
>> > > > built-in
>> > > > > >> > metrics hierarchy.
>> > > > > >> >
>> > > > > >> > Details can be found here:
>> > > > > >> >
>> > > > > >> >
>> > > > > >> >
>> > > > > >>
>> > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-444%3A+Augment+metrics+for+Kafka+Streams
>> > > > > >> >
>> > > > > >> > I'd love to hear your 

Re: [DISCUSS] KIP-444: Augment Metrics for Kafka Streams

2019-07-22 Thread Guozhang Wang
Thanks Boyang,

I've thought about exposing time via metrics in Streams. The tricky part,
though, is which layer of time we should expose: right now we have
task-level and partition-level stream time (what you suggested), and also
some processors internally maintain their own observed time. Today we are
still trying to find a clear and simple way of exposing a single time
concept for users to reason about their application's progress. So before
we come up with a good solution, I'd postpone adding it to a future KIP.


Guozhang


On Thu, Jul 18, 2019 at 1:21 PM Boyang Chen 
wrote:

> I mean the partition time.
>
> On Thu, Jul 18, 2019 at 11:29 AM Guozhang Wang  wrote:
>
> > Hi Boyang,
> >
> > What do you mean by `per partition latency`?
> >
> > Guozhang
> >
> > On Mon, Jul 1, 2019 at 9:28 AM Boyang Chen 
> > wrote:
> >
> > > Hey Guozhang,
> > >
> > > do we plan to add per partition latency in this KIP?
> > >
> > > On Mon, Jul 1, 2019 at 7:08 AM Bruno Cadonna 
> wrote:
> > >
> > > > Hi Guozhang,
> > > >
> > > > Thank you for the KIP.
> > > >
> > > > 1) As far as I understand, the StreamsMetrics interface is there for
> > > > user-defined processors. Would it make sense to also add a method to
> > > > the interface to specify a sensor that records skipped records?
> > > >
> > > > 2) What are the semantics of active-task-process and
> > standby-task-process
> > > >
> > > > 3) How do dropped-late-records and expired-window-record-drop relate
> > > > to each other? I guess the former is for records that fall outside
> the
> > > > grace period and the latter is for records that are processed after
> > > > the retention period of the window. Is this correct?
> > > >
> > > > 4) Is there an actual difference between skipped and dropped records?
> > > > If not, shall we unify the terminology?
> > > >
> > > > 5) What happens with removed metrics when the user sets the version
> of
> > > > "built.in.metrics.version" to 2.2-
> > > >
> > > > Best,
> > > > Bruno
> > > >
> > > > On Thu, Jun 27, 2019 at 6:11 PM Guozhang Wang 
> > > wrote:
> > > > >
> > > > > Hello folks,
> > > > >
> > > > > As 2.3 is released now, I'd like to bump up this KIP discussion
> again
> > > for
> > > > > your reviews.
> > > > >
> > > > >
> > > > > Guozhang
> > > > >
> > > > >
> > > > > On Thu, May 23, 2019 at 4:44 PM Guozhang Wang 
> > > > wrote:
> > > > >
> > > > > > Hello Patrik,
> > > > > >
> > > > > > Since we are rolling out 2.3 and everyone is busy with the
> release
> > > now
> > > > > > this KIP does not have much discussion involved yet and will slip
> > > into
> > > > the
> > > > > > next release cadence.
> > > > > >
> > > > > > This KIP itself contains several parts itself: 1. refactoring the
> > > > existing
> > > > > > metrics hierarchy to cleanup some redundancy and also get more
> > > > clarity; 2.
> > > > > > add instance-level metrics like rebalance and state metrics, as
> > well
> > > as
> > > > > > other static metrics.
> > > > > >
> > > > > >
> > > > > > Guozhang
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Thu, May 23, 2019 at 5:34 AM Patrik Kleindl <
> pklei...@gmail.com
> > >
> > > > wrote:
> > > > > >
> > > > > >> Hi Guozhang
> > > > > >> Thanks for the KIP, this looks very helpful.
> > > > > >> Could you please provide more detail on the metrics planned for
> > the
> > > > state?
> > > > > >> We were just considering how to implement this ourselves because
> > we
> > > > need
> > > > > >> to
> > > > > >> track the history of stage changes.
> > > > > >> The idea was to have an accumulated "seconds in state x" metric
> > for
> > > > every
> > > > > >> state.
> > > > > >> The new rebalance metric might solve part of our use case, but
> it
> > is
> > > > > >> interesting what you have planned for the state metric.
> > > > > >> best regards
> > > > > >> Patrik
> > > > > >>
> > > > > >> On Fri, 29 Mar 2019 at 18:56, Guozhang Wang  >
> > > > wrote:
> > > > > >>
> > > > > >> > Hello folks,
> > > > > >> >
> > > > > >> > I'd like to propose the following KIP to improve the Kafka
> > Streams
> > > > > >> metrics
> > > > > >> > mechanism to users. This includes 1) a minor change in the
> > public
> > > > > >> > StreamsMetrics API, and 2) a major cleanup on the Streams' own
> > > > built-in
> > > > > >> > metrics hierarchy.
> > > > > >> >
> > > > > >> > Details can be found here:
> > > > > >> >
> > > > > >> >
> > > > > >> >
> > > > > >>
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-444%3A+Augment+metrics+for+Kafka+Streams
> > > > > >> >
> > > > > >> > I'd love to hear your thoughts and feedbacks. Thanks!
> > > > > >> >
> > > > > >> > --
> > > > > >> > -- Guozhang
> > > > > >> >
> > > > > >>
> > > > > >
> > > > > >
> > > > > > --
> > > > > > -- Guozhang
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > -- Guozhang
> > > >
> > >
> >
> >
> > --
> > -- Guozhang
> >
>


-- 
-- Guozhang


Re: [DISCUSS] KIP-447: Producer scalability for exactly once semantics

2019-07-22 Thread Guozhang Wang
On Sat, Jul 20, 2019 at 9:50 AM Boyang Chen 
wrote:

> Thank you Guozhang for the suggestion! I would normally prefer naming a
> flag corresponding to its functionality. Seems to me `isolation_level`
> adds another hop on the information track.
>
> Fair enough, let's use a separate flag name then :)


> As for the generation.id exposure, I'm fine leveraging the new API from
> KIP-429, but is that design finalized yet, and will the API be
> added to the generic Consumer interface?
>
The current PartitionAssignor is inside the `internals` package, and in KIP-429
we are going to create a new interface outside of `internals` to make it a
truly public API; as part of that we are refactoring some of its method
signatures. I just feel some of the newly introduced classes can be reused
in your KIP as well, i.e. just for code succinctness, with no semantic
implications.


> Boyang
>
> On Fri, Jul 19, 2019 at 3:57 PM Guozhang Wang  wrote:
>
> > Boyang, thanks for the updated proposal!
> >
> > 3.a. As Jason mentioned, with EOS enabled we still need to augment the
> > offset fetch request with a boolean to indicate "give me a retriable
> error
> > code if there's pending offset, rather than sending me the committed
> offset
> > immediately". Personally I still feel it is okay to piggy-back on the
> > ISOLATION_LEVEL boolean, but I'm also fine with another
> `await_transaction`
> > boolean if you feel strongly about it.
> >
> > 10. About the exposure of generation id, there may be some refactoring
> work
> > coming from KIP-429 that can benefit KIP-447 as well since we are
> wrapping
> > the consumer subscription / assignment data in new classes. Note that
> > current proposal does not include `generationId` since with the cooperative
> sticky
> > assignor we think it is not necessary for correctness, but also if we
> agree
> > it is okay to expose it we can potentially include it in
> > `ConsumerAssignmentData` as well.
> >
> >
> > Guozhang
> >
> >
> > On Thu, Jul 18, 2019 at 3:55 PM Boyang Chen 
> > wrote:
> >
> > > Thank you Jason for the ideas.
> > >
> > > On Mon, Jul 15, 2019 at 5:28 PM Jason Gustafson 
> > > wrote:
> > >
> > > > Hi Boyang,
> > > >
> > > > Thanks for the updates. A few comments below:
> > > >
> > > > 1. The KIP mentions that `transaction.timeout.ms` should be reduced
> to
> > > > 10s.
> > > > I think this makes sense for Kafka Streams which is tied to the
> > consumer
> > > > group semantics and uses a default 10s session timeout. However, it
> > > seems a
> > > > bit dangerous to make this change for the producer generally. Could
> we
> > > just
> > > > change it for streams?
> > > >
> > > > That sounds good to me.
> > >
> > > > 2. The new `initTransactions` API takes a `Consumer` instance. I
> think
> > > the
> > > > idea is to basically put in a backdoor to give the producer access to
> > the
> > > > group generationId. It's not clear to me how this would work given
> > > package
> > > > restrictions. I wonder if it would be better to just expose the state
> > we
> > > > need from the consumer. I know we have been reluctant to do this so
> far
> > > > because we treat the generationId as an implementation detail.
> > However, I
> > > > think we might just bite the bullet and expose it rather than coming
> up
> > > > with a messy hack. Concepts such as memberIds have already been
> exposed
> > > in
> > > > the AdminClient, so maybe it is not too bad. Alternatively, we could
> > use
> > > an
> > > > opaque type. For example:
> > > >
> > > > // public
> > > > interface GroupMetadata {}
> > > >
> > > > // private
> > > > interface ConsumerGroupMetadata {
> > > >   final int generationId;
> > > >   final String memberId;
> > > > }
> > > >
> > > > // Consumer API
> > > > public GroupMetadata groupMetadata();
> > > >
> > > > I am probably leaning toward just exposing the state we need.
> > > >
> > > Yes, also to mention that Kafka Streams uses the generic Consumer API,
> > > which doesn't have rich state like a full `KafkaConsumer`. The hack will
> > > not work as expected.
> > >
> > > Instead, just exposing the consumer generation.id seems like a much
> > > easier approach. We could consolidate
> > > the API and make it
> > >
> > > 3. Given that we are already providing a way to propagate group state
> > from
> > > > the consumer to the producer, I wonder if we may as well include the
> > > > memberId and groupInstanceId. This would make the validation we do
> for
> > > > TxnOffsetCommit consistent with OffsetCommit. If for no other
> benefit,
> > at
> > > > least this may help with debugging.
> > > >
> > >
> > > Yes, we could put them into the GroupMetadata struct.
> > >
> > >
> > > > 4. I like the addition of isolation_level to the offset fetch. At the
> > > same
> > > > time, its behavior is a bit inconsistent with how it is used in the
> > > > consumer generally. There is no reason for the group coordinator to
> > ever
> > > > expose aborted data, so this is mostly about awaiting pending offset
> > > 

[jira] [Resolved] (KAFKA-8678) LeaveGroup request getErrorResponse is incorrect on throttle time and error setting

2019-07-22 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8678.

   Resolution: Fixed
Fix Version/s: 2.3.1
   2.2.2

> LeaveGroup request getErrorResponse is incorrect on throttle time and error 
> setting
> ---
>
> Key: KAFKA-8678
> URL: https://issues.apache.org/jira/browse/KAFKA-8678
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
> Fix For: 2.2.2, 2.3.1
>
>
> [https://github.com/apache/kafka/pull/6188] accidentally changed the version 
> of setting throttle time from v1 to v2, and neglected the passed in 
> exception. We should fix this change.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (KAFKA-8454) Add Java AdminClient interface

2019-07-22 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8454.

   Resolution: Fixed
Fix Version/s: 2.4.0

> Add Java AdminClient interface
> --
>
> Key: KAFKA-8454
> URL: https://issues.apache.org/jira/browse/KAFKA-8454
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, clients
>Reporter: Andy Coates
>Assignee: Andy Coates
>Priority: Minor
> Fix For: 2.4.0
>
>
> Task to track the work of [KIP-476: Add Java AdminClient 
> Interface|https://cwiki.apache.org/confluence/display/KAFKA/KIP-476%3A+Add+Java+AdminClient+Interface]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


Re: KIP-382 + Kafka Streams Question

2019-07-22 Thread Ryanne Dolan
Hello Adam, thanks for the questions. Yes my organization uses Streams, and
yes you can use Streams with MM2/KIP-382, though perhaps not in the way you
are describing.

The architecture you mention is more "active/standby" than "active/active"
IMO. The "secondary" cluster is not being used until a failure, at which
point you migrate your app and expect the data to already be there. This
works for normal consumers where you can seek() and --reset-offsets.
Streams apps can be reset with the kafka-streams-application-reset tool,
but as you point out, that doesn't help with rebuilding an app's internal
state, which would be missing on the secondary cluster. (Granted, that may
be okay depending on your particular application.)

A true "active/active" solution IMO would be to run your same Streams app
in _both_ clusters (primary, secondary), s.t. the entire application state
is available and continuously updated in both clusters. As with normal
consumers, the Streams app should subscribe to any remote topics, e.g. with
a regex, s.t. the application state will reflect input from either source
cluster.

This is essentially what Streams' "standby replicas" are -- extra copies of
application state to support quicker failover. Without these replicas,
Streams would need to start back at offset 0 and re-process everything in
order to rebuild state (which you don't want to do during a disaster,
especially!). The same logic applies to using Streams with MM2. You _could_
failover by resetting the app and rebuilding all the missing state, or you
could have a copy of everything sitting there ready when you need it. The
easiest way to do the latter is to run your app in both clusters.
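
As a rough sketch (my own illustration, not a tested implementation; the topic
names and String serdes are assumptions carried over from your example), the
"same app in both clusters" approach could subscribe by regex and keep the
latest value per key like this:

import java.util.regex.Pattern;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

public class ActiveActiveTableSketch {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        // Matches the local "table" topic plus any remote copy, e.g. "primary.table".
        final Pattern tableTopics = Pattern.compile("(\\w+\\.)?table");
        final KTable<String, String> table = builder
            .<String, String>stream(tableTopics)
            .groupByKey()
            // Keep the latest value per key, materialized into a local state store.
            .reduce((oldValue, newValue) -> newValue, Materialized.as("table-store"));
        // Build KafkaStreams from builder.build() with each cluster's own
        // bootstrap.servers (and default String serdes), and start the same
        // application in both clusters.
    }
}

Each cluster runs its own copy of this application against its own brokers, so
the materialized store is continuously maintained on both sides.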

Hope that helps.

Ryanne

On Mon, Jul 22, 2019 at 3:11 PM Adam Bellemare 
wrote:

> Hi Ryanne
>
> I have a quick question for you about Active+Active replication and Kafka
> Streams. First, does your org /do you use Kafka Streams? If not then I
> think this conversation can end here. ;)
>
> Secondly, and for the broader Kafka Dev group - what happens if I want to
> use Active+Active replication with my Kafka Streams app, say, to
> materialize a simple KTable? Based on my understanding, a topic "table" on
> the primary cluster will be replicated to the secondary cluster as
> "primary.table". In the case of a full cluster failure for primary, the
> producer to topic "table" on the primary switches over to the secondary
> cluster, creates its own "table" topic and continues to write to there. So
> now, assuming we have had no data loss, we end up with:
>
>
> *Primary Cluster: (Dead)*
>
>
> *Secondary Cluster: (Live)*
> Topic: "primary.table" (contains data from T = 0 to T = n)
> Topic: "table" (contains data from T = n+1 to now)
>
> If I want to materialize state using Kafka Streams, obviously I am
> now in a bit of a pickle since I need to consume "primary.table" before I
> consume "table". Have you encountered rebuilding state in Kafka Streams
> using Active-Active? For non-Kafka Streams I can see using a single
> consumer for "primary.table" and one for "table", interleaving the
> timestamps and performing basic event dispatching based on my own tracked
> stream-time, but for Kafka Streams I don't think there exists a solution to
> this.
>
> If you have any thoughts on this or some recommendations for Kafka Streams
> with Active-Active I would be very appreciative.
>
> Thanks
> Adam
>
>
>


Re: [DISCUSS] KIP-490: log when consumer groups lose a message because offset has been deleted

2019-07-22 Thread Jose M
Hi Colin,

Thanks a lot for your feedback. Please note that I only propose to log when
a message is lost for a given set of consumer groups, not as the default
behaviour for all consumer groups.
But in fact, I agree with you that logging a line per expired message can be
quite a lot, and that it is not the best way to do it. Instead, I can propose
adding a dedicated JMX metric of type counter, "expired messages", per consumer
group. What do you think?
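
Just to illustrate the idea (all metric, group, and tag names below are
placeholders I made up, nothing is decided), the counter could be registered
through Kafka's existing Metrics/Sensor machinery, which the JmxReporter
already exposes over JMX:

import java.util.Collections;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Total;

public class ExpiredMessagesMetricSketch {
    private final Sensor expiredMessages;

    public ExpiredMessagesMetricSketch(final Metrics metrics, final String groupId) {
        // One sensor per consumer group, tagged with the group id.
        this.expiredMessages = metrics.sensor("expired-messages-" + groupId);
        this.expiredMessages.add(
            metrics.metricName("expired-messages-total",
                "consumer-group-metrics",
                "Messages deleted by the retention policy before being consumed by this group",
                Collections.singletonMap("group-id", groupId)),
            new Total());
    }

    // Called whenever the broker detects that 'count' messages were deleted
    // before this consumer group consumed them.
    public void onMessagesExpired(final long count) {
        expiredMessages.record(count);
    }
}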

About monitoring the lag to ensure that messages are not lost: I know that
is what clients do, setting up alerting when the lag is above a threshold.
But even if the alert is triggered, we don't know whether messages have been
lost or not. With this KIP implemented, clients would know whether something
has been missed or not.


Thanks,


Jose

On Mon, Jul 22, 2019 at 5:51 PM Colin McCabe  wrote:

> Hi Jose,
>
> One issue that I see here is that the number of log messages could be
> huge.  I've seen people create tens of thousands of consumer groups.
> People can also have settings that create pretty small log files.  A
> message per log file per group could be quite a lot of messages.
>
> A log message on the broker is also not that useful for detecting bad
> client behavior.  People generally only look at the server logs after they
> become aware that something is wrong through some other means.
>
> Perhaps the clients should just monitor their lag?  There is a JMX metric
> for this, which means it can be hooked into traditional metrics / reporting
> systems.
>
> best,
> Colin
>
>
> On Mon, Jul 22, 2019, at 03:12, Jose M wrote:
> > Hello,
> >
> > I didn't get any feedback on this small KIP-490
> > <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-490%3A+log+when+consumer+groups+lose+a+message+because+offset+has+been+deleted
> >.
> > In summary, I propose a way to be notified when messages are being
> > removed
> > due to retention policy, without being consumed by a given consumer
> > group.
> > It will be useful to realize that some important messages have been
> > lost.
> >
> > As I'm new to the codebase, I have technical questions about how to
> achieve
> > this, but before going deeper, I would like your feedback on the feature.
> >
> > Thanks a lot,
> >
> >
> > Jose Morales
> >
> > On Sun, Jul 14, 2019 at 12:51 AM Jose M  wrote:
> >
> > > Hello,
> > >
> > > I would like to know what do you think on KIP-490:
> > >
> > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-490%3A+log+when+consumer+groups+lose+a+message+because+offset+has+been+deleted
> > > <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-490%3A+log+when+consumer+groups+lose+a+message+because+offset+has+expired
> >
> > >
> > >
> > > Thanks a lot !
> > > --
> > > Jose M
> > >
> >
> >
> > --
> > J
> >
>


-- 
J


JIRA and KIP contributor permissions

2019-07-22 Thread Alexandre Dupriez
Hello Community,

In order to start contributing to Apache Kafka project, could I please
request contributor access to JIRA and be granted write permissions to the
Kafka wiki?

JIRA username: adupriez
Committer email: alexandre.dupr...@amazon.com 

Thank you in advance,
Alexandre


KIP-382 + Kafka Streams Question

2019-07-22 Thread Adam Bellemare
Hi Ryanne

I have a quick question for you about Active+Active replication and Kafka
Streams. First, does your org /do you use Kafka Streams? If not then I
think this conversation can end here. ;)

Secondly, and for the broader Kafka Dev group - what happens if I want to
use Active+Active replication with my Kafka Streams app, say, to
materialize a simple KTable? Based on my understanding, a topic "table" on
the primary cluster will be replicated to the secondary cluster as
"primary.table". In the case of a full cluster failure for primary, the
producer to topic "table" on the primary switches over to the secondary
cluster, creates its own "table" topic and continues to write to there. So
now, assuming we have had no data loss, we end up with:


*Primary Cluster: (Dead)*


*Secondary Cluster: (Live)*
Topic: "primary.table" (contains data from T = 0 to T = n)
Topic: "table" (contains data from T = n+1 to now)

If I want to materialize state using Kafka Streams, obviously I am now
in a bit of a pickle since I need to consume "primary.table" before I
consume "table". Have you encountered rebuilding state in Kafka Streams
using Active-Active? For non-Kafka Streams I can see using a single
consumer for "primary.table" and one for "table", interleaving the
timestamps and performing basic event dispatching based on my own tracked
stream-time, but for Kafka Streams I don't think there exists a solution to
this.
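
For the non-Kafka Streams case, a rough sketch of what I mean (simplified to a
single consumer subscribed to both topics; the broker address and deserializers
are placeholders) would be:

import java.time.Duration;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class InterleavingTableReader {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "secondary-cluster:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "table-rebuild");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("primary.table", "table"));
            while (true) {
                final List<ConsumerRecord<String, String>> batch = new ArrayList<>();
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500)))
                    batch.add(record);
                // Dispatch in timestamp order across both topics; real code would track
                // its own "stream-time" and buffer records to bound reordering across polls.
                batch.sort((a, b) -> Long.compare(a.timestamp(), b.timestamp()));
                for (ConsumerRecord<String, String> record : batch)
                    System.out.printf("%s @ %d -> %s%n", record.topic(), record.timestamp(), record.value());
            }
        }
    }
}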

If you have any thoughts on this or some recommendations for Kafka Streams
with Active-Active I would be very appreciative.

Thanks
Adam


[jira] [Created] (KAFKA-8696) Clean up Sum/Count/Total metrics

2019-07-22 Thread John Roesler (JIRA)
John Roesler created KAFKA-8696:
---

 Summary: Clean up Sum/Count/Total metrics
 Key: KAFKA-8696
 URL: https://issues.apache.org/jira/browse/KAFKA-8696
 Project: Kafka
  Issue Type: Improvement
Reporter: John Roesler
Assignee: John Roesler


Kafka has a family of metrics consisting of:

org.apache.kafka.common.metrics.stats.Count
org.apache.kafka.common.metrics.stats.Sum
org.apache.kafka.common.metrics.stats.Total
org.apache.kafka.common.metrics.stats.Rate.SampledTotal
org.apache.kafka.streams.processor.internals.metrics.CumulativeCount
These metrics are all related to each other, but their relationship is obscure
(one is redundant, and another is internal).

I've recently been involved in a third recapitulation of trying to work out 
which metric does what. It seems like it's time to clean up the mess and save 
everyone from having to work out the mystery for themselves.

I've proposed https://cwiki.apache.org/confluence/x/kkAyBw to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


Re: [VOTE] KIP-488: Clean up Sum,Count,Total Metrics

2019-07-22 Thread John Roesler
Hello all,

Thanks for the thoughtful consideration on this KIP.

The vote has passed with 3 binding (Guozhang, Bill, and Matthias)
votes and 2 non-binding ones (Ryanne and myself). I'll update the PR
and submit it for review shortly!

Thanks again,
-John

On Thu, Jul 18, 2019 at 7:55 PM Matthias J. Sax  wrote:
>
> +1 (binding)
>
>
> On 7/17/19 1:20 PM, Bill Bejeck wrote:
> > +1 (binding) for the updated KIP.
> >
> > On Wed, Jul 17, 2019 at 4:09 PM John Roesler  wrote:
> >
> >> Hey, Bruno and Bill,
> >>
> >> Since you cast your votes before the KIP was updated, do you mind
> >> re-casting just so we can be sure you're still in favor?
> >>
> >> Thanks,
> >> -John
> >>
> >> On Wed, Jul 17, 2019 at 2:01 PM Guozhang Wang  wrote:
> >>>
> >>> +1 (binging).
> >>>
> >>> This is a great cleanup, thanks John!
> >>>
> >>> Guozhang
> >>>
> >>> On Wed, Jul 17, 2019 at 11:26 AM Ryanne Dolan 
> >> wrote:
> >>>
>  +1 (non-binding)
> 
>  Thanks for the interesting discussion.
> 
>  Ryanne
> 
>  On Fri, Jul 12, 2019, 2:49 PM Ryanne Dolan 
> >> wrote:
> 
> > John, I'm glad to learn I'm not the only one who's re-read the
> >> metrics
> > code multiple times.
> >
> > I do wonder if the proposed names could be improved further though,
> >> given
> > that "sum", "total", and "count" are roughly synonymous. I'm already
> > scratching my head at what "TotalSum" means. It's clear in the
> >> context of
> > your matrix, juxtaposed with the alternatives, but when I come
> >> across the
> > name in isolation I suspect I'll be back looking at the
> >> implementation
> > again.
> >
> > Ryanne
> >
> > On Fri, Jul 12, 2019, 1:45 PM John Roesler 
> >> wrote:
> >
> >> Hi Kafka devs,
> >>
> >> Yesterday, I proposed KIP-488 as a minor cleanup of some of our
> >> metric
> >> implementations.
> >>
> >> KIP-488: https://cwiki.apache.org/confluence/x/kkAyBw
> >>
> >> The change seems pretty uncontroversial, so I'm just going to open
> >> the
> >> vote now.
> >>
> >> Feel free to veto or just request more discussion if you disagree
> >> with
> >> the KIP. The vote will remain open for 72 hours.
> >>
> >> Thanks,
> >> -John
> >>
> >
> 
> >>>
> >>>
> >>> --
> >>> -- Guozhang
> >>
> >
>


Fwd: [DISCUSS] KIP-492 Add java security providers in Kafka Security config

2019-07-22 Thread Sandeep Mopuri
Hi Rajini,
 Thanks for raising the above questions. Please find the
replies below

On Wed, Jul 17, 2019 at 2:49 AM Rajini Sivaram 
wrote:

> Hi Sandeep,
>
> Thanks for the KIP. A few questions below:
>
>1. Is the main use case for this KIP adding security providers for SSL?
>If so, wouldn't a more generic solution like KIP-383 work for this?
>
   We’re trying to solve this for both SSL and SASL. KIP-383 allows the
creation of a custom SSLFactory implementation; however, adding providers
for new security algorithms doesn’t involve any new implementation of
SSLFactory. Even after KIP-383, people are still finding a need for
loading custom keymanager and trustmanager implementations (KIP-486).

   2. Presumably this config would also apply to clients. If so, have we
>thought through the implications of changing static JVM-wide security
>providers in the client applications?
>
   Yes, this config will be applied to clients as well, and the
responsibility for the consequences of adding the security providers
needs to be taken by the clients. In the case of resource managers running
streaming applications, such as YARN, Mesos, etc., each user needs to make
sure they are passing these JVM arguments.

   3. Since client applications can programmatically invoke the Java
>Security API anyway, isn't the system property described in `Rejected
>Alternatives` a reasonable solution for brokers?
>
  This is true in a Kafka-only environment, but with an ecosystem of
streaming applications like Flink, Spark, etc., which might produce to Kafka,
it’s difficult to make changes to all the clients.

   4. We have SASL login modules in Kafka that automatically add security
>providers for SASL mechanisms not supported by the JVM. We should
> describe
>the impact of this KIP on those and whether we would now require a
> config
>to enable these security providers

In a single JVM, one can register multiple security providers. By default the
JVM itself provides multiple providers, and these will not step over each
other. The only way to activate a provider is through its registered name.
Example:

$ cat /usr/lib/jvm/jdk-8-oracle-x64/jre/lib/security/java.security
...
security.provider.1=sun.security.provider.Sun
security.provider.2=sun.security.rsa.SunRsaSign
security.provider.3=sun.security.ec.SunEC
security.provider.4=com.sun.net.ssl.internal.ssl.Provider
security.provider.5=com.sun.crypto.provider.SunJCE
security.provider.6=sun.security.jgss.SunProvider
security.provider.7=com.sun.security.sasl.Provider
security.provider.8=org.jcp.xml.dsig.internal.dom.XMLDSigRI
security.provider.9=sun.security.smartcardio.SunPCSC
...

   A user of a provider will refer to it through its registered name, for example:

https://github.com/spiffe/spiffe-example/blob/master/java-spiffe/spiffe-security-provider/src/main/java/spiffe/api/provider/SpiffeProvider.java#L31

   In the above example, we can register the SpiffeProvider and
multiple other providers into the JVM. When a client or a broker wants to
integrate with the SpiffeProvider, they need to add the config
ssl.keymanager.algorithm = "Spiffe". Another client can refer to a
different provider or use a default one in the same JVM.
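
A minimal sketch of this flow (the SpiffeProvider class and the "Spiffe"
algorithm name come from the example above; everything else is illustrative,
not a final design):

import java.security.Security;
import java.util.Properties;
import spiffe.api.provider.SpiffeProvider;  // from the repository linked above

public class ProviderRegistrationSketch {
    public static void main(final String[] args) {
        // Register the provider JVM-wide; position 1 gives it the highest priority.
        Security.insertProviderAt(new SpiffeProvider(), 1);

        // Only the clients/brokers that name the provider's algorithm actually use it.
        final Properties clientProps = new Properties();
        clientProps.put("ssl.keymanager.algorithm", "Spiffe");
        clientProps.put("ssl.trustmanager.algorithm", "Spiffe");
        // ... pass clientProps into the usual producer/consumer/broker configuration
    }
}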


>5. We have been moving away from JVM-wide configs like the default JAAS
>config since they are hard to test reliably or update dynamically. The
>replacement config `sasl.jaas.config` doesn't insert a JVM-wide
>configuration. Have we investigated similar options for the specific
>scenario we are addressing here?
>
   Yes, that is the case with the JAAS config; however, in the case of
security providers, along with adding the security providers to the JVM
properties, one also needs to configure the provider algorithm. For example,
in the case of SSL configuration, besides adding the security provider to
the JVM, we need to configure “ssl.trustmanager.algorithm” and
“ssl.keymanager.algorithm” in order for the provider implementation to
apply. Different components can opt for different key and trust manager
algorithms and can work independently and simultaneously in the same JVM. This
case is different from the JAAS config.


>6. Are we always going to insert new providers at the start of the
>provider list?

   Can you please explain what exactly you mean by this?

>
>
> Regards,
>
> Rajini
>
>
>
> On Wed, Jul 17, 2019 at 5:05 AM Harsha  wrote:
>
> > Thanks for the KIP Sandeep. LGTM.
> >
> > Mani & Rajini, can you please look at the KIP as well.
> >
> > Thanks,
> > Harsha
> >
> > On Tue, Jul 16, 2019, at 2:54 PM, Sandeep Mopuri wrote:
> > > Thanks for the suggestions, made changes accordingly.
> > >
> > > On Tue, Jul 16, 2019 at 9:27 AM Satish Duggana <
> satish.dugg...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hi Sandeep,
> > > > Thanks for the KIP, I have few comments below.
> > > >
> > > > >>“To take advantage of these custom algorithms, we want to support
> > java
> > > > security 

Re: [DISCUSS] KIP-490: log when consumer groups lose a message because offset has been deleted

2019-07-22 Thread Colin McCabe
Hi Jose,

One issue that I see here is that the number of log messages could be huge.  
I've seen people create tens of thousands of consumer groups.  People can also 
have settings that create pretty small log files.  A message per log file per 
group could be quite a lot of messages.

A log message on the broker is also not that useful for detecting bad client 
behavior.  People generally only look at the server logs after they become 
aware that something is wrong through some other means.

Perhaps the clients should just monitor their lag?  There is a JMX metric for 
this, which means it can be hooked into traditional metrics / reporting systems.
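
For example, here's a rough sketch (not a complete solution, just to show the
idea) of a client reading its own max lag directly from the consumer's
metrics map:

import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

public class LagCheck {
    // Warn if the consumer's max lag across its assigned partitions exceeds a threshold.
    static void warnIfLagging(final Consumer<?, ?> consumer, final double maxAllowedLag) {
        for (Map.Entry<MetricName, ? extends Metric> entry : consumer.metrics().entrySet()) {
            final MetricName name = entry.getKey();
            if ("records-lag-max".equals(name.name())
                    && "consumer-fetch-manager-metrics".equals(name.group())) {
                final double lag = (Double) entry.getValue().metricValue();
                if (lag > maxAllowedLag)
                    System.err.println("Consumer lag " + lag + " is above " + maxAllowedLag);
            }
        }
    }
}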

best,
Colin


On Mon, Jul 22, 2019, at 03:12, Jose M wrote:
> Hello,
> 
> I didn't get any feedback on this small KIP-490
> .
> In summary, I propose a way to be noticed when messages are being 
> removed
> due to retention policy, without being consumed by a given consumer 
> group.
> It will be useful to realize that some important messages have been 
> lost.
> 
> As I'm new to the codebase, I have technical questions about how to achieve
> this, but before going deeper, I would like your feedback on the feature.
> 
> Thanks a lot,
> 
> 
> Jose Morales
> 
> On Sun, Jul 14, 2019 at 12:51 AM Jose M  wrote:
> 
> > Hello,
> >
> > I would like to know what do you think on KIP-490:
> >
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-490%3A+log+when+consumer+groups+lose+a+message+because+offset+has+been+deleted
> > 
> >
> >
> > Thanks a lot !
> > --
> > Jose M
> >
> 
> 
> -- 
> J
>


Re: [VOTE] KIP-455: Create an Administrative API for Replica Reassignment

2019-07-22 Thread Colin McCabe
Hi all,

With three non-binding +1 votes from Viktor Somogyi-Vass, Robert Barrett, and 
George Li, and 3 binding +1 votes from Gwen Shapira, Jason Gustafson, and 
myself, the vote passes.  Thanks, everyone!

best,
Colin

On Fri, Jul 19, 2019, at 08:55, Robert Barrett wrote:
> +1 (non-binding). Thanks for the KIP!
> 
> On Thu, Jul 18, 2019 at 5:59 PM George Li 
> wrote:
> 
> >  +1 (non-binding)
> >
> >
> >
> > Thanks for addressing the comments.
> > George
> >
> > On Thursday, July 18, 2019, 05:03:58 PM PDT, Gwen Shapira <
> > g...@confluent.io> wrote:
> >
> >  Renewing my +1, thank you Colin and Stan for working through all the
> > questions, edge cases, requests and alternatives. We ended up with a
> > great protocol.
> >
> > On Thu, Jul 18, 2019 at 4:54 PM Jason Gustafson 
> > wrote:
> > >
> > > +1 Thanks for the KIP. Really looking forward to this!
> > >
> > > -Jason
> > >
> > > On Wed, Jul 17, 2019 at 1:41 PM Colin McCabe  wrote:
> > >
> > > > Thanks, Stanislav.  Let's restart the vote to reflect the fact that
> > we've
> > > > made significant changes.  The new vote will go for 3 days as usual.
> > > >
> > > > I'll start with my +1 (binding).
> > > >
> > > > best,
> > > > Colin
> > > >
> > > >
> > > > On Wed, Jul 17, 2019, at 08:56, Stanislav Kozlovski wrote:
> > > > > Hey everybody,
> > > > >
> > > > > We have further iterated on the KIP in the accompanying discussion
> > thread
> > > > > and I'd like to propose we resume the vote.
> > > > >
> > > > > Some notable changes:
> > > > > - we will store reassignment information in the
> > `/brokers/topics/[topic]`
> > > > > - we will internally use two collections to represent a reassignment
> > -
> > > > > "addingReplicas" and "removingReplicas". LeaderAndIsr has been
> > updated
> > > > > accordingly
> > > > > - the Alter API will still use the "targetReplicas" collection, but
> > the
> > > > > List API will now return three separate collections - the full
> > replica
> > > > set,
> > > > > the replicas we are adding as part of this reassignment
> > > > ("addingReplicas")
> > > > > and the replicas we are removing ("removingReplicas")
> > > > > - cancellation of a reassignment now means a proper rollback of the
> > > > > assignment to its original state prior to the API call
> > > > >
> > > > > As always, you can re-read the KIP here
> > > > >
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-455%3A+Create+an+Administrative+API+for+Replica+Reassignment
> > > > >
> > > > > Best,
> > > > > Stanislav
> > > > >
> > > > > On Wed, May 22, 2019 at 6:12 PM Colin McCabe 
> > wrote:
> > > > >
> > > > > > Hi George,
> > > > > >
> > > > > > Thanks for taking a look.  I am working on getting a PR done as a
> > > > > > proof-of-concept.  I'll post it soon.  Then we'll finish up the
> > vote.
> > > > > >
> > > > > > best,
> > > > > > Colin
> > > > > >
> > > > > > On Tue, May 21, 2019, at 17:33, George Li wrote:
> > > > > > >  Hi Colin,
> > > > > > >
> > > > > > >  Great! Looking forward to these features.+1 (non-binding)
> > > > > > >
> > > > > > > What is the estimated timeline to have this implemented?  If any
> > help
> > > > > > > is needed in the implementation of cancelling reassignments,  I
> > can
> > > > > > > help if there is spare cycle.
> > > > > > >
> > > > > > >
> > > > > > > Thanks,
> > > > > > > George
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >On Thursday, May 16, 2019, 9:48:56 AM PDT, Colin McCabe
> > > > > > >  wrote:
> > > > > > >
> > > > > > >  Hi George,
> > > > > > >
> > > > > > > Yes, KIP-455 allows the reassignment of individual partitions to
> > be
> > > > > > > cancelled.  I think it's very important for these operations to
> > be at
> > > > > > > the partition level.
> > > > > > >
> > > > > > > best,
> > > > > > > Colin
> > > > > > >
> > > > > > > On Tue, May 14, 2019, at 16:34, George Li wrote:
> > > > > > > >  Hi Colin,
> > > > > > > >
> > > > > > > > Thanks for the updated KIP.  It has very good improvements of
> > Kafka
> > > > > > > > reassignment operations.
> > > > > > > >
> > > > > > > > One question, looks like the KIP includes the Cancellation of
> > > > > > > > individual pending reassignments as well when the
> > > > > > > > AlterPartitionReasisgnmentRequest has empty replicas for the
> > > > > > > > topic/partition. Will you also be implementing the the
> > partition
> > > > > > > > cancellation/rollback in the PR ?If yes,  it will make
> > KIP-236
> > > > (it
> > > > > > > > has PR already) trivial, since the cancel all pending
> > > > reassignments,
> > > > > > > > one just needs to do a ListPartitionRessignmentRequest, then
> > submit
> > > > > > > > empty replicas for all those topic/partitions in
> > > > > > > > one AlterPartitionReasisgnmentRequest.
> > > > > > > >
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > George
> > > > > > > >
> > > > > > > >On Friday, May 10, 2019, 8:44:31 PM PDT, Colin McCabe
> > > > > > > >  wrote:
> > > > > > > >
> > > > > > > >  On Fri, 

Re: [DISCUSS] KIP-466: Add support for List serialization and deserialization

2019-07-22 Thread Development
Hey Matthias,

It looks a little confusing, but I don’t have enough expertise to judge the
configuration placement.

If you think it is fine, I’ll go ahead with this approach.

Best,
Daniyar Yeralin

> On Jul 19, 2019, at 5:49 PM, Matthias J. Sax  wrote:
> 
> Good point.
> 
> I guess the simplest solution is, to actually add
> 
>>> default.list.key/value.serde.type
>>> default.list.key/value.serde.inner
> 
> to both `CommonClientConfigs` and `StreamsConfig`.
> 
> It's not super clean, but I think it's the best we can do. Thoughts?
> 
> 
> -Matthias
> 
> On 7/19/19 1:23 PM, Development wrote:
>> Hi Matthias,
>> 
>> I agree, ConsumerConfig did not seem like a right place for these 
>> configurations.
>> I’ll put them in ProducerConfig, ConsumerConfig, and StreamsConfig.
>> 
>> However, I have a question. What should I do in the "configure(Map<String, ?>
>> configs, boolean isKey)” methods? Which configurations should I try to 
>> locate? I was comparing my (de)serializer implementations with the
>> SessionWindows(De)serializer classes, and they use the StreamsConfig class to 
>> get either StreamsConfig.DEFAULT_WINDOWED_KEY_SERDE_INNER_CLASS or
>> StreamsConfig.DEFAULT_WINDOWED_VALUE_SERDE_INNER_CLASS.
>> 
>> In my case, as I mentioned earlier, StreamsConfig class is not accessible 
>> from org.apache.kafka.common.serialization package. So, I can’t utilize it. 
>> Any suggestions here?
>> 
>> Best,
>> Daniyar Yeralin
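
For the question above, one option (sketched here as an assumption, not as the
final KIP-466 design) is to avoid StreamsConfig entirely and have the list
(de)serializer read plain string keys in configure(). The key names below
follow the ones proposed in this thread and are hypothetical; the class is a
minimal sketch, not the actual implementation.

import java.util.List;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;

public class ListDeserializer<T> implements Deserializer<List<T>> {

    private Deserializer<T> inner;

    @Override
    @SuppressWarnings("unchecked")
    public void configure(Map<String, ?> configs, boolean isKey) {
        // Hypothetical key names, mirroring the proposal in this thread.
        String innerProp = isKey ? "list.key.deserializer.inner" : "list.value.deserializer.inner";
        Object innerConfig = configs.get(innerProp);
        try {
            Class<?> innerClass = innerConfig instanceof Class
                    ? (Class<?>) innerConfig
                    : Class.forName(String.valueOf(innerConfig));
            inner = (Deserializer<T>) innerClass.getDeclaredConstructor().newInstance();
            inner.configure(configs, isKey);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("Could not instantiate inner deserializer " + innerConfig, e);
        }
    }

    @Override
    public List<T> deserialize(String topic, byte[] data) {
        // Actual list decoding is omitted; only the configuration path is sketched here.
        throw new UnsupportedOperationException("not part of this sketch");
    }
}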
>> 
>> 
>>> On Jul 18, 2019, at 8:46 PM, Matthias J. Sax  wrote:
>>> 
>>> Thanks!
>>> 
>>> One minor question about the configs. The KIP adds three classes, a
>>> Serializer, a Deserializer, and a Serde.
>>> 
>>> Hence, would it make sense to add the corresponding configs to
>>> `ConsumerConfig`, `ProducerConfig`, and `StreamsConfig` using slightly
>>> different names each time?
>>> 
>>> 
>>> Something like this:
>>> 
>>> ProducerConfig:
>>> 
>>> list.key/value.serializer.type
>>> list.key/value.serializer.inner
>>> 
>>> ConsumerConfig:
>>> 
>>> list.key/value.deserializer.type
>>> list.key/value.deserializer.inner
>>> 
>>> StreamsConfig:
>>> 
>>> default.list.key/value.serde.type
>>> default.list.key/value.serde.inner
>>> 
>>> 
>>> Adding `d.l.k/v.serde.t/i` to `CommonClientConfigs` does not sound right
>>> to me. Also note that it seems better to avoid the `default.` prefix
>>> for consumers and producers because there is only one Serializer or
>>> Deserializer anyway. Only for Streams there are multiple, and
>>> StreamsConfig specifies the default one if an operator does not
>>> overwrite it.
>>> 
>>> Thoughts?
>>> 
>>> 
>>> Also, the KIP should explicitly mention to which classes certain configs
>>> are added. Atm, the KIP only lists parameter names, but does not state
>>> where those are added.
>>> 
>>> 
>>> -Matthias
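
As a concrete illustration of the naming scheme proposed above, a plain
consumer could be configured roughly as follows. The config keys and the
ListDeserializer class name are taken from this discussion and are
assumptions, not necessarily what was finally merged; topic name and
addresses are placeholders.

import java.util.Collections;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ListConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "list-serde-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Hypothetical list deserializer plus the config keys proposed in this thread:
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ListDeserializer");
        props.put("list.value.deserializer.type", "java.util.ArrayList");
        props.put("list.value.deserializer.inner",
                "org.apache.kafka.common.serialization.LongDeserializer");
        try (KafkaConsumer<String, List<Long>> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            // poll loop omitted
        }
    }
}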
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On 7/16/19 1:11 PM, Development wrote:
 Hi,
 
 Yes, totally forgot about the statement. KIP-466 is updated.
 
 Thank you so much John Roesler, Matthias J. Sax, Sophie Blee-Goldman for 
 your valuable input!
 
 I hope I did not cause too much trouble :)
 
 I’ll start the vote now.
 
 Best,
 Daniyar Yeralin
 
> On Jul 16, 2019, at 3:17 PM, John Roesler  wrote:
> 
> Hi Daniyar,
> 
> Thanks for that update. I took a look, and I think this is in good shape.
> 
> One note, the statement "New method public static <T> Serde<List<T>>
> ListSerde() in org.apache.kafka.common.serialization.Serdes class
> (infers list implementation and inner serde from config file)" is
> still present in the KIP, although I believe it was removed from the PR.
> 
> Once you remove that statement from the KIP, then I think this KIP is
> ready to go up for a vote! Then, we can really review the PR in
> earnest and get this thing merged.
> 
> Thanks,
> -john
> 
> On Tue, Jul 16, 2019 at 2:05 PM Development  wrote:
>> 
>> Hi,
>> 
>> Pushed new changes under my PR: 
>> https://github.com/apache/kafka/pull/6592 
>> 
>> 
>> Feel free to put any comments in there.
>> 
>> Best,
>> Daniyar Yeralin
>> 
>>> On Jul 15, 2019, at 1:06 PM, Development  wrote:
>>> 
>>> Hi John,
>>> 
>>> I knew I was missing something. Yes, that makes sense now, I removed 
>>> all `listSerde()` methods, and left empty constructors instead.
>>> 
>>> As for `CommonClientConfigs`, I looked at the class; it doesn’t have any 
>>> properties related to serdes, and that bothers me a little.
>>> 
>>> All properties like `default.key.serde` and `default.windowed.key.serde.*` 
>>> are located in StreamsConfig. I don’t want to create confusion.
>>> What also doesn’t make sense to me is that `WindowedSerdes` and its 
>>> (de)serializers are not located in 
>>> org.apache.kafka.common.serialization. I guess it kind of makes sense 
>>> since windowed serdes are only available 

[jira] [Created] (KAFKA-8695) Metrics UnderReplicated and UnderMinIsr are diverging when configuration is inconsistent

2019-07-22 Thread Alexandre Dupriez (JIRA)
Alexandre Dupriez created KAFKA-8695:


 Summary: Metrics UnderReplicated and UnderMinIsr are diverging 
when configuration is inconsistent
 Key: KAFKA-8695
 URL: https://issues.apache.org/jira/browse/KAFKA-8695
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.3.0, 2.1.1, 2.2.0, 2.1.0
Reporter: Alexandre Dupriez


As of now, Kafka allows the replication factor of a topic and 
"min.insync.replicas" to be set such that "min.insync.replicas" > the topic's 
replication factor.

As a consequence, the JMX beans
{code:java}
kafka.cluster:type=Partition,name=UnderReplicated{code}
and 
{code:java}
kafka.cluster:type=Partition,name=UnderMinIsr{code}
can report diverging views on the replication status of a topic. The former can 
report no under-replicated partitions, while the latter reports partitions that 
are under the minimum number of in-sync replicas.

 

Even worse, consumption of topics which exhibit this behaviour seems to fail, 
with the Kafka broker throwing a NotEnoughReplicasException.

 

 
{code:java}
[2019-07-22 10:44:29,913] ERROR [ReplicaManager broker=0] Error processing 
append operation on partition __consumer_offsets-0 (kafka.server.ReplicaManager)
org.apache.kafka.common.errors.NotEnoughReplicasException: The size of the 
current ISR Set(0) is insufficient to satisfy the min.isr requirement of 2 for 
partition __consumer_offsets-0{code}
 

 

In order to avoid this scenario, one possibility would be to check the values 
of "min.insync.replicas" and "default.replication.factor" when the broker 
starts, and "min.insync.replicas" and the replication factor given to a topic 
at creation time, and refuse to create the topic if those are inconsistently 
set.
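
A minimal sketch of such a guard, for illustration only (this is not an excerpt 
of the broker code, which is written in Scala):
{code:java}
// Reject configurations whose min.insync.replicas can never be satisfied
// by the replication factor.
static void validateMinInsyncReplicas(String topic, int replicationFactor, int minInsyncReplicas) {
    if (minInsyncReplicas > replicationFactor) {
        throw new org.apache.kafka.common.errors.InvalidConfigurationException(
            "min.insync.replicas (" + minInsyncReplicas + ") of topic " + topic
                + " is greater than its replication factor (" + replicationFactor
                + "), so no write with acks=all could ever succeed");
    }
}
{code}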

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


Re: Stopping All Tasks When a New Connector Added

2019-07-22 Thread Konstantine Karantasis
Liu, feel free to share your jira account id in a separate email, so one of
the committers can add you to the project.
Then you or someone else will be able to assign this ticket to you.

I'll review the fix some time this week.

Thanks!
Konstantine

On Mon, Jul 22, 2019 at 5:13 AM Liu Luying  wrote:

> Hi Adam,
> I have already opened a JIRA ticket (KAFKA-8676 <
> https://issues.apache.org/jira/browse/KAFKA-8676>) and a PR (with the
> title of KAFKA-8676) for this.
>
> Best,
> Luying
> 
> From: Adam Bellemare 
> Sent: Friday, July 19, 2019 10:36
> To: dev@kafka.apache.org 
> Subject: Re: Stopping All Tasks When a New Connector Added
>
> Hi Luying
>
> Would you be willing to make a PR to address this? It seems that you have
> already done most of the work.
>
> Thanks
> Adam
>
> On Thu, Jul 18, 2019 at 11:00 PM Liu Luying  wrote:
>
> > Hi all,
> > I have noticed that Kafka Connect 2.3.0 will stop all existing tasks and
> > then start all the tasks, including the new tasks and the existing ones,
> > when adding a new connector or changing a connector configuration. However,
> > I do not think that is necessary: only the new connector and its tasks need
> > to be started, as the rebalancing can be applied to both running and
> > suspended tasks.
> >
> > The problem lies in the
> > KafkaConfigBackingStore.ConsumeCallback.onCompletion() function (line 623
> > in KafkaConfigBackingStore.java). When record.key() startsWith "commit-",
> > the tasks are being committed and the deferred tasks are processed. Some
> > new tasks are added to 'updatedTasks' (line 623 in
> > KafkaConfigBackingStore.java), and 'updatedTasks' is sent to
> > updateListener to complete the task configuration update (line 638 in
> > KafkaConfigBackingStore.java). In the updateListener.onTaskConfigUpdate()
> > function, 'updatedTasks' is added to the member variable
> > 'taskConfigUpdates' of class DistributedHerder (line 1295 in
> > DistributedHerder.java).
> >
> > In another thread, 'taskConfigUpdates' is copied to
> > 'taskConfigUpdatesCopy' in updateConfigsWithIncrementalCooperative() (line
> > 445 in DistributedHerder.java). 'taskConfigUpdatesCopy' is subsequently
> > used in processTaskConfigUpdatesWithIncrementalCooperative() (line 345 in
> > DistributedHerder.java). This function then uses 'taskConfigUpdatesCopy'
> > to find the connectors to stop (line 492 in DistributedHerder.java), and
> > finally gets the tasks to stop, which are all the tasks. The worker thread
> > does the actual job of stopping them (line 499 in DistributedHerder.java).
> >
> > In the original code, all the tasks are added to the 'updatedTasks' (line
> > 623 in KafkaConfigBackingStore.java), which means all the active
> connectors
> > are in the 'connectorsWhoseTasksToStop' set, and all the tasks are in the
> > 'tasksToStop' list. This causes the stops, and of course the subsequent
> > restarts, of all the tasks.
> >
> > So, adding only the 'deferred' tasks to the  'updatedTasks' can avoid the
> > stops and restarts of unnecessary tasks.
> >
> > Best,
> > Luying
> >
> >
>
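
The change described above (adding only the 'deferred' tasks to 'updatedTasks')
can be sketched as follows. This is an illustration written against the names
used in the mail; it is not a verbatim excerpt of KafkaConfigBackingStore nor
of the eventual KAFKA-8676 patch.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.util.ConnectorTaskId;

final class CommitRecordHandling {

    // Conceptually invoked when a "commit-<connector>" record is read back from
    // the config topic: apply the task configs that were deferred for that
    // connector and report only those task ids to the update listener.
    static List<ConnectorTaskId> applyDeferredUpdates(
            Map<ConnectorTaskId, Map<String, String>> taskConfigs,   // all known task configs
            Map<ConnectorTaskId, Map<String, String>> deferred) {    // updates deferred for this connector
        List<ConnectorTaskId> updatedTasks = new ArrayList<>();
        if (deferred != null) {
            for (Map.Entry<ConnectorTaskId, Map<String, String>> entry : deferred.entrySet()) {
                taskConfigs.put(entry.getKey(), entry.getValue());
                updatedTasks.add(entry.getKey());   // only the deferred tasks...
            }
        }
        // ...rather than something like updatedTasks.addAll(taskConfigs.keySet()),
        // which is what makes the herder stop and restart every task.
        return updatedTasks;
    }
}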


Re: [DISCUSS] KIP-490: log when consumer groups lose a message because offset has been deleted

2019-07-22 Thread Jose M
Hello,

I didn't get any feedback on this small KIP-490.
In summary, I propose a way to be notified when messages are being removed
due to the retention policy without having been consumed by a given consumer
group. It will be useful to realize that some important messages have been lost.

As I'm new to the codebase, I have technical questions about how to achieve
this, but before going deeper, I would like your feedback on the feature.

Thanks a lot,


Jose Morales

On Sun, Jul 14, 2019 at 12:51 AM Jose M  wrote:

> Hello,
>
> I would like to know what do you think on KIP-490:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-490%3A+log+when+consumer+groups+lose+a+message+because+offset+has+been+deleted
> 
>
>
> Thanks a lot !
> --
> Jose M
>


-- 
J


Metrics UnderReplicated and UnderMinIsr are diverging when configuration is inconsistent

2019-07-22 Thread Alexandre Dupriez
Hello Community,

I noticed Kafka allows the replication factor of a topic and
"min.insync.replicas" to be set such that "min.insync.replicas" > the
topic's replication factor.

As a consequence, the JMX beans
kafka.cluster:type=Partition,name=UnderReplicated and
kafka.cluster:type=Partition,name=UnderMinIsr
can report diverging views on the replication status of a topic. The former can
report no under-replicated partitions, while the latter reports partitions that
are under the minimum number of in-sync replicas.

Even worse, consumption of topics which exhibit this behaviour seems to
fail, with the Kafka broker throwing a NotEnoughReplicasException.

[2019-07-22 10:44:29,913] ERROR [ReplicaManager broker=0] Error processing
> append operation on partition __consumer_offsets-0
> (kafka.server.ReplicaManager)
> org.apache.kafka.common.errors.NotEnoughReplicasException: The size of the
> current ISR Set(0) is insufficient to satisfy the min.isr requirement of 2
> for partition __consumer_offsets-0


In order to avoid this scenario, one possibility would be to check the
values of "min.insync.replicas" and "default.replication.factor" when the
broker starts, and "min.insync.replicas" and the replication factor given
to a topic at creation time, and refuse to create the topic if those are
inconsistently set.

This was reproduced with Kafka 2.1.0, 2.2.0 and 2.3.0.
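
For reference, the setup can be reproduced with a short AdminClient program
(topic name and bootstrap address below are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class MinIsrRepro {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Replication factor 1 but min.insync.replicas 2: the broker accepts this,
            // UnderReplicated stays at 0, yet every acks=all produce fails with
            // NotEnoughReplicasException.
            NewTopic topic = new NewTopic("min-isr-repro", 1, (short) 1)
                    .configs(Collections.singletonMap("min.insync.replicas", "2"));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}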

What do you think?

Alexandre


Build failed in Jenkins: kafka-trunk-jdk11 #709

2019-07-22 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] MINOR: kafkatest - adding whitelist for interbroker sasl 
configs (#7093)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H44 (ubuntu bionic) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision e5f7220b23ba556352d80a0575fcb6cbfe2d576d 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e5f7220b23ba556352d80a0575fcb6cbfe2d576d
Commit message: "MINOR: kafkatest - adding whitelist for interbroker sasl 
configs (#7093)"
 > git rev-list --no-walk e3524ef350830e3e1fa918ec0ae93b09d51fcf37 # timeout=10
ERROR: No tool found matching GRADLE_4_10_2_HOME
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins6064544410355678446.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/4.10.3/bin/gradle
/tmp/jenkins6064544410355678446.sh: line 4: 
/home/jenkins/tools/gradle/4.10.3/bin/gradle: No such file or directory
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: No tool found matching GRADLE_4_10_2_HOME
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_10_2_HOME
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
Not sending mail to unregistered user rajinisiva...@googlemail.com


Build failed in Jenkins: kafka-trunk-jdk8 #3806

2019-07-22 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] MINOR: kafkatest - adding whitelist for interbroker sasl 
configs (#7093)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H44 (ubuntu bionic) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision e5f7220b23ba556352d80a0575fcb6cbfe2d576d 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e5f7220b23ba556352d80a0575fcb6cbfe2d576d
Commit message: "MINOR: kafkatest - adding whitelist for interbroker sasl 
configs (#7093)"
 > git rev-list --no-walk e3524ef350830e3e1fa918ec0ae93b09d51fcf37 # timeout=10
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/jenkins8631899412759442773.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/4.8.1/bin/gradle
/tmp/jenkins8631899412759442773.sh: line 4: 
/home/jenkins/tools/gradle/4.8.1/bin/gradle: No such file or directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/*bugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
No credentials specified
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=e5f7220b23ba556352d80a0575fcb6cbfe2d576d, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #3796
Recording test results
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user rajinisiva...@googlemail.com