Build failed in Jenkins: kafka-trunk-jdk11 #507

2019-05-09 Thread Apache Jenkins Server
See 


Changes:

[cshapi] KAFKA-8344; Fix vagrant-up.sh to work with AWS properly

[cshapi] MINOR: Add missing option for running vagrant-up.sh with AWS to

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 31859a452b68ddd98281ff341a6b75c579f3c050 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 31859a452b68ddd98281ff341a6b75c579f3c050
Commit message: "MINOR: Add missing option for running vagrant-up.sh with AWS to vagrant/README.md"
 > git rev-list --no-walk 96096cebe1ec77b3f7e0c60cf9f4ce575adccd67 # timeout=10
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: No tool found matching GRADLE_4_10_2_HOME
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins1171300458844352842.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins1171300458844352842.sh: line 4: /bin/gradle: No such file or directory
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files were found. Configuration error?
ERROR: No tool found matching GRADLE_4_10_2_HOME
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user csh...@gmail.com
Not sending mail to unregistered user wangg...@gmail.com


[jira] [Created] (KAFKA-8348) Document of kafkaStreams improvement

2019-05-09 Thread Lifei Chen (JIRA)
Lifei Chen created KAFKA-8348:
-

 Summary: Document of kafkaStreams improvement
 Key: KAFKA-8348
 URL: https://issues.apache.org/jira/browse/KAFKA-8348
 Project: Kafka
  Issue Type: Improvement
  Components: documentation, streams
Reporter: Lifei Chen
Assignee: Lifei Chen


There is an out-of-date and incorrect example in KafkaStreams.java in the current 
version:
 * a Map is not supported for the initial StreamsConfig properties
 * a primitive `int` does not support `toString`

related code:
{code:java}
// kafkaStreams.java

* 
* A simple example might look like this:
* {@code
* Properties props = new Properties();
* props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-stream-processing-application");
* props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
* props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
* props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
*
* StreamsBuilder builder = new StreamsBuilder();
* builder.stream("my-input-topic").mapValues(value -> String.valueOf(value.length())).to("my-output-topic");
*
* KafkaStreams streams = new KafkaStreams(builder.build(), props);
* streams.start();
* }{code}
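For reference, the corrected setup can be sketched without the Kafka jars at all: the config must be a `Properties` object rather than a `Map`, and a primitive `int` such as `value.length()` is converted with `String.valueOf(...)` since it has no `toString()` method. The string keys below are what the `StreamsConfig` constants are believed to resolve to (an assumption; check `StreamsConfig` in your Kafka version). A minimal, JDK-only sketch:

```java
import java.util.Properties;

public class StreamsConfigExample {
    // The configuration must be a java.util.Properties object, not a Map.
    // The string keys below are assumed to match the StreamsConfig constants
    // (APPLICATION_ID_CONFIG, BOOTSTRAP_SERVERS_CONFIG, ...).
    static Properties buildProps() {
        Properties props = new Properties();
        props.put("application.id", "my-stream-processing-application");
        props.put("bootstrap.servers", "localhost:9092");
        props.put("default.key.serde", "org.apache.kafka.common.serialization.Serdes$StringSerde");
        props.put("default.value.serde", "org.apache.kafka.common.serialization.Serdes$StringSerde");
        return props;
    }

    // A primitive int has no toString(); String.valueOf converts it instead,
    // mirroring the mapValues(value -> String.valueOf(value.length())) step.
    static String lengthAsString(String value) {
        return String.valueOf(value.length());
    }

    public static void main(String[] args) {
        System.out.println(buildProps().getProperty("application.id"));
        System.out.println(lengthAsString("hello"));
    }
}
```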
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8344) Fix vagrant-up.sh to work with AWS properly

2019-05-09 Thread Gwen Shapira (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-8344.
-
   Resolution: Fixed
Fix Version/s: 2.3.0

> Fix vagrant-up.sh to work with AWS properly
> ---
>
> Key: KAFKA-8344
> URL: https://issues.apache.org/jira/browse/KAFKA-8344
> Project: Kafka
>  Issue Type: Bug
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Major
> Fix For: 2.3.0
>
>
> I tried to run {{vagrant/vagrant-up.sh --aws}} with the following Vagrantfile.local.
> {code}
> enable_dns = true
> enable_hostmanager = false
> # EC2
> ec2_access_key = ""
> ec2_secret_key = ""
> ec2_keypair_name = "keypair"
> ec2_keypair_file = "/path/to/keypair/file"
> ec2_region = "ap-northeast-1"
> ec2_ami = "ami-0905ffddadbfd01b7"
> ec2_security_groups = "sg-"
> ec2_subnet_id = "subnet-"
> {code}
> EC2 instances were successfully created, but it failed with the following error after that.
> {code}
> $ vagrant/vagrant-up.sh --aws
> (snip)
> An active machine was found with a different provider. Vagrant
> currently allows each machine to be brought up with only a single
> provider at a time. A future version will remove this limitation.
> Until then, please destroy the existing machine to up with a new
> provider.
> Machine name: zk1
> Active provider: aws
> Requested provider: virtualbox
> {code}
> It seems that the {{vagrant hostmanager}} command also requires the {{--provider=aws}} option, in addition to {{vagrant up}}.
> With that option, it succeeded as follows:
> {code}
> $ git diff
> diff --git a/vagrant/vagrant-up.sh b/vagrant/vagrant-up.sh
> index 6a4ef9564..9210a5357 100755
> --- a/vagrant/vagrant-up.sh
> +++ b/vagrant/vagrant-up.sh
> @@ -220,7 +220,7 @@ function bring_up_aws {
>  # We still have to bring up zookeeper/broker nodes serially
>  echo "Bringing up zookeeper/broker machines serially"
>  vagrant up --provider=aws --no-parallel --no-provision $zk_broker_machines $debug
> -vagrant hostmanager
> +vagrant hostmanager --provider=aws
>  vagrant provision
>  fi
> @@ -231,11 +231,11 @@ function bring_up_aws {
>  local vagrant_rsync_temp_dir=$(mktemp -d);
>  TMPDIR=$vagrant_rsync_temp_dir vagrant_batch_command "vagrant up $debug --provider=aws" "$worker_machines" "$max_parallel"
>  rm -rf $vagrant_rsync_temp_dir
> -vagrant hostmanager
> +vagrant hostmanager --provider=aws
>  fi
>  else
>  vagrant up --provider=aws --no-parallel --no-provision $debug
> -vagrant hostmanager
> +vagrant hostmanager --provider=aws
>  vagrant provision
>  fi
> $ vagrant/vagrant-up.sh --aws
> (snip)
> ==> broker3: Running provisioner: shell...
> broker3: Running: /tmp/vagrant-shell20190509-25399-8f1wgz.sh
> broker3: Killing server
> broker3: No kafka server to stop
> broker3: Starting server
> $ vagrant status
> Current machine states:
> zk1   running (aws)
> broker1   running (aws)
> broker2   running (aws)
> broker3   running (aws)
> This environment represents multiple VMs. The VMs are all listed
> above with their current state. For more information about a specific
> VM, run `vagrant status NAME`.
> $ vagrant ssh broker1
> (snip)
> ubuntu@ip-172-16-0-62:~$ /opt/kafka-dev/bin/kafka-topics.sh --bootstrap-server broker1:9092,broker2:9092,broker3:9092 --create --partitions 1 --replication-factor 3 --topic sandbox
> (snip)
> ubuntu@ip-172-16-0-62:~$ /opt/kafka-dev/bin/kafka-topics.sh --bootstrap-server broker1:9092,broker2:9092,broker3:9092 --list
> (snip)
> sandbox
> {code}





Build failed in Jenkins: kafka-trunk-jdk11 #506

2019-05-09 Thread Apache Jenkins Server
See 


Changes:

[cshapi] MINOR: docs typo in '--zookeeper myhost:2181--execute'

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-4 (ubuntu trusty) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 96096cebe1ec77b3f7e0c60cf9f4ce575adccd67 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 96096cebe1ec77b3f7e0c60cf9f4ce575adccd67
Commit message: "MINOR: docs typo in '--zookeeper myhost:2181--execute'"
 > git rev-list --no-walk 7a4618a793aacd240745c8de0ad7e502121f5dc2 # timeout=10
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: No tool found matching GRADLE_4_10_2_HOME
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins7012714518466325561.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins7012714518466325561.sh: line 4: /bin/gradle: No such file or directory
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files were found. Configuration error?
ERROR: No tool found matching GRADLE_4_10_2_HOME
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user csh...@gmail.com
Not sending mail to unregistered user wangg...@gmail.com


Build failed in Jenkins: kafka-2.1-jdk8 #185

2019-05-09 Thread Apache Jenkins Server
See 


Changes:

[rhauch] MINOR: Remove header and key/value converter config value logging

--
[...truncated 2.58 MB...]

org.apache.kafka.streams.StreamsBuilderTest > 
shouldAllowJoinMaterializedMapValuedKTable PASSED

org.apache.kafka.streams.StreamsBuilderTest > 
shouldAllowJoinUnmaterializedMapValuedKTable STARTED

org.apache.kafka.streams.StreamsBuilderTest > 
shouldAllowJoinUnmaterializedMapValuedKTable PASSED

org.apache.kafka.streams.StreamsBuilderTest > 
shouldAllowJoinUnmaterializedFilteredKTable STARTED

org.apache.kafka.streams.StreamsBuilderTest > 
shouldAllowJoinUnmaterializedFilteredKTable PASSED

org.apache.kafka.streams.StreamsBuilderTest > 
shouldNotReuseSourceTopicAsChangelogsByDefault STARTED

org.apache.kafka.streams.StreamsBuilderTest > 
shouldNotReuseSourceTopicAsChangelogsByDefault PASSED

org.apache.kafka.streams.StreamsBuilderTest > 
shouldUseSerdesDefinedInMaterializedToConsumeTable STARTED

org.apache.kafka.streams.StreamsBuilderTest > 
shouldUseSerdesDefinedInMaterializedToConsumeTable PASSED

org.apache.kafka.streams.StreamsBuilderTest > 
shouldThrowExceptionWhenTopicNamesAreNull STARTED

org.apache.kafka.streams.StreamsBuilderTest > 
shouldThrowExceptionWhenTopicNamesAreNull PASSED

org.apache.kafka.streams.StreamsBuilderTest > 
shouldAllowJoinMaterializedSourceKTable STARTED

org.apache.kafka.streams.StreamsBuilderTest > 
shouldAllowJoinMaterializedSourceKTable PASSED

org.apache.kafka.streams.StreamsBuilderTest > 
shouldThrowExceptionWhenNoTopicPresent STARTED

org.apache.kafka.streams.StreamsBuilderTest > 
shouldThrowExceptionWhenNoTopicPresent PASSED

org.apache.kafka.streams.StreamsBuilderTest > shouldUseDefaultNodeAndStoreNames 
STARTED

org.apache.kafka.streams.StreamsBuilderTest > shouldUseDefaultNodeAndStoreNames 
PASSED

org.apache.kafka.streams.StreamsBuilderTest > 
shouldAllowJoinUnmaterializedJoinedKTable STARTED

org.apache.kafka.streams.StreamsBuilderTest > 
shouldAllowJoinUnmaterializedJoinedKTable PASSED

org.apache.kafka.streams.StreamsBuilderTest > shouldProcessingFromSinkTopic 
STARTED

org.apache.kafka.streams.StreamsBuilderTest > shouldProcessingFromSinkTopic 
PASSED

org.apache.kafka.streams.StreamsBuilderTest > shouldProcessViaThroughTopic 
STARTED

org.apache.kafka.streams.StreamsBuilderTest > shouldProcessViaThroughTopic 
PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testShiftOffsetByWhenBetweenBeginningAndEndOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testShiftOffsetByWhenBetweenBeginningAndEndOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetToSpecificOffsetWhenAfterEndOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetToSpecificOffsetWhenAfterEndOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
shouldAcceptValidDateFormats STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
shouldAcceptValidDateFormats PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > shouldDeleteTopic STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > shouldDeleteTopic PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetToSpecificOffsetWhenBetweenBeginningAndEndOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetToSpecificOffsetWhenBetweenBeginningAndEndOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testShiftOffsetByWhenAfterEndOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testShiftOffsetByWhenAfterEndOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetToSpecificOffsetWhenBeforeBeginningOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetToSpecificOffsetWhenBeforeBeginningOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > shouldSeekToEndOffset 
STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > shouldSeekToEndOffset 
PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenBetweenBeginningAndEndOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenBetweenBeginningAndEndOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenBeforeBeginningOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenBeforeBeginningOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
shouldThrowOnInvalidDateFormat STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
shouldThrowOnInvalidDateFormat PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenAfterEndOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenAfterEndOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testShiftOffsetByWhenBeforeBeginningOffset STARTED

org.a

Build failed in Jenkins: kafka-2.0-jdk8 #261

2019-05-09 Thread Apache Jenkins Server
See 


Changes:

[rhauch] MINOR: Remove header and key/value converter config value logging

--
[...truncated 434.64 KB...]
kafka.controller.ReplicaStateMachineTest > 
testInvalidOfflineReplicaToNonexistentReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOfflineReplicaToNonexistentReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToReplicaDeletionIneligibleTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToReplicaDeletionIneligibleTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToReplicaDeletionStartedTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionSuccessfulTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionSuccessfulTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToReplicaDeletionStartedTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToOfflineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToOfflineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionStartedTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testOnlineReplicaToOnlineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testOnlineReplicaToOnlineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionIneligibleTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionIneligibleTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToOfflineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToOfflineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionStartedToReplicaDeletionSuccessfulTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionStartedToReplicaDeletionSuccessfulTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToOnlineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToOnlineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToNewReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToNewReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionStartedToReplicaDeletionIneligibleTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionStartedToReplicaDeletionIneligibleTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToOfflineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToOfflineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToNewReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToNewReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionIneligibleToOnlineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionIneligibleToOnlineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToNewReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToNewReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToReplicaDeletionSuccessfulTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToReplicaDeletionSuccessfulTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testNewReplicaToOnlineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testNewReplicaToOnlineReplicaTransition PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException PASSED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
STARTED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
PA

Jenkins build is back to normal : kafka-1.1-jdk7 #262

2019-05-09 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-2.0-jdk8 #260

2019-05-09 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6789; Handle retriable group errors in AdminClient API (#5578)

--
[...truncated 2.51 MB...]
org.apache.kafka.streams.TopologyTest > shouldNotAddNullStateStoreSupplier 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAddNullStateStoreSupplier 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddStateStoreToNonExistingProcessor STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddStateStoreToNonExistingProcessor PASSED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 
tableNamedMaterializedCountShouldPreserveTopologyStructure STARTED

org.apache.kafka.streams.TopologyTest > 
tableNamedMaterializedCountShouldPreserveTopologyStructure PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullStoreNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullStoreNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingSourceWithTopic STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingSourceWithTopic PASSED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithStateShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithStateShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullTopicWhenAddingSink 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullTopicWhenAddingSink 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName PASSED

org.apache.kafka.streams.TopologyTest > 
singleSourceWithListOfTopicsShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
singleSourceWithListOfTopicsShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 
sourceWithMultipleProcessorsShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceWithMultipleProcessorsShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowZeroStoreNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowZeroStoreNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.TopologyTest > 
kTableNamedMaterializedMapValuesShouldPreserveTopologyStructure STARTED

org.apache.kafka.streams.TopologyTest > 
kTableNamedMaterializedMapValuesShouldPreserveTopologyStructure PASSED

org.apache.kafka.streams.TopologyTest > 
timeWindowZeroArgCountShouldPreserveTopologyStructure STARTED

org.apache.kafka.streams.TopologyTest > 
timeWindowZeroArgCountShouldPreserveTopologyStructure PASSED

org.apache.kafka.streams.TopologyTest > 
multipleSourcesWithProcessorsShouldHaveDistinctSubtopologies STARTED

org.apache.kafka.streams.TopologyTest > 
multipleSourcesWithProcessorsShouldHaveDistinctSubtopologies PASSED

org.apache.kafka.streams.TopologyTest > shouldThrowOnUnassignedStateStoreAccess 
STARTED

org.apache.kafka.streams.TopologyTest > shouldThrowOnUnassignedStateStoreAccess 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullProcessorNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullProcessorNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.TopologyTest > 
kTableAnonymousMaterializedMapValuesShouldPreserveTopologyStructure STARTED

org.apache.kafka.streams.TopologyTest > 
kTableAnonymousMaterializedMapValuesShouldPreserveTopologyStructure PASSED

org.apache.kafka.streams.TopologyTest > 
sessionWindowZeroArgCountShouldPreserveTopologyStructure STARTED

org.apache.kafka.streams.TopologyTest > 
sessionWindowZeroArgCountShouldPreserveTopologyStructure PASSED

org.apache.kafka.streams.TopologyTest > 
tableAnonymousMaterializedCountShouldPreserveTopologyStructure STARTED

org.apache.kafka.streams.TopologyTest > 
tableAnonymousMaterializedCountShouldPreserveTopologyStructure PASSED

org.apache.kafka.streams.TopologyTest > shouldDescribeGlobalStoreTopology 
STARTED

org.apache.kafka.streams.TopologyTest > shouldDescribeGlobalStoreTopology PASSED

org.apache.kafka.streams.TopologyTest > 
kTableNonMaterializedMapValuesShouldPreserveTopologyStructure STARTED

org.apache.kafka.streams.TopologyTest > 
kTableNonMaterializedMapValuesShouldPreserveTopologyStructure PASSED

org.apache.kafka.streams.TopologyTest > 
kGroupedStreamAnonymousMaterializedCountShouldPreserveTopologyStructure STARTED

org.apache.kafka.streams.TopologyT

Build failed in Jenkins: kafka-trunk-jdk11 #505

2019-05-09 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-8231: Expansion of ConnectClusterState interface (#6584)

[rhauch] MINOR: Remove header and key/value converter config value logging

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 7a4618a793aacd240745c8de0ad7e502121f5dc2 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7a4618a793aacd240745c8de0ad7e502121f5dc2
Commit message: "MINOR: Remove header and key/value converter config value logging (#6660)"
 > git rev-list --no-walk b2826c6c2bfc3360913e63e0b65f9d79e782fd50 # timeout=10
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: No tool found matching GRADLE_4_10_2_HOME
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins452861290722474963.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins452861290722474963.sh: line 4: /bin/gradle: No such file or directory
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files were found. Configuration error?
ERROR: No tool found matching GRADLE_4_10_2_HOME
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user csh...@gmail.com
Not sending mail to unregistered user wangg...@gmail.com


Build failed in Jenkins: kafka-trunk-jdk8 #3615

2019-05-09 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6789; Handle retriable group errors in AdminClient API (#5578)

[bbejeck] KAFKA-8324: Add close() method to RocksDBConfigSetter (#6697)

--
[...truncated 2.41 MB...]
org.apache.kafka.connect.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > bytesToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > doubleToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
timestampToConnectOptionalWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
timestampToConnectOptionalWithDefaultValue PASSED

> Task :streams:streams-scala:test

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaProperties STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaProperties PASSED

org.apache.kafka.streams.scala.TopologyTest > shouldBuildIdenticalTopologyInJavaNScalaTransform STARTED

org.apache.kafka.streams.scala.TopologyTest > shouldBuildIdenticalTopologyInJavaNScalaTransform PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should filter records satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should filter records satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should filter records not satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should filter records not satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join correctly records STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join correctly records PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a Materialized should join correctly records and state store STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a Materialized should join correctly records and state store PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress should correctly suppress results using Suppressed.untilTimeLimit STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress should correctly suppress results using Suppressed.untilTimeLimit PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress should correctly suppress results using Suppressed.untilWindowCloses STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress should correctly suppress results using Suppressed.untilWindowCloses PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > session windowed KTable#suppress should correc

[jira] [Resolved] (KAFKA-8231) Expansion of ConnectClusterState interface

2019-05-09 Thread Randall Hauch (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-8231.
--
Resolution: Fixed

> Expansion of ConnectClusterState interface
> --
>
> Key: KAFKA-8231
> URL: https://issues.apache.org/jira/browse/KAFKA-8231
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.3.0
>
>
> This covers [KIP-454: Expansion of the ConnectClusterState 
> interface|https://cwiki.apache.org/confluence/display/KAFKA/KIP-454%3A+Expansion+of+the+ConnectClusterState+interface]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-458: Connector Client Config Override Policy

2019-05-09 Thread Randall Hauch
I'm still +1 and like the simplification.

Randall

On Thu, May 9, 2019 at 5:54 PM Magesh Nandakumar 
wrote:

> I have updated the KIP to remove the `Ignore` policy and also the
> useOverrides()
> method in the interface.
> Thanks a lot for your thoughts, Colin. I believe this certainly simplifies
> the KIP.
>
> On Thu, May 9, 2019 at 3:44 PM Magesh Nandakumar 
> wrote:
>
> > Unless anyone has objections, I'm going to update the KIP to remove the
> > `Ignore` policy and make `None` as the default. I will also remove the `
> > default boolean useOverrides()` in the interface which was introduced for
> > the purpose of backward compatibility.
> >
> > On Thu, May 9, 2019 at 3:27 PM Randall Hauch  wrote:
> >
> >> I have also seen users include in connector configs the `producer.*` and
> >> `consumer.*` properties that should go into the worker configs. But
> those
> >> don't match, and the likelihood that someone is already using
> >> `producer.override.*` or `consumer.override.*` properties in their
> >> connector configs does seem pretty tiny.
> >>
> >> I'd be fine with removing the `Ignore` for backward compatibility. Still
> >> +1
> >> either way.
> >>
> >> On Thu, May 9, 2019 at 5:23 PM Magesh Nandakumar 
> >> wrote:
> >>
> >> > To add more details regarding the backward compatibility; I have
> >> generally
> >> > seen users trying to set "producer.request.timeout.ms
> >> > " in their connector
> >> config
> >> > under the assumption that it will get used and would never come back
> to
> >> > remove it. The initial intent of the KIP was to use the same prefix
> but
> >> > since that potentially collided with MM2 configs, we agreed to use a
> >> > different prefix "producer.override". With this context, I think the
> >> > likelihood of someone using this is very small and should generally
> not
> >> be
> >> > a problem.
> >> >
> >> > On Thu, May 9, 2019 at 3:15 PM Magesh Nandakumar <
> mage...@confluent.io>
> >> > wrote:
> >> >
> >> > > Colin,
> >> > >
> >> > > Thanks a lot for the feedback.  As you said, the possibility of
> >> > > someone having "producer.override.request.timeout.ms" in their
> >> > > connector config in AK 2.2 or lower is very slim. But the key thing
> >> > > is that if someone has it, AK 2.2 doesn't do anything with it and
> >> > > silently ignores the configuration. If others think that it's not
> >> > > really a problem, then I'm fine with removing the complicated
> >> > > compatibility issue.
> >> > >
> >> > > I have explicitly called out the behavior when the exception is
> >> thrown.
> >> > >
> >> > > Let me know what you think.
> >> > >
> >> > > Thanks,
> >> > > Magesh
> >> > >
> >> > > On Thu, May 9, 2019 at 2:45 PM Colin McCabe 
> >> wrote:
> >> > >
> >> > >> Hi Magesh,
> >> > >>
> >> > >> Thanks for the KIP.  It looks good overall.
> >> > >>
> >> > >> >default boolean useOverrides() {
> >> > >> >return true;
> >> > >> >}
> >> > >>
> >> > >> Is this method really needed?  As I understand, nobody should have
> >> any
> >> > >> connector client config overrides set right now, since they don't
> do
> >> > >> anything right now.
> >> > >>
> >> > >> For example, you wouldn't expect a Kafka 2.2 installation to have "
> >> > >> producer.override.request.timeout.ms" set, since that doesn't do
> >> > >> anything in Kafka 2.2.  So is the option to ignore it in Kafka 2.3
> >> > really
> >> > >> necessary?
> >> > >>
> >> > >> Can you add some details about what happens if a
> >> > >> PolicyValidationException is thrown?  I'm assuming that we fail to
> >> > create
> >> > >> the new Connector, I'm not sure if that's completely spelled out
> >> > (unless I
> >> > >> missed it).
> >> > >>
> >> > >> best,
> >> > >> Colin
> >> > >>
> >> > >>
> >> > >> On Thu, May 9, 2019, at 08:05, Rajini Sivaram wrote:
> >> > >> > Hi Magesh,
> >> > >> >
> >> > >> > Thanks for the KIP, +1 (binding)
> >> > >> >
> >> > >> > Regards,
> >> > >> >
> >> > >> > Rajini
> >> > >> >
> >> > >> >
> >> > >> > On Thu, May 9, 2019 at 3:55 PM Randall Hauch 
> >> > wrote:
> >> > >> >
> >> > >> > > Nice work, Magesh.
> >> > >> > >
> >> > >> > > +1 (binding)
> >> > >> > >
> >> > >> > > Randall
> >> > >> > >
> >> > >> > > On Wed, May 8, 2019 at 7:22 PM Magesh Nandakumar <
> >> > >> mage...@confluent.io>
> >> > >> > > wrote:
> >> > >> > >
> >> > >> > > > Thanks a lot Chris. So far, the KIP has one non-binding vote
> >> and
> >> > I'm
> >> > >> > > still
> >> > >> > > > looking forward to the KIP to be voted by Friday's deadline.
> >> > >> > > >
> >> > >> > > > On Tue, May 7, 2019 at 10:00 AM Chris Egerton <
> >> > chr...@confluent.io>
> >> > >> > > wrote:
> >> > >> > > >
> >> > >> > > > > Hi Magesh,
> >> > >> > > > >
> >> > >> > > > > This looks great! Very excited to see these changes finally
> >> > >> coming to
> >> > >> > > > > Connect.
> >> > >> > > > > +1 (non-binding)
> >> > >> > > > >
> >> > >> > > > > Cheers,
> >> > >> > > > >
> >> > >> > > > > Chris

Build failed in Jenkins: kafka-2.1-jdk8 #184

2019-05-09 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6789; Handle retriable group errors in AdminClient API (#5578)

--
[...truncated 467.53 KB...]

kafka.zk.ReassignPartitionsZNodeTest > testDecodeInvalidJson STARTED

kafka.zk.ReassignPartitionsZNodeTest > testDecodeInvalidJson PASSED

kafka.zk.ReassignPartitionsZNodeTest > testEncode STARTED

kafka.zk.ReassignPartitionsZNodeTest > testEncode PASSED

kafka.zk.ReassignPartitionsZNodeTest > testDecodeValidJson STARTED

kafka.zk.ReassignPartitionsZNodeTest > testDecodeValidJson PASSED

kafka.zk.KafkaZkClientTest > testZNodeChangeHandlerForDataChange STARTED

kafka.zk.KafkaZkClientTest > testZNodeChangeHandlerForDataChange PASSED

kafka.zk.KafkaZkClientTest > testCreateAndGetTopicPartitionStatesRaw STARTED

kafka.zk.KafkaZkClientTest > testCreateAndGetTopicPartitionStatesRaw PASSED

kafka.zk.KafkaZkClientTest > testLogDirGetters STARTED

kafka.zk.KafkaZkClientTest > testLogDirGetters PASSED

kafka.zk.KafkaZkClientTest > testSetGetAndDeletePartitionReassignment STARTED

kafka.zk.KafkaZkClientTest > testSetGetAndDeletePartitionReassignment PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndVersion STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndVersion PASSED

kafka.zk.KafkaZkClientTest > testGetChildren STARTED

kafka.zk.KafkaZkClientTest > testGetChildren PASSED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset STARTED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset PASSED

kafka.zk.KafkaZkClientTest > testClusterIdMethods STARTED

kafka.zk.KafkaZkClientTest > testClusterIdMethods PASSED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr STARTED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr PASSED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testCreateRecursive STARTED

kafka.zk.KafkaZkClientTest > testCreateRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData STARTED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods PASSED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw STARTED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw PASSED

kafka.zk.KafkaZkClientTest > testAclManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods STARTED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateLogDir STARTED

kafka.zk.KafkaZkClientTest > testPropagateLogDir PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress STARTED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress PASSED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths STARTED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters PASSED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetLogConfigs STARTED

kafka.zk.KafkaZkClientTest > testGetLogConfigs PASSED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods STARTED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath STARTED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode PASSED

kafka.zk.KafkaZkClientTest > testDeletePath STARTED

kafka.zk.KafkaZkClientTest > testDeletePath PASSED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods STARTED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions STARTED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions PASSED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRegist

[jira] [Resolved] (KAFKA-8313) KafkaStreams state not being updated properly after shutdown

2019-05-09 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-8313.
--
   Resolution: Fixed
 Assignee: Guozhang Wang
Fix Version/s: 2.3.0

> KafkaStreams state not being updated properly after shutdown
> 
>
> Key: KAFKA-8313
> URL: https://issues.apache.org/jira/browse/KAFKA-8313
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Single broker running on Ubuntu Linux.
>Reporter: Eric
>Assignee: Guozhang Wang
>Priority: Minor
> Fix For: 2.3.0
>
> Attachments: kafka-8313-src.tgz, log.txt
>
>
> I am running a KafkaStreams inside a DropWizard server and I am trying to 
> detect when my stream shuts down (in case a non-recoverable error occurs).  I 
> was hoping I could use KafkaStreams.setStateListener() to be notified when a 
> state change occurs.  When I query the state, KafkaStreams is stuck in the 
> REBALANCING state even though its threads are all DEAD.
>  
> You can easily reproduce this by doing the following:
>  # Create a topic (I have one with 5 partitions)
>  # Create a simple Kafka stream consuming from that topic
>  # Create a StateListener and register it on that KafkaStreams
>  # Start the Kafka stream
>  # Once everything runs, delete the topic using kafka-topics.sh
> When deleting the topic, you will see the StreamThreads' state transition 
> from RUNNING to PARTITION_REVOKED and you will be notified with the 
> KafkaStreams REBALANCING state.  That's all good and expected.  Then the 
> StreamThreads transition to PENDING_SHUTDOWN and eventually to DEAD, and the 
> KafkaStreams state is stuck in the REBALANCING state.  I was expecting to 
> see a NOT_RUNNING state eventually... am I right?
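The terminal-state detection the reporter is after is normally done with a state listener that trips on a terminal state. The sketch below is self-contained and illustrative only: the `State` enum and `Notifier` class are simplified stand-ins for `KafkaStreams.State` and `KafkaStreams.setStateListener`, not the real Streams API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiConsumer;

public class StateListenerSketch {

    enum State { CREATED, REBALANCING, RUNNING, PENDING_SHUTDOWN, NOT_RUNNING, ERROR }

    // Minimal stand-in for KafkaStreams' listener registration and transitions.
    static class Notifier {
        private State state = State.CREATED;
        private final List<BiConsumer<State, State>> listeners = new ArrayList<>();

        void setStateListener(BiConsumer<State, State> l) { listeners.add(l); }

        void transitionTo(State newState) {
            State old = state;
            state = newState;
            // Notify listeners with (newState, oldState), as the real API does.
            listeners.forEach(l -> l.accept(newState, old));
        }
    }

    public static void main(String[] args) {
        Notifier streams = new Notifier();
        List<State> seen = new ArrayList<>();
        // Record every transition; real code would trip a latch on NOT_RUNNING/ERROR.
        streams.setStateListener((newState, oldState) -> seen.add(newState));

        // The sequence the reporter expected after the topic deletion:
        streams.transitionTo(State.REBALANCING);
        streams.transitionTo(State.PENDING_SHUTDOWN);
        streams.transitionTo(State.NOT_RUNNING); // the transition KAFKA-8313 reports as missing
        System.out.println(seen);
    }
}
```

The bug described above is that the final `REBALANCING -> NOT_RUNNING` transition never fires, so a listener like this never observes a terminal state.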



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8347) Choose next record to process by timestamp

2019-05-09 Thread Sophie Blee-Goldman (JIRA)
Sophie Blee-Goldman created KAFKA-8347:
--

 Summary: Choose next record to process by timestamp
 Key: KAFKA-8347
 URL: https://issues.apache.org/jira/browse/KAFKA-8347
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Sophie Blee-Goldman


Currently, PartitionGroup determines the next record to process by choosing 
the partition with the lowest stream time. However, if a partition contains 
out-of-order data, its stream time may be significantly larger than the 
timestamp of its next record. The next record should instead be chosen as the 
record with the lowest timestamp across all partitions, regardless of which 
partition it comes from or what its partition time is.
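The proposed rule — compare the head records' own timestamps rather than each partition's accumulated stream time — can be sketched with plain collections. The `Rec` type and the queue-of-deques layout are illustrative stand-ins, not PartitionGroup's actual internals.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class NextRecordChooser {

    // Illustrative stand-in for a buffered record with its extracted timestamp.
    static final class Rec {
        final String partition;
        final long timestamp;
        Rec(String partition, long timestamp) { this.partition = partition; this.timestamp = timestamp; }
        @Override public String toString() { return partition + "@" + timestamp; }
    }

    // Poll from the queue whose head record has the lowest timestamp,
    // regardless of each partition's accumulated stream time.
    static Rec pollNext(List<Deque<Rec>> queues) {
        Deque<Rec> best = null;
        for (Deque<Rec> q : queues) {
            if (q.isEmpty()) continue;
            if (best == null || q.peekFirst().timestamp < best.peekFirst().timestamp) {
                best = q;
            }
        }
        return best == null ? null : best.pollFirst();
    }

    public static void main(String[] args) {
        Deque<Rec> p0 = new ArrayDeque<>();
        p0.add(new Rec("p0", 5));    // out-of-order partition: old head record...
        p0.add(new Rec("p0", 100));  // ...buffered after newer data raised its stream time
        Deque<Rec> p1 = new ArrayDeque<>();
        p1.add(new Rec("p1", 10));

        // A stream-time comparison could prefer p1 here; the lowest-timestamp
        // rule picks p0's head record at t=5.
        System.out.println(pollNext(List.of(p0, p1))); // prints p0@5
    }
}
```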



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8346) Improve replica fetcher behavior in handling partition failures

2019-05-09 Thread Aishwarya Gune (JIRA)
Aishwarya Gune created KAFKA-8346:
-

 Summary: Improve replica fetcher behavior in handling partition 
failures
 Key: KAFKA-8346
 URL: https://issues.apache.org/jira/browse/KAFKA-8346
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Aishwarya Gune


The replica fetcher thread terminates if one of the partitions it monitors 
fails, which leads to under-replicated partitions. The thread's behavior 
can be improved by dropping that particular partition and continuing to 
handle the rest of the partitions.
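The suggested improvement amounts to per-partition error isolation inside the fetch loop: catch the failure, retire that partition, and keep serving the others. This is a self-contained sketch under stated assumptions — `fetchRound`, the `Runnable`-per-partition model, and the failure handling are hypothetical simplifications, not the broker's actual fetcher code.

```java
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

public class FetcherSketch {

    // Run one fetch round; partitions whose processing throws are removed
    // from duty instead of terminating the whole loop.
    static Set<String> fetchRound(Map<String, Runnable> partitions) {
        Set<String> failed = new LinkedHashSet<>();
        for (Map.Entry<String, Runnable> e : partitions.entrySet()) {
            try {
                e.getValue().run();              // fetch/append for this partition
            } catch (RuntimeException ex) {
                failed.add(e.getKey());          // isolate the failure...
            }
        }
        partitions.keySet().removeAll(failed);   // ...and keep serving the rest
        return failed;
    }

    public static void main(String[] args) {
        Map<String, Runnable> partitions = new LinkedHashMap<>();
        partitions.put("topic-0", () -> {});
        partitions.put("topic-1", () -> { throw new IllegalStateException("partition failure"); });
        partitions.put("topic-2", () -> {});
        System.out.println("failed: " + fetchRound(partitions));
        System.out.println("still fetching: " + partitions.keySet());
    }
}
```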



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Hello, I`d like to get access right to contribute kafka.

2019-05-09 Thread Guozhang Wang
Hello WooYoung,

I saw your name on the contributors list already, so you should be good to go now.

Thanks for your interest in contributing to Kafka!


Guozhang

On Thu, May 9, 2019 at 8:08 AM WooYoung Jeong  wrote:

> These days I'm analyzing the Kafka source project.
> I'm interested in contributing to the Kafka project, so I have commented
> on a Jira ticket asking to be assigned.
> It is https://issues.apache.org/jira/browse/KAFKA-8311
>
> However, I have no access rights for Jira and the wiki.
>
> So could you give me access?
>
> Thank you, have a nice day
>


-- 
-- Guozhang


Build failed in Jenkins: kafka-trunk-jdk11 #504

2019-05-09 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6789; Handle retriable group errors in AdminClient API (#5578)

[bbejeck] KAFKA-8324: Add close() method to RocksDBConfigSetter (#6697)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H33 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision b2826c6c2bfc3360913e63e0b65f9d79e782fd50 (refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f b2826c6c2bfc3360913e63e0b65f9d79e782fd50
Commit message: "KAFKA-8324: Add close() method to RocksDBConfigSetter (#6697)"
 > git rev-list --no-walk a97e55b83868ff786e740db55e73116f85456dcb # timeout=10
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: No tool found matching GRADLE_4_10_2_HOME
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins2343731693651101923.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins2343731693651101923.sh: line 4: /bin/gradle: No such file or directory
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files were found. Configuration error?
ERROR: No tool found matching GRADLE_4_10_2_HOME
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user csh...@gmail.com
Not sending mail to unregistered user wangg...@gmail.com


[jira] [Resolved] (KAFKA-6789) Add retry logic in AdminClient requests

2019-05-09 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-6789.

   Resolution: Fixed
Fix Version/s: 2.2.2
   2.1.2
   2.0.2

> Add retry logic in AdminClient requests
> ---
>
> Key: KAFKA-6789
> URL: https://issues.apache.org/jira/browse/KAFKA-6789
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Reporter: Guozhang Wang
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.0.2, 2.1.2, 2.2.2
>
>
> In KafkaAdminClient, today we treat all error codes as fatal and set the 
> exception accordingly in the returned futures. But some error codes can 
> be retried internally, for example, COORDINATOR_LOADING_IN_PROGRESS. We 
> could consider adding the retry logic internally in the admin client so that 
> users would not need to retry themselves.
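The requested behavior — internally retrying retriable error codes instead of failing the returned future — is essentially a bounded retry loop keyed on the error type. A self-contained sketch, assuming a simplified error model: the `ErrorCode` enum values, the `Supplier`-based call, and the attempt cap are illustrative, not the AdminClient's real error or request machinery.

```java
import java.util.EnumSet;
import java.util.Set;
import java.util.function.Supplier;

public class AdminRetrySketch {

    enum ErrorCode { NONE, COORDINATOR_LOAD_IN_PROGRESS, NOT_COORDINATOR, UNKNOWN_SERVER_ERROR }

    static final Set<ErrorCode> RETRIABLE =
        EnumSet.of(ErrorCode.COORDINATOR_LOAD_IN_PROGRESS, ErrorCode.NOT_COORDINATOR);

    // Repeat the call while it returns a retriable error, up to maxAttempts.
    static ErrorCode callWithRetries(Supplier<ErrorCode> call, int maxAttempts) {
        ErrorCode last = ErrorCode.UNKNOWN_SERVER_ERROR;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            last = call.get();
            if (!RETRIABLE.contains(last)) {
                return last;   // success or a fatal error: surface it immediately
            }
            // a real client would back off here before the next attempt
        }
        return last;           // retries exhausted; caller sees the last retriable error
    }

    public static void main(String[] args) {
        // Simulate a coordinator that finishes loading on the third attempt.
        int[] n = {0};
        ErrorCode result = callWithRetries(
            () -> ++n[0] < 3 ? ErrorCode.COORDINATOR_LOAD_IN_PROGRESS : ErrorCode.NONE, 5);
        System.out.println(result + " after " + n[0] + " attempts");
    }
}
```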



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-429 : Smooth Auto-Scaling for Kafka Streams

2019-05-09 Thread Guozhang Wang
Hello Matthias,

I'm proposing to change this behavior holistically inside
ConsumerCoordinator, actually. In other words, I'm trying to piggy-back the
behavioral fix for KAFKA-4600 onto this KIP, and my motivation for doing so
is that, with incremental rebalancing, only some partitions would be
affected since we are no longer revoking everybody's partitions.
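With incremental rebalancing, only the partitions that actually move are revoked; the affected sets are plain set differences between what a member currently owns and what the new assignment gives it. A minimal sketch of that set arithmetic — the method names are illustrative, not the ConsumerCoordinator's internals:

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class IncrementalSets {

    // Partitions to revoke: owned but no longer assigned.
    static Set<String> toRevoke(Set<String> owned, Set<String> assigned) {
        Set<String> out = new LinkedHashSet<>(owned);
        out.removeAll(assigned);
        return out;
    }

    // Partitions to newly start: assigned but not yet owned.
    static Set<String> toAdd(Set<String> owned, Set<String> assigned) {
        Set<String> out = new LinkedHashSet<>(assigned);
        out.removeAll(owned);
        return out;
    }

    public static void main(String[] args) {
        Set<String> owned = Set.of("t-0", "t-1", "t-2");
        Set<String> assigned = Set.of("t-1", "t-2", "t-3");
        // The eager protocol would revoke all three owned partitions;
        // the incremental protocol only touches t-0 (revoke) and t-3 (add).
        System.out.println("revoke: " + toRevoke(owned, assigned));
        System.out.println("add: " + toAdd(owned, assigned));
    }
}
```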


Guozhang


On Thu, May 9, 2019 at 6:21 AM Matthias J. Sax 
wrote:

> Thanks Guozhang!
>
> The simplified upgrade path is great!
>
>
> Just a clarification question about the "Rebalance Callback Error
> Handling" -- does this change affect the `ConsumerCoordinator` only if
> incremental rebalancing is use? Or does the behavior also change for the
> eager rebalancing case?
>
>
> -Matthias
>
>
> On 5/9/19 3:37 AM, Guozhang Wang wrote:
> > Hello all,
> >
> > Thanks to everyone who has shared their feedback on this KIP! If
> > there are no further comments, I'll start the voting thread by the end
> > of tomorrow.
> >
> >
> > Guozhang.
> >
> > On Wed, May 8, 2019 at 6:36 PM Guozhang Wang  wrote:
> >
> >> Hello Boyang,
> >>
> >> On Wed, May 1, 2019 at 4:51 PM Boyang Chen  wrote:
> >>
> >>> Hey Guozhang,
> >>>
> >>> thank you for the great write up. Overall the motivation and changes
> >>> LGTM, just some minor comments:
> >>>
> >>>
> >>>   1.  In "Consumer Coordinator Algorithm", we could reorder alphabet
> >>> points for 3d~3f from ["ready-to-migrate-partitions",
> >>> "unknown-but-owned-partitions",  "maybe-revoking-partitions"] to
> >>> ["maybe-revoking-partitions", "ready-to-migrate-partitions",
> >>> "unknown-but-owned-partitions"] in order to be consistent with 3c1~3.
> >>>
> >>
> >> Ack. Updated.
> >>
> >>
> >>>   2.  In "Consumer Coordinator Algorithm", 1c suggests to revoke all
> >>> partition upon heartbeat/commit fail. What's the gain here? Do we want
> to
> >>> keep all partitions running at this moment, to be optimistic for the
> case
> >>> when no partitions get reassigned?
> >>>
> >>
> >> That's a good catch. When REBALANCE_IN_PROGRESS is received, we can just
> >> re-join the group with all the currently owned partitions encoded.
> Updated.
> >>
> >>
> >>>   3.  In "Recommended Upgrade Procedure", remove extra 'those': " The
> >>> 'sticky' assignor works even those there are "
> >>>
> >>
> >> Ack, should be `even when`.
> >>
> >>
> >>>   4.  Put two "looking into the future" into a separate category from
> >>> migration session. It seems inconsistent for readers to see this
> before we
> >>> finished discussion for everything.
> >>>
> >>
> >> Ack.
> >>
> >>
> >>>   5.  Have we discussed the concern on the serialization? Could the new
> >>> metadata we are adding grow larger than the message size cap?
> >>>
> >>
> >> We're completing https://issues.apache.org/jira/browse/KAFKA-7149 which
> >> should largely reduce the message size (will update the wiki
> accordingly as
> >> well).
> >>
> >>
> >>>
> >>> Boyang
> >>>
> >>> 
> >>> From: Guozhang Wang 
> >>> Sent: Monday, April 15, 2019 9:20 AM
> >>> To: dev
> >>> Subject: Re: [DISCUSS] KIP-429 : Smooth Auto-Scaling for Kafka Streams
> >>>
> >>> Hello Jason,
> >>>
> >>> I agree with you that for range / round-robin it makes less sense to be
> >>> compatible with cooperative rebalance protocol.
> >>>
> >>> As for StickyAssignor, however, I think it would still be possible to
> make
> >>> the current implementation to be compatible with cooperative
> rebalance. So
> >>> after pondering on different options at hand I'm now proposing this
> >>> approach as listed in the upgrade section:
> >>>
> >>>
> >>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-429%3A+Kafka+Consumer+Incremental+Rebalance+Protocol#KIP-429:KafkaConsumerIncrementalRebalanceProtocol-CompatibilityandUpgradePath
> >>>
> >>> The idea is to let assignors specify which protocols it would work
> with,
> >>> associating with a different name; then the upgrade path would involve
> a
> >>> "compatible" protocol which actually still use eager behavior while
> >>> encoding two assignors if possible. In "Rejected Section" (just to
> clarify
> >>> I'm not finalizing it as rejected, just putting it there for now, and
> if
> >>> we
> >>> like this one instead we can always switch them) I listed the other
> >>> approach we once discussed, arguing that its cons (a duplicated
> >>> class) seem to overwhelm the pros of saving the "rebalance.protocol"
> >>> config.
> >>>
> >>> Let me know WDYT.
> >>>
> >>> Guozhang
> >>>
> >>> On Fri, Apr 12, 2019 at 6:08 PM Jason Gustafson 
> >>> wrote:
> >>>
>  Hi Guozhang,
> 
>  Responses below:
> 
>  2. The interface's default implementation will just be
> > `onPartitionRevoked`, so for user's instantiation if they do not make
> >>> any
> > code changes they should be able to recompile the code and continue.
> 
> 
>  Ack, makes sense.
> 
>  4. Hmm.. not sure if it will work. The main issue is that the
> > consumer-

Re: [VOTE] KIP-458: Connector Client Config Override Policy

2019-05-09 Thread Magesh Nandakumar
I have updated the KIP to remove the `Ignore` policy and also the
useOverrides()
method in the interface.
Thanks a lot for your thoughts, Colin. I believe this certainly simplifies
the KIP.

On Thu, May 9, 2019 at 3:44 PM Magesh Nandakumar 
wrote:

> Unless anyone has objections, I'm going to update the KIP to remove the
> `Ignore` policy and make `None` as the default. I will also remove the `
> default boolean useOverrides()` in the interface which was introduced for
> the purpose of backward compatibility.
>
> On Thu, May 9, 2019 at 3:27 PM Randall Hauch  wrote:
>
>> I have also seen users include in connector configs the `producer.*` and
>> `consumer.*` properties that should go into the worker configs. But those
>> don't match, and the likelihood that someone is already using
>> `producer.override.*` or `consumer.override.*` properties in their
>> connector configs does seem pretty tiny.
>>
>> I'd be fine with removing the `Ignore` for backward compatibility. Still
>> +1
>> either way.
>>
>> On Thu, May 9, 2019 at 5:23 PM Magesh Nandakumar 
>> wrote:
>>
>> > To add more details regarding the backward compatibility; I have
>> generally
>> > seen users trying to set "producer.request.timeout.ms
>> > " in their connector
>> config
>> > under the assumption that it will get used and would never come back to
>> > remove it. The initial intent of the KIP was to use the same prefix but
>> > since that potentially collided with MM2 configs, we agreed to use a
>> > different prefix "producer.override". With this context, I think the
>> > likelihood of someone using this is very small and should generally not
>> be
>> > a problem.
>> >
>> > On Thu, May 9, 2019 at 3:15 PM Magesh Nandakumar 
>> > wrote:
>> >
>> > > Colin,
>> > >
>> > > Thanks a lot for the feedback.  As you said, the possibility of
>> > > someone having "producer.override.request.timeout.ms" in their
>> > > connector config in AK 2.2 or lower is very slim. But the key thing
>> > > is that if someone has it, AK 2.2 doesn't do anything with it and
>> > > silently ignores the configuration. If others think that it's not
>> > > really a problem, then I'm fine with removing the complicated
>> > > compatibility issue.
>> > >
>> > > I have explicitly called out the behavior when the exception is
>> thrown.
>> > >
>> > > Let me know what you think.
>> > >
>> > > Thanks,
>> > > Magesh
>> > >
>> > > On Thu, May 9, 2019 at 2:45 PM Colin McCabe 
>> wrote:
>> > >
>> > >> Hi Magesh,
>> > >>
>> > >> Thanks for the KIP.  It looks good overall.
>> > >>
>> > >> >default boolean useOverrides() {
>> > >> >return true;
>> > >> >}
>> > >>
>> > >> Is this method really needed?  As I understand, nobody should have
>> any
>> > >> connector client config overrides set right now, since they don't do
>> > >> anything right now.
>> > >>
>> > >> For example, you wouldn't expect a Kafka 2.2 installation to have "
>> > >> producer.override.request.timeout.ms" set, since that doesn't do
>> > >> anything in Kafka 2.2.  So is the option to ignore it in Kafka 2.3
>> > really
>> > >> necessary?
>> > >>
>> > >> Can you add some details about what happens if a
>> > >> PolicyValidationException is thrown?  I'm assuming that we fail to
>> > create
>> > >> the new Connector, I'm not sure if that's completely spelled out
>> > (unless I
>> > >> missed it).
>> > >>
>> > >> best,
>> > >> Colin
>> > >>
>> > >>
>> > >> On Thu, May 9, 2019, at 08:05, Rajini Sivaram wrote:
>> > >> > Hi Magesh,
>> > >> >
>> > >> > Thanks for the KIP, +1 (binding)
>> > >> >
>> > >> > Regards,
>> > >> >
>> > >> > Rajini
>> > >> >
>> > >> >
>> > >> > On Thu, May 9, 2019 at 3:55 PM Randall Hauch 
>> > wrote:
>> > >> >
>> > >> > > Nice work, Magesh.
>> > >> > >
>> > >> > > +1 (binding)
>> > >> > >
>> > >> > > Randall
>> > >> > >
>> > >> > > On Wed, May 8, 2019 at 7:22 PM Magesh Nandakumar <
>> > >> mage...@confluent.io>
>> > >> > > wrote:
>> > >> > >
>> > >> > > > Thanks a lot Chris. So far, the KIP has one non-binding vote
>> and
>> > I'm
>> > >> > > still
>> > >> > > > looking forward to the KIP to be voted by Friday's deadline.
>> > >> > > >
>> > >> > > > On Tue, May 7, 2019 at 10:00 AM Chris Egerton <
>> > chr...@confluent.io>
>> > >> > > wrote:
>> > >> > > >
>> > >> > > > > Hi Magesh,
>> > >> > > > >
>> > >> > > > > This looks great! Very excited to see these changes finally
>> > >> coming to
>> > >> > > > > Connect.
>> > >> > > > > +1 (non-binding)
>> > >> > > > >
>> > >> > > > > Cheers,
>> > >> > > > >
>> > >> > > > > Chris
>> > >> > > > >
>> > >> > > > > On Tue, May 7, 2019 at 9:51 AM Magesh Nandakumar <
>> > >> mage...@confluent.io
>> > >> > > >
>> > >> > > > > wrote:
>> > >> > > > >
>> > >> > > > > > Hi All,
>> > >> > > > > >
>> > >> > > > > > I would like to start a vote on
>> > >> > > > > >
>> > >> > > > > >
>> > >> > > > >
>> > >> > > >
>> > >> > >
>> > >>
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connecto

Re: [VOTE] KIP-458: Connector Client Config Override Policy

2019-05-09 Thread Magesh Nandakumar
Unless anyone has objections, I'm going to update the KIP to remove the
`Ignore` policy and make `None` as the default. I will also remove the `
default boolean useOverrides()` in the interface which was introduced for
the purpose of backward compatibility.

On Thu, May 9, 2019 at 3:27 PM Randall Hauch  wrote:

> I have also seen users include in connector configs the `producer.*` and
> `consumer.*` properties that should go into the worker configs. But those
> don't match, and the likelihood that someone is already using
> `producer.override.*` or `consumer.override.*` properties in their
> connector configs does seem pretty tiny.
>
> I'd be fine with removing the `Ignore` for backward compatibility. Still +1
> either way.
>
> On Thu, May 9, 2019 at 5:23 PM Magesh Nandakumar 
> wrote:
>
> > To add more details regarding the backward compatibility; I have
> generally
> > seen users trying to set "producer.request.timeout.ms
> > " in their connector
> config
> > under the assumption that it will get used and would never come back to
> > remove it. The initial intent of the KIP was to use the same prefix but
> > since that potentially collided with MM2 configs, we agreed to use a
> > different prefix "producer.override". With this context, I think the
> > likelihood of someone using this is very small and should generally not
> be
> > a problem.
> >
> > On Thu, May 9, 2019 at 3:15 PM Magesh Nandakumar 
> > wrote:
> >
> > > Colin,
> > >
> > > Thanks a lot for the feedback.  As you said, the possibility of
> > > someone having "producer.override.request.timeout.ms" in their
> > > connector config in AK 2.2 or lower is very slim. But the key thing
> > > is that if someone has it, AK 2.2 doesn't do anything with it and
> > > silently ignores the configuration. If others think that it's not
> > > really a problem, then I'm fine with removing the complicated
> > > compatibility issue.
> > >
> > > I have explicitly called out the behavior when the exception is thrown.
> > >
> > > Let me know what you think.
> > >
> > > Thanks,
> > > Magesh
> > >
> > > On Thu, May 9, 2019 at 2:45 PM Colin McCabe 
> wrote:
> > >
> > >> Hi Magesh,
> > >>
> > >> Thanks for the KIP.  It looks good overall.
> > >>
> > >> >default boolean useOverrides() {
> > >> >return true;
> > >> >}
> > >>
> > >> Is this method really needed?  As I understand, nobody should have any
> > >> connector client config overrides set right now, since they don't do
> > >> anything right now.
> > >>
> > >> For example, you wouldn't expect a Kafka 2.2 installation to have "
> > >> producer.override.request.timeout.ms" set, since that doesn't do
> > >> anything in Kafka 2.2.  So is the option to ignore it in Kafka 2.3
> > really
> > >> necessary?
> > >>
> > >> Can you add some details about what happens if a
> > >> PolicyValidationException is thrown?  I'm assuming that we fail to
> > create
> > >> the new Connector, I'm not sure if that's completely spelled out
> > (unless I
> > >> missed it).
> > >>
> > >> best,
> > >> Colin
> > >>
> > >>
> > >> On Thu, May 9, 2019, at 08:05, Rajini Sivaram wrote:
> > >> > Hi Magesh,
> > >> >
> > >> > Thanks for the KIP, +1 (binding)
> > >> >
> > >> > Regards,
> > >> >
> > >> > Rajini
> > >> >
> > >> >
> > >> > On Thu, May 9, 2019 at 3:55 PM Randall Hauch 
> > wrote:
> > >> >
> > >> > > Nice work, Magesh.
> > >> > >
> > >> > > +1 (binding)
> > >> > >
> > >> > > Randall
> > >> > >
> > >> > > On Wed, May 8, 2019 at 7:22 PM Magesh Nandakumar <
> > >> mage...@confluent.io>
> > >> > > wrote:
> > >> > >
> > >> > > > Thanks a lot Chris. So far, the KIP has one non-binding vote and
> > I'm
> > >> > > still
> > >> > > > looking forward to the KIP to be voted by Friday's deadline.
> > >> > > >
> > >> > > > On Tue, May 7, 2019 at 10:00 AM Chris Egerton <
> > chr...@confluent.io>
> > >> > > wrote:
> > >> > > >
> > >> > > > > Hi Magesh,
> > >> > > > >
> > >> > > > > This looks great! Very excited to see these changes finally
> > >> coming to
> > >> > > > > Connect.
> > >> > > > > +1 (non-binding)
> > >> > > > >
> > >> > > > > Cheers,
> > >> > > > >
> > >> > > > > Chris
> > >> > > > >
> > >> > > > > On Tue, May 7, 2019 at 9:51 AM Magesh Nandakumar <
> > >> mage...@confluent.io
> > >> > > >
> > >> > > > > wrote:
> > >> > > > >
> > >> > > > > > Hi All,
> > >> > > > > >
> > >> > > > > > I would like to start a vote on
> > >> > > > > >
> > >> > > > > >
> > >> > > > >
> > >> > > >
> > >> > >
> > >>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy
> > >> > > > > >
> > >> > > > > > The discussion thread can be found here
> > >> > > > > > <
> > >> https://www.mail-archive.com/dev@kafka.apache.org/msg97124.html>.
> > >> > > > > >
> > >> > > > > > Thanks,
> > >> > > > > > Magesh
> > >> > > > > >
> > >> > > > >
> > >> > > >
> > >> > >
> > >> >
> > >>
> > >
> >
>


Re: golang library for kafka streams

2019-05-09 Thread Guozhang Wang
cc'ing Magnus who may have some ideas :)

Guozhang

On Thu, May 9, 2019 at 9:48 AM Jay Heydt  wrote:

> any estimate of when a kafka streams library will be available for golang?
>


-- 
-- Guozhang


Re: [VOTE] KIP-464: Defaults for AdminClient#createTopic

2019-05-09 Thread Almog Gavra
Thanks Colin! Since the discussion around the builder is here I'll copy
over my comment from the discuss thread:

If we want the flexibility that the builder provides we would need to add
three constructors:
- no partitions/replicas
- just partitions
- just replicas

I see good use cases for the first two - the third (just replicas) seems
less necessary but complicates the API a bit (you have to differentiate
NewTopic(int) from NewTopic(short) or something like that). If we're happy
with a KIP that covers just the first two then I can remove the builder to
simplify things. Otherwise, I think the builder is an important addition.

Thoughts?
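The constructor options under discussion can be sketched as follows. This is a minimal stand-in, not the real org.apache.kafka.clients.admin.NewTopic API: empty Optionals stand for "defer to the broker-side defaults".

```java
import java.util.Optional;

// Stand-in for the proposed NewTopic overloads: the first two constructor
// shapes from the list above (no partitions/replicas, just partitions).
final class NewTopicSketch {
    final String name;
    final Optional<Integer> numPartitions;
    final Optional<Short> replicationFactor;

    // no partitions/replicas: broker decides both
    NewTopicSketch(String name) {
        this(name, Optional.empty(), Optional.empty());
    }

    // just partitions: broker decides replication factor
    NewTopicSketch(String name, int numPartitions) {
        this(name, Optional.of(numPartitions), Optional.empty());
    }

    private NewTopicSketch(String name, Optional<Integer> p, Optional<Short> r) {
        this.name = name;
        this.numPartitions = p;
        this.replicationFactor = r;
    }
}

public class NewTopicDemo {
    public static void main(String[] args) {
        NewTopicSketch t = new NewTopicSketch("events", 6);
        // prints: events 6 false (replication factor left to the broker)
        System.out.println(t.name + " " + t.numPartitions.orElse(-1)
                + " " + t.replicationFactor.isPresent());
    }
}
```

The "just replicas" case is what forces the int/short disambiguation (or a builder), since both remaining overloads would take the name plus one numeric argument.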

On Thu, May 9, 2019 at 2:50 PM Colin McCabe  wrote:

> +1 (binding).
>
> Re: the builder discussion.  I don't feel strongly either way-- the
> builder sketched out in the KIP looks reasonable, but I can also understand
> Ismael's argument for keeping the KIP minimal.
>
> best,
> Colin
>
>
> On Thu, May 9, 2019, at 08:09, Randall Hauch wrote:
> > I'm fine with simplifying the KIP by removing the Builder (which seems
> > ancillary), or keeping the KIP as-is. I'll wait to vote until Almog says
> > which way he'd like to proceed.
> >
> > On Thu, May 9, 2019 at 9:45 AM Ismael Juma  wrote:
> >
> > > Hi Almog,
> > >
> > > Adding a Builder seems unrelated to this change. Do we need it? Given
> the
> > > imminent KIP deadline, I'd keep it simple and just have the constructor
> > > with just the name parameter.
> > >
> > > Ismael
> > >
> > > On Thu, May 2, 2019 at 1:58 AM Mickael Maison <
> mickael.mai...@gmail.com>
> > > wrote:
> > >
> > > > I was planning to write a KIP for the exact same feature!
> > > > +1 (non binding)
> > > >
> > > > Thanks for the KIP
> > > >
> > > > On Wed, May 1, 2019 at 7:24 PM Almog Gavra 
> wrote:
> > > > >
> > > > > Hello Everyone!
> > > > >
> > > > > Kicking off the voting for
> > > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-464%3A+Defaults+for+AdminClient%23createTopic
> > > > >
> > > > >
> > > > > You can see discussion thread here (please respond with
> suggestions on
> > > > that
> > > > > thread):
> > > > >
> > > >
> > >
> https://lists.apache.org/thread.html/c0adfd2457e5984be7471fe6ade8a94d52c647356c81c039445d6b34@%3Cdev.kafka.apache.org%3E
> > > > >
> > > > >
> > > > > Cheers,
> > > > > Almog
> > > >
> > >
> >
>


Re: [VOTE] KIP-458: Connector Client Config Override Policy

2019-05-09 Thread Randall Hauch
I have also seen users include in connector configs the `producer.*` and
`consumer.*` properties that should go into the worker configs. But those
prefixes don't match the new override prefixes, and the likelihood that someone is already using
`producer.override.*` or `consumer.override.*` properties in their
connector configs does seem pretty tiny.

I'd be fine with removing the `Ignore` for backward compatibility. Still +1
either way.

On Thu, May 9, 2019 at 5:23 PM Magesh Nandakumar 
wrote:

> To add more details regarding backward compatibility: I have generally
> seen users trying to set "producer.request.timeout.ms" in their connector
> config under the assumption that it will get used, and they never come back
> to remove it. The initial intent of the KIP was to use the same prefix, but
> since that potentially collided with MM2 configs, we agreed to use a
> different prefix "producer.override". With this context, I think the
> likelihood of someone using this is very small and should generally not be
> a problem.
>
> On Thu, May 9, 2019 at 3:15 PM Magesh Nandakumar 
> wrote:
>
> > Colin,
> >
> > Thanks a lot for the feedback.  As you said, the possibility of someone
> > having "producer.override.request.timeout.ms" in their connector config
> > in AK 2.2 or lower is very slim. But the key thing is that if someone does
> > have it, AK 2.2 doesn't do anything with it and silently ignores the
> > configuration. If others think that it's not really a problem, then I'm
> > fine with removing the complicated compatibility issue.
> >
> > I have explicitly called out the behavior when the exception is thrown.
> >
> > Let me know what you think.
> >
> > Thanks,
> > Magesh
> >
> > On Thu, May 9, 2019 at 2:45 PM Colin McCabe  wrote:
> >
> >> Hi Magesh,
> >>
> >> Thanks for the KIP.  It looks good overall.
> >>
> >> >default boolean useOverrides() {
> >> >return true;
> >> >}
> >>
> >> Is this method really needed?  As I understand, nobody should have any
> >> connector client config overrides set right now, since they don't do
> >> anything right now.
> >>
> >> For example, you wouldn't expect a Kafka 2.2 installation to have "
> >> producer.override.request.timeout.ms" set, since that doesn't do
> >> anything in Kafka 2.2.  So is the option to ignore it in Kafka 2.3
> really
> >> necessary?
> >>
> >> Can you add some details about what happens if a
> >> PolicyValidationException is thrown?  I'm assuming that we fail to
> create
> >> the new Connector, I'm not sure if that's completely spelled out
> (unless I
> >> missed it).
> >>
> >> best,
> >> Colin
> >>
> >>
> >> On Thu, May 9, 2019, at 08:05, Rajini Sivaram wrote:
> >> > Hi Magesh,
> >> >
> >> > Thanks for the KIP, +1 (binding)
> >> >
> >> > Regards,
> >> >
> >> > Rajini
> >> >
> >> >
> >> > On Thu, May 9, 2019 at 3:55 PM Randall Hauch 
> wrote:
> >> >
> >> > > Nice work, Magesh.
> >> > >
> >> > > +1 (binding)
> >> > >
> >> > > Randall
> >> > >
> >> > > On Wed, May 8, 2019 at 7:22 PM Magesh Nandakumar <
> >> mage...@confluent.io>
> >> > > wrote:
> >> > >
> >> > > > Thanks a lot Chris. So far, the KIP has one non-binding vote and
> I'm
> >> > > still
> >> > > > looking forward to the KIP to be voted by Friday's deadline.
> >> > > >
> >> > > > On Tue, May 7, 2019 at 10:00 AM Chris Egerton <
> chr...@confluent.io>
> >> > > wrote:
> >> > > >
> >> > > > > Hi Magesh,
> >> > > > >
> >> > > > > This looks great! Very excited to see these changes finally
> >> coming to
> >> > > > > Connect.
> >> > > > > +1 (non-binding)
> >> > > > >
> >> > > > > Cheers,
> >> > > > >
> >> > > > > Chris
> >> > > > >
> >> > > > > On Tue, May 7, 2019 at 9:51 AM Magesh Nandakumar <
> >> mage...@confluent.io
> >> > > >
> >> > > > > wrote:
> >> > > > >
> >> > > > > > Hi All,
> >> > > > > >
> >> > > > > > I would like to start a vote on
> >> > > > > >
> >> > > > > >
> >> > > > >
> >> > > >
> >> > >
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy
> >> > > > > >
> >> > > > > > The discussion thread can be found here
> >> > > > > > <
> >> https://www.mail-archive.com/dev@kafka.apache.org/msg97124.html>.
> >> > > > > >
> >> > > > > > Thanks,
> >> > > > > > Magesh
> >> > > > > >
> >> > > > >
> >> > > >
> >> > >
> >> >
> >>
> >
>


Re: [VOTE] KIP-458: Connector Client Config Override Policy

2019-05-09 Thread Magesh Nandakumar
To add more details regarding backward compatibility: I have generally
seen users trying to set "producer.request.timeout.ms" in their connector
config under the assumption that it will get used, and they never come back
to remove it. The initial intent of the KIP was to use the same prefix, but
since that potentially collided with MM2 configs, we agreed to use a
different prefix "producer.override". With this context, I think the
likelihood of someone using this is very small and should generally not be
a problem.
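The prefix handling described above can be sketched like this. The prefix string comes from this thread; the stripping logic is an illustrative assumption about what the worker would do, not the actual Connect implementation.

```java
import java.util.HashMap;
import java.util.Map;

public class OverridePrefixDemo {
    // Prefix discussed in the thread; the worker would strip it and pass the
    // remainder to the producer used by the connector's tasks.
    static final String PREFIX = "producer.override.";

    // Collect prefixed keys from a connector config, with the prefix removed.
    static Map<String, String> producerOverrides(Map<String, String> connectorConfig) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : connectorConfig.entrySet()) {
            if (e.getKey().startsWith(PREFIX)) {
                out.put(e.getKey().substring(PREFIX.length()), e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> cfg = new HashMap<>();
        cfg.put("name", "my-sink");
        // Pre-KIP style: silently ignored by the worker
        cfg.put("producer.request.timeout.ms", "30000");
        // KIP-458 style: picked up via the override prefix
        cfg.put("producer.override.request.timeout.ms", "30000");
        System.out.println(producerOverrides(cfg));
        // prints {request.timeout.ms=30000}
    }
}
```

This also illustrates why the compatibility concern is small: the old un-prefixed key never matched anything, and the new prefixed key is unlikely to pre-exist in 2.2-era configs.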

On Thu, May 9, 2019 at 3:15 PM Magesh Nandakumar 
wrote:

> Colin,
>
> Thanks a lot for the feedback.  As you said, the possibility of someone
> having "producer.override.request.timeout.ms" in their connector config
> in AK 2.2 or lower is very slim. But the key thing is that if someone does
> have it, AK 2.2 doesn't do anything with it and silently ignores the
> configuration. If others think that it's not really a problem, then I'm
> fine with removing the complicated compatibility issue.
>
> I have explicitly called out the behavior when the exception is thrown.
>
> Let me know what you think.
>
> Thanks,
> Magesh
>
> On Thu, May 9, 2019 at 2:45 PM Colin McCabe  wrote:
>
>> Hi Magesh,
>>
>> Thanks for the KIP.  It looks good overall.
>>
>> >default boolean useOverrides() {
>> >return true;
>> >}
>>
>> Is this method really needed?  As I understand, nobody should have any
>> connector client config overrides set right now, since they don't do
>> anything right now.
>>
>> For example, you wouldn't expect a Kafka 2.2 installation to have "
>> producer.override.request.timeout.ms" set, since that doesn't do
>> anything in Kafka 2.2.  So is the option to ignore it in Kafka 2.3 really
>> necessary?
>>
>> Can you add some details about what happens if a
>> PolicyValidationException is thrown?  I'm assuming that we fail to create
>> the new Connector, I'm not sure if that's completely spelled out (unless I
>> missed it).
>>
>> best,
>> Colin
>>
>>
>> On Thu, May 9, 2019, at 08:05, Rajini Sivaram wrote:
>> > Hi Magesh,
>> >
>> > Thanks for the KIP, +1 (binding)
>> >
>> > Regards,
>> >
>> > Rajini
>> >
>> >
>> > On Thu, May 9, 2019 at 3:55 PM Randall Hauch  wrote:
>> >
>> > > Nice work, Magesh.
>> > >
>> > > +1 (binding)
>> > >
>> > > Randall
>> > >
>> > > On Wed, May 8, 2019 at 7:22 PM Magesh Nandakumar <
>> mage...@confluent.io>
>> > > wrote:
>> > >
>> > > > Thanks a lot Chris. So far, the KIP has one non-binding vote and I'm
>> > > still
>> > > > looking forward to the KIP to be voted by Friday's deadline.
>> > > >
>> > > > On Tue, May 7, 2019 at 10:00 AM Chris Egerton 
>> > > wrote:
>> > > >
>> > > > > Hi Magesh,
>> > > > >
>> > > > > This looks great! Very excited to see these changes finally
>> coming to
>> > > > > Connect.
>> > > > > +1 (non-binding)
>> > > > >
>> > > > > Cheers,
>> > > > >
>> > > > > Chris
>> > > > >
>> > > > > On Tue, May 7, 2019 at 9:51 AM Magesh Nandakumar <
>> mage...@confluent.io
>> > > >
>> > > > > wrote:
>> > > > >
>> > > > > > Hi All,
>> > > > > >
>> > > > > > I would like to start a vote on
>> > > > > >
>> > > > > >
>> > > > >
>> > > >
>> > >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy
>> > > > > >
>> > > > > > The discussion thread can be found here
>> > > > > > <
>> https://www.mail-archive.com/dev@kafka.apache.org/msg97124.html>.
>> > > > > >
>> > > > > > Thanks,
>> > > > > > Magesh
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>>
>


Re: [VOTE] KIP-458: Connector Client Config Override Policy

2019-05-09 Thread Magesh Nandakumar
Colin,

Thanks a lot for the feedback.  As you said, the possibility of someone
having "producer.override.request.timeout.ms" in their connector config in
AK 2.2 or lower is very slim. But the key thing is that if someone does have
it, AK 2.2 doesn't do anything with it and silently ignores the
configuration. If others think that it's not really a problem, then I'm
fine with removing the complicated compatibility issue.

I have explicitly called out the behavior when the exception is thrown.

Let me know what you think.

Thanks,
Magesh

On Thu, May 9, 2019 at 2:45 PM Colin McCabe  wrote:

> Hi Magesh,
>
> Thanks for the KIP.  It looks good overall.
>
> >default boolean useOverrides() {
> >return true;
> >}
>
> Is this method really needed?  As I understand, nobody should have any
> connector client config overrides set right now, since they don't do
> anything right now.
>
> For example, you wouldn't expect a Kafka 2.2 installation to have "
> producer.override.request.timeout.ms" set, since that doesn't do anything
> in Kafka 2.2.  So is the option to ignore it in Kafka 2.3 really necessary?
>
> Can you add some details about what happens if a PolicyValidationException
> is thrown?  I'm assuming that we fail to create the new Connector, I'm not
> sure if that's completely spelled out (unless I missed it).
>
> best,
> Colin
>
>
> On Thu, May 9, 2019, at 08:05, Rajini Sivaram wrote:
> > Hi Magesh,
> >
> > Thanks for the KIP, +1 (binding)
> >
> > Regards,
> >
> > Rajini
> >
> >
> > On Thu, May 9, 2019 at 3:55 PM Randall Hauch  wrote:
> >
> > > Nice work, Magesh.
> > >
> > > +1 (binding)
> > >
> > > Randall
> > >
> > > On Wed, May 8, 2019 at 7:22 PM Magesh Nandakumar  >
> > > wrote:
> > >
> > > > Thanks a lot Chris. So far, the KIP has one non-binding vote and I'm
> > > still
> > > > looking forward to the KIP to be voted by Friday's deadline.
> > > >
> > > > On Tue, May 7, 2019 at 10:00 AM Chris Egerton 
> > > wrote:
> > > >
> > > > > Hi Magesh,
> > > > >
> > > > > This looks great! Very excited to see these changes finally coming
> to
> > > > > Connect.
> > > > > +1 (non-binding)
> > > > >
> > > > > Cheers,
> > > > >
> > > > > Chris
> > > > >
> > > > > On Tue, May 7, 2019 at 9:51 AM Magesh Nandakumar <
> mage...@confluent.io
> > > >
> > > > > wrote:
> > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > I would like to start a vote on
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy
> > > > > >
> > > > > > The discussion thread can be found here
> > > > > >  >.
> > > > > >
> > > > > > Thanks,
> > > > > > Magesh
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: [VOTE] KIP-465: Add Consolidated Connector Endpoint to Connect REST API

2019-05-09 Thread dan
After 80 hours, the results of the voting:
+3 binding
+1 non-binding
(+1 from me too :)

I'll update this wiki to show this KIP as approved. 🥳

thanks
dan

On Thu, May 9, 2019 at 8:11 AM Rajini Sivaram 
wrote:

> Hi Dan,
>
> Thanks for the KIP, +1 (binding)
>
> Regards,
>
> Rajini
>
>
> On Mon, May 6, 2019 at 8:08 PM Konstantine Karantasis <
> konstant...@confluent.io> wrote:
>
> > Useful and concise KIP
> >
> > +1 (non-binding)
> >
> > Konstantine
> >
> > On Mon, May 6, 2019 at 10:43 AM Randall Hauch  wrote:
> >
> > > Thanks, Dan. As mentioned on the discussion, this is really a nice
> little
> > > addition that was alway missing from the API.
> > >
> > > +1 (binding)
> > >
> > > Randall
> > >
> > > On Mon, May 6, 2019 at 9:23 AM dan  wrote:
> > >
> > > > I would like to start voting for
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-465%3A+Add+Consolidated+Connector+Endpoint+to+Connect+REST+API
> > > >
> > > > thanks
> > > > dan
> > > >
> > >
> >
>


Re: [VOTE] KIP-464: Defaults for AdminClient#createTopic

2019-05-09 Thread Colin McCabe
+1 (binding).

Re: the builder discussion.  I don't feel strongly either way-- the builder 
sketched out in the KIP looks reasonable, but I can also understand Ismael's 
argument for keeping the KIP minimal.

best,
Colin


On Thu, May 9, 2019, at 08:09, Randall Hauch wrote:
> I'm fine with simplifying the KIP by removing the Builder (which seems
> ancillary), or keeping the KIP as-is. I'll wait to vote until Almog says
> which way he'd like to proceed.
> 
> On Thu, May 9, 2019 at 9:45 AM Ismael Juma  wrote:
> 
> > Hi Almog,
> >
> > Adding a Builder seems unrelated to this change. Do we need it? Given the
> > imminent KIP deadline, I'd keep it simple and just have the constructor
> > with just the name parameter.
> >
> > Ismael
> >
> > On Thu, May 2, 2019 at 1:58 AM Mickael Maison 
> > wrote:
> >
> > > I was planning to write a KIP for the exact same feature!
> > > +1 (non binding)
> > >
> > > Thanks for the KIP
> > >
> > > On Wed, May 1, 2019 at 7:24 PM Almog Gavra  wrote:
> > > >
> > > > Hello Everyone!
> > > >
> > > > Kicking off the voting for
> > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-464%3A+Defaults+for+AdminClient%23createTopic
> > > >
> > > >
> > > > You can see discussion thread here (please respond with suggestions on
> > > that
> > > > thread):
> > > >
> > >
> > https://lists.apache.org/thread.html/c0adfd2457e5984be7471fe6ade8a94d52c647356c81c039445d6b34@%3Cdev.kafka.apache.org%3E
> > > >
> > > >
> > > > Cheers,
> > > > Almog
> > >
> >
>


Re: [VOTE] KIP-458: Connector Client Config Override Policy

2019-05-09 Thread Colin McCabe
Hi Magesh,

Thanks for the KIP.  It looks good overall.

>default boolean useOverrides() {
>return true;
>}

Is this method really needed?  As I understand, nobody should have any 
connector client config overrides set right now, since they don't do anything 
right now.

For example, you wouldn't expect a Kafka 2.2 installation to have 
"producer.override.request.timeout.ms" set, since that doesn't do anything in 
Kafka 2.2.  So is the option to ignore it in Kafka 2.3 really necessary?
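For context, the policy shape being debated can be sketched as below. The interface and method names are illustrative stand-ins based on the snippet quoted above, not the final KIP-458 API.

```java
import java.util.Map;

// Illustrative sketch of an override policy: validate() rejects disallowed
// overrides (a PolicyValidationException in the KIP), and useOverrides() is
// the default method whose necessity is questioned above.
interface OverridePolicySketch {
    // Throws on a policy violation; connector creation would then fail.
    void validate(Map<String, Object> clientOverrides);

    // Proposed opt-out hook for configs written before the feature existed,
    // when override keys were silently ignored.
    default boolean useOverrides() {
        return true;
    }
}

public class PolicyDemo {
    public static void main(String[] args) {
        // A permissive policy that accepts every override.
        OverridePolicySketch allowAll = overrides -> { /* accept everything */ };
        System.out.println(allowAll.useOverrides()); // true: overrides applied
    }
}
```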

Can you add some details about what happens if a PolicyValidationException is 
thrown?  I'm assuming that we fail to create the new Connector, I'm not sure if 
that's completely spelled out (unless I missed it).

best,
Colin


On Thu, May 9, 2019, at 08:05, Rajini Sivaram wrote:
> Hi Magesh,
> 
> Thanks for the KIP, +1 (binding)
> 
> Regards,
> 
> Rajini
> 
> 
> On Thu, May 9, 2019 at 3:55 PM Randall Hauch  wrote:
> 
> > Nice work, Magesh.
> >
> > +1 (binding)
> >
> > Randall
> >
> > On Wed, May 8, 2019 at 7:22 PM Magesh Nandakumar 
> > wrote:
> >
> > > Thanks a lot Chris. So far, the KIP has one non-binding vote and I'm
> > still
> > > looking forward to the KIP to be voted by Friday's deadline.
> > >
> > > On Tue, May 7, 2019 at 10:00 AM Chris Egerton 
> > wrote:
> > >
> > > > Hi Magesh,
> > > >
> > > > This looks great! Very excited to see these changes finally coming to
> > > > Connect.
> > > > +1 (non-binding)
> > > >
> > > > Cheers,
> > > >
> > > > Chris
> > > >
> > > > On Tue, May 7, 2019 at 9:51 AM Magesh Nandakumar  > >
> > > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > I would like to start a vote on
> > > > >
> > > > >
> > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy
> > > > >
> > > > > The discussion thread can be found here
> > > > > .
> > > > >
> > > > > Thanks,
> > > > > Magesh
> > > > >
> > > >
> > >
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #3614

2019-05-09 Thread Apache Jenkins Server
See 


Changes:

[cmccabe] KAFKA-8332: Refactor ImplicitLinkedHashSet to avoid losing ordering 
when

--
[...truncated 2.41 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTime

Build failed in Jenkins: kafka-2.1-jdk8 #183

2019-05-09 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-8240: Fix NPE in Source.equals() (#6685)

--
[...truncated 459.43 KB...]

kafka.server.AbstractFetcherThreadTest > 
testTruncateToHighWatermarkIfLeaderEpochRequestNotSupported STARTED

kafka.server.AbstractFetcherThreadTest > 
testTruncateToHighWatermarkIfLeaderEpochRequestNotSupported PASSED

kafka.server.AbstractFetcherThreadTest > testMetricsRemovedOnShutdown STARTED

kafka.server.AbstractFetcherThreadTest > testMetricsRemovedOnShutdown PASSED

kafka.server.AbstractFetcherThreadTest > 
testTruncateToHighWatermarkDuringRemovePartitions STARTED

kafka.server.AbstractFetcherThreadTest > 
testTruncateToHighWatermarkDuringRemovePartitions PASSED

kafka.server.AbstractFetcherThreadTest > 
testLeaderEpochChangeDuringFencedFetchEpochsFromLeader STARTED

kafka.server.AbstractFetcherThreadTest > 
testLeaderEpochChangeDuringFencedFetchEpochsFromLeader PASSED

kafka.server.AbstractFetcherThreadTest > testCorruptMessage STARTED

kafka.server.AbstractFetcherThreadTest > testCorruptMessage PASSED

kafka.server.AbstractFetcherThreadTest > testTruncation STARTED

kafka.server.AbstractFetcherThreadTest > testTruncation PASSED

kafka.server.MetadataCacheTest > 
getTopicMetadataPartitionListenerNotAvailableOnLeader STARTED

kafka.server.MetadataCacheTest > 
getTopicMetadataPartitionListenerNotAvailableOnLeader PASSED

kafka.server.MetadataCacheTest > 
getTopicMetadataWithNonSupportedSecurityProtocol STARTED

kafka.server.MetadataCacheTest > 
getTopicMetadataWithNonSupportedSecurityProtocol PASSED

kafka.server.MetadataCacheTest > getTopicMetadataIsrNotAvailable STARTED

kafka.server.MetadataCacheTest > getTopicMetadataIsrNotAvailable PASSED

kafka.server.MetadataCacheTest > getTopicMetadata STARTED

kafka.server.MetadataCacheTest > getTopicMetadata PASSED

kafka.server.MetadataCacheTest > getTopicMetadataReplicaNotAvailable STARTED

kafka.server.MetadataCacheTest > getTopicMetadataReplicaNotAvailable PASSED

kafka.server.MetadataCacheTest > 
getTopicMetadataPartitionListenerNotAvailableOnLeaderOldMetadataVersion STARTED

kafka.server.MetadataCacheTest > 
getTopicMetadataPartitionListenerNotAvailableOnLeaderOldMetadataVersion PASSED

kafka.server.MetadataCacheTest > getTopicMetadataPartitionLeaderNotAvailable 
STARTED

kafka.server.MetadataCacheTest > getTopicMetadataPartitionLeaderNotAvailable 
PASSED

kafka.server.MetadataCacheTest > getAliveBrokersShouldNotBeMutatedByUpdateCache 
STARTED

kafka.server.MetadataCacheTest > getAliveBrokersShouldNotBeMutatedByUpdateCache 
PASSED

kafka.server.MetadataCacheTest > getTopicMetadataNonExistingTopics STARTED

kafka.server.MetadataCacheTest > getTopicMetadataNonExistingTopics PASSED

kafka.server.KafkaConfigTest > testAdvertiseConfigured STARTED

kafka.server.KafkaConfigTest > testAdvertiseConfigured PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeHoursProvided STARTED

kafka.server.KafkaConfigTest > testLogRetentionTimeHoursProvided PASSED

kafka.server.KafkaConfigTest > testLogRollTimeBothMsAndHoursProvided STARTED

kafka.server.KafkaConfigTest > testLogRollTimeBothMsAndHoursProvided PASSED

kafka.server.KafkaConfigTest > testLogRetentionValid STARTED

kafka.server.KafkaConfigTest > testLogRetentionValid PASSED

kafka.server.KafkaConfigTest > testSpecificProperties STARTED

kafka.server.KafkaConfigTest > testSpecificProperties PASSED

kafka.server.KafkaConfigTest > testDefaultCompressionType STARTED

kafka.server.KafkaConfigTest > testDefaultCompressionType PASSED

kafka.server.KafkaConfigTest > testDuplicateListeners STARTED

kafka.server.KafkaConfigTest > testDuplicateListeners PASSED

kafka.server.KafkaConfigTest > testLogRetentionUnlimited STARTED

kafka.server.KafkaConfigTest > testLogRetentionUnlimited PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeMsProvided STARTED

kafka.server.KafkaConfigTest > testLogRetentionTimeMsProvided PASSED

kafka.server.KafkaConfigTest > 
testInterBrokerListenerNameMissingFromListenerSecurityProtocolMap STARTED

kafka.server.KafkaConfigTest > 
testInterBrokerListenerNameMissingFromListenerSecurityProtocolMap PASSED

kafka.server.KafkaConfigTest > testMaxConnectionsPerIpProp STARTED

kafka.server.KafkaConfigTest > testMaxConnectionsPerIpProp PASSED

kafka.server.KafkaConfigTest > testLogRollTimeNoConfigProvided STARTED

kafka.server.KafkaConfigTest > testLogRollTimeNoConfigProvided PASSED

kafka.server.KafkaConfigTest > testInvalidInterBrokerSecurityProtocol STARTED

kafka.server.KafkaConfigTest > testInvalidInterBrokerSecurityProtocol PASSED

kafka.server.KafkaConfigTest > testAdvertiseDefaults STARTED

kafka.server.KafkaConfigTest > testAdvertiseDefaults PASSED

kafka.server.KafkaConfigTest > testBadListenerProtocol STARTED

kafka.server.KafkaConfigTest > testBadListenerProtocol PASSED

kafka.server.KafkaConfigTest > testListenerDefaults STARTED

kafka.server.KafkaConf

Re: [VOTE] KIP-449: Add connector contexts to Connect worker logs (vote thread)

2019-05-09 Thread Randall Hauch
Thanks everyone! The vote has been open for about 9 days, and the KIP is
adopted with +4 binding votes (Randall, Manikumar, Gwen, and Bill) and no
-1 votes. Thanks also to Jeremy, Magesh, Chris, and Konstantine for their
non-binding votes.
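The MDC mechanism the adopted KIP builds on can be sketched with a self-contained stand-in (this mimics org.slf4j.MDC for illustration; it is not the real SLF4J class): a per-thread key/value map that a log layout can interpolate into every message.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for SLF4J's MDC: each thread carries its own context map.
final class MdcSketch {
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    static void put(String key, String value) { CTX.get().put(key, value); }
    static String get(String key) { return CTX.get().get(key); }
    static void remove(String key) { CTX.get().remove(key); }
}

public class ConnectorContextDemo {
    public static void main(String[] args) {
        // A worker thread would set the context before running connector code,
        // and a layout pattern such as %X{connector.context}%m%n would prefix
        // it onto every log line emitted by that thread.
        MdcSketch.put("connector.context", "[my-connector|task-0] ");
        System.out.println(MdcSketch.get("connector.context") + "Starting task");
        MdcSketch.remove("connector.context");
    }
}
```

Because the context rides on the thread rather than on each log call, connector and framework code need no changes, which is why the feature only has to be enabled in the Connect Log4J configuration.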

On Thu, May 9, 2019 at 1:59 PM Bill Bejeck  wrote:

> +1 (binding)
> Makes a lot of sense to me.
>
> -Bill
>
> On Thu, May 9, 2019 at 2:05 PM Gwen Shapira  wrote:
>
> > +1 (binding)
> > Hell yeah!
> >
> > On Mon, Apr 29, 2019 at 3:34 PM Randall Hauch  wrote:
> >
> > > I would like to start the vote for KIP-449:
> > >
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-449%3A+Add+connector+contexts+to+Connect+worker+logs
> > >
> > > The KIP uses the Mapped Diagnostic Contexts (MDC) feature of SLF4J API
> to
> > > add more context to log messages from within Connect workers and
> > connector
> > > implementations. This would not be enabled by default, though it would
> be
> > > easy to enable within the Connect Log4J configuration.
> > >
> > > Thanks!
> > >
> > > Randall
> > >
> >
> >
> > --
> > *Gwen Shapira*
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter  | blog
> > 
> >
>


Re: [VOTE] KIP-449: Add connector contexts to Connect worker logs (vote thread)

2019-05-09 Thread Bill Bejeck
+1 (binding)
Makes a lot of sense to me.

-Bill

On Thu, May 9, 2019 at 2:05 PM Gwen Shapira  wrote:

> +1 (binding)
> Hell yeah!
>
> On Mon, Apr 29, 2019 at 3:34 PM Randall Hauch  wrote:
>
> > I would like to start the vote for KIP-449:
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-449%3A+Add+connector+contexts+to+Connect+worker+logs
> >
> > The KIP uses the Mapped Diagnostic Contexts (MDC) feature of SLF4J API to
> > add more context to log messages from within Connect workers and
> connector
> > implementations. This would not be enabled by default, though it would be
> > easy to enable within the Connect Log4J configuration.
> >
> > Thanks!
> >
> > Randall
> >
>
>
> --
> *Gwen Shapira*
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter  | blog
> 
>


Re: [VOTE] KIP-453: Add close() method to RocksDBConfigSetter

2019-05-09 Thread Sophie Blee-Goldman
Thanks everyone for voting! Closing the vote on this KIP which passes with
three +1 (binding) from Guozhang, Bill, and Matthias, and two +1
(non-binding) from Kamal and John.

For those interested you can check out the PR here:
https://github.com/apache/kafka/pull/6697

On Tue, May 7, 2019 at 7:31 AM Guozhang Wang  wrote:

> +1 (binding)
>
> On Mon, May 6, 2019 at 6:44 PM Kamal Chandraprakash <
> kamal.chandraprak...@gmail.com> wrote:
>
> > +1 (non-binding). Thanks for the KIP.
> >
> > On Tue, May 7, 2019 at 3:50 AM Bill Bejeck  wrote:
> >
> > > Thanks for the KIP Sophie.
> > >
> > > +1(binding)
> > >
> > > On Mon, May 6, 2019 at 4:51 PM John Roesler  wrote:
> > >
> > > > Thanks, Sophie, I reviewed the KIP, and I agree this is the best /
> > > > only-practical approach.
> > > >
> > > > +1 (non-binding)
> > > >
> > > > On Mon, May 6, 2019 at 2:23 PM Matthias J. Sax <
> matth...@confluent.io>
> > > > wrote:
> > > >
> > > > > +1 (binding)
> > > > >
> > > > >
> > > > >
> > > > > On 5/6/19 6:28 PM, Sophie Blee-Goldman wrote:
> > > > > > Hi all,
> > > > > >
> > > > > > I'd like to call for a vote on a minor KIP that adds a close()
> > method
> > > > to
> > > > > > the RocksDBConfigSetter interface of Streams.
> > > > > >
> > > > > > Link:
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-453%3A+Add+close%28%29+method+to+RocksDBConfigSetter
> > > > > >
> > > > > > This is important for users who have created RocksObjects in
> > > > > > RocksDBConfigSetter#setConfig, to avoid leaking off-heap memory.
> > > > > >
> > > > > > Thanks!
> > > > > > Sophie
> > > > > >
> > > > >
> > > > >
> > > >
> > >
> >
>
>
> --
> -- Guozhang
>
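
The off-heap leak that KIP-453 addresses comes from native RocksDB handles allocated in setConfig() that previously had no release hook. The runnable sketch below shows the resource-tracking pattern the new close() method enables; the types are self-contained stand-ins (assumptions), not the real org.rocksdb or Streams classes.

```java
import java.util.ArrayList;
import java.util.List;

public class ConfigSetterSketch {

    // Stand-in for org.rocksdb.RocksObject: a handle on off-heap memory.
    static class NativeHandle implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    // Stand-in for a RocksDBConfigSetter that tracks what it allocates.
    static class TrackingConfigSetter {
        final List<NativeHandle> handles = new ArrayList<>();

        void setConfig() {
            // e.g. a block cache or bloom filter created for the options;
            // each one owns native memory that Java GC never reclaims.
            handles.add(new NativeHandle());
        }

        // The method KIP-453 adds: invoked when the state store closes,
        // so everything allocated above can be freed instead of leaking.
        void close() {
            handles.forEach(NativeHandle::close);
        }
    }

    public static void main(String[] args) {
        TrackingConfigSetter setter = new TrackingConfigSetter();
        setter.setConfig();
        setter.close();
        System.out.println(setter.handles.get(0).closed); // true
    }
}
```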


Build failed in Jenkins: kafka-trunk-jdk11 #503

2019-05-09 Thread Apache Jenkins Server
See 


Changes:

[cmccabe] KAFKA-8332: Refactor ImplicitLinkedHashSet to avoid losing ordering when converting to Scala

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H28 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision a97e55b83868ff786e740db55e73116f85456dcb 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a97e55b83868ff786e740db55e73116f85456dcb
Commit message: "KAFKA-8332: Refactor ImplicitLinkedHashSet to avoid losing 
ordering when converting to Scala"
 > git rev-list --no-walk 53dec548b6149274e6c2acf832386dba6ec3c1e1 # timeout=10
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: No tool found matching GRADLE_4_10_2_HOME
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins7634845322413134461.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins7634845322413134461.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_10_2_HOME
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user csh...@gmail.com
Not sending mail to unregistered user wangg...@gmail.com


Re: [DISCUSS] KIP-466: Add support for List serialization and deserialization

2019-05-09 Thread Development
Hey Sophie,

Thank you for your input. I think I’d rather finish this KIP as is, and then 
open a new one for the Collections (if everyone agrees). I don’t want to extend 
the current KIP-466, since most of the work is already done for it.

Meanwhile, I’ll start adding some test cases for this new list serde since this 
discussion seems to be approaching its logical end.

Best,
Daniyar Yeralin

> On May 9, 2019, at 1:35 PM, Sophie Blee-Goldman  wrote:
> 
> Good point about serdes for other Collections. On the one hand I'd guess
> that non-List Collections are probably relatively rare in practice (if
> anyone disagrees please correct me!) but on the other hand, a) even if just
> a small number of people benefit I think it's worth the extra effort and b)
> if we do end up needing/wanting them in the future it would save us a KIP
> to just add them now. Personally I feel it would make sense to expand the
> scope of this KIP a bit to include all Collections as a logical unit, but
> the ROI could be low..
> 
> (I know of at least one instance in the unit tests where a Set serde could
> be useful, and there may be more)
> 
> On Thu, May 9, 2019 at 7:27 AM Development  wrote:
> 
>> Hey,
>> 
>> I don’t see any replies. Seems like this proposal can be finalized and
>> called for a vote?
>> 
>> Also I’ve been thinking. Do we need more serdes for other Collections?
>> Like queue or set for example
>> 
>> Best,
>> Daniyar Yeralin
>> 
>>> On May 8, 2019, at 2:28 PM, John Roesler  wrote:
>>> 
>>> Hi Daniyar,
>>> 
>>> No worries about the procedural stuff. Prior experience with KIPs is
>>> not required :)
>>> 
>>> I was just trying to help you propose this stuff in a way that the
>>> others will find easy to review.
>>> 
>>> Thanks for updating the KIP. Thanks to the others for helping out with
>>> the syntax.
>>> 
>>> Given these updates, I'm curious if anyone else has feedback about
>>> this proposal. Personally, I think it sounds fine!
>>> 
>>> -John
>>> 
>>> On Wed, May 8, 2019 at 1:01 PM Development  wrote:
 
 Hey,
 
 That worked! I certainly lack Java generics knowledge. Thanks for the
>> snippet. I’ll update KIP again.
 
 Best,
 Daniyar Yeralin
 
> On May 8, 2019, at 1:39 PM, Chris Egerton  wrote:
> 
> Hi Daniyar,
> 
> I think you may want to tweak your syntax a little:
> 
> public static <T> Serde<List<T>> List(Serde<T> innerSerde) {
> return new ListSerde<>(innerSerde);
> }
> 
> Does that work?
> 
> Cheers,
> 
> Chris
> 
> On Wed, May 8, 2019 at 10:29 AM Development > d...@yeralin.net>> wrote:
> Hi John,
> 
> I updated JIRA and KIP.
> 
> I didn’t know about the process, and created PR before I knew about
>> KIPs :)
> 
> As per static declaration, I don’t think Java allows that:
> 
> 
> Best,
> Daniyar Yeralin
> 
>> On May 7, 2019, at 2:22 PM, John Roesler > j...@confluent.io>> wrote:
>> 
>> Thanks for that update. Do you mind making changes primarily on the
>> KIP document ? (
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-466%3A+Add+support+for+List%3CT%3E+serialization+and+deserialization
>> <
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-466%3A+Add+support+for+List%3CT%3E+serialization+and+deserialization
>>> )
>> 
>> This is the design document that we have to agree on and vote for, the
>> PR comes later. It can be nice to have an implementation to look at,
>> but the KIP is the main artifact for this discussion.
>> 
>> With this in mind, it will help get more reviewers to look at it if
>> you can tidy up the KIP document so that it stands on its own. People
>> shouldn't have to look at any other document to understand the
>> motivation of the proposal, and they shouldn't have to look at a PR to
>> see what the public API will look like. If it helps, you can take a
>> look at some other recent KIPs.
>> 
>> Given that the list serde needs an inner serde, I agree you can't have
>> a zero-argument static factory method for it, but it seems you could
>> still have a static method:
> >> `public static <T> Serde<List<T>> List(Serde<T> innerSerde)`.
>> 
>> Thoughts?
>> 
>> On Tue, May 7, 2019 at 12:18 PM Development > d...@yeralin.net>> wrote:
>>> 
>>> Absolutely agree. Already pushed another commit to remove comparator
>> argument: https://github.com/apache/kafka/pull/6592 <
>> https://github.com/apache/kafka/pull/6592> <
>> https://github.com/apache/kafka/pull/6592 <
>> https://github.com/apache/kafka/pull/6592>>
>>> 
>>> Thank you for your input John! I really appreciate it.
>>> 
>>> What about this point I made:
>>> 
>>> 1. Since type for List serde needs to be declared before hand, I
>> could not create a static method for List Serde under
>> org.apache.kafka.common.serialization.Serdes. I addressed it in the KIP:
>>> P.S. Static method correspond
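
The generics question that ran through this thread — why a zero-argument `List()` factory can't work, and how a method-level type parameter fixes it — can be shown in a runnable form. The `Serde`/`ListSerde` types below are minimal stand-ins (assumptions), not the real org.apache.kafka.common.serialization classes; the point is the declaration syntax Chris suggested.

```java
import java.util.List;

public class ListSerdeSketch {

    // Minimal stand-in for org.apache.kafka.common.serialization.Serde.
    interface Serde<T> { }

    // Minimal stand-in for the proposed ListSerde.
    static class ListSerde<T> implements Serde<List<T>> {
        final Serde<T> inner;
        ListSerde(Serde<T> inner) { this.inner = inner; }
    }

    // The method-level <T> before the return type is the piece that was
    // missing in the first attempt on the thread: it lets the inner serde's
    // type flow into Serde<List<T>> without declaring T on the class.
    public static <T> Serde<List<T>> List(Serde<T> innerSerde) {
        return new ListSerde<>(innerSerde);
    }

    public static void main(String[] args) {
        Serde<String> stringSerde = new Serde<String>() { };
        Serde<List<String>> listSerde = List(stringSerde);
        System.out.println(listSerde.getClass().getSimpleName()); // ListSerde
    }
}
```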

Re: [VOTE] KIP-449: Add connector contexts to Connect worker logs (vote thread)

2019-05-09 Thread Gwen Shapira
+1 (binding)
Hell yeah!

On Mon, Apr 29, 2019 at 3:34 PM Randall Hauch  wrote:

> I would like to start the vote for KIP-449:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-449%3A+Add+connector+contexts+to+Connect+worker+logs
>
> The KIP uses the Mapped Diagnostic Contexts (MDC) feature of SLF4J API to
> add more context to log messages from within Connect workers and connector
> implementations. This would not be enabled by default, though it would be
> easy to enable within the Connect Log4J configuration.
>
> Thanks!
>
> Randall
>


-- 
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter  | blog



Re: [VOTE] KIP-461 Improve Replica Fetcher behavior at handling partition failure

2019-05-09 Thread Harsha Chintalapani
Thanks for the KIP. +1(binding)

Thanks,
Harsha
On May 9, 2019, 9:58 AM -0700, Jun Rao , wrote:
> Hi, Aishwarya,
>
> Thanks for the KIP. +1
>
> Jun
>
> On Wed, May 8, 2019 at 4:30 PM Aishwarya Gune 
> wrote:
>
> > Hi All!
> >
> > I would like to call for a vote on KIP-461 that would improve the behavior
> > of replica fetcher in case of partition failure. The fetcher thread would
> > just stop monitoring the crashed partition instead of terminating.
> >
> > Here's a link to the KIP -
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-461+-+Improve+Replica+Fetcher+behavior+at+handling+partition+failure
> >
> > Discussion thread -
> > https://www.mail-archive.com/dev@kafka.apache.org/msg97559.html
> >
> > --
> > Thank you,
> > Aishwarya
> >
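
The behavior change KIP-461 proposes — quarantine only the failed partition rather than let the exception kill the fetcher thread — can be sketched as a simple fetch-loop pattern. Everything below is illustrative stand-in code, not the actual ReplicaFetcherThread implementation.

```java
import java.util.HashSet;
import java.util.Set;

public class FetcherSketch {
    final Set<String> assigned = new HashSet<>(); // partitions this thread fetches
    final Set<String> failed = new HashSet<>();   // partitions marked as failed

    void addPartition(String tp) { assigned.add(tp); }

    // Simulated fetch; "bad" partitions throw, standing in for log dir
    // errors or corruption on a single partition.
    void fetch(String tp) {
        if (tp.startsWith("bad")) throw new IllegalStateException("partition failure: " + tp);
    }

    void doWork() {
        for (String tp : new HashSet<>(assigned)) {
            try {
                fetch(tp);
            } catch (RuntimeException e) {
                // KIP-461 behavior: stop monitoring just this partition;
                // the thread keeps serving the healthy ones instead of dying.
                assigned.remove(tp);
                failed.add(tp);
            }
        }
    }

    public static void main(String[] args) {
        FetcherSketch fetcher = new FetcherSketch();
        fetcher.addPartition("good-0");
        fetcher.addPartition("bad-0");
        fetcher.doWork();
        System.out.println(fetcher.assigned); // [good-0]
    }
}
```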


Re: [DISCUSS] KIP-466: Add support for List serialization and deserialization

2019-05-09 Thread Sophie Blee-Goldman
Good point about serdes for other Collections. On the one hand I'd guess
that non-List Collections are probably relatively rare in practice (if
anyone disagrees please correct me!) but on the other hand, a) even if just
a small number of people benefit I think it's worth the extra effort and b)
if we do end up needing/wanting them in the future it would save us a KIP
to just add them now. Personally I feel it would make sense to expand the
scope of this KIP a bit to include all Collections as a logical unit, but
the ROI could be low..

(I know of at least one instance in the unit tests where a Set serde could
be useful, and there may be more)

On Thu, May 9, 2019 at 7:27 AM Development  wrote:

> Hey,
>
> I don’t see any replies. Seems like this proposal can be finalized and
> called for a vote?
>
> Also I’ve been thinking. Do we need more serdes for other Collections?
> Like queue or set for example
>
> Best,
> Daniyar Yeralin
>
> > On May 8, 2019, at 2:28 PM, John Roesler  wrote:
> >
> > Hi Daniyar,
> >
> > No worries about the procedural stuff. Prior experience with KIPs is
> > not required :)
> >
> > I was just trying to help you propose this stuff in a way that the
> > others will find easy to review.
> >
> > Thanks for updating the KIP. Thanks to the others for helping out with
> > the syntax.
> >
> > Given these updates, I'm curious if anyone else has feedback about
> > this proposal. Personally, I think it sounds fine!
> >
> > -John
> >
> > On Wed, May 8, 2019 at 1:01 PM Development  wrote:
> >>
> >> Hey,
> >>
> >> That worked! I certainly lack Java generics knowledge. Thanks for the
> snippet. I’ll update KIP again.
> >>
> >> Best,
> >> Daniyar Yeralin
> >>
> >>> On May 8, 2019, at 1:39 PM, Chris Egerton  wrote:
> >>>
> >>> Hi Daniyar,
> >>>
> >>> I think you may want to tweak your syntax a little:
> >>>
> >>> public static <T> Serde<List<T>> List(Serde<T> innerSerde) {
> >>>  return new ListSerde<>(innerSerde);
> >>> }
> >>>
> >>> Does that work?
> >>>
> >>> Cheers,
> >>>
> >>> Chris
> >>>
> >>> On Wed, May 8, 2019 at 10:29 AM Development  d...@yeralin.net>> wrote:
> >>> Hi John,
> >>>
> >>> I updated JIRA and KIP.
> >>>
> >>> I didn’t know about the process, and created PR before I knew about
> KIPs :)
> >>>
> >>> As per static declaration, I don’t think Java allows that:
> >>>
> >>>
> >>> Best,
> >>> Daniyar Yeralin
> >>>
>  On May 7, 2019, at 2:22 PM, John Roesler  j...@confluent.io>> wrote:
> 
>  Thanks for that update. Do you mind making changes primarily on the
>  KIP document ? (
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-466%3A+Add+support+for+List%3CT%3E+serialization+and+deserialization
> <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-466%3A+Add+support+for+List%3CT%3E+serialization+and+deserialization
> >)
> 
>  This is the design document that we have to agree on and vote for, the
>  PR comes later. It can be nice to have an implementation to look at,
>  but the KIP is the main artifact for this discussion.
> 
>  With this in mind, it will help get more reviewers to look at it if
>  you can tidy up the KIP document so that it stands on its own. People
>  shouldn't have to look at any other document to understand the
>  motivation of the proposal, and they shouldn't have to look at a PR to
>  see what the public API will look like. If it helps, you can take a
>  look at some other recent KIPs.
> 
>  Given that the list serde needs an inner serde, I agree you can't have
>  a zero-argument static factory method for it, but it seems you could
>  still have a static method:
>  `public static <T> Serde<List<T>> List(Serde<T> innerSerde)`.
> 
>  Thoughts?
> 
>  On Tue, May 7, 2019 at 12:18 PM Development  d...@yeralin.net>> wrote:
> >
> > Absolutely agree. Already pushed another commit to remove comparator
> argument: https://github.com/apache/kafka/pull/6592 <
> https://github.com/apache/kafka/pull/6592> <
> https://github.com/apache/kafka/pull/6592 <
> https://github.com/apache/kafka/pull/6592>>
> >
> > Thank you for your input John! I really appreciate it.
> >
> > What about this point I made:
> >
> > 1. Since type for List serde needs to be declared before hand, I
> could not create a static method for List Serde under
> org.apache.kafka.common.serialization.Serdes. I addressed it in the KIP:
> > P.S. Static method corresponding to ListSerde under
> org.apache.kafka.common.serialization.Serdes (something like static public
> Serde<List<T>> List() {...} in org.apache.kafka.common.serialization.Serdes)
> class cannot be added because type needs to be defined beforehand. That's
> why one needs to create List Serde in the following fashion:
> > new Serdes.ListSerde(Serdes.String(),
> Comparator.comparing(String::length));
> > (can possibly be simplified by declaring import static
> org.apache.kafka.common.serialization.Serdes.ListSerde)
> >
> >> On May 7, 2019, 

Re: [VOTE] KIP-461 Improve Replica Fetcher behavior at handling partition failure

2019-05-09 Thread Jun Rao
Hi, Aishwarya,

Thanks for the KIP. +1

Jun

On Wed, May 8, 2019 at 4:30 PM Aishwarya Gune 
wrote:

> Hi All!
>
> I would like to call for a vote on KIP-461 that would improve the behavior
> of replica fetcher in case of partition failure. The fetcher thread would
> just stop monitoring the crashed partition instead of terminating.
>
> Here's a link to the KIP -
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-461+-+Improve+Replica+Fetcher+behavior+at+handling+partition+failure
>
> Discussion thread -
> https://www.mail-archive.com/dev@kafka.apache.org/msg97559.html
>
> --
> Thank you,
> Aishwarya
>


Re: [DISCUSS] KIP-464 Default Replication Factor for AdminClient#createTopic

2019-05-09 Thread Almog Gavra
Moving discussion on VOTE thread here:

Ismael asked:

> Adding a Builder seems unrelated to this change. Do we need it? Given the
> imminent KIP deadline, I'd keep it simple and just have the constructor
> with just the name parameter.

If we want the flexibility that the builder provides we would need to add
three constructors:
- no partitions/replicas
- just partitions
- just replicas

I see good use cases for the first two - the third (just replicas) seems
less necessary and complicates the API a bit (you have to differentiate
NewTopic(int) with NewTopic(short) or something like that). If we're happy
with a KIP that covers just the first two then I can remove the builder to
simplify things. Otherwise, I think the builder is an important addition.

Thoughts?

On Fri, May 3, 2019 at 11:43 AM Randall Hauch  wrote:

> I personally like those extra methods, rather than relying upon the generic
> properties. But I'm fine if others think they should be removed. I'm also
> fine with not deprecating the Connect version of the builder.
>
> On Fri, May 3, 2019 at 11:27 AM Almog Gavra  wrote:
>
> > Ack. KIP updated :) Perhaps instead of deprecating the Connect builder,
> > then, we can indeed just subclass it and move some of the less common
> build
> > methods (e.g. uncleanLeaderElection) there.
> >
> > On Fri, May 3, 2019 at 11:20 AM Randall Hauch  wrote:
> >
> > > Thanks for updating, Almog. I have a few suggestions specific to the
> > > builder:
> > >
> > > 1. The AK pattern for builder classes that are nested is to name the
> > class
> > > "Builder" and to make it publicly visible. We should follow that
> pattern
> > > here, too.
> > > 2. The builder's private constructor makes it impossible to subclass,
> > > should we ever want to do that (e.g, in Connect). If we make it
> protected
> > > or public, then subclassing is easier.
> > >
> > > On Thu, May 2, 2019 at 9:44 AM Almog Gavra  wrote:
> > >
> > > > Thanks for the input Randall. I'm happy adding it natively to
> NewTopic
> > > > instead of introducing more verbosity - updating the KIP to reflect
> > this
> > > > now.
> > > >
> > > > On Thu, May 2, 2019 at 9:28 AM Randall Hauch 
> wrote:
> > > >
> > > > > I wrote the `NewTopicBuilder` in Connect, and it was simply a
> > > convenience
> > > > > to more easily set some of the frequently-used properties and the #
> > of
> > > > > partitions and replicas for the new topic in the same way. An
> example
> > > is:
> > > > >
> > > > > NewTopic topicDescription = TopicAdmin.defineTopic(topic).
> > > > > compacted().
> > > > > partitions(1).
> > > > > replicationFactor(3).
> > > > > build();
> > > > >
> > > > > Arguably it should have been added to clients from the beginning.
> So
> > > I'm
> > > > > fine with that being moved to clients, as long as Connect is
> changed
> > to
> > > > use
> > > > > the new clients class. However, even though Connect's
> > `NewTopicBuilder`
> > > > is
> > > > > in the runtime and technically not part of the public API, things
> > like
> > > > this
> > > > > still tend to get reused elsewhere. Let's keep the Connect
> > > > > `NewTopicBuilder` but deprecate it and have it extend the one in
> > > clients.
> > > > > The `TopicAdmin` class in Connect can then refer to the new one in
> > > > clients.
> > > > >
> > > > > The KIP now talks about having a constructor for the builder:
> > > > >
> > > > > NewTopic myTopic = new
> > > > > NewTopicBuilder(name).compacted().partitions(1).replicationFactor(3).build();
> > > > >
> > > > > How about adding the builder to the NewTopic class itself:
> > > > >
> > > > > NewTopic myTopic =
> > > > > NewTopic.build(name).compacted().partitions(1).replicationFactor(3).build();
> > > > >
> > > > > This is a bit shorter, a bit easier to read (no "new New..."), and
> > more
> > > > > discoverable since anyone looking at the NewTopic source or JavaDoc
> > > will
> > > > > maybe notice it.
> > > > >
> > > > > Randall
> > > > >
> > > > >
> > > > > On Thu, May 2, 2019 at 8:56 AM Almog Gavra 
> > wrote:
> > > > >
> > > > > > Sure thing, added more detail to the KIP! To clarify, the plan is
> > to
> > > > move
> > > > > > an existing API from one package to another (NewTopicBuilder
> exists
> > > in
> > > > > the
> > > > > > connect.runtime package) leaving the old in place for
> compatibility
> > > and
> > > > > > deprecating it.
> > > > > >
> > > > > > I'm happy to hear thoughts on whether we should (a) move it to
> the
> > > same
> > > > > > package in a new module so that we don't need to deprecate it or
> > (b)
> > > > take
> > > > > > this opportunity to change any of the APIs.
> > > > > >
> > > > > > On Thu, May 2, 2019 at 8:22 AM Ismael Juma 
> > > wrote:
> > > > > >
> > > > > > > If you are adding new API, you need to specify it all in the
> KIP.
> > > > > > >
> > > > > > > Ismael
> > > > > > >
> > > > > > > On Thu, May 
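
The builder shape Randall proposed (`NewTopic.build(name)...build()`) can be sketched end to end. The classes below are self-contained stand-ins mirroring the names in the emails above — assumptions for illustration, not the final AdminClient API.

```java
public class NewTopicSketch {

    // Stand-in for org.apache.kafka.clients.admin.NewTopic.
    static class NewTopic {
        final String name;
        final int partitions;
        final int replicationFactor;
        final boolean compacted;

        NewTopic(String name, int partitions, int replicationFactor, boolean compacted) {
            this.name = name;
            this.partitions = partitions;
            this.replicationFactor = replicationFactor;
            this.compacted = compacted;
        }

        // Static entry point as suggested on the thread: NewTopic.build(name)...
        static Builder build(String name) { return new Builder(name); }

        static class Builder {
            private final String name;
            private int partitions = -1;        // -1 = use broker default (the KIP-464 idea)
            private int replicationFactor = -1; // -1 = use broker default
            private boolean compacted = false;

            Builder(String name) { this.name = name; }
            Builder compacted() { this.compacted = true; return this; }
            Builder partitions(int partitions) { this.partitions = partitions; return this; }
            Builder replicationFactor(int rf) { this.replicationFactor = rf; return this; }
            NewTopic build() { return new NewTopic(name, partitions, replicationFactor, compacted); }
        }
    }

    public static void main(String[] args) {
        NewTopic topic = NewTopic.build("events").compacted().partitions(1).replicationFactor(3).build();
        System.out.println(topic.partitions + " " + topic.replicationFactor); // 1 3
    }
}
```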

golang library for kafka streams

2019-05-09 Thread Jay Heydt
any estimate of when a kafka streams library will be available for golang?


Re: [VOTE] KIP-455: Create an Administrative API for Replica Reassignment

2019-05-09 Thread Robert Barrett
+1 (non-binding)

Thanks for the KIP, Colin!

On Thu, May 9, 2019 at 8:27 AM Colin McCabe  wrote:

> Hi Viktor,
>
> There is a jira -- KAFKA-8345.  The PR is not quite ready yet, but
> hopefully soon :)
>
> best,
> Colin
>
> On Thu, May 9, 2019, at 01:13, Viktor Somogyi-Vass wrote:
> > +1 (non-binding)
> >
> > Thanks Colin, this is great stuff. Does a jira (or maybe even a PR :) )
> for
> > this exist yet?
> >
> > Viktor
> >
> > On Thu, May 9, 2019 at 7:23 AM Colin McCabe  wrote:
> >
> > > Hi all,
> > >
> > > I'd like to start the vote for KIP-455: Create an Administrative API
> for
> > > Replica Reassignment.  I think this KIP is important since it will
> unlock
> > > many follow-on improvements to Kafka reassignment (see the "Future
> work"
> > > section, plus a lot of the other discussions we've had recently about
> > > reassignment).  It also furthers the important KIP-4 goal of removing
> > > direct access to ZK.
> > >
> > > I made a few changes based on the discussion in the [DISCUSS] thread.
> As
> > > Robert suggested, I removed the need to explicitly cancel a
> reassignment
> > > for a partition before setting up a different reassignment for that
> > > specific partition.  I also simplified the API a bit by adding a
> > > PartitionReassignment class which is used by both the alter and list
> APIs.
> > >
> > > I modified the proposal so that we now deprecate the old znode-based
> API
> > > rather than removing it completely.  That should give external
> rebalancing
> > > tools some time to transition to the new API.
> > >
> > > To clarify a question Viktor asked, I added a note that the
> > > kafka-reassign-partitions.sh will now use a --bootstrap-server
> argument to
> > > contact the admin APIs.
> > >
> > > thanks,
> > > Colin
> > >
> >
>
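
Colin's note that kafka-reassign-partitions.sh will take a --bootstrap-server argument can be illustrated with the tool's existing JSON reassignment format. This is a sketch only, assuming the current flag names and file format carry over once the KIP lands.

```shell
# Sketch: --bootstrap-server replaces the old --zookeeper argument;
# the JSON format and remaining flags are the tool's existing ones.
cat > reassign.json <<'EOF'
{"version":1,"partitions":[{"topic":"my-topic","partition":0,"replicas":[1,2,3]}]}
EOF
bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
  --reassignment-json-file reassign.json --execute
```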


Build failed in Jenkins: kafka-trunk-jdk8 #3613

2019-05-09 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix bug in Struct.equals and use Objects.equals/Long.hashCode

[github] MINOR: improve JavaDocs for KStream.through() (#6639)

--
[...truncated 2.43 MB...]

org.apache.kafka.connect.converters.FloatConverterTest > 
testConvertingSamplesToAndFromBytes STARTED

org.apache.kafka.connect.converters.FloatConverterTest > 
testConvertingSamplesToAndFromBytes PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectSchemaless STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectSchemaless PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnectNull 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnectNull 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectBadSchema STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectBadSchema PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectNull STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectNull PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnect 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnect 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testFromConnect 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testFromConnect 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectInvalidValue STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectInvalidValue PASSED

org.apache.kafka.connect.converters.LongConverterTest > testBytesNullToNumber 
STARTED

org.apache.kafka.connect.converters.LongConverterTest > testBytesNullToNumber 
PASSED

org.apache.kafka.connect.converters.LongConverterTest > 
testSerializingIncorrectType STARTED

org.apache.kafka.connect.converters.LongConverterTest > 
testSerializingIncorrectType PASSED

org.apache.kafka.connect.converters.LongConverterTest > 
testDeserializingHeaderWithTooManyBytes STARTED

org.apache.kafka.connect.converters.LongConverterTest > 
testDeserializingHeaderWithTooManyBytes PASSED

org.apache.kafka.connect.converters.LongConverterTest > testNullToBytes STARTED

org.apache.kafka.connect.converters.LongConverterTest > testNullToBytes PASSED

org.apache.kafka.connect.converters.LongConverterTest > 
testSerializingIncorrectHeader STARTED

org.apache.kafka.connect.converters.LongConverterTest > 
testSerializingIncorrectHeader PASSED

org.apache.kafka.connect.converters.LongConverterTest > 
testDeserializingDataWithTooManyBytes STARTED

org.apache.kafka.connect.converters.LongConverterTest > 
testDeserializingDataWithTooManyBytes PASSED

org.apache.kafka.connect.converters.LongConverterTest > 
testConvertingSamplesToAndFromBytes STARTED

org.apache.kafka.connect.converters.LongConverterTest > 
testConvertingSamplesToAndFromBytes PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testBytesNullToNumber STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testBytesNullToNumber PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectType STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectType PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingHeaderWithTooManyBytes STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingHeaderWithTooManyBytes PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > testNullToBytes 
STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > testNullToBytes 
PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectHeader STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectHeader PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingDataWithTooManyBytes STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingDataWithTooManyBytes PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testConvertingSamplesToAndFromBytes STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testConvertingSamplesToAndFromBytes PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > testBytesNullToNumber 
STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > testBytesNullToNumber 
PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testSerializingIncorrectType STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testSerializingIncorrectType PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testDeserializingHeaderWithTooManyBytes STARTED

Re: [DISCUSS] KIP-382: MirrorMaker 2.0

2019-05-09 Thread Ryanne Dolan
Hey y'all, I'm happy to announce that the PR for "MirrorMaker 2.0" is ready
for review, after a long spell in "draft".

https://github.com/apache/kafka/pull/6295

MirrorMaker 2.0 is in the Kafka 2.3.0 release plan. Please take a look so
we can get this merged.

Also, shameless plug: I'm giving a talk on MM2 in a few days at Kafka
Summit London. Hope to see you there.

https://kafka-summit.org/sessions/disaster-recovery-mirrormaker-2-0/

Thanks for the reviews so far!

Ryanne


Re: [VOTE] KIP-455: Create an Administrative API for Replica Reassignment

2019-05-09 Thread Colin McCabe
Hi Viktor,

There is a jira -- KAFKA-8345.  The PR is not quite ready yet, but hopefully 
soon :)

best,
Colin

On Thu, May 9, 2019, at 01:13, Viktor Somogyi-Vass wrote:
> +1 (non-binding)
> 
> Thanks Colin, this is great stuff. Does a jira (or maybe even a PR :) ) for
> this exist yet?
> 
> Viktor
> 
> On Thu, May 9, 2019 at 7:23 AM Colin McCabe  wrote:
> 
> > Hi all,
> >
> > I'd like to start the vote for KIP-455: Create an Administrative API for
> > Replica Reassignment.  I think this KIP is important since it will unlock
> > many follow-on improvements to Kafka reassignment (see the "Future work"
> > section, plus a lot of the other discussions we've had recently about
> > reassignment).  It also furthers the important KIP-4 goal of removing
> > direct access to ZK.
> >
> > I made a few changes based on the discussion in the [DISCUSS] thread.  As
> > Robert suggested, I removed the need to explicitly cancel a reassignment
> > for a partition before setting up a different reassignment for that
> > specific partition.  I also simplified the API a bit by adding a
> > PartitionReassignment class which is used by both the alter and list APIs.
> >
> > I modified the proposal so that we now deprecate the old znode-based API
> > rather than removing it completely.  That should give external rebalancing
> > tools some time to transition to the new API.
> >
> > To clarify a question Viktor asked, I added a note that the
> > kafka-reassign-partitions.sh will now use a --bootstrap-server argument to
> > contact the admin APIs.
> >
> > thanks,
> > Colin
> >
>


Re: [VOTE] KIP-449: Add connector contexts to Connect worker logs (vote thread)

2019-05-09 Thread Manikumar
HI Randall,

+1 (binding), Thanks for the KIP.

Thanks,
Manikumar

On Wed, May 8, 2019 at 4:29 AM Randall Hauch  wrote:

> +1 (binding)
>
> On Mon, May 6, 2019 at 3:53 PM Konstantine Karantasis <
> konstant...@confluent.io> wrote:
>
> > Great improvement for multi-tenancy.
> > Thanks Randall!
> >
> > +1 (non-binding)
> >
> > Konstantine
> >
> > On Tue, Apr 30, 2019 at 9:18 PM Chris Egerton 
> wrote:
> >
> > > +1 (non-binding)
> > >
> > > Really looking forward to this. Thanks, Randall!
> > >
> > > On Tue, Apr 30, 2019, 20:47 Magesh Nandakumar 
> > > wrote:
> > >
> > > > This will make connect debugging so much easier. Thanks a lot for
> > driving
> > > > this Randall.
> > > >
> > > > +1 (non-binding)
> > > >
> > > > Thanks,
> > > > Magesh
> > > >
> > > > On Tue, Apr 30, 2019 at 7:19 PM Jeremy Custenborder <
> > > > jcustenbor...@gmail.com>
> > > > wrote:
> > > >
> > > > > +1 non binding
> > > > >
> > > > > On Mon, Apr 29, 2019 at 5:34 PM Randall Hauch 
> > > wrote:
> > > > > >
> > > > > > I would like to start the vote for KIP-449:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-449%3A+Add+connector+contexts+to+Connect+worker+logs
> > > > > >
> > > > > > The KIP uses the Mapped Diagnostic Contexts (MDC) feature of
> SLF4J
> > > API
> > > > to
> > > > > > add more context to log messages from within Connect workers and
> > > > > connector
> > > > > > implementations. This would not be enabled by default, though it
> > > would
> > > > be
> > > > > > easy to enable within the Connect Log4J configuration.
> > > > > >
> > > > > > Thanks!
> > > > > >
> > > > > > Randall
> > > > >
> > > >
> > >
> >
>
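As background for readers of this vote thread: MDC values typically reach log output through the pattern layout, so enabling the feature is mostly a logging-config change. A hedged sketch of what a Connect Log4j configuration might look like (the `connector.context` MDC key and appender names here are illustrative assumptions, not necessarily what KIP-449 finally specifies):

```properties
# Hypothetical fragment of a Connect log4j properties file.
# %X{connector.context} prints the MDC value the worker would set;
# if the key is absent, the conversion specifier simply prints nothing,
# which is why this can be enabled without breaking existing log parsing.
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %X{connector.context}%m (%c:%L)%n
```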


[jira] [Created] (KAFKA-8345) Create an Administrative API for Replica Reassignment

2019-05-09 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-8345:
--

 Summary: Create an Administrative API for Replica Reassignment
 Key: KAFKA-8345
 URL: https://issues.apache.org/jira/browse/KAFKA-8345
 Project: Kafka
  Issue Type: Improvement
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


Create an Administrative API for Replica Reassignment, as discussed in KIP-455



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-440: Extend Connect Converter to support headers

2019-05-09 Thread Andrew Schofield
+1 (non-binding).

Looks good.

On 09/05/2019, 15:55, "Gwen Shapira"  wrote:

+1 (binding)
Thank you!

On Mon, May 6, 2019, 11:25 PM Yaroslav Tkachenko  wrote:

> Hi everyone,
>
> I'd like to start a vote for KIP-440: Extend Connect Converter to support
> headers (
>
> 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-440%3A+Extend+Connect+Converter+to+support+headers
> )
>
> Discussion:
>
> 
https://lists.apache.org/thread.html/1fc1e3d2cddd8311d3db7c98f0d09a1a137ca4b20d1f3c8ab203a855@%3Cdev.kafka.apache.org%3E
>
> Thanks!
>




Re: [VOTE] KIP-465: Add Consolidated Connector Endpoint to Connect REST API

2019-05-09 Thread Rajini Sivaram
Hi Dan,

Thanks for the KIP, +1 (binding)

Regards,

Rajini


On Mon, May 6, 2019 at 8:08 PM Konstantine Karantasis <
konstant...@confluent.io> wrote:

> Useful and concise KIP
>
> +1 (non-binding)
>
> Konstantine
>
> On Mon, May 6, 2019 at 10:43 AM Randall Hauch  wrote:
>
> > Thanks, Dan. As mentioned on the discussion, this is really a nice little
> > addition that was alway missing from the API.
> >
> > +1 (binding)
> >
> > Randall
> >
> > On Mon, May 6, 2019 at 9:23 AM dan  wrote:
> >
> > > I would like to start voting for
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-465%3A+Add+Consolidated+Connector+Endpoint+to+Connect+REST+API
> > >
> > > thanks
> > > dan
> > >
> >
>


Re: [VOTE] KIP-465: Add Consolidated Connector Endpoint to Connect REST API

2019-05-09 Thread Gwen Shapira
Great improvement, I definitely missed having a simple API for status.

+1 (binding)

On Mon, May 6, 2019, 7:23 AM dan  wrote:

> I would like to start voting for
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-465%3A+Add+Consolidated+Connector+Endpoint+to+Connect+REST+API
>
> thanks
> dan
>


Re: [VOTE] KIP-464: Defaults for AdminClient#createTopic

2019-05-09 Thread Randall Hauch
I'm fine with simplifying the KIP by removing the Builder (which seems
ancillary), or keeping the KIP as-is. I'll wait to vote until Almog says
which way he'd like to proceed.

On Thu, May 9, 2019 at 9:45 AM Ismael Juma  wrote:

> Hi Almog,
>
> Adding a Builder seems unrelated to this change. Do we need it? Given the
> imminent KIP deadline, I'd keep it simple and just have the constructor
> with just the name parameter.
>
> Ismael
>
> On Thu, May 2, 2019 at 1:58 AM Mickael Maison 
> wrote:
>
> > I was planning to write a KIP for the exact same feature!
> > +1 (non binding)
> >
> > Thanks for the KIP
> >
> > On Wed, May 1, 2019 at 7:24 PM Almog Gavra  wrote:
> > >
> > > Hello Everyone!
> > >
> > > Kicking off the voting for
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-464%3A+Defaults+for+AdminClient%23createTopic
> > >
> > >
> > > You can see discussion thread here (please respond with suggestions on
> > that
> > > thread):
> > >
> >
> https://lists.apache.org/thread.html/c0adfd2457e5984be7471fe6ade8a94d52c647356c81c039445d6b34@%3Cdev.kafka.apache.org%3E
> > >
> > >
> > > Cheers,
> > > Almog
> >
>


Hello, I'd like to get access rights to contribute to Kafka.

2019-05-09 Thread WooYoung Jeong
These days I'm analyzing the Kafka source project.
I'm interested in contributing to the Kafka project, so I have commented on
a Jira ticket asking to be assigned.
It is https://issues.apache.org/jira/browse/KAFKA-8311

However, I have no rights to access Jira and the wiki.

So could you give me access?

Thank you, have a nice day


Re: [VOTE] KIP-458: Connector Client Config Override Policy

2019-05-09 Thread Rajini Sivaram
Hi Magesh,

Thanks for the KIP, +1 (binding)

Regards,

Rajini


On Thu, May 9, 2019 at 3:55 PM Randall Hauch  wrote:

> Nice work, Magesh.
>
> +1 (binding)
>
> Randall
>
> On Wed, May 8, 2019 at 7:22 PM Magesh Nandakumar 
> wrote:
>
> > Thanks a lot Chris. So far, the KIP has one non-binding vote, and I'm
> > still hoping the KIP will be voted on by Friday's deadline.
> >
> > On Tue, May 7, 2019 at 10:00 AM Chris Egerton 
> wrote:
> >
> > > Hi Magesh,
> > >
> > > This looks great! Very excited to see these changes finally coming to
> > > Connect.
> > > +1 (non-binding)
> > >
> > > Cheers,
> > >
> > > Chris
> > >
> > > On Tue, May 7, 2019 at 9:51 AM Magesh Nandakumar  >
> > > wrote:
> > >
> > > > Hi All,
> > > >
> > > > I would like to start a vote on
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy
> > > >
> > > > The discussion thread can be found here
> > > > .
> > > >
> > > > Thanks,
> > > > Magesh
> > > >
> > >
> >
>


Re: [VOTE] KIP-437: Custom replacement for MaskField SMT

2019-05-09 Thread Randall Hauch
Nice work, Valeria.

+1 (binding)

Randall

On Mon, May 6, 2019 at 4:30 PM Konstantine Karantasis <
konstant...@confluent.io> wrote:

> I think it is a useful improvement proposal. Thanks Valeria!
>
> I'm +1 (non-binding) on this KIP
>
> Konstantine
>
> On Mon, Apr 15, 2019 at 2:04 AM Valeria Vasylieva <
> valeria.vasyli...@gmail.com> wrote:
>
> > Hi all,
> >
> > Since there are no more objections/proposals I would like to start the
> > vote on KIP-437.
> >
> >
> > See: KIP-437 <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-437%3A+Custom+replacement+for+MaskField+SMT
> > >
> > and related PR 
> >
> > I will be grateful to hear your opinion!
> >
> > Valeria
> >
>


Re: [VOTE] KIP-458: Connector Client Config Override Policy

2019-05-09 Thread Randall Hauch
Nice work, Magesh.

+1 (binding)

Randall

On Wed, May 8, 2019 at 7:22 PM Magesh Nandakumar 
wrote:

> Thanks a lot Chris. So far, the KIP has one non-binding vote, and I'm still
> hoping the KIP will be voted on by Friday's deadline.
>
> On Tue, May 7, 2019 at 10:00 AM Chris Egerton  wrote:
>
> > Hi Magesh,
> >
> > This looks great! Very excited to see these changes finally coming to
> > Connect.
> > +1 (non-binding)
> >
> > Cheers,
> >
> > Chris
> >
> > On Tue, May 7, 2019 at 9:51 AM Magesh Nandakumar 
> > wrote:
> >
> > > Hi All,
> > >
> > > I would like to start a vote on
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy
> > >
> > > The discussion thread can be found here
> > > .
> > >
> > > Thanks,
> > > Magesh
> > >
> >
>


Re: [VOTE] KIP-440: Extend Connect Converter to support headers

2019-05-09 Thread Gwen Shapira
+1 (binding)
Thank you!

On Mon, May 6, 2019, 11:25 PM Yaroslav Tkachenko  wrote:

> Hi everyone,
>
> I'd like to start a vote for KIP-440: Extend Connect Converter to support
> headers (
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-440%3A+Extend+Connect+Converter+to+support+headers
> )
>
> Discussion:
>
> https://lists.apache.org/thread.html/1fc1e3d2cddd8311d3db7c98f0d09a1a137ca4b20d1f3c8ab203a855@%3Cdev.kafka.apache.org%3E
>
> Thanks!
>


Re: [VOTE] KIP-464: Defaults for AdminClient#createTopic

2019-05-09 Thread Ismael Juma
Hi Almog,

Adding a Builder seems unrelated to this change. Do we need it? Given the
imminent KIP deadline, I'd keep it simple and just have the constructor
with just the name parameter.

Ismael

On Thu, May 2, 2019 at 1:58 AM Mickael Maison 
wrote:

> I was planning to write a KIP for the exact same feature!
> +1 (non binding)
>
> Thanks for the KIP
>
> On Wed, May 1, 2019 at 7:24 PM Almog Gavra  wrote:
> >
> > Hello Everyone!
> >
> > Kicking off the voting for
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-464%3A+Defaults+for+AdminClient%23createTopic
> >
> >
> > You can see discussion thread here (please respond with suggestions on
> that
> > thread):
> >
> https://lists.apache.org/thread.html/c0adfd2457e5984be7471fe6ade8a94d52c647356c81c039445d6b34@%3Cdev.kafka.apache.org%3E
> >
> >
> > Cheers,
> > Almog
>


Build failed in Jenkins: kafka-trunk-jdk11 #502

2019-05-09 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: improve JavaDocs for KStream.through() (#6639)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 53dec548b6149274e6c2acf832386dba6ec3c1e1 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 53dec548b6149274e6c2acf832386dba6ec3c1e1
Commit message: "MINOR: improve JavaDocs for KStream.through() (#6639)"
 > git rev-list --no-walk c09e25fac2aaea61af892ae3e5273679a4bdbc7d # timeout=10
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: No tool found matching GRADLE_4_10_2_HOME
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins1126231319528049930.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins1126231319528049930.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_10_2_HOME
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user csh...@gmail.com
Not sending mail to unregistered user wangg...@gmail.com


Re: [DISCUSS] KIP-466: Add support for List serialization and deserialization

2019-05-09 Thread Development
Hey,

I don't see any replies. It seems this proposal can be finalized and called
for a vote?

Also, I've been thinking: do we need more serdes for other collections, like
Queue or Set, for example?

Best,
Daniyar Yeralin

> On May 8, 2019, at 2:28 PM, John Roesler  wrote:
> 
> Hi Daniyar,
> 
> No worries about the procedural stuff. Prior experience with KIPs is
> not required :)
> 
> I was just trying to help you propose this stuff in a way that the
> others will find easy to review.
> 
> Thanks for updating the KIP. Thanks to the others for helping out with
> the syntax.
> 
> Given these updates, I'm curious if anyone else has feedback about
> this proposal. Personally, I think it sounds fine!
> 
> -John
> 
> On Wed, May 8, 2019 at 1:01 PM Development  wrote:
>> 
>> Hey,
>> 
>> That worked! I certainly lack Java generics knowledge. Thanks for the 
>> snippet. I’ll update KIP again.
>> 
>> Best,
>> Daniyar Yeralin
>> 
>>> On May 8, 2019, at 1:39 PM, Chris Egerton  wrote:
>>> 
>>> Hi Daniyar,
>>> 
>>> I think you may want to tweak your syntax a little:
>>> 
>>> public static <T> Serde<List<T>> List(Serde<T> innerSerde) {
>>>  return new ListSerde<>(innerSerde);
>>> }
>>> 
>>> Does that work?
>>> 
>>> Cheers,
>>> 
>>> Chris
>>> 
>>> On Wed, May 8, 2019 at 10:29 AM Development >> > wrote:
>>> Hi John,
>>> 
>>> I updated JIRA and KIP.
>>> 
>>> I didn’t know about the process, and created PR before I knew about KIPs :)
>>> 
>>> As per static declaration, I don’t think Java allows that:
>>> 
>>> 
>>> Best,
>>> Daniyar Yeralin
>>> 
 On May 7, 2019, at 2:22 PM, John Roesler >>> > wrote:
 
 Thanks for that update. Do you mind making changes primarily on the
 KIP document ? 
 (https://cwiki.apache.org/confluence/display/KAFKA/KIP-466%3A+Add+support+for+List%3CT%3E+serialization+and+deserialization
  
 )
 
 This is the design document that we have to agree on and vote for, the
 PR comes later. It can be nice to have an implementation to look at,
 but the KIP is the main artifact for this discussion.
 
 With this in mind, it will help get more reviewers to look at it if
 you can tidy up the KIP document so that it stands on its own. People
 shouldn't have to look at any other document to understand the
 motivation of the proposal, and they shouldn't have to look at a PR to
 see what the public API will look like. If it helps, you can take a
 look at some other recent KIPs.
 
 Given that the list serde needs an inner serde, I agree you can't have
 a zero-argument static factory method for it, but it seems you could
 still have a static method:
 `public static <T> Serde<List<T>> List(Serde<T> innerSerde)`.
 
 Thoughts?
 
 On Tue, May 7, 2019 at 12:18 PM Development >>> > wrote:
> 
> Absolutely agree. Already pushed another commit to remove comparator 
> argument: https://github.com/apache/kafka/pull/6592 
>  
>  >
> 
> Thank you for your input John! I really appreciate it.
> 
> What about this point I made:
> 
> 1. Since type for List serde needs to be declared before hand, I could 
> not create a static method for List Serde under 
> org.apache.kafka.common.serialization.Serdes. I addressed it in the KIP:
> P.S. A static method corresponding to ListSerde under
> org.apache.kafka.common.serialization.Serdes (something like static
> public Serde<List<T>> List() {...}
> in org.apache.kafka.common.serialization.Serdes) cannot be added
> because the type needs to be defined beforehand. That's why one needs to
> create the List Serde in the following fashion:
> new Serdes.ListSerde<String>(Serdes.String(),
> Comparator.comparing(String::length));
> (can possibly be simplified by declaring import static 
> org.apache.kafka.common.serialization.Serdes.ListSerde)
> 
>> On May 7, 2019, at 11:50 AM, John Roesler > > wrote:
>> 
>> Thanks for the reply Daniyar,
>> 
>> That makes much more sense! I thought I must be missing something, but I
>> couldn't for the life of me figure it out.
>> 
>> What do you think about just taking an argument, instead of for a
>> Comparator, for the Serde of the inner type? That way, the user can 
>> control
>> how exactly the inner data gets serialized, while also bounding the 
>> generic
>> parameter properly. As for the order, since the list is already in a
>> specific order, which the user themselves controls, it doesn't seem
>> strictly necessary to offer an option to sort the data during 
>> serialization.
>> 

Re: [VOTE] KIP-440: Extend Connect Converter to support headers

2019-05-09 Thread Randall Hauch
Thanks, Yaroslav!

+1 (binding)

Randall

On Tue, May 7, 2019 at 1:25 AM Yaroslav Tkachenko 
wrote:

> Hi everyone,
>
> I'd like to start a vote for KIP-440: Extend Connect Converter to support
> headers (
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-440%3A+Extend+Connect+Converter+to+support+headers
> )
>
> Discussion:
>
> https://lists.apache.org/thread.html/1fc1e3d2cddd8311d3db7c98f0d09a1a137ca4b20d1f3c8ab203a855@%3Cdev.kafka.apache.org%3E
>
> Thanks!
>


[jira] [Resolved] (KAFKA-8240) Source.equals() can fail with NPE

2019-05-09 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-8240.

   Resolution: Fixed
Fix Version/s: 2.2.2
   2.1.2
   2.3.0

[~vahid] I mark this as fix version 2.2.2 – if we roll a new RC for the bug-fix 
release, please update this ticket to 2.2.1. Thanks.

> Source.equals() can fail with NPE
> -
>
> Key: KAFKA-8240
> URL: https://issues.apache.org/jira/browse/KAFKA-8240
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0, 2.1.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Major
>  Labels: beginner, easy-fix, newbie
> Fix For: 2.3.0, 2.1.2, 2.2.2
>
>
> Reported on an PR: 
> [https://github.com/apache/kafka/pull/5284/files/1df6208f48b6b72091fea71323d94a16102ffd13#r270607795]
> InternalTopologyBuilder#Source.equals() might fail with NPE if 
> `topicPattern==null`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk11 #501

2019-05-09 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix bug in Struct.equals and use Objects.equals/Long.hashCode

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision c09e25fac2aaea61af892ae3e5273679a4bdbc7d 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f c09e25fac2aaea61af892ae3e5273679a4bdbc7d
Commit message: "MINOR: Fix bug in Struct.equals and use 
Objects.equals/Long.hashCode (#6680)"
 > git rev-list --no-walk cc4a7f01e872e6d0664360d5148af09c876ca72b # timeout=10
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: No tool found matching GRADLE_4_10_2_HOME
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins512286240873613387.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins512286240873613387.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_10_2_HOME
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user csh...@gmail.com
Not sending mail to unregistered user wangg...@gmail.com


Re: [DISCUSS] KIP-429 : Smooth Auto-Scaling for Kafka Streams

2019-05-09 Thread Matthias J. Sax
Thanks Guozhang!

The simplified upgrade path is great!


Just a clarification question about the "Rebalance Callback Error
Handling" -- does this change affect the `ConsumerCoordinator` only if
incremental rebalancing is use? Or does the behavior also change for the
eager rebalancing case?


-Matthias


On 5/9/19 3:37 AM, Guozhang Wang wrote:
> Hello all,
> 
> Thanks for everyone who've shared their feedbacks for this KIP! If there's
> no further comments I'll start the voting thread by end of tomorrow.
> 
> 
> Guozhang.
> 
> On Wed, May 8, 2019 at 6:36 PM Guozhang Wang  wrote:
> 
>> Hello Boyang,
>>
>> On Wed, May 1, 2019 at 4:51 PM Boyang Chen  wrote:
>>
>>> Hey Guozhang,
>>>
>>> thank you for the great write up. Overall the motivation and changes
>>> LGTM, just some minor comments:
>>>
>>>
>>>   1.  In "Consumer Coordinator Algorithm", we could reorder alphabet
>>> points for 3d~3f from ["ready-to-migrate-partitions",
>>> "unknown-but-owned-partitions",  "maybe-revoking-partitions"] to
>>> ["maybe-revoking-partitions", "ready-to-migrate-partitions",
>>> "unknown-but-owned-partitions"] in order to be consistent with 3c1~3.
>>>
>>
>> Ack. Updated.
>>
>>
>>>   2.  In "Consumer Coordinator Algorithm", 1c suggests to revoke all
>>> partition upon heartbeat/commit fail. What's the gain here? Do we want to
>>> keep all partitions running at this moment, to be optimistic for the case
>>> when no partitions get reassigned?
>>>
>>
>> That's a good catch. When REBALANCE_IN_PROGRESS is received, we can just
>> re-join the group with all the currently owned partitions encoded. Updated.
>>
>>
>>>   3.  In "Recommended Upgrade Procedure", remove extra 'those': " The
>>> 'sticky' assignor works even those there are "
>>>
>>
>> Ack, should be `even when`.
>>
>>
>>>   4.  Put two "looking into the future" into a separate category from
>>> migration session. It seems inconsistent for readers to see this before we
>>> finished discussion for everything.
>>>
>>
>> Ack.
>>
>>
>>>   5.  Have we discussed the concern on the serialization? Could the new
>>> metadata we are adding grow larger than the message size cap?
>>>
>>
>> We're completing https://issues.apache.org/jira/browse/KAFKA-7149 which
>> should largely reduce the message size (will update the wiki accordingly as
>> well).
>>
>>
>>>
>>> Boyang
>>>
>>> 
>>> From: Guozhang Wang 
>>> Sent: Monday, April 15, 2019 9:20 AM
>>> To: dev
>>> Subject: Re: [DISCUSS] KIP-429 : Smooth Auto-Scaling for Kafka Streams
>>>
>>> Hello Jason,
>>>
>>> I agree with you that for range / round-robin it makes less sense to be
>>> compatible with cooperative rebalance protocol.
>>>
>>> As for StickyAssignor, however, I think it would still be possible to make
>>> the current implementation to be compatible with cooperative rebalance. So
>>> after pondering on different options at hand I'm now proposing this
>>> approach as listed in the upgrade section:
>>>
>>>
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-429%3A+Kafka+Consumer+Incremental+Rebalance+Protocol#KIP-429:KafkaConsumerIncrementalRebalanceProtocol-CompatibilityandUpgradePath
>>>
>>> The idea is to let assignors specify which protocols it would work with,
>>> associating with a different name; then the upgrade path would involve a
>>> "compatible" protocol which actually still use eager behavior while
>>> encoding two assignors if possible. In "Rejected Section" (just to clarify
>>> I'm not finalizing it as rejected, just putting it there for now, and if
>>> we
>>> like this one instead we can always switch them) I listed the other
>>> approach we once discussed about, and arguing its cons of duplicated class
>>> seems overwhelm the pros of saving the  "rebalance.protocol" config.
>>>
>>> Let me know WDYT.
>>>
>>> Guozhang
>>>
>>> On Fri, Apr 12, 2019 at 6:08 PM Jason Gustafson 
>>> wrote:
>>>
 Hi Guozhang,

 Responses below:

 2. The interface's default implementation will just be
> `onPartitionRevoked`, so for user's instantiation if they do not make
>>> any
> code changes they should be able to recompile the code and continue.


 Ack, makes sense.

 4. Hmm.. not sure if it will work. The main issue is that the
> consumer-coordinator behavior (whether to revoke all or none at
> onRebalancePrepare) is independent of the selected protocol's assignor
> (eager or cooperative), so even if the assignor is selected to be the
> old-versioned one, we will still not revoke at the
>>> consumer-coordinator
> layer and hence has the same risk of migrating still-owned partitions,
> right?


 Yeah, basically we would have to push the eager/cooperative logic into
>>> the
 PartitionAssignor itself and make the consumer aware of the rebalance
 protocol it is compatible with. As long as an eager protocol _could_ be
 selected, the consumer would have to be pessimistic and do eager
 revocation. But if all th

Re: [DISCUSS] KIP-456: Helper classes to make it simpler to write test logic with TopologyTestDriver

2019-05-09 Thread Patrik Kleindl
Hi Jukka
Sorry, that was mostly what I had in mind; I didn't have enough time to
look through the KIP.

My question was also if this handling of topics wouldn't make more sense
even outside the TTD, for the general API.

regards
Patrik

On Thu, 9 May 2019 at 14:43, Jukka Karvanen 
wrote:

> Hi Patrik,
>
> Sorry, I need to clarify.
> In the current version of the KIP in the wiki, topic objects are created
> with a constructor where the driver, topic name, and serdes are provided.
>
> TestInputTopic<String, String> inputTopic = new TestInputTopic<String,
> String>(testDriver, INPUT_TOPIC, new Serdes.StringSerde(), new
> Serdes.StringSerde());
>
> So if TopologyTestDriver were modified, this could be
>
> TestInputTopic<String, String> inputTopic =
> testDriver.getInputTopic(INPUT_TOPIC, new Serdes.StringSerde(), new
> Serdes.StringSerde());
>
> or preferably, if the serdes can be found:
>
> TestInputTopic<String, String> inputTopic =
> testDriver.getInputTopic(INPUT_TOPIC);
>
> This initialization is normally done in the test setup, and after that it
> can be used with the topic object:
>
> inputTopic.pipeInput("Hello");
>
>
> Or did you mean something else?
>
> Jukka
>
>
>
>
> On Thu, 9 May 2019 at 15:14, Patrik Kleindl (pklei...@gmail.com)
> wrote:
>
> > Hi Jukka
> > Regarding your comment
> > > If there were a way to find out the needed serdes for the topic, it
> > > would make the API even simpler.
> > I was wondering if it wouldn't make more sense to have a "topic object"
> > including the Serdes and use this instead of only passing in the name as
> a
> > string everywhere.
> > From a low-level perspective Kafka does and should not care what is
> inside
> > the topic, but from a user perspective this information usually belongs
> > together.
> > Sidenote: Having topics as objects would probably also make it easier to
> > track them from the outside.
> > regards
> > Patrik
> >
> > On Thu, 9 May 2019 at 10:39, Jukka Karvanen  >
> > wrote:
> >
> > > Hi,
> > >
> > > I will write a new KIP for the TopologyTestDriver input and output
> > > usability changes.
> > > It is out of the scope of the current title: "Helper classes to make it
> > > simpler to write test logic with TopologyTestDriver"
> > > and we can get back to this KIP if that alternative is not favored.
> > >
> > > So my original approach was not to modify existing classes, but if we
> > > end up modifying TTD, I would also change the
> > > way to instantiate these topics. We could add
> getInputTopic("my-topic") /
> > > getOutputTopic("my-topic") to TTD, so it would work
> > > same way as with getStateStore and related methods.
> > >
> > > If there were a way to find out the needed serdes for the topic, it
> > > would make the API even simpler.
> > >
> > > Generally, as an end user, I would prefer not only swapping
> > > ConsumerRecord and ProducerRecord, but having an
> > > interface accepting and returning Record, without needing to think
> > > about whether those are ConsumerRecords or ProducerRecords.
> > > That way we could use the same classes to pipe in and assert the
> > > result. Something similar to "private final static class Record"
> > > in TopologyTestDriverTest.
> > >
> > > Jukka
> > >
> > > On Wed, 8 May 2019 at 17:01, John Roesler (j...@confluent.io)
> > > wrote:
> > >
> > > > Hi Jukka, thanks for the reply!
> > > >
> > > > I think this is a good summary (the discussion was getting a little
> > > > unwieldy). I'll reply inline.
> > > >
> > > > Also, thanks for clarifying about your library vs. this KIP. That makes
> > > > perfect sense to me.
> > > > >
> > > > > 1. Add JavaDoc for KIP
> > > > >
> > > > > Is there a good example of KIP where Javadoc is included, so I can
> > > > follow?
> > > > > I create this KIP based on this as an example::
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-247%3A+Add+public+test+utils+for+Kafka+Streams
> > > > >
> > > > >
> > > > > Now added some comments to KIP page to clarify timestamp handling,
> > but
> > > I
> > > > > did not want to add full JavaDoc of each methods.
> > > > > Is that enough?
> > > >
> > > > That's fine. I was just trying to make the review process more
> > > > efficient for other reviewers (which makes getting your KIP accepted
> > > > more efficient). I reviewed a few recent KIPs, and, indeed, I see
> that
> > > > javadocs are not actually as common as I thought.
> > > >
> > > > > 2. TTD usability changes and swapping ConsumerRecord and
> > ProducerRecord
> > > > in
> > > > > APIs
> > > > >
> > > > > To my point of view only:
> > > > > - changing readRecord to return ConsumerRecord would cause we
> cannot
> > > use
> > > > > OutputVerifier
> > > >
> > > > Yes, we'd likely have to provide new methods in OutputVerifier to
> work
> > > > with ConsumerRecord. If you buy into the plan of deprecating most of
> > > > the current-style interactions, this wouldn't be that confusing,
> since
> > > > all the ProducerRecord verifications would be deprecated, and only
> the
> > > > ConsumerRecord verifications would remain "live".
> > > >
> > > > > 
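
As a concrete illustration of the wrapper pattern discussed in this thread, here is a minimal, self-contained sketch. The driver below is a fake stand-in for TopologyTestDriver, and the class shape is only indicative of the KIP-456 direction, not its final API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TopicHelperSketch {

    // Fake stand-in for TopologyTestDriver: just queues topic-tagged values.
    static class FakeDriver {
        final Deque<String> records = new ArrayDeque<>();
        void pipe(String topic, String value) { records.add(topic + ":" + value); }
        String read(String topic) { return records.poll(); }
    }

    // The helper binds driver + topic name once, so test code only deals
    // with values: inputTopic.pipeInput("Hello") instead of repeating the
    // topic name (and, in the real API, the serdes) on every call.
    static class TestInputTopic {
        private final FakeDriver driver;
        private final String topic;

        TestInputTopic(FakeDriver driver, String topic) {
            this.driver = driver;
            this.topic = topic;
        }

        void pipeInput(String value) { driver.pipe(topic, value); }
    }

    public static void main(String[] args) {
        FakeDriver driver = new FakeDriver();
        TestInputTopic inputTopic = new TestInputTopic(driver, "input-topic");
        inputTopic.pipeInput("Hello");
        System.out.println(driver.read("input-topic")); // prints input-topic:Hello
    }
}
```

The real TestInputTopic would additionally carry serdes and timestamp handling; the serde-lookup variant discussed above (testDriver.getInputTopic(...)) would simply move this construction into the driver.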

Re: [DISCUSS] KIP-456: Helper classes to make it simpler to write test logic with TopologyTestDriver

2019-05-09 Thread Jukka Karvanen
Hi Patrik,

Sorry, I need to clarify.
In the current version of the KIP in the wiki, topic objects are created with
a constructor where the driver, topic name, and serdes are provided.

TestInputTopic<String, String> inputTopic = new TestInputTopic<String,
String>(testDriver, INPUT_TOPIC, new Serdes.StringSerde(), new
Serdes.StringSerde());

So if TopologyTestDriver were modified, this could be

TestInputTopic<String, String> inputTopic =
testDriver.getInputTopic(INPUT_TOPIC, new Serdes.StringSerde(), new
Serdes.StringSerde());

or preferably, if the serdes can be found:

TestInputTopic<String, String> inputTopic =
testDriver.getInputTopic(INPUT_TOPIC);

This initialization is normally done in the test setup, and after that it can
be used with the topic object:

inputTopic.pipeInput("Hello");


Or did you mean something else?

Jukka




On Thu, 9 May 2019 at 15:14, Patrik Kleindl (pklei...@gmail.com) wrote:

> Hi Jukka
> Regarding your comment
> > If there were a way to find out the needed serdes for the topic, it
> > would make the API even simpler.
> I was wondering if it wouldn't make more sense to have a "topic object"
> including the Serdes and use this instead of only passing in the name as a
> string everywhere.
> From a low-level perspective Kafka does and should not care what is inside
> the topic, but from a user perspective this information usually belongs
> together.
> Sidenote: Having topics as objects would probably also make it easier to
> track them from the outside.
> regards
> Patrik
>
> On Thu, 9 May 2019 at 10:39, Jukka Karvanen 
> wrote:
>
> > Hi,
> >
> > I will write a new KIP for the TopologyTestDriver input and output
> > usability changes.
> > It is out of the scope of the current title, "Helper classes to make it
> > simpler to write test logic with TopologyTestDriver",
> > and we can get back to this KIP if that alternative is not favored.
> >
> > So my original approach was not to modify existing classes, but if we end
> > up modifying TTD, I would also change the way these topics are
> > instantiated. We could add getInputTopic("my-topic") /
> > getOutputTopic("my-topic") to TTD, so it would work the same way as
> > getStateStore and related methods.
> >
> > If there were a way to find out the needed serdes for a topic, it would
> > make the API even simpler.
> >
> > Generally, as an end user, I would still prefer not only swapping
> > ConsumerRecord and ProducerRecord, but having an interface that accepts
> > and returns Record, so there is no need to think about whether those are
> > ConsumerRecords or ProducerRecords; that way we could use the same
> > classes to pipe in and assert the result. Something similar to the
> > "private final static class Record" in TopologyTestDriverTest.
> >
> > Jukka
> >
> > On Wed, 8 May 2019 at 17:01, John Roesler (j...@confluent.io) wrote:
> >
> > > Hi Jukka, thanks for the reply!
> > >
> > > I think this is a good summary (the discussion was getting a little
> > > unwieldy). I'll reply inline.
> > >
> > > Also, thanks for the clarification about your library vs. this KIP. That makes
> > > perfect sense to me.
> > > >
> > > > 1. Add JavaDoc for KIP
> > > >
> > > > Is there a good example of a KIP where Javadoc is included, so I can
> > > > follow?
> > > > I created this KIP based on this as an example:
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-247%3A+Add+public+test+utils+for+Kafka+Streams
> > > >
> > > >
> > > > I have now added some comments to the KIP page to clarify timestamp
> > > > handling, but I did not want to add the full JavaDoc of each method.
> > > > Is that enough?
> > >
> > > That's fine. I was just trying to make the review process more
> > > efficient for other reviewers (which makes getting your KIP accepted
> > > more efficient). I reviewed a few recent KIPs, and, indeed, I see that
> > > javadocs are not actually as common as I thought.
> > >
> > > > 2. TTD usability changes and swapping ConsumerRecord and
> ProducerRecord
> > > in
> > > > APIs
> > > >
> > > > From my point of view, the only concerns are:
> > > > - changing readRecord to return ConsumerRecord would mean we cannot
> > > > use OutputVerifier
> > >
> > > Yes, we'd likely have to provide new methods in OutputVerifier to work
> > > with ConsumerRecord. If you buy into the plan of deprecating most of
> > > the current-style interactions, this wouldn't be that confusing, since
> > > all the ProducerRecord verifications would be deprecated, and only the
> > > ConsumerRecord verifications would remain "live".
> > >
> > > > - changing pipeInput to take in ProducerRecord without providing an
> > > > easy way to construct those, like ConsumerRecordFactory
> > >
> > > I didn't quite follow this one. The ConsumerRecordFactory is there
> > > because it's a pain to construct ConsumerRecords. Conversely,
> > > ProducerRecord has many convenience constructors, so we wouldn't need
> > > a factory at all. This is a net win for users, since there's less
> > > surface area for them to deal with. Under my proposal, we'd deprecate
> > > the whole ConsumerRecordFactory.
> > >
> > > Note that there'

Re: [DISCUSS] KIP-456: Helper classes to make it simpler to write test logic with TopologyTestDriver

2019-05-09 Thread Patrik Kleindl
Hi Jukka
Regarding your comment
> If there were a way to find out the needed serdes for a topic, it
> would make the API even simpler.
I was wondering if it wouldn't make more sense to have a "topic object"
including the Serdes and use this instead of only passing in the name as a
string everywhere.
From a low-level perspective, Kafka does not and should not care what is inside
the topic, but from a user perspective this information usually belongs
together.
Sidenote: Having topics as objects would probably also make it easier to
track them from the outside.
regards
Patrik
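Patrik's "topic object" idea is easy to picture as code. The sketch below is purely hypothetical: the TestTopic name and the Function-based serializer stand-ins are invented for illustration, and a real version would hold Kafka Serde instances instead.

```java
import java.util.function.Function;

// Hypothetical "topic object": bundles the topic name with its
// (de)serialization logic so callers never pass a bare String plus
// separate serdes. Function stand-ins replace real Kafka Serdes here.
public class TestTopic<K, V> {
    private final String name;
    private final Function<K, byte[]> keySerializer;
    private final Function<V, byte[]> valueSerializer;

    public TestTopic(String name,
                     Function<K, byte[]> keySerializer,
                     Function<V, byte[]> valueSerializer) {
        this.name = name;
        this.keySerializer = keySerializer;
        this.valueSerializer = valueSerializer;
    }

    public String name() { return name; }

    public byte[] serializeKey(K key) { return keySerializer.apply(key); }

    public byte[] serializeValue(V value) { return valueSerializer.apply(value); }

    public static void main(String[] args) {
        // One object carries everything a test needs to know about the topic.
        TestTopic<String, String> input = new TestTopic<>(
                "input-topic", String::getBytes, String::getBytes);
        System.out.println(input.name());                         // input-topic
        System.out.println(input.serializeValue("Hello").length); // 5
    }
}
```

A driver method such as a hypothetical getInputTopic could then hand back one of these objects, keeping name and serdes together as Patrik suggests.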

On Thu, 9 May 2019 at 10:39, Jukka Karvanen 
wrote:

> Hi,
>
> I will write a new KIP for the TopologyTestDriver input and output usability
> changes.
> It is out of the scope of the current title, "Helper classes to make it
> simpler to write test logic with TopologyTestDriver",
> and we can get back to this KIP if that alternative is not favored.
>
> So my original approach was not to modify existing classes, but if we end
> up modifying TTD, I would also change the way these topics are instantiated.
> We could add getInputTopic("my-topic") / getOutputTopic("my-topic") to TTD,
> so it would work the same way as getStateStore and related methods.
>
> If there were a way to find out the needed serdes for a topic, it would
> make the API even simpler.
>
> Generally, as an end user, I would still prefer not only swapping
> ConsumerRecord and ProducerRecord, but having an interface that accepts and
> returns Record, so there is no need to think about whether those are
> ConsumerRecords or ProducerRecords; that way we could use the same classes
> to pipe in and assert the result. Something similar to the
> "private final static class Record" in TopologyTestDriverTest.
>
> Jukka
>
> On Wed, 8 May 2019 at 17:01, John Roesler (j...@confluent.io) wrote:
>
> > Hi Jukka, thanks for the reply!
> >
> > I think this is a good summary (the discussion was getting a little
> > unwieldy). I'll reply inline.
> >
> > Also, thanks for the clarification about your library vs. this KIP. That makes
> > perfect sense to me.
> > >
> > > 1. Add JavaDoc for KIP
> > >
> > > Is there a good example of a KIP where Javadoc is included, so I can
> > > follow?
> > > I created this KIP based on this as an example:
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-247%3A+Add+public+test+utils+for+Kafka+Streams
> > >
> > >
> > > I have now added some comments to the KIP page to clarify timestamp
> > > handling, but I did not want to add the full JavaDoc of each method.
> > > Is that enough?
> >
> > That's fine. I was just trying to make the review process more
> > efficient for other reviewers (which makes getting your KIP accepted
> > more efficient). I reviewed a few recent KIPs, and, indeed, I see that
> > javadocs are not actually as common as I thought.
> >
> > > 2. TTD usability changes and swapping ConsumerRecord and ProducerRecord
> > in
> > > APIs
> > >
> > > From my point of view, the only concerns are:
> > > - changing readRecord to return ConsumerRecord would mean we cannot
> > > use OutputVerifier
> >
> > Yes, we'd likely have to provide new methods in OutputVerifier to work
> > with ConsumerRecord. If you buy into the plan of deprecating most of
> > the current-style interactions, this wouldn't be that confusing, since
> > all the ProducerRecord verifications would be deprecated, and only the
> > ConsumerRecord verifications would remain "live".
> >
> > > - changing pipeInput to take in ProducerRecord without providing an
> > > easy way to construct those, like ConsumerRecordFactory
> >
> > I didn't quite follow this one. The ConsumerRecordFactory is there
> > because it's a pain to construct ConsumerRecords. Conversely,
> > ProducerRecord has many convenience constructors, so we wouldn't need
> > a factory at all. This is a net win for users, since there's less
> > surface area for them to deal with. Under my proposal, we'd deprecate
> > the whole ConsumerRecordFactory.
> >
> > Note that there's an "idea parity check" here: ConsumerRecords are
> > hard to construct because developers aren't meant to ever construct
> > them. They are meant to construct ProducerRecords, which is why it's
> > made easy. TTD has inverted the relationships of these classes, which
> > is why the ConsumerRecordFactory is necessary, but if we correct it,
> > and return to a "normal" interaction with the Client library, then we
> > don't need special support classes.
> >
> > > - if ConsumerRecord is initialized to/from ProducerRecord in these
> > > classes with a field-by-field constructor, there is a risk that new
> > > fields are not added to these classes if there are changes in
> > > ProducerRecord or ConsumerRecord
> >
> > This risk seems pretty low, to be honest. We will have tests that
> > exercise this testing framework, so if anyone changes ProducerRecord
> > or ConsumerRecord, our tests will break. Since both libraries are
> > built together, the problem would be fixed before the change is ever
> > merged to trunk.
> >
> > > I would prop

Re: [DISCUSS] KIP-456: Helper classes to make it simpler to write test logic with TopologyTestDriver

2019-05-09 Thread Jukka Karvanen
Hi,

I will write a new KIP for the TopologyTestDriver input and output usability
changes.
It is out of the scope of the current title, "Helper classes to make it
simpler to write test logic with TopologyTestDriver",
and we can get back to this KIP if that alternative is not favored.

So my original approach was not to modify existing classes, but if we end
up modifying TTD, I would also change the way these topics are instantiated.
We could add getInputTopic("my-topic") / getOutputTopic("my-topic") to TTD,
so it would work the same way as getStateStore and related methods.

If there were a way to find out the needed serdes for a topic, it would
make the API even simpler.

Generally, as an end user, I would still prefer not only swapping
ConsumerRecord and ProducerRecord, but having an interface that accepts and
returns Record, so there is no need to think about whether those are
ConsumerRecords or ProducerRecords; that way we could use the same classes
to pipe in and assert the result. Something similar to the
"private final static class Record" in TopologyTestDriverTest.

Jukka
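Jukka's producer/consumer-agnostic Record idea can be sketched as a plain value class, similar in spirit to the private Record class in TopologyTestDriverTest that he mentions. The class below is an illustrative assumption, not the actual test-utils API:

```java
import java.util.Objects;

// Hypothetical record type that is neither a ConsumerRecord nor a
// ProducerRecord: just key, value and timestamp, usable both for piping
// input into the driver and for asserting on output.
public class Record<K, V> {
    public final K key;
    public final V value;
    public final long timestamp;

    public Record(K key, V value, long timestamp) {
        this.key = key;
        this.value = value;
        this.timestamp = timestamp;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Record)) return false;
        Record<?, ?> other = (Record<?, ?>) o;
        return Objects.equals(key, other.key)
                && Objects.equals(value, other.value)
                && timestamp == other.timestamp;
    }

    @Override
    public int hashCode() { return Objects.hash(key, value, timestamp); }

    @Override
    public String toString() {
        return "Record(" + key + ", " + value + ", " + timestamp + ")";
    }

    public static void main(String[] args) {
        // The same class describes what goes in and what is expected out,
        // so test assertions can compare records directly.
        Record<String, Long> piped = new Record<>("Hello", 1L, 0L);
        Record<String, Long> expected = new Record<>("Hello", 1L, 0L);
        System.out.println(piped.equals(expected)); // true
    }
}
```

Because equality is value-based, a test could assert outputTopic output against an expected Record without an OutputVerifier at all.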

On Wed, 8 May 2019 at 17:01, John Roesler (j...@confluent.io) wrote:

> Hi Jukka, thanks for the reply!
>
> I think this is a good summary (the discussion was getting a little
> unwieldy). I'll reply inline.
>
> Also, thanks for the clarification about your library vs. this KIP. That makes
> perfect sense to me.
> >
> > 1. Add JavaDoc for KIP
> >
> > Is there a good example of a KIP where Javadoc is included, so I can
> > follow?
> > I created this KIP based on this as an example:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-247%3A+Add+public+test+utils+for+Kafka+Streams
> >
> >
> > I have now added some comments to the KIP page to clarify timestamp handling,
> > but I did not want to add the full JavaDoc of each method.
> > Is that enough?
>
> That's fine. I was just trying to make the review process more
> efficient for other reviewers (which makes getting your KIP accepted
> more efficient). I reviewed a few recent KIPs, and, indeed, I see that
> javadocs are not actually as common as I thought.
>
> > 2. TTD usability changes and swapping ConsumerRecord and ProducerRecord
> in
> > APIs
> >
> > From my point of view, the only concerns are:
> > - changing readRecord to return ConsumerRecord would mean we cannot use
> > OutputVerifier
>
> Yes, we'd likely have to provide new methods in OutputVerifier to work
> with ConsumerRecord. If you buy into the plan of deprecating most of
> the current-style interactions, this wouldn't be that confusing, since
> all the ProducerRecord verifications would be deprecated, and only the
> ConsumerRecord verifications would remain "live".
>
> > - changing pipeInput to take in ProducerRecord without providing an easy
> > way to construct those, like ConsumerRecordFactory
>
> I didn't quite follow this one. The ConsumerRecordFactory is there
> because it's a pain to construct ConsumerRecords. Conversely,
> ProducerRecord has many convenience constructors, so we wouldn't need
> a factory at all. This is a net win for users, since there's less
> surface area for them to deal with. Under my proposal, we'd deprecate
> the whole ConsumerRecordFactory.
>
> Note that there's an "idea parity check" here: ConsumerRecords are
> hard to construct because developers aren't meant to ever construct
> them. They are meant to construct ProducerRecords, which is why it's
> made easy. TTD has inverted the relationships of these classes, which
> is why the ConsumerRecordFactory is necessary, but if we correct it,
> and return to a "normal" interaction with the Client library, then we
> don't need special support classes.
>
> > - if ConsumerRecord is initialized to/from ProducerRecord in these
> > classes with a field-by-field constructor, there is a risk that new fields
> > are not added to these classes if there are changes in ProducerRecord or
> > ConsumerRecord
>
> This risk seems pretty low, to be honest. We will have tests that
> exercise this testing framework, so if anyone changes ProducerRecord
> or ConsumerRecord, our tests will break. Since both libraries are
> > built together, the problem would be fixed before the change is ever
> merged to trunk.
>
> > I would propose a separate KIP for these and probably other
> > enhancements:
> > - a superclass or common interface for ConsumerRecord and ProducerRecord
> > - constructors for ConsumerRecord and ProducerRecord to initialize from
> > this superclass
> > - modify OutputVerifier to work with both ConsumerRecord and
> > ProducerRecord
> > - create a new RecordFactory to replace ConsumerRecordFactory
>
> I understand your motivation to control the scope of this change, but
> I actually think that it's better for user-facing design changes to
> occur in fewer, bigger, chunks, rather than many small changes. People
> will get fatigued if multiple releases in a row change the
> test-support library from under their feet. Better to do it in one
> shot.
>
> Plus, this is a design discussion. We need to include the whole scope
> of the system in the 

Re: [VOTE] KIP-455: Create an Administrative API for Replica Reassignment

2019-05-09 Thread Viktor Somogyi-Vass
+1 (non-binding)

Thanks Colin, this is great stuff. Does a JIRA ticket (or maybe even a PR :) )
for this exist yet?

Viktor

On Thu, May 9, 2019 at 7:23 AM Colin McCabe  wrote:

> Hi all,
>
> I'd like to start the vote for KIP-455: Create an Administrative API for
> Replica Reassignment.  I think this KIP is important since it will unlock
> many follow-on improvements to Kafka reassignment (see the "Future work"
> section, plus a lot of the other discussions we've had recently about
> reassignment).  It also furthers the important KIP-4 goal of removing
> direct access to ZK.
>
> I made a few changes based on the discussion in the [DISCUSS] thread.  As
> Robert suggested, I removed the need to explicitly cancel a reassignment
> for a partition before setting up a different reassignment for that
> specific partition.  I also simplified the API a bit by adding a
> PartitionReassignment class which is used by both the alter and list APIs.
>
> I modified the proposal so that we now deprecate the old znode-based API
> rather than removing it completely.  That should give external rebalancing
> tools some time to transition to the new API.
>
> To clarify a question Viktor asked, I added a note that the
> kafka-reassign-partitions.sh will now use a --bootstrap-server argument to
> contact the admin APIs.
>
> thanks,
> Colin
>
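As a rough illustration of the API shape Colin describes, a PartitionReassignment value shared by the alter and list calls might look like the sketch below. This is speculative, based only on the description in this thread, and is not the actual KIP-455 class:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: one immutable class describing a reassignment's
// target replica set, usable both when requesting an alteration and when
// listing reassignments in progress.
public class PartitionReassignment {
    private final List<Integer> targetReplicas;

    public PartitionReassignment(List<Integer> targetReplicas) {
        this.targetReplicas = Collections.unmodifiableList(targetReplicas);
    }

    public List<Integer> targetReplicas() { return targetReplicas; }

    public static void main(String[] args) {
        // Request that the partition end up on brokers 1, 2 and 4.
        PartitionReassignment r =
                new PartitionReassignment(Arrays.asList(1, 2, 4));
        System.out.println(r.targetReplicas()); // [1, 2, 4]
    }
}
```

In the real admin API the client would pass such values keyed by TopicPartition over --bootstrap-server, rather than writing to the deprecated znode.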


[jira] [Created] (KAFKA-8344) Fix vagrant-up.sh to work with AWS properly

2019-05-09 Thread Kengo Seki (JIRA)
Kengo Seki created KAFKA-8344:
-

 Summary: Fix vagrant-up.sh to work with AWS properly
 Key: KAFKA-8344
 URL: https://issues.apache.org/jira/browse/KAFKA-8344
 Project: Kafka
  Issue Type: Bug
Reporter: Kengo Seki
Assignee: Kengo Seki


I tried to run {{vagrant/vagrant-up.sh --aws}} with the following 
Vagrantfile.local.

{code}
enable_dns = true
enable_hostmanager = false

# EC2
ec2_access_key = ""
ec2_secret_key = ""
ec2_keypair_name = "keypair"
ec2_keypair_file = "/path/to/keypair/file"
ec2_region = "ap-northeast-1"
ec2_ami = "ami-0905ffddadbfd01b7"
ec2_security_groups = "sg-"
ec2_subnet_id = "subnet-"
{code}

EC2 instances were successfully created, but it failed with the following error 
after that.

{code}
$ vagrant/vagrant-up.sh --aws

(snip)

An active machine was found with a different provider. Vagrant
currently allows each machine to be brought up with only a single
provider at a time. A future version will remove this limitation.
Until then, please destroy the existing machine to up with a new
provider.

Machine name: zk1
Active provider: aws
Requested provider: virtualbox
{code}

It seems that the {{vagrant hostmanager}} command also requires the 
{{--provider=aws}} option, in addition to {{vagrant up}}.
With that option, it succeeded as follows:

{code}
$ git diff
diff --git a/vagrant/vagrant-up.sh b/vagrant/vagrant-up.sh
index 6a4ef9564..9210a5357 100755
--- a/vagrant/vagrant-up.sh
+++ b/vagrant/vagrant-up.sh
@@ -220,7 +220,7 @@ function bring_up_aws {
 # We still have to bring up zookeeper/broker nodes serially
 echo "Bringing up zookeeper/broker machines serially"
 vagrant up --provider=aws --no-parallel --no-provision 
$zk_broker_machines $debug
-vagrant hostmanager
+vagrant hostmanager --provider=aws
 vagrant provision
 fi

@@ -231,11 +231,11 @@ function bring_up_aws {
 local vagrant_rsync_temp_dir=$(mktemp -d);
 TMPDIR=$vagrant_rsync_temp_dir vagrant_batch_command "vagrant up 
$debug --provider=aws" "$worker_machines" "$max_parallel"
 rm -rf $vagrant_rsync_temp_dir
-vagrant hostmanager
+vagrant hostmanager --provider=aws
 fi
 else
 vagrant up --provider=aws --no-parallel --no-provision $debug
-vagrant hostmanager
+vagrant hostmanager --provider=aws
 vagrant provision
 fi

$ vagrant/vagrant-up.sh --aws

(snip)

==> broker3: Running provisioner: shell...
broker3: Running: /tmp/vagrant-shell20190509-25399-8f1wgz.sh
broker3: Killing server
broker3: No kafka server to stop
broker3: Starting server
$ vagrant status
Current machine states:

zk1   running (aws)
broker1   running (aws)
broker2   running (aws)
broker3   running (aws)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
$ vagrant ssh broker1

(snip)

ubuntu@ip-172-16-0-62:~$ /opt/kafka-dev/bin/kafka-topics.sh --bootstrap-server 
broker1:9092,broker2:9092,broker3:9092 --create --partitions 1 
--replication-factor 3 --topic sandbox

(snip)

ubuntu@ip-172-16-0-62:~$ /opt/kafka-dev/bin/kafka-topics.sh --bootstrap-server 
broker1:9092,broker2:9092,broker3:9092 --list

(snip)

sandbox
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)