Build failed in Jenkins: kafka-2.3-jdk8 #42

2019-06-05 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8386; Use COORDINATOR_NOT_AVAILABLE error when group is Dead

--
[...truncated 2.92 MB...]
kafka.zk.KafkaZkClientTest > testLogDirGetters PASSED

kafka.zk.KafkaZkClientTest > testSetGetAndDeletePartitionReassignment STARTED

kafka.zk.KafkaZkClientTest > testSetGetAndDeletePartitionReassignment PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndVersion STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndVersion PASSED

kafka.zk.KafkaZkClientTest > testGetChildren STARTED

kafka.zk.KafkaZkClientTest > testGetChildren PASSED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset STARTED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset PASSED

kafka.zk.KafkaZkClientTest > testClusterIdMethods STARTED

kafka.zk.KafkaZkClientTest > testClusterIdMethods PASSED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr STARTED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr PASSED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testCreateRecursive STARTED

kafka.zk.KafkaZkClientTest > testCreateRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData STARTED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods PASSED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw STARTED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw PASSED

kafka.zk.KafkaZkClientTest > testAclManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods STARTED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateLogDir STARTED

kafka.zk.KafkaZkClientTest > testPropagateLogDir PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress STARTED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress PASSED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths STARTED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters PASSED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetLogConfigs STARTED

kafka.zk.KafkaZkClientTest > testGetLogConfigs PASSED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods STARTED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods PASSED

kafka.zk.KafkaZkClientTest > testAclMethods STARTED

kafka.zk.KafkaZkClientTest > testAclMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath STARTED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode PASSED

kafka.zk.KafkaZkClientTest > testDeletePath STARTED

kafka.zk.KafkaZkClientTest > testDeletePath PASSED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods STARTED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions STARTED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions PASSED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testRetryRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRetryRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath STARTED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck 
STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck 
PASSED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods 

[jira] [Created] (KAFKA-8497) kafka streams application uses very high memory

2019-06-05 Thread sunqing (JIRA)
sunqing created KAFKA-8497:
--

 Summary: kafka streams application uses very high memory
 Key: KAFKA-8497
 URL: https://issues.apache.org/jira/browse/KAFKA-8497
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.0.0
Reporter: sunqing


A simple Kafka Streams test application uses KStream to consume data. When the
data in the consumed Kafka topic surges, or when the topic has a large backlog
of unconsumed data, the consuming process's memory usage becomes very high and
can reach more than 20 GB.

Question: doesn't Kafka Streams consume records one at a time? Why does a large
amount of data in the topic cause the program's memory to spike?

 

The test program code is as follows:

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;

public class RunMain {

    public static StreamsBuilder builder = new StreamsBuilder();

    public static void kafkaStreamStart() {
        KStream<String, String> stream =
                builder.stream(Arrays.asList("wk_wangxin_po"));
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "testwang_xin");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
                "zktj-kafka-broker-out-1:29092,zktj-kafka-broker-out-2:29092,zktj-kafka-broker-out-3:29092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG,
                Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG,
                Serdes.String().getClass());
        props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 3000);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 1);
        props.setProperty("security.protocol", "SASL_PLAINTEXT");
        props.setProperty("sasl.mechanism", "PLAIN");
        props.setProperty("sasl.kerberos.service.name", "kafka");
        System.setProperty("java.security.auth.login.config",
                "./conf/kafka_client_jaas.conf");

        // Print the key of every consumed record.
        stream.foreach((key, value) -> {
            System.out.println("");
            System.out.println(key);
        });
        Topology topo = builder.build();
        KafkaStreams streams = new KafkaStreams(topo, props);
        streams.start();
    }

    public static void main(String[] args) {
        kafkaStreamStart();
    }
}
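On the memory question: Streams does process records one at a time, but the underlying consumer fetches in batches and Streams buffers records per partition, so a large backlog can inflate the (partly off-heap) footprint. Below is a hedged sketch of configs that bound those buffers; the values are illustrative assumptions, not tuned recommendations:

```properties
# Limit records buffered per partition before Streams pauses fetching
buffered.records.per.partition=500
# Limit the Streams record cache, in bytes, across all threads
cache.max.bytes.buffering=10485760
# Pass-through limits for the embedded consumer ("consumer." prefix)
consumer.max.poll.records=500
consumer.fetch.max.bytes=1048576
consumer.max.partition.fetch.bytes=524288
```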



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Newbie on lookout for work

2019-06-05 Thread Dulvin Witharane
Thanks a bunch for the reply and I’ll make sure to check it out

On Thu, Jun 6, 2019 at 4:43 AM Gwen Shapira  wrote:

> Hi,
>
> You can look at our contribution guidelines:
> https://kafka.apache.org/contributing
>
> We have a number of open "newbie" tickets in JIRA, perhaps one of them
> will be interesting to you:
>
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20%3D%20Open%20AND%20labels%20in%20(Newbie%2C%20newbie)%20ORDER%20BY%20created%20DESC
>
> On Wed, Jun 5, 2019 at 9:36 AM Dulvin Witharane  wrote:
> >
> > Hi All,
> >
> > I'm a newbie for FOSS and I'd like to start contributing to Apache Kafka.
> > Any pointers for beginning would be appreciated.
> >
> > --
> > *Witharane, D.R.H.*
> >
> > R&D Engineer
> > Synopsys Inc,
> > Colombo 08
> > Mobile : *+94 77 6746781*
> > Skype  : dulvin.rivindu
> > Facebook  | LinkedIn
> > 
>
>
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>
-- 
Witharane, DRH
R & D Engineer
Synopsys Lanka (Pvt) Ltd.
Borella, Sri Lanka
0776746781

Sent from my iPhone


Jenkins build is back to normal : kafka-trunk-jdk11 #605

2019-06-05 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #3704

2019-06-05 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8386; Use COORDINATOR_NOT_AVAILABLE error when group is Dead

[github] KAFKA-8400; Do not update follower replica state if the log read failed

--
[...truncated 2.50 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: [VOTE] KIP-434: Dead replica fetcher and log cleaner metrics

2019-06-05 Thread Satish Duggana
Thanks Viktor, proposed metrics are really useful to monitor replication
status on brokers.

+1 (non-binding)

On Thu, Jun 6, 2019 at 2:05 AM Colin McCabe  wrote:

> +1 (binding)
>
> best,
> Colin
>
>
> On Wed, Jun 5, 2019, at 03:38, Viktor Somogyi-Vass wrote:
> > Hi Folks,
> >
> > This vote sunk a bit, I'd like to draw some attention to this again in
> the
> > hope I get some feedback or votes.
> >
> > Thanks,
> > Viktor
> >
> > On Tue, May 7, 2019 at 4:28 PM Harsha  wrote:
> >
> > > Thanks for the kip. LGTM +1.
> > >
> > > -Harsha
> > >
> > > On Mon, Apr 29, 2019, at 8:14 AM, Viktor Somogyi-Vass wrote:
> > > > Hi Jason,
> > > >
> > > > I too agree this is more of a problem in older versions and therefore we
> > > > could backport it. Were you thinking of any specific versions? I guess the
> > > > 2.x and 1.x versions are definitely targets here, but I was thinking that
> > > > we might not want to go further than that.
> > > >
> > > > Viktor
> > > >
> > > > On Mon, Apr 29, 2019 at 12:55 AM Stanislav Kozlovski <
> > > stanis...@confluent.io>
> > > > wrote:
> > > >
> > > > > Thanks for the work done, Viktor! +1 (non-binding)
> > > > >
> > > > > I strongly agree with Jason that this monitoring-focused KIP is
> worth
> > > > > porting back to older versions. I am sure users will find it very
> > > useful
> > > > >
> > > > > Best,
> > > > > Stanislav
> > > > >
> > > > > On Fri, Apr 26, 2019 at 9:38 PM Jason Gustafson <
> ja...@confluent.io>
> > > > > wrote:
> > > > >
> > > > > > Thanks, that works for me. +1
> > > > > >
> > > > > > By the way, we don't normally port KIPs to older releases, but I
> > > wonder
> > > > > if
> > > > > > it's worth making an exception here. From recent experience, it
> > > tends to
> > > > > be
> > > > > > the older versions that are more prone to fetcher failures.
> Thoughts?
> > > > > >
> > > > > > -Jason
> > > > > >
> > > > > > On Fri, Apr 26, 2019 at 5:18 AM Viktor Somogyi-Vass <
> > > > > > viktorsomo...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > On second thought, I'll just add the clientId instead to follow
> > > > > > > the convention, so it'll keep DeadFetcherThreadCount but with the
> > > > > > > clientId tag.
> > > > > > >
> > > > > > > On Fri, Apr 26, 2019 at 11:29 AM Viktor Somogyi-Vass <
> > > > > > > viktorsomo...@gmail.com> wrote:
> > > > > > >
> > > > > > > > Hi Jason,
> > > > > > > >
> > > > > > > > Yea I think it could make sense. In this case I would rename
> the
> > > > > > > > DeadFetcherThreadCount to DeadReplicaFetcherThreadCount and
> > > introduce
> > > > > > the
> > > > > > > > metric you're referring to as DeadLogDirFetcherThreadCount.
> > > > > > > > I'll update the KIP to reflect this.
> > > > > > > >
> > > > > > > > Viktor
> > > > > > > >
> > > > > > > > On Thu, Apr 25, 2019 at 8:07 PM Jason Gustafson <
> > > ja...@confluent.io>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > >> Hi Viktor,
> > > > > > > >>
> > > > > > > >> This looks good. Just one question I had is whether we may
> as
> > > well
> > > > > > cover
> > > > > > > >> the log dir fetchers as well.
> > > > > > > >>
> > > > > > > >> Thanks,
> > > > > > > >> Jason
> > > > > > > >>
> > > > > > > >>
> > > > > > > >> On Thu, Apr 25, 2019 at 7:46 AM Viktor Somogyi-Vass <
> > > > > > > >> viktorsomo...@gmail.com>
> > > > > > > >> wrote:
> > > > > > > >>
> > > > > > > >> > Hi Folks,
> > > > > > > >> >
> > > > > > > >> > This thread sunk a bit but I'd like to bump it hoping to
> get
> > > some
> > > > > > > >> feedback
> > > > > > > >> > and/or votes.
> > > > > > > >> >
> > > > > > > >> > Thanks,
> > > > > > > >> > Viktor
> > > > > > > >> >
> > > > > > > >> > On Thu, Mar 28, 2019 at 8:47 PM Viktor Somogyi-Vass <
> > > > > > > >> > viktorsomo...@gmail.com>
> > > > > > > >> > wrote:
> > > > > > > >> >
> > > > > > > >> > > Sorry, the end of the message cut off.
> > > > > > > >> > >
> > > > > > > >> > > So I tried to be consistent with the convention in
> > > LogManager,
> > > > > > hence
> > > > > > > >> the
> > > > > > > >> > > hyphens and in AbstractFetcherManager, hence the camel
> > > case. It
> > > > > > > would
> > > > > > > >> be
> > > > > > > >> > > nice though to decide with one convention across the
> whole
> > > > > > project,
> > > > > > > >> > however
> > > > > > > >> > > it requires a major refactor (especially for the
> components
> > > that
> > > > > > > >> leverage
> > > > > > > >> > > metrics for monitoring).
> > > > > > > >> > >
> > > > > > > >> > > Thanks,
> > > > > > > >> > > Viktor
> > > > > > > >> > >
> > > > > > > >> > > On Thu, Mar 28, 2019 at 8:44 PM Viktor Somogyi-Vass <
> > > > > > > >> > > viktorsomo...@gmail.com> wrote:
> > > > > > > >> > >
> > > > > > > >> > >> Hi Dhruvil,
> > > > > > > >> > >>
> > > > > > > >> > >> Thanks for the feedback and the vote. I fixed the typo
> in
> > > the
> > > > > > KIP.
> > > > > > > >> > >> The naming is interesting though. Unfortunately kafka
> > > overall
> > > > > is
> > > > > > > 

[jira] [Created] (KAFKA-8496) Add system test for compatibility and upgrade path (part 6)

2019-06-05 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-8496:


 Summary: Add system test for compatibility and upgrade path (part 
6)
 Key: KAFKA-8496
 URL: https://issues.apache.org/jira/browse/KAFKA-8496
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang








[jira] [Created] (KAFKA-8495) Make Round-robin / RangeAssignor to be "sticky" (part 5)

2019-06-05 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-8495:


 Summary: Make Round-robin / RangeAssignor to be "sticky" (part 5)
 Key: KAFKA-8495
 URL: https://issues.apache.org/jira/browse/KAFKA-8495
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang


For this new algorithm to be effective in reducing rebalance costs, it really 
expects the plug-in assignor to be "sticky" in some way, such that the diff 
between the newly-assigned partitions and the existing-assigned partitions can 
be small, and hence only a small subset of the total number of partitions needs 
to be revoked / migrated at each rebalance in practice – otherwise, we are just 
paying a higher rebalance cost for little benefit.
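The "diff" described above is just a set difference between the previous and new assignments; a minimal illustration with hypothetical topic-partition names (not Kafka's actual assignor code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class StickyDiff {
    // Partitions to revoke: owned before, but not assigned now.
    // A sticky assignor keeps this set small across rebalances.
    public static Set<String> revoked(Set<String> oldAssign, Set<String> newAssign) {
        Set<String> r = new HashSet<>(oldAssign);
        r.removeAll(newAssign);
        return r;
    }

    public static void main(String[] args) {
        Set<String> oldA = new HashSet<>(Arrays.asList("t-0", "t-1", "t-2"));
        Set<String> newA = new HashSet<>(Arrays.asList("t-0", "t-1", "t-3"));
        // Only t-2 needs to be revoked; t-0 and t-1 stay put.
        System.out.println(revoked(oldA, newA));
    }
}
```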





[jira] [Created] (KAFKA-8494) Refactor Consumer#StickyAssignor to support incremental protocol (part 4)

2019-06-05 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-8494:


 Summary: Refactor Consumer#StickyAssignor to support incremental 
protocol (part 4)
 Key: KAFKA-8494
 URL: https://issues.apache.org/jira/browse/KAFKA-8494
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang








[jira] [Created] (KAFKA-8493) Add PartitionsLost API in RebalanceListener

2019-06-05 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-8493:


 Summary: Add PartitionsLost API in RebalanceListener
 Key: KAFKA-8493
 URL: https://issues.apache.org/jira/browse/KAFKA-8493
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang
Assignee: Guozhang Wang








[jira] [Created] (KAFKA-8492) Modify ConsumerCoordinator Algorithm with incremental protocol (part 2)

2019-06-05 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-8492:


 Summary: Modify ConsumerCoordinator Algorithm with incremental 
protocol (part 2)
 Key: KAFKA-8492
 URL: https://issues.apache.org/jira/browse/KAFKA-8492
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang
Assignee: Guozhang Wang








[jira] [Created] (KAFKA-8491) Bump up Consumer Protocol to v2 (part 1)

2019-06-05 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-8491:


 Summary: Bump up Consumer Protocol to v2 (part 1)
 Key: KAFKA-8491
 URL: https://issues.apache.org/jira/browse/KAFKA-8491
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang
Assignee: Guozhang Wang








[jira] [Created] (KAFKA-8490) Use `Migrated` and `Deleted` state to replace consumer group `Dead` state

2019-06-05 Thread Boyang Chen (JIRA)
Boyang Chen created KAFKA-8490:
--

 Summary: Use `Migrated` and `Deleted` state to replace consumer 
group `Dead` state
 Key: KAFKA-8490
 URL: https://issues.apache.org/jira/browse/KAFKA-8490
 Project: Kafka
  Issue Type: Improvement
Reporter: Boyang Chen
Assignee: Boyang Chen


Inspired by [https://github.com/apache/kafka/pull/6762], right now the consumer 
group dead state is not clear to the user. It actually covers 3 transient 
states:
 # the group has been migrated to another broker
 # an empty group has been marked dead by a DeleteGroups request and will be 
deleted soon
 # the group has been unloaded from the cache because its last offset expired

In case 1, the state name is better defined as `Migrated`, to be consistent with 
what's actually going on in the background. For cases 2 & 3, the state is better 
defined as `Deleted`, which conveys a more accurate group status. By separating 
these two states, our error handling can also be more precise.





[jira] [Created] (KAFKA-8489) Remove group immediately when DeleteGroup request could be completed

2019-06-05 Thread Boyang Chen (JIRA)
Boyang Chen created KAFKA-8489:
--

 Summary: Remove group immediately when DeleteGroup request could 
be completed
 Key: KAFKA-8489
 URL: https://issues.apache.org/jira/browse/KAFKA-8489
 Project: Kafka
  Issue Type: Improvement
Reporter: Boyang Chen


Inspired by the discussion in [https://github.com/apache/kafka/pull/6762], we 
should attempt to shorten the time from receiving the group deletion request to 
actually removing the group from the cache. This saves the client unnecessary 
round trips to rebuild the group if needed.





Re: [DISCUSS] KIP-471: Expose RocksDB Metrics in Kafka Streams

2019-06-05 Thread Guozhang Wang
I think Bruno's 2) is that for a segmented store, the access rate on
different segments will very likely differ. In fact, most of the accesses
should be on the "latest" segment except for 1) very late arriving data,
which should be captured by the higher-level `lateness` metrics already,
and 2) IQ reads on old windows. The problem is that, say if 99% of reads go
to the latest segment and 1% go to the rest of the segments, how should
`memtable-hit-rate` be calculated then?

Another wild thought just to throw out here: maybe we can just expose the
latest segment's metrics as the logical store's metrics? Admittedly it
would not be the most accurate, but 1) it is future-proof if we want to
consolidate to a 1-1 physical-store -> logical-store implementation, and 2)
it is simple and does not need to bookkeep older segments, which should be
rarely accessed. My question, though, is whether our metrics can be
auto-switched to the new store upon segment rolling.


Guozhang

On Tue, Jun 4, 2019 at 3:06 PM Sophie Blee-Goldman 
wrote:

> Hey Bruno,
>
> I tend to agree with Guozhang on this matter although you do bring up some
> good points that should be addressed. Regarding 1) I think it is probably
> fairly uncommon in practice for users to leverage the individual store
> names passed to RocksDBConfigSetter#setConfig in order to specify options
> on a per-store basis. When this actually is used, it does seem likely that
> users would be doing something like pattern matching the physical store
> name prefix in order to apply configs to all physical stores (segments)
> within a single logical RocksDBStore. As you mention this is something of a
> hassle already as physical stores are created/deleted, while most likely
> all anyone cares about is the prefix corresponding to the logical store. It
> seems like rather than persist this hassle to the metric layer, we should
> consider refactoring RocksDBConfigSetter to apply to a logical store rather
> than a specific physical segment. Or maybe providing some kind of tooling
> to at least make this easier on users, but that's definitely outside the
> scope of this KIP.
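The prefix matching on physical store names mentioned above could be sketched as follows; the segment-name format (logical name plus a dotted numeric suffix) is an assumption for illustration, not necessarily Kafka's exact naming scheme:

```java
public class SegmentNameMatch {
    // A segmented store's physical segments are assumed to share the
    // logical store's name as a prefix, followed by a window-start
    // suffix, e.g. "my-store.1559692800000". Matching on the prefix
    // covers all current and future segments of the logical store.
    public static boolean belongsToLogicalStore(String physicalName, String logicalName) {
        return physicalName.equals(logicalName)
                || physicalName.startsWith(logicalName + ".");
    }

    public static void main(String[] args) {
        // A segment of "my-store" matches; an unrelated store does not.
        System.out.println(belongsToLogicalStore("my-store.1559692800000", "my-store"));
        System.out.println(belongsToLogicalStore("other-store-2", "my-store"));
    }
}
```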
>
> Regarding 2) can you clarify your point about accessing stores uniformly?
> While I agree there will definitely be variance in the access pattern of
> different segments, I think it's unlikely that it will vary in any kind of
> predictable or deterministic way, hence it is not that useful to know in
> hindsight the difference reflected by the metrics.
>
> Cheers,
> Sophie
>
> On Tue, Jun 4, 2019 at 2:09 PM Bruno Cadonna  wrote:
>
> > Hi Guozhang,
> >
> > After some thoughts, I tend to be in favour of the option with metrics
> > for each physical RocksDB instance for the following reasons:
> >
> > 1) A user already needs to be aware of segmented state stores when
> > providing a custom RocksDBConfigSetter. In RocksDBConfigSetter one can
> > specify settings for a store depending on the name of the store. Since
> > segments (i.e. state store) in a segmented state store have names that
> > share a prefix but have suffixes that are created at runtime, increase
> > with time and are theoretically unbounded, a user needs to take
> > account of the segments to provide the settings for all (i.e. matching
> > the common prefix) or some (i.e. matching the common prefix and for
> > example suffixes according to a specific pattern) of the segments of a
> > specific segmented state store.
> > 2) Currently settings for RocksDB can only be specified by a user per
> > physical instance and not per logical instance. Deriving good settings
> > for physical instances from metrics for a logical instance can be hard
> > if the physical instances are not accessed uniformly. In segmented
> > state stores segments are not accessed uniformly.
> > 3) Simpler to implement and to get things done.
> >
> > Any thoughts on this from anybody?
> >
> > Best,
> > Bruno
> >
> > On Thu, May 30, 2019 at 8:33 PM Guozhang Wang 
> wrote:
> > >
> > > Hi Bruno:
> > >
> > > Regarding 2) I think either way has some shortcomings: exposing the
> > metrics
> > > per RocksDB instance for window / session stores exposes some
> > > implementation internals (that we use segmented stores) by forcing
> > users to
> > > be aware of them. E.g. what if we want to silently change the internal
> > > implementation by walking away from the segmented approach? On the
> > other
> > > hand, coalescing multiple RocksDB instances' metrics into a single one
> > per
> > > logical store also has some concerns, as I mentioned above. What
> > I'm
> > > thinking is actually that we could customize the aggregation logic and
> > > still have one set of metrics per logical store, which may be
> > composed
> > > of multiple RocksDB ones -- e.g. for `bytes-written-rate` we sum them
> > > across RocksDB instances, while for `memtable-hit-rate` we do a
> > > weighted average.
> > >
> > > Regarding logging levels, I think having DEBUG is fine, but that also
> > means
> > > without manually turning it on users 
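The aggregation sketched above (sum rate metrics across segments, weighted-average ratio metrics) could look like this; the per-segment access counts used as weights are hypothetical, not an actual Streams metric:

```java
public class MetricAggregation {
    // Sum a rate metric (e.g. bytes-written-rate) across segments.
    public static double sum(double[] rates) {
        double total = 0.0;
        for (double r : rates) total += r;
        return total;
    }

    // Weighted average of a ratio metric (e.g. memtable-hit-rate),
    // weighted by how often each segment was accessed.
    public static double weightedAverage(double[] rates, long[] accesses) {
        double weighted = 0.0;
        long totalAccesses = 0;
        for (int i = 0; i < rates.length; i++) {
            weighted += rates[i] * accesses[i];
            totalAccesses += accesses[i];
        }
        return totalAccesses == 0 ? 0.0 : weighted / totalAccesses;
    }

    public static void main(String[] args) {
        // 99% of reads hit the latest segment (hit rate 0.99),
        // 1% hit an older segment (hit rate 0.50).
        System.out.println(weightedAverage(new double[]{0.99, 0.50}, new long[]{99, 1}));
    }
}
```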

Re: [DISCUSS]KIP-216: IQ should throw different exceptions for different errors

2019-06-05 Thread Matthias J. Sax
Hi Vito,

sorry for dropping this discussion on the floor a while back. I was just
re-reading the KIP and discussion thread, and I think it is shaping up
nicely!

I like the overall hierarchy of the exception classes.

Some things are still not 100% clear:


You listed all methods that may throw an `InvalidStateStoreException`
atm. For the new exceptions, can any exception be thrown by any method?
It might help to understand this relationship better.

For example, StreamThreadNotStartedException, seems to only make sense
for `KafkaStreams#store()`?


For `StreamThreadNotRunningException`: should we rename it to
`KafkaStreamsNotRunningException`?


The descriptions of `StreamThreadNotRunningException` and
`StateStoreNotAvailableException` seem to be the same. From my
understanding, the description makes sense for
`StreamThreadNotRunningException` -- for
`StateStoreNotAvailableException` I was expecting/inferring from the
name that it would be thrown if no such store exists in the topology at
all (i.e., the user passed in an invalid/wrong store name). For this case,
the exception should be thrown only from `KafkaStreams#store()`?


For the internal exceptions:

`StateStoreClosedException` -- why can it be wrapped as
`StreamThreadNotStartedException`? It seems that the latter would only
be thrown by `KafkaStreams#store()` and thus would be thrown directly. A
closed-exception should only happen after a store was successfully
retrieved but cannot be queried any longer? Hence, converting/wrapping
it into a `StateStoreMigratedException` makes sense. I am also not sure
when a closed-exception would be wrapped by a
`StateStoreNotAvailableException` (implying my understanding as described
above)?

Same questions about `EmptyStateStoreException`.

Thinking about both internal exceptions twice, I am wondering if it
makes sense to have both internal exceptions at all? I have the
impression that it only makes sense to wrap them with a
`StateStoreMigratedException`, but if they are wrapped into the same
exception all the time, we can just remove both and throw
`StateStoreMigratedException` directly?


Last point: Why do we need to add the following?

> QueryableStoreType#setStreams(KafkaStreams streams);

John asked this question already and you replied to it. But I am not
sure what your answer means. Can you explain it in more detail?



Thanks for your patience on this KIP!



-Matthias






On 11/11/18 4:55 AM, Vito Jeng wrote:
> Hi, Matthias,
> 
> KIP already updated.
> 
>> - StateStoreClosedException:
>>   will be wrapped to StateStoreMigratedException or
> StateStoreNotAvailableException later.
>> Can you clarify the cases (ie, when will it be wrapped with the one or
> the other)?
> 
> For example, in the implementation(CompositeReadOnlyKeyValueStore#get), we
> get all stores first, and then call ReadOnlyKeyValueStore#get to get value
> in every store iteration.
> 
> When calling ReadOnlyKeyValueStore#get, the StateStoreClosedException will
> be thrown if the state store is not open.
> We need catch StateStoreClosedException and wrap it in different exception
> type:
>   * If the stream's state is CREATED, we wrap StateStoreClosedException
> with StreamThreadNotStartedException. The user can retry until the state is RUNNING.
>   * If the stream's state is RUNNING / REBALANCING, the state store should
> be migrated, we wrap StateStoreClosedException with
> StateStoreMigratedException. User can rediscover the state store.
>   * If the stream's state is PENDING_SHUTDOWN / NOT_RUNNING / ERROR, the
> stream thread is not available, we wrap StateStoreClosedException with
> StateStoreNotAvailableException. User cannot retry when this exception is
> thrown.
> 
> 
>> - StateStoreIsEmptyException:
>>  I don't understand the semantic of this exception. Maybe it's a naming
> issue?
> 
> I think yes. :)
> Is `EmptyStateStoreException` better? (already updated in the KIP)
> 
> 
>> - StateStoreIsEmptyException:
>> will be wrapped to StateStoreMigratedException or
> StateStoreNotAvailableException later.
>> Also, can you clarify the cases (ie, when will it be wrapped with the one
> or the other)?
> 
> For example, in the implementation (CompositeReadOnlyKeyValueStore#get), we
> call StateStoreProvider#stores (WrappingStoreProvider#stores) to get all
> stores. An EmptyStateStoreException will be thrown when no store can be
> found, and then we need to catch it and wrap it in a different exception type:
>   * If the stream's state is CREATED, we wrap EmptyStateStoreException with
> StreamThreadNotStartedException. The user can retry until the state
> transitions to RUNNING.
>   * If the stream's state is RUNNING / REBALANCING, the state store should
> be migrated, so we wrap EmptyStateStoreException with
> StateStoreMigratedException. The user can rediscover the state store.
>   * If the stream's state is PENDING_SHUTDOWN / NOT_RUNNING / ERROR, the
> stream thread is not available, so we wrap EmptyStateStoreException with
> StateStoreNotAvailableException. The user cannot retry when this exception is
> 

Build failed in Jenkins: kafka-trunk-jdk11 #604

2019-06-05 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8305; Support default partitions & replication factor in

[jason] KAFKA-8386; Use COORDINATOR_NOT_AVAILABLE error when group is Dead

--
[...truncated 2.42 MB...]
kafka.utils.CommandLineUtilsTest > testParseArgs STARTED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseArgsWithMultipleDelimiters STARTED

kafka.utils.CommandLineUtilsTest > testParseArgsWithMultipleDelimiters PASSED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsDefaultValueIfNotExist 
STARTED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsDefaultValueIfNotExist 
PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgWithNoDelimiter STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgWithNoDelimiter PASSED

kafka.utils.CommandLineUtilsTest > 
testMaybeMergeOptionsDefaultOverwriteExisting STARTED

kafka.utils.CommandLineUtilsTest > 
testMaybeMergeOptionsDefaultOverwriteExisting PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsNotOverwriteExisting 
STARTED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsNotOverwriteExisting 
PASSED

kafka.utils.LoggingTest > testLog4jControllerIsRegistered STARTED

kafka.utils.LoggingTest > testLog4jControllerIsRegistered PASSED

kafka.utils.LoggingTest > testLogName STARTED

kafka.utils.LoggingTest > testLogName PASSED

kafka.utils.LoggingTest > testLogNameOverride STARTED

kafka.utils.LoggingTest > testLogNameOverride PASSED

> Task :connect:basic-auth-extension:test

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testSuccess STARTED
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by 
org.powermock.reflect.internal.WhiteboxImpl 
(file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.powermock/powermock-reflect/2.0.2/79df0e5792fba38278b90f9e22617f5684313017/powermock-reflect-2.0.2.jar)
 to method java.lang.Object.clone()
WARNING: Please consider reporting this to the maintainers of 
org.powermock.reflect.internal.WhiteboxImpl
WARNING: Use --illegal-access=warn to enable warnings of further illegal 
reflective access operations
WARNING: All illegal access operations will be denied in a future release

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testSuccess PASSED

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testBadCredential STARTED

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testBadCredential PASSED

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testBadPassword STARTED

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testBadPassword PASSED

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testUnknownBearer STARTED

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testUnknownBearer PASSED

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testUnknownLoginModule STARTED

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testUnknownLoginModule PASSED

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testUnknownCredentialsFile STARTED

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testUnknownCredentialsFile PASSED

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testEmptyCredentialsFile STARTED

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testEmptyCredentialsFile PASSED

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testNoFileOption STARTED

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testNoFileOption PASSED

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testPostWithoutAppropriateCredential STARTED

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testPostWithoutAppropriateCredential PASSED

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testPostNotChangingConnectorTask STARTED

org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilterTest > 
testPostNotChangingConnectorTask PASSED

> Task :connect:runtime:test
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by 
org.powermock.reflect.internal.WhiteboxImpl 
(file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.powermock/powermock-reflect/2.0.2/79df0e5792fba38278b90f9e22617f5684313017/powermock-reflect-2.0.2.jar)
 to method java.util.HashMap.hash(java.lang.Object)
WARNING: 

Build failed in Jenkins: kafka-trunk-jdk8 #3703

2019-06-05 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8305; Support default partitions & replication factor in

--
[...truncated 2.50 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: [VOTE] KIP-475: New Metric to Measure Number of Tasks on a Connector

2019-06-05 Thread Randall Hauch
Thanks, Cyrus.

+1 (binding)

Randall Hauch

On Wed, Jun 5, 2019 at 10:36 AM Andrew Schofield 
wrote:

> +1 (non-binding)
>
> Andrew Schofield
>
> On 05/06/2019, 14:04, "Ryanne Dolan"  wrote:
>
> +1 (non-binding)
>
> Thanks
> Ryanne
>
> On Tue, Jun 4, 2019, 11:29 PM Cyrus Vafadari 
> wrote:
>
> > Hi all,
> >
> > I'd like to start voting on the following KIP:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-475%3A+New+Metric+to+Measure+Number+of+Tasks+on+a+Connector
> >
> > Discussion thread:
> >
> >
> https://lists.apache.org/thread.html/bf7c92224aa798336c14d7e96ec8f2e3406c61879ec381a50652acfe%40%3Cdev.kafka.apache.org%3E
> >
> > Thanks!
> >
> > Cyrus
> >
>
>
>


Build failed in Jenkins: kafka-2.3-jdk8 #41

2019-06-05 Thread Apache Jenkins Server
See 

--
[...truncated 2.10 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED


Re: Newbie on lookout for work

2019-06-05 Thread Gwen Shapira
Hi,

You can look at our contribution guidelines:
https://kafka.apache.org/contributing

We have a number of open "newbie" tickets in JIRA, perhaps one of them
will be interesting to you:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20%3D%20Open%20AND%20labels%20in%20(Newbie%2C%20newbie)%20ORDER%20BY%20created%20DESC

On Wed, Jun 5, 2019 at 9:36 AM Dulvin Witharane  wrote:
>
> Hi All,
>
> I'm a newbie for FOSS and I'd like to start contributing to Apache Kafka.
> Any pointers for beginning would be appreciated.
>
> --
> *Witharane, D.R.H.*
>
> R Engineer
> Synopsys Inc,
> Colombo 08
> Mobile : *+94 77 6746781*
> Skype  : dulvin.rivindu
> Facebook  | LinkedIn
> 



-- 
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


[jira] [Resolved] (KAFKA-8386) Use COORDINATOR_NOT_AVAILABLE to replace UNKNOWN_MEMBER_ID when the group is not available

2019-06-05 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8386.

   Resolution: Fixed
Fix Version/s: 2.3.0

> Use COORDINATOR_NOT_AVAILABLE to replace UNKNOWN_MEMBER_ID when the group is 
> not available
> --
>
> Key: KAFKA-8386
> URL: https://issues.apache.org/jira/browse/KAFKA-8386
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
> Fix For: 2.3.0
>
>
> When the group is dead or unavailable on the coordinator, the current approach 
> is to return `UNKNOWN_MEMBER_ID` to let the member reset its generation and 
> rejoin. This is not particularly safe for static members, since resetting 
> `member.id` could delay the detection of a duplicate `instance.id`.
> Also, considering that group unavailability is mostly caused by coordinator 
> migration, it is preferable to trigger a coordinator rediscovery immediately 
> rather than require one more bounce. Thus, we decided to use 
> `COORDINATOR_NOT_AVAILABLE` as the top-level error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8400) Do not update follower replica state if the log read failed

2019-06-05 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8400.

   Resolution: Fixed
Fix Version/s: 2.4.0

> Do not update follower replica state if the log read failed
> ---
>
> Key: KAFKA-8400
> URL: https://issues.apache.org/jira/browse/KAFKA-8400
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.4.0
>
>
> In {{ReplicaManager.fetchMessages}}, we have the following logic to read from 
> the log and update follower state:
> {code}
> val result = readFromLocalLog(
>   replicaId = replicaId,
>   fetchOnlyFromLeader = fetchOnlyFromLeader,
>   fetchIsolation = fetchIsolation,
>   fetchMaxBytes = fetchMaxBytes,
>   hardMaxBytesLimit = hardMaxBytesLimit,
>   readPartitionInfo = fetchInfos,
>   quota = quota)
> if (isFromFollower) updateFollowerLogReadResults(replicaId, result)
> else result
> {code}
> The call to {{readFromLocalLog}} could fail for many reasons, in which case 
> we return a LogReadResult with an error set and all fields set to -1. The 
> problem is that we do not check for the error when updating the follower 
> state. As far as I can tell, this does not cause any correctness issues, but 
> we're just asking for trouble. It would be better to check the error before 
> proceeding to `Partition.updateReplicaLogReadResult`. 
> Perhaps even better would be to have {{readFromLocalLog}} return something 
> like {{Either[LogReadResult, Errors]}} so that we are forced to handle the 
> error.
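The guard the ticket asks for can be sketched in a few lines. This is a hypothetical Java sketch of the error check (Kafka's real code is Scala and the type names here are illustrative only): fold the read result into follower state only when the read carried no error.

```java
import java.util.Optional;

// Illustrative model of the KAFKA-8400 proposal: a read result that carries
// an optional error, and a guard that refuses to update follower state on failure.
public class LogReadGuard {

    // On failure, offsets are meaningless (the real code sets them to -1).
    record LogReadResult(long highWatermark, Optional<String> error) { }

    // Only a follower fetch with a successful read should update replica state.
    static boolean shouldUpdateFollowerState(boolean isFromFollower, LogReadResult result) {
        return isFromFollower && result.error().isEmpty();
    }

    public static void main(String[] args) {
        LogReadResult ok = new LogReadResult(42L, Optional.empty());
        LogReadResult failed = new LogReadResult(-1L, Optional.of("NOT_LEADER_FOR_PARTITION"));
        System.out.println(shouldUpdateFollowerState(true, ok));     // true
        System.out.println(shouldUpdateFollowerState(true, failed)); // false
    }
}
```

Encoding the success/failure split in the type, as the ticket's `Either[LogReadResult, Errors]` suggestion does, would force every caller through a check like this instead of leaving it optional.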



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8305) AdminClient should support creating topics with default partitions and replication factor

2019-06-05 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8305.

   Resolution: Fixed
 Assignee: Almog Gavra
Fix Version/s: 2.4.0

> AdminClient should support creating topics with default partitions and 
> replication factor
> -
>
> Key: KAFKA-8305
> URL: https://issues.apache.org/jira/browse/KAFKA-8305
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Reporter: Almog Gavra
>Assignee: Almog Gavra
>Priority: Major
> Fix For: 2.4.0
>
>
> Today, the AdminClient creates topics by requiring a `NewTopic` object, which 
> must contain either partitions and replicas or an exact broker mapping (which 
> then infers partitions and replicas). Some users, however, could benefit from 
> just using the cluster default for replication factor but may not want to use 
> auto topic creation.
> NOTE: I am planning on working on this, but I do not have permissions to 
> assign this ticket to myself.
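The "use the cluster default" behavior requested here boils down to a sentinel that the broker resolves. The following is a self-contained sketch of that resolution under stated assumptions: the sentinel value and helper are hypothetical, not the actual AdminClient wire format.

```java
// Sketch of broker-side resolution of a "use default" sentinel for
// partition count / replication factor (illustrative names and values).
public class TopicDefaults {

    static final int USE_BROKER_DEFAULT = -1; // hypothetical sentinel

    // Fall back to the broker's configured default when the client sent the sentinel.
    static int effective(int requested, int brokerDefault) {
        return requested == USE_BROKER_DEFAULT ? brokerDefault : requested;
    }

    public static void main(String[] args) {
        int defaultPartitions = 6; // e.g. broker config num.partitions=6
        System.out.println(effective(USE_BROKER_DEFAULT, defaultPartitions)); // 6
        System.out.println(effective(12, defaultPartitions));                 // 12
    }
}
```

The point of the ticket is to expose this fallback through topic creation requests, so clients get broker defaults without relying on auto topic creation.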



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [kafka-clients] [VOTE] 2.3.0 RC1

2019-06-05 Thread Jason Gustafson
If we get in KAFKA-8487, we may as well do KAFKA-8386 since it causes a
similar problem. The patch is ready to be merged.

-Jason

On Wed, Jun 5, 2019 at 1:29 PM Guozhang Wang  wrote:

> Hello Colin,
>
> I caught an issue which would affect KIP-345 (
> https://issues.apache.org/jira/browse/KAFKA-8487) and hence may need to be
> considered a blocker for this release.
>
>
> Guozhang
>
> On Tue, Jun 4, 2019 at 11:31 PM Colin McCabe  wrote:
>
> > On Tue, Jun 4, 2019, at 23:17, Colin McCabe wrote:
> > > Hi all,
> > >
> > > This is the first candidate for the release of Apache Kafka 2.3.0.
> > >
> > > This release includes many new features, including:
> > > * Support for incremental cooperative rebalancing
> > > * An in-memory session store and window store for Kafka Streams
> > > * An API for allowing users to determine what operations they are
> > authorized to perform on topics.
> > > * A new broker start time metric.
> > > * The ability for JMXTool to connect to secured RMI ports.
> > > * A new and improved API for setting topic and broker configurations.
> > > * Support for non-key joining in KTable
> >
> > One small correction here: support for non-key joining (KIP-213) slipped
> > from 2.3 due to time constraints.
> >
> > Regards,
> > Colin
> >
> > > * The ability to track the number of partitions which are under their
> > min ISR count.
> > > * The ability for consumers to opt out of automatic topic creation, even
> > > when it is enabled on the broker.
> > > * The ability to use external configuration stores.
> > > * Improved replica fetcher behavior when errors are encountered.
> > >
> > > Check out the release notes for the 2.3.0 release here:
> > > https://home.apache.org/~cmccabe/kafka-2.3.0-rc1/RELEASE_NOTES.html
> > >
> > > The vote will go until Friday, June 7th, or until we create another RC.
> > >
> > > * Kafka's KEYS file containing PGP keys we use to sign the release can
> > be found here:
> > > https://kafka.apache.org/KEYS
> > >
> > > * The release artifacts to be voted upon (source and binary) are here:
> > > https://home.apache.org/~cmccabe/kafka-2.3.0-rc1/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >
> > > * Javadoc:
> > > https://home.apache.org/~cmccabe/kafka-2.3.0-rc1/javadoc/
> > >
> > > * The tag to be voted upon (off the 2.3 branch) is the 2.3.0 tag:
> > > https://github.com/apache/kafka/releases/tag/2.3.0-rc1
> > >
> > > thanks,
> > > Colin
> > >
> > >
> >
> > > --
> > >  You received this message because you are subscribed to the Google
> > Groups "kafka-clients" group.
> > >  To unsubscribe from this group and stop receiving emails from it, send
> > an email to kafka-clients+unsubscr...@googlegroups.com.
> > >  To post to this group, send email to kafka-clie...@googlegroups.com.
> > >  Visit this group at https://groups.google.com/group/kafka-clients.
> > >  To view this discussion on the web visit
> >
> https://groups.google.com/d/msgid/kafka-clients/461015c6-d018-40f6-a018-eaadf5c25f23%40www.fastmail.com
> > <
> >
> https://groups.google.com/d/msgid/kafka-clients/461015c6-d018-40f6-a018-eaadf5c25f23%40www.fastmail.com?utm_medium=email_source=footer
> > >.
> > >  For more options, visit https://groups.google.com/d/optout.
> >
>
>
> --
> -- Guozhang
>


[jira] [Resolved] (KAFKA-8433) Give the opportunity to use serializers and deserializers with IntegrationTestUtils

2019-06-05 Thread Anthony C (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony C resolved KAFKA-8433.
--
Resolution: Won't Fix

> Give the opportunity to use serializers and deserializers with 
> IntegrationTestUtils
> ---
>
> Key: KAFKA-8433
> URL: https://issues.apache.org/jira/browse/KAFKA-8433
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Anthony C
>Priority: Minor
>
> Currently, each static method using a producer or a consumer doesn't allow 
> passing serializers or deserializers as arguments.
> Because of that, we are not able to mock the schema registry (for example), or 
> other producer / consumer specific attributes.
> To resolve that, we just need to add methods that take serializers or 
> deserializers as arguments.
> The Kafka producer and consumer constructors already accept null serializers 
> and deserializers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-434: Dead replica fetcher and log cleaner metrics

2019-06-05 Thread Colin McCabe
+1 (binding)

best,
Colin


On Wed, Jun 5, 2019, at 03:38, Viktor Somogyi-Vass wrote:
> Hi Folks,
> 
> This vote sunk a bit, I'd like to draw some attention to this again in the
> hope I get some feedback or votes.
> 
> Thanks,
> Viktor
> 
> On Tue, May 7, 2019 at 4:28 PM Harsha  wrote:
> 
> > Thanks for the kip. LGTM +1.
> >
> > -Harsha
> >
> > On Mon, Apr 29, 2019, at 8:14 AM, Viktor Somogyi-Vass wrote:
> > > Hi Jason,
> > >
> > > I too agree this is more of a problem in older versions and therefore we
> > > could backport it. Were you thinking of any specific versions? I guess the
> > > 2.x and 1.x versions are definitely targets here but I was thinking that we
> > > might not want to go further.
> > >
> > > Viktor
> > >
> > > On Mon, Apr 29, 2019 at 12:55 AM Stanislav Kozlovski <
> > stanis...@confluent.io>
> > > wrote:
> > >
> > > > Thanks for the work done, Viktor! +1 (non-binding)
> > > >
> > > > I strongly agree with Jason that this monitoring-focused KIP is worth
> > > > porting back to older versions. I am sure users will find it very useful.
> > > >
> > > > Best,
> > > > Stanislav
> > > >
> > > > On Fri, Apr 26, 2019 at 9:38 PM Jason Gustafson 
> > > > wrote:
> > > >
> > > > > Thanks, that works for me. +1
> > > > >
> > > > > By the way, we don't normally port KIPs to older releases, but I wonder
> > > > > if it's worth making an exception here. From recent experience, it tends
> > > > > to be the older versions that are more prone to fetcher failures. Thoughts?
> > > > >
> > > > > -Jason
> > > > >
> > > > > On Fri, Apr 26, 2019 at 5:18 AM Viktor Somogyi-Vass <
> > > > > viktorsomo...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Let me have a second thought, I'll just add the clientId instead to
> > > > > > follow the convention, so it'll change DeadFetcherThreadCount but with the
> > > > > > clientId tag.
> > > > > >
> > > > > > On Fri, Apr 26, 2019 at 11:29 AM Viktor Somogyi-Vass <
> > > > > > viktorsomo...@gmail.com> wrote:
> > > > > >
> > > > > > > Hi Jason,
> > > > > > >
> > > > > > > Yea I think it could make sense. In this case I would rename the
> > > > > > > DeadFetcherThreadCount to DeadReplicaFetcherThreadCount and introduce
> > > > > > > the metric you're referring to as DeadLogDirFetcherThreadCount.
> > > > > > > I'll update the KIP to reflect this.
> > > > > > >
> > > > > > > Viktor
> > > > > > >
> > > > > > > On Thu, Apr 25, 2019 at 8:07 PM Jason Gustafson <
> > ja...@confluent.io>
> > > > > > > wrote:
> > > > > > >
> > > > > > >> Hi Viktor,
> > > > > > >>
> > > > > > >> This looks good. Just one question I had is whether we may as well
> > > > > > >> cover the log dir fetchers as well.
> > > > > > >>
> > > > > > >> Thanks,
> > > > > > >> Jason
> > > > > > >>
> > > > > > >>
> > > > > > >> On Thu, Apr 25, 2019 at 7:46 AM Viktor Somogyi-Vass <
> > > > > > >> viktorsomo...@gmail.com>
> > > > > > >> wrote:
> > > > > > >>
> > > > > > >> > Hi Folks,
> > > > > > >> >
> > > > > > >> > This thread sunk a bit but I'd like to bump it hoping to get some
> > > > > > >> > feedback and/or votes.
> > > > > > >> >
> > > > > > >> > Thanks,
> > > > > > >> > Viktor
> > > > > > >> >
> > > > > > >> > On Thu, Mar 28, 2019 at 8:47 PM Viktor Somogyi-Vass <
> > > > > > >> > viktorsomo...@gmail.com>
> > > > > > >> > wrote:
> > > > > > >> >
> > > > > > >> > > Sorry, the end of the message cut off.
> > > > > > >> > >
> > > > > > >> > > So I tried to be consistent with the convention in LogManager,
> > > > > > >> > > hence the hyphens, and in AbstractFetcherManager, hence the camel
> > > > > > >> > > case. It would be nice though to decide on one convention across
> > > > > > >> > > the whole project, however it requires a major refactor
> > > > > > >> > > (especially for the components that leverage metrics for
> > > > > > >> > > monitoring).
> > > > > > >> > >
> > > > > > >> > > Thanks,
> > > > > > >> > > Viktor
> > > > > > >> > >
> > > > > > >> > > On Thu, Mar 28, 2019 at 8:44 PM Viktor Somogyi-Vass <
> > > > > > >> > > viktorsomo...@gmail.com> wrote:
> > > > > > >> > >
> > > > > > >> > >> Hi Dhruvil,
> > > > > > >> > >>
> > > > > > >> > >> Thanks for the feedback and the vote. I fixed the typo in
> > the
> > > > > KIP.
> > > > > > >> > >> The naming is interesting though. Unfortunately Kafka overall
> > > > > > >> > >> is not consistent in metric naming, but at least I tried to be
> > > > > > >> > >> consistent among the other metrics used in LogManager
> > > > > > >> > >>
> > > > > > >> > >> On Thu, Mar 28, 2019 at 7:32 PM Dhruvil Shah <
> > > > > dhru...@confluent.io
> > > > > > >
> > > > > > >> > >> wrote:
> > > > > > >> > >>
> > > > > > >> > >>> Thanks for the KIP, Viktor! This is a useful addition. +1
> > > > > overall.
> > > > > > >> > >>>
> > > > > > >> > 

Re: [kafka-clients] [VOTE] 2.3.0 RC1

2019-06-05 Thread Guozhang Wang
Hello Colin,

I caught an issue which would affect KIP-345 (
https://issues.apache.org/jira/browse/KAFKA-8487) and hence may need to be
considered a blocker for this release.


Guozhang

On Tue, Jun 4, 2019 at 11:31 PM Colin McCabe  wrote:

> On Tue, Jun 4, 2019, at 23:17, Colin McCabe wrote:
> > Hi all,
> >
> > This is the first candidate for the release of Apache Kafka 2.3.0.
> >
> > This release includes many new features, including:
> > * Support for incremental cooperative rebalancing
> > * An in-memory session store and window store for Kafka Streams
> > * An API for allowing users to determine what operations they are
> authorized to perform on topics.
> > * A new broker start time metric.
> > * The ability for JMXTool to connect to secured RMI ports.
> > * A new and improved API for setting topic and broker configurations.
> > * Support for non-key joining in KTable
>
> One small correction here: support for non-key joining (KIP-213) slipped
> from 2.3 due to time constraints.
>
> Regards,
> Colin
>
> > * The ability to track the number of partitions which are under their
> min ISR count.
> > * The ability for consumers to opt out of automatic topic creation, even
> when it is enabled on the broker.
> > * The ability to use external configuration stores.
> > * Improved replica fetcher behavior when errors are encountered.
> >
> > Check out the release notes for the 2.3.0 release here:
> > https://home.apache.org/~cmccabe/kafka-2.3.0-rc1/RELEASE_NOTES.html
> >
> > The vote will go until Friday, June 7th, or until we create another RC.
> >
> > * Kafka's KEYS file containing PGP keys we use to sign the release can
> be found here:
> > https://kafka.apache.org/KEYS
> >
> > * The release artifacts to be voted upon (source and binary) are here:
> > https://home.apache.org/~cmccabe/kafka-2.3.0-rc1/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~cmccabe/kafka-2.3.0-rc1/javadoc/
> >
> > * The tag to be voted upon (off the 2.3 branch) is the 2.3.0 tag:
> > https://github.com/apache/kafka/releases/tag/2.3.0-rc1
> >
> > thanks,
> > Colin
> >
> >
>
> > --
> >  You received this message because you are subscribed to the Google
> Groups "kafka-clients" group.
> >  To unsubscribe from this group and stop receiving emails from it, send
> an email to kafka-clients+unsubscr...@googlegroups.com.
> >  To post to this group, send email to kafka-clie...@googlegroups.com.
> >  Visit this group at https://groups.google.com/group/kafka-clients.
> >  To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/461015c6-d018-40f6-a018-eaadf5c25f23%40www.fastmail.com
> <
> https://groups.google.com/d/msgid/kafka-clients/461015c6-d018-40f6-a018-eaadf5c25f23%40www.fastmail.com?utm_medium=email_source=footer
> >.
> >  For more options, visit https://groups.google.com/d/optout.
>


-- 
-- Guozhang


[jira] [Resolved] (KAFKA-8486) How to commit offset via Kafka

2019-06-05 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-8486.

Resolution: Invalid

> How to commit offset via Kafka 
> ---
>
> Key: KAFKA-8486
> URL: https://issues.apache.org/jira/browse/KAFKA-8486
> Project: Kafka
>  Issue Type: Wish
>  Components: consumer
>Affects Versions: 2.2.1
>Reporter: Stanislav 
>Priority: Trivial
>
> from kafka import KafkaConsumer
>  from json import loads
>  import vertica_python
>  from datetime import datetime as dt
> 
> conn_vertica = {'host': '', 'port': 5433, 'user': '', 'password': '',
> 'database': '', 'use_prepared_statements': True}
> conn_to = conn_vertica
> 
> def load():
>  parsed_topic_name = 'orderSummary'
>  consumer = KafkaConsumer(parsed_topic_name, auto_offset_reset='earliest',
>  bootstrap_servers=['us-kafka-broker:9092'],
>  enable_auto_commit=False,  # note: offsets are never committed below
>  group_id="my_group",
>  value_deserializer=lambda x: loads(x.decode('utf-8'))
>  )
>  timeout = 20
>  max_len = 10
>  res = []
>  t1 = dt.now()
>  while (dt.now() - t1).seconds < timeout or len(res) < max_len:
>  msgs = consumer.poll()
>  print(msgs)
>  for v in msgs.values():
>  res += v
>  with vertica_python.connect(**conn_to) as conn_2:
>  curs2 = conn_2.cursor()
>  if res:
>  curs2.executemany('''
>  INSERT INTO stage.FS_Orders_from_kafka (load_dtm, topic_name, partition_id, 
> "offset", value) 
>  VALUES (?, ?, ?, ?, ?)''', [(r.timestamp, r.topic, r.partition, r.offset, 
> r.value) for r in res])
>  curs2.execute('COMMIT')
>  else:
>  print('Nothing!')
>  consumer.close()
> load()
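The snippet above disables auto-commit (enable_auto_commit=False) but never calls commit, which appears to be the gap behind the question. In kafka-python the usual pattern is to call consumer.commit() after the rows are safely written, passing a mapping from TopicPartition to OffsetAndMetadata built from the last processed offsets; by Kafka convention the committed offset is the next offset to read, i.e. last processed + 1. A stdlib-only sketch of building that commit map ("Record" is a hypothetical stand-in for the consumer-record type):

```python
# Sketch: compute the offsets to commit (last processed offset + 1 per
# topic/partition) from a batch of consumed records. "Record" is a
# hypothetical stand-in for the client library's consumer-record type.
from collections import namedtuple

Record = namedtuple("Record", ["topic", "partition", "offset"])

def offsets_to_commit(records):
    """Return {(topic, partition): next_offset_to_read} for a batch."""
    out = {}
    for r in records:
        key = (r.topic, r.partition)
        # Kafka convention: commit the offset of the NEXT record to consume.
        out[key] = max(out.get(key, 0), r.offset + 1)
    return out

batch = [Record("orderSummary", 0, 41), Record("orderSummary", 0, 42),
         Record("orderSummary", 1, 7)]
print(offsets_to_commit(batch))
# {('orderSummary', 0): 43, ('orderSummary', 1): 8}
```

This map is what would be handed to the real client's commit call once the database transaction succeeds.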



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8488) FetchSessionHandler logging creates 73 MB allocation in TLAB which could be a no-op

2019-06-05 Thread Wenshuai Hou (JIRA)
Wenshuai Hou created KAFKA-8488:
---

 Summary: FetchSessionHandler logging creates 73 MB allocation in 
TLAB which could be a no-op 
 Key: KAFKA-8488
 URL: https://issues.apache.org/jira/browse/KAFKA-8488
 Project: Kafka
  Issue Type: Improvement
Reporter: Wenshuai Hou
 Attachments: image-2019-06-05-14-04-35-668.png

!image-2019-06-05-14-04-35-668.png!
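The ticket attaches only a profiler screenshot, but the title points at a common pattern: log-message arguments built eagerly even when the log level is disabled, so the allocation "could be a no-op". A hedged illustration in Python (the actual fix in the Java client would use SLF4J's parameterized logging or an isDebugEnabled guard):

```python
import logging

logger = logging.getLogger("fetch-session")
logger.setLevel(logging.INFO)  # DEBUG records will be dropped

calls = {"n": 0}

def expensive_summary():
    calls["n"] += 1          # stands in for building a large partition-map string
    return "partitions=..."

# Eager: the argument is evaluated even though the DEBUG record is dropped.
logger.debug("session state: %s" % expensive_summary())

# Guarded: neither the call nor the formatting happens when DEBUG is off.
if logger.isEnabledFor(logging.DEBUG):
    logger.debug("session state: %s", expensive_summary())

print(calls["n"])  # 1 -- only the eager version paid the allocation cost
```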



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8265) Connect Client Config Override policy

2019-06-05 Thread Colin P. McCabe (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe resolved KAFKA-8265.

   Resolution: Fixed
Fix Version/s: 2.3

> Connect Client Config Override policy
> -
>
> Key: KAFKA-8265
> URL: https://issues.apache.org/jira/browse/KAFKA-8265
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Magesh kumar Nandakumar
>Assignee: Magesh kumar Nandakumar
>Priority: Major
> Fix For: 2.3
>
>
> Right now, each source connector and sink connector inherit their client 
> configurations from the worker properties. Within the worker properties, all 
> configurations that have a prefix of "producer." or "consumer." are applied 
> to all source connectors and sink connectors respectively.
> We should allow the "producer." or "consumer." configurations to be overridden 
> in accordance with an override policy determined by the administrator.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Newbie on lookout for work

2019-06-05 Thread Dulvin Witharane
Hi All,

I'm a newbie to FOSS and I'd like to start contributing to Apache Kafka.
Any pointers for getting started would be appreciated.

-- 
Witharane, D.R.H.

R Engineer
Synopsys Inc,
Colombo 08
Mobile : +94 77 6746781
Skype  : dulvin.rivindu



[jira] [Resolved] (KAFKA-3816) Provide more context in Kafka Connect log messages using MDC

2019-06-05 Thread Colin P. McCabe (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe resolved KAFKA-3816.

Resolution: Fixed

> Provide more context in Kafka Connect log messages using MDC
> 
>
> Key: KAFKA-3816
> URL: https://issues.apache.org/jira/browse/KAFKA-3816
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.9.0.1
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Critical
> Fix For: 2.3.0
>
>
> Currently it is relatively difficult to correlate individual log messages 
> with the various threads and activities that are going on within a Kafka 
> Connect worker, let alone a cluster of workers. Log messages should provide 
> more context to make it easier and to allow log scraping tools to coalesce 
> related log messages.
> One simple way to do this is by using _mapped diagnostic contexts_, or MDC. 
> This is supported by the SLF4J API, and by the Logback and Log4J logging 
> frameworks.
> Basically, the framework would be changed so that each thread is configured 
> with one or more MDC parameters using the 
> {{org.slf4j.MDC.put(String,String)}} method in SLF4J. Once that thread is 
> configured, all log messages made using that thread have that context. The 
> logs can then be configured to use those parameters.
> It would be ideal to define a convention for connectors and the Kafka Connect 
> framework. A single set of MDC parameters means that the logging framework 
> can use the specific parameters on its message formats.
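The MDC pattern described above — per-thread key/value context that the logging framework attaches to every message — can be mimicked with Python's stdlib logging as an analogy (the actual Connect work uses org.slf4j.MDC; all names here are illustrative):

```python
import logging
import threading

_mdc = threading.local()  # per-thread context, analogous to SLF4J's MDC

class MDCFilter(logging.Filter):
    """Copy the current thread's context onto every record (cf. MDC.put)."""
    def filter(self, record):
        record.connector = getattr(_mdc, "connector", "-")
        return True

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("[%(connector)s] %(message)s"))
handler.addFilter(MDCFilter())
log = logging.getLogger("connect")
log.addHandler(handler)
log.setLevel(logging.INFO)

_mdc.connector = "hdfs-sink-0"  # analogous to MDC.put("connector", ...)
log.info("flushing records")    # rendered as: [hdfs-sink-0] flushing records
```

Once the thread is tagged, every message it logs carries the context without the call sites changing — which is exactly the property the issue wants for log-scraping tools.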



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8487) Consumer should not resetGeneration upon REBALANCE_IN_PROGRESS in commit response handler

2019-06-05 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-8487:


 Summary: Consumer should not resetGeneration upon 
REBALANCE_IN_PROGRESS in commit response handler
 Key: KAFKA-8487
 URL: https://issues.apache.org/jira/browse/KAFKA-8487
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Reporter: Guozhang Wang
Assignee: Guozhang Wang


In consumer, we handle the errors in sync / heartbeat / join response such that:

1. UNKNOWN_MEMBER_ID / ILLEGAL_GENERATION: we reset the generation and request 
re-join.

2. REBALANCE_IN_PROGRESS: do nothing if a rejoin will be executed, or request 
re-join explicitly.

However, for commit responses, we require resetGeneration for 
REBALANCE_IN_PROGRESS as well. This is flawed in two ways:

1. As in KIP-345, with static members, resetting generation will lose the 
member.id and hence may cause incorrect fencing.

2. As in KIP-429, resetting generation will cause partitions to be "lost" 
unnecessarily before re-joining the group. 
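The proposed behavior, loosely sketched: handle REBALANCE_IN_PROGRESS in the commit path the same way as in the heartbeat/join paths — request a rejoin while keeping the member's generation — and reserve the generation reset for the fatal errors. The function below is illustrative, not the real KafkaConsumer internals:

```python
# Illustrative sketch of the proposed commit-error handling (not the actual
# consumer code). Returns the action the coordinator handler should take.
def on_commit_error(error):
    if error in ("UNKNOWN_MEMBER_ID", "ILLEGAL_GENERATION"):
        # The member is genuinely out of the group: reset and rejoin.
        return "reset_generation_and_rejoin"
    if error == "REBALANCE_IN_PROGRESS":
        # Proposed: keep member.id / generation and just rejoin, so static
        # members (KIP-345) are not fenced incorrectly and owned partitions
        # (KIP-429) are not "lost" unnecessarily.
        return "request_rejoin"
    return "raise"

print(on_commit_error("REBALANCE_IN_PROGRESS"))  # request_rejoin
```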



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-475: New Metric to Measure Number of Tasks on a Connector

2019-06-05 Thread Andrew Schofield
+1 (non-binding)

Andrew Schofield

On 05/06/2019, 14:04, "Ryanne Dolan"  wrote:

+1 (non-binding)

Thanks
Ryanne

On Tue, Jun 4, 2019, 11:29 PM Cyrus Vafadari  wrote:

> Hi all,
>
> I'd like to start voting on the following KIP:
>
> 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-475%3A+New+Metric+to+Measure+Number+of+Tasks+on+a+Connector
>
> Discussion thread:
>
> 
https://lists.apache.org/thread.html/bf7c92224aa798336c14d7e96ec8f2e3406c61879ec381a50652acfe@%3Cdev.kafka.apache.org%3E
>
> Thanks!
>
> Cyrus
>




Re: [VOTE] KIP-475: New Metric to Measure Number of Tasks on a Connector

2019-06-05 Thread Ryanne Dolan
+1 (non-binding)

Thanks
Ryanne

On Tue, Jun 4, 2019, 11:29 PM Cyrus Vafadari  wrote:

> Hi all,
>
> I'd like to start voting on the following KIP:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-475%3A+New+Metric+to+Measure+Number+of+Tasks+on+a+Connector
>
> Discussion thread:
>
> https://lists.apache.org/thread.html/bf7c92224aa798336c14d7e96ec8f2e3406c61879ec381a50652acfe@%3Cdev.kafka.apache.org%3E
>
> Thanks!
>
> Cyrus
>


Re: kafka connect stops consuming data when kafka broker goes down

2019-06-05 Thread Paul Whalen
It’s not totally clear, but this may be 
https://issues.apache.org/jira/plugins/servlet/mobile#issue/KAFKA-7941

For which there is a fix that is very nearly approved: 
https://github.com/apache/kafka/pull/6283

Paul

> On Jun 5, 2019, at 1:26 AM, Srinivas, Kaushik (Nokia - IN/Bangalore) 
>  wrote:
> 
> Hello,
> Does anyone have any information on this issue?
> Created a critical ticket for the same, since this is a major stability issue 
> for connect framework.
> https://issues.apache.org/jira/browse/KAFKA-8485?filter=-2
> 
> Thanks.
> Kaushik,
> NOKIA
> 
> From: Srinivas, Kaushik (Nokia - IN/Bangalore)
> Sent: Monday, June 03, 2019 5:22 PM
> To: dev@kafka.apache.org
> Cc: Basil Brito, Aldan (Nokia - IN/Bangalore) 
> Subject: kafka connect stops consuming data when kafka broker goes down
> 
> Hello kafka dev,
> 
> We are encountering an issue when kafka connect is running hdfs sink 
> connector pulling data from kafka and writing to hdfs location.
> In between when the data is flowing in the pipeline from producer -> kafka 
> topic -> kafka connect hdfs sink connector -> hdfs,
> If even one of the kafka brokers goes down, the connect framework stops 
> responding: it stops consuming records and the REST API becomes unresponsive.
> 
> Until the kafka connect framework is restarted, it would not pull the data 
> from kafka and REST api remains inactive. Nothing is coming in the logs as 
> well.
> Checked the topics in kafka used by connect, everything has been reassigned 
> to another broker and has the leader available.
> 
> Has anyone encountered this issue ? what would be the expected behavior ?
> 
> Thanks in advance
> Kaushik


Re: [VOTE] KIP-434: Dead replica fetcher and log cleaner metrics

2019-06-05 Thread Viktor Somogyi-Vass
Hi Folks,

This vote sank a bit, so I'd like to draw some attention to it again in the
hope of getting some feedback or votes.

Thanks,
Viktor

On Tue, May 7, 2019 at 4:28 PM Harsha  wrote:

> Thanks for the kip. LGTM +1.
>
> -Harsha
>
> On Mon, Apr 29, 2019, at 8:14 AM, Viktor Somogyi-Vass wrote:
> > Hi Jason,
> >
> > I too agree this is more of a problem in older versions and therefore we
> > could backport it. Were you thinking of any specific versions? I guess
> the
> > 2.x and 1.x versions are definitely targets here but I was thinking that
> we
> > might not want to go further back.
> >
> > Viktor
> >
> > On Mon, Apr 29, 2019 at 12:55 AM Stanislav Kozlovski <
> stanis...@confluent.io>
> > wrote:
> >
> > > Thanks for the work done, Viktor! +1 (non-binding)
> > >
> > > I strongly agree with Jason that this monitoring-focused KIP is worth
> > > porting back to older versions. I am sure users will find it very
> useful
> > >
> > > Best,
> > > Stanislav
> > >
> > > On Fri, Apr 26, 2019 at 9:38 PM Jason Gustafson 
> > > wrote:
> > >
> > > > Thanks, that works for me. +1
> > > >
> > > > By the way, we don't normally port KIPs to older releases, but I
> wonder
> > > if
> > > > it's worth making an exception here. From recent experience, it
> tends to
> > > be
> > > > the older versions that are more prone to fetcher failures. Thoughts?
> > > >
> > > > -Jason
> > > >
> > > > On Fri, Apr 26, 2019 at 5:18 AM Viktor Somogyi-Vass <
> > > > viktorsomo...@gmail.com>
> > > > wrote:
> > > >
> > > > > Let me have a second thought, I'll just add the clientId instead to
> > > > follow
> > > > > the convention, so it'll change DeadFetcherThreadCount but with the
> > > > > clientId tag.
> > > > >
> > > > > On Fri, Apr 26, 2019 at 11:29 AM Viktor Somogyi-Vass <
> > > > > viktorsomo...@gmail.com> wrote:
> > > > >
> > > > > > Hi Jason,
> > > > > >
> > > > > > Yea I think it could make sense. In this case I would rename the
> > > > > > DeadFetcherThreadCount to DeadReplicaFetcherThreadCount and
> introduce
> > > > the
> > > > > > metric you're referring to as DeadLogDirFetcherThreadCount.
> > > > > > I'll update the KIP to reflect this.
> > > > > >
> > > > > > Viktor
> > > > > >
> > > > > > On Thu, Apr 25, 2019 at 8:07 PM Jason Gustafson <
> ja...@confluent.io>
> > > > > > wrote:
> > > > > >
> > > > > >> Hi Viktor,
> > > > > >>
> > > > > >> This looks good. Just one question I had is whether we may as
> well
> > > > cover
> > > > > >> the log dir fetchers as well.
> > > > > >>
> > > > > >> Thanks,
> > > > > >> Jason
> > > > > >>
> > > > > >>
> > > > > >> On Thu, Apr 25, 2019 at 7:46 AM Viktor Somogyi-Vass <
> > > > > >> viktorsomo...@gmail.com>
> > > > > >> wrote:
> > > > > >>
> > > > > >> > Hi Folks,
> > > > > >> >
> > > > > >> > This thread sunk a bit but I'd like to bump it hoping to get
> some
> > > > > >> feedback
> > > > > >> > and/or votes.
> > > > > >> >
> > > > > >> > Thanks,
> > > > > >> > Viktor
> > > > > >> >
> > > > > >> > On Thu, Mar 28, 2019 at 8:47 PM Viktor Somogyi-Vass <
> > > > > >> > viktorsomo...@gmail.com>
> > > > > >> > wrote:
> > > > > >> >
> > > > > >> > > Sorry, the end of the message cut off.
> > > > > >> > >
> > > > > >> > > So I tried to be consistent with the convention in
> LogManager,
> > > > hence
> > > > > >> the
> > > > > >> > > hyphens and in AbstractFetcherManager, hence the camel
> case. It
> > > > > would
> > > > > >> be
> > > > > >> > > nice though to decide with one convention across the whole
> > > > project,
> > > > > >> > however
> > > > > >> > > it requires a major refactor (especially for the components
> that
> > > > > >> leverage
> > > > > >> > > metrics for monitoring).
> > > > > >> > >
> > > > > >> > > Thanks,
> > > > > >> > > Viktor
> > > > > >> > >
> > > > > >> > > On Thu, Mar 28, 2019 at 8:44 PM Viktor Somogyi-Vass <
> > > > > >> > > viktorsomo...@gmail.com> wrote:
> > > > > >> > >
> > > > > >> > >> Hi Dhruvil,
> > > > > >> > >>
> > > > > >> > >> Thanks for the feedback and the vote. I fixed the typo in
> the
> > > > KIP.
> > > > > >> > >> The naming is interesting though. Unfortunately kafka
> overall
> > > is
> > > > > not
> > > > > >> > >> consistent in metric naming but at least I tried to be
> > > consistent
> > > > > >> among
> > > > > >> > the
> > > > > >> > >> other metrics used in LogManager
> > > > > >> > >>
> > > > > >> > >> On Thu, Mar 28, 2019 at 7:32 PM Dhruvil Shah <
> > > > dhru...@confluent.io
> > > > > >
> > > > > >> > >> wrote:
> > > > > >> > >>
> > > > > >> > >>> Thanks for the KIP, Viktor! This is a useful addition. +1
> > > > overall.
> > > > > >> > >>>
> > > > > >> > >>> Minor nits:
> > > > > >> > >>> > I propose to add three gauge: DeadFetcherThreadCount
> for the
> > > > > >> fetcher
> > > > > >> > >>> threads, log-cleaner-dead-thread-count for the log
> cleaner.
> > > > > >> > >>> I think you meant two instead of three.
> > > > > >> > >>>
> > > > > >> > >>> Also, would it make sense to name these metrics
> consistently,
> > > > > >> something
> > > > > >> > >>> 
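The gauges discussed in this thread (DeadFetcherThreadCount and the log-cleaner equivalent) reduce to counting manager-owned threads that have died. A stdlib sketch of that check (illustrative only; the KIP's metrics are JMX gauges inside the broker):

```python
# Illustrative gauge body: count started threads that have exited
# (cf. the KIP's DeadFetcherThreadCount / log-cleaner dead-thread gauges).
import threading
import time

def dead_thread_count(threads):
    return sum(1 for t in threads if not t.is_alive())

fetchers = [threading.Thread(target=time.sleep, args=(0.01,)) for _ in range(3)]
for t in fetchers:
    t.start()
fetchers[0].join()  # let one fetcher finish -> it now counts as dead
print(dead_thread_count(fetchers) >= 1)  # True
```

A monitoring system alerting on this gauge rising above zero is the use case the KIP targets: fetcher or cleaner threads dying silently otherwise go unnoticed.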

Re: [ANNOUNCE] Apache Kafka 2.2.1

2019-06-05 Thread Stanislav Kozlovski
Thanks to all the contributors whose changes are part of this release and
thank you Vahid for driving the release :)

On Tue, Jun 4, 2019 at 8:03 PM Rajini Sivaram 
wrote:

> Thanks for managing the release, Vahid!
>
> Regards,
>
> Rajini
>
> On Tue, Jun 4, 2019 at 4:45 PM Viktor Somogyi-Vass <
> viktorsomo...@gmail.com>
> wrote:
>
> > Thanks Vahid!
> >
> > On Tue, Jun 4, 2019 at 5:20 PM Colin McCabe  wrote:
> >
> > > Thanks, Vahid.
> > >
> > > best,
> > > Colin
> > >
> > > On Mon, Jun 3, 2019, at 07:23, Vahid Hashemian wrote:
> > > > The Apache Kafka community is pleased to announce the release for
> > Apache
> > > > Kafka 2.2.1
> > > >
> > > > This is a bugfix release for Kafka 2.2.0. All of the changes in this
> > > > release can be found in the release notes:
> > > > https://www.apache.org/dist/kafka/2.2.1/RELEASE_NOTES.html
> > > >
> > > > You can download the source and binary release from:
> > > > https://kafka.apache.org/downloads#2.2.1
> > > >
> > > >
> > >
> >
> ---
> > > >
> > > > Apache Kafka is a distributed streaming platform with four core APIs:
> > > >
> > > > ** The Producer API allows an application to publish a stream of records
> > to
> > > > one or more Kafka topics.
> > > >
> > > > ** The Consumer API allows an application to subscribe to one or more
> > > > topics and process the stream of records produced to them.
> > > >
> > > > ** The Streams API allows an application to act as a stream
> processor,
> > > > consuming an input stream from one or more topics and producing an
> > output
> > > > stream to one or more output topics, effectively transforming the
> input
> > > > streams to output streams.
> > > >
> > > > ** The Connector API allows building and running reusable producers
> or
> > > > consumers that connect Kafka topics to existing applications or data
> > > > systems. For example, a connector to a relational database might
> > capture
> > > > every change to a table.
> > > >
> > > > With these APIs, Kafka can be used for two broad classes of
> > application:
> > > >
> > > > ** Building real-time streaming data pipelines that reliably get data
> > > > between systems or applications.
> > > >
> > > > ** Building real-time streaming applications that transform or react
> to
> > > the
> > > > streams of data.
> > > >
> > > > Apache Kafka is in use at large and small companies worldwide,
> > including
> > > > Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest,
> > Rabobank,
> > > > Target, The New York Times, Uber, Yelp, and Zalando, among others.
> > > >
> > > > A big thank you for the following 30 contributors to this release!
> > > >
> > > > Anna Povzner, Arabelle Hou, A. Sophie Blee-Goldman, Bill Bejeck, Bob
> > > > Barrett, Chris Egerton, Colin Patrick McCabe, Cyrus Vafadari, Dhruvil
> > > Shah,
> > > > Doroszlai, Attila, Guozhang Wang, huxi, Jason Gustafson, John
> Roesler,
> > > > Konstantine Karantasis, Kristian Aurlien, Lifei Chen, Magesh
> > Nandakumar,
> > > > Manikumar Reddy, Massimo Siani, Matthias J. Sax, Nicholas Parker,
> > > pkleindl,
> > > > Rajini Sivaram, Randall Hauch, Sebastián Ortega, Vahid Hashemian,
> > > Victoria
> > > > Bialas, Yaroslav Klymko, Zhanxiang (Patrick) Huang
> > > >
> > > > We welcome your help and feedback. For more information on how to
> > report
> > > > problems, and to get involved, visit the project website at
> > > > https://kafka.apache.org/
> > > >
> > > > Thank you!
> > > >
> > > > Regards,
> > > > --Vahid Hashemian
> > > >
> > >
> >
>


-- 
Best,
Stanislav


[jira] [Created] (KAFKA-8486) How to commit offset via Kafka

2019-06-05 Thread Stanislav (JIRA)
Stanislav  created KAFKA-8486:
-

 Summary: How to commit offset via Kafka 
 Key: KAFKA-8486
 URL: https://issues.apache.org/jira/browse/KAFKA-8486
 Project: Kafka
  Issue Type: Wish
  Components: consumer
Affects Versions: 2.2.1
Reporter: Stanislav 






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-2.3-jdk8 #40

2019-06-05 Thread Apache Jenkins Server
See 


Changes:

[cmccabe] MINOR: Update docs to say 2.3 (#6881)

--
[...truncated 2.82 MB...]
kafka.log.ProducerStateManagerTest > testPrepareUpdateDoesNotMutate PASSED

kafka.log.ProducerStateManagerTest > 
testSequenceNotValidatedForGroupMetadataTopic STARTED

kafka.log.ProducerStateManagerTest > 
testSequenceNotValidatedForGroupMetadataTopic PASSED

kafka.log.ProducerStateManagerTest > testLastStableOffsetCompletedTxn STARTED

kafka.log.ProducerStateManagerTest > testLastStableOffsetCompletedTxn PASSED

kafka.log.ProducerStateManagerTest > 
testLoadFromSnapshotRemovesNonRetainedProducers STARTED

kafka.log.ProducerStateManagerTest > 
testLoadFromSnapshotRemovesNonRetainedProducers PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffset STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffset PASSED

kafka.log.ProducerStateManagerTest > testTxnFirstOffsetMetadataCached STARTED

kafka.log.ProducerStateManagerTest > testTxnFirstOffsetMetadataCached PASSED

kafka.log.ProducerStateManagerTest > testCoordinatorFencedAfterReload STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencedAfterReload PASSED

kafka.log.ProducerStateManagerTest > testControlRecordBumpsEpoch STARTED

kafka.log.ProducerStateManagerTest > testControlRecordBumpsEpoch PASSED

kafka.log.ProducerStateManagerTest > 
testAcceptAppendWithoutProducerStateOnReplica STARTED

kafka.log.ProducerStateManagerTest > 
testAcceptAppendWithoutProducerStateOnReplica PASSED

kafka.log.ProducerStateManagerTest > testLoadFromCorruptSnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromCorruptSnapshotFile PASSED

kafka.log.ProducerStateManagerTest > testProducerSequenceWrapAround STARTED

kafka.log.ProducerStateManagerTest > testProducerSequenceWrapAround PASSED

kafka.log.ProducerStateManagerTest > testPidExpirationTimeout STARTED

kafka.log.ProducerStateManagerTest > testPidExpirationTimeout PASSED

kafka.log.ProducerStateManagerTest > testAcceptAppendWithSequenceGapsOnReplica 
STARTED

kafka.log.ProducerStateManagerTest > testAcceptAppendWithSequenceGapsOnReplica 
PASSED

kafka.log.ProducerStateManagerTest > testAppendTxnMarkerWithNoProducerState 
STARTED

kafka.log.ProducerStateManagerTest > testAppendTxnMarkerWithNoProducerState 
PASSED

kafka.log.ProducerStateManagerTest > testOldEpochForControlRecord STARTED

kafka.log.ProducerStateManagerTest > testOldEpochForControlRecord PASSED

kafka.log.ProducerStateManagerTest > 
testTruncateAndReloadRemovesOutOfRangeSnapshots STARTED

kafka.log.ProducerStateManagerTest > 
testTruncateAndReloadRemovesOutOfRangeSnapshots PASSED

kafka.log.ProducerStateManagerTest > testStartOffset STARTED

kafka.log.ProducerStateManagerTest > testStartOffset PASSED

kafka.log.ProducerStateManagerTest > testProducerSequenceInvalidWrapAround 
STARTED

kafka.log.ProducerStateManagerTest > testProducerSequenceInvalidWrapAround 
PASSED

kafka.log.ProducerStateManagerTest > testTruncateHead STARTED

kafka.log.ProducerStateManagerTest > testTruncateHead PASSED

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction STARTED

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction PASSED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged STARTED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged PASSED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[1] STARTED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[2] STARTED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[3] STARTED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[4] STARTED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[4] PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > cleanerConfigUpdateTest[0] 
STARTED

kafka.log.LogCleanerParameterizedIntegrationTest > cleanerConfigUpdateTest[0] 
PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] STARTED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] STARTED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerParameterizedIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleanerWithMessageFormatV0[0] STARTED

Re: [kafka-clients] [VOTE] 2.3.0 RC1

2019-06-05 Thread Colin McCabe
On Tue, Jun 4, 2019, at 23:17, Colin McCabe wrote:
> Hi all,
> 
> This is the first candidate for the release of Apache Kafka 2.3.0.
> 
> This release includes many new features, including:
> * Support for incremental cooperative rebalancing
> * An in-memory session store and window store for Kafka Streams
> * An API for allowing users to determine what operations they are authorized 
> to perform on topics.
> * A new broker start time metric.
> * The ability for JMXTool to connect to secured RMI ports.
> * A new and improved API for setting topic and broker configurations.
> * Support for non-key joining in KTable

One small correction here: support for non-key joining (KIP-213) slipped from 
2.3 due to time constraints.

Regards,
Colin

> * The ability to track the number of partitions which are under their min ISR 
> count.
> * The ability for consumers to opt out of automatic topic creation, even when 
> it is enabled on the broker.
> * The ability to use external configuration stores.
> * Improved replica fetcher behavior when errors are encountered.
> 
> Check out the release notes for the 2.3.0 release here:
> https://home.apache.org/~cmccabe/kafka-2.3.0-rc1/RELEASE_NOTES.html
> 
> The vote will go until Friday, June 7th, or until we create another RC.
> 
> * Kafka's KEYS file containing PGP keys we use to sign the release can be 
> found here:
> https://kafka.apache.org/KEYS
> 
> * The release artifacts to be voted upon (source and binary) are here:
> https://home.apache.org/~cmccabe/kafka-2.3.0-rc1/
> 
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> 
> * Javadoc:
> https://home.apache.org/~cmccabe/kafka-2.3.0-rc1/javadoc/
> 
> * The tag to be voted upon (off the 2.3 branch) is the 2.3.0 tag:
> https://github.com/apache/kafka/releases/tag/2.3.0-rc1
> 
> thanks,
> Colin
> 
> 

> --
>  You received this message because you are subscribed to the Google Groups 
> "kafka-clients" group.
>  To unsubscribe from this group and stop receiving emails from it, send an 
> email to kafka-clients+unsubscr...@googlegroups.com.
>  To post to this group, send email to kafka-clie...@googlegroups.com.
>  Visit this group at https://groups.google.com/group/kafka-clients.
>  To view this discussion on the web visit 
> https://groups.google.com/d/msgid/kafka-clients/461015c6-d018-40f6-a018-eaadf5c25f23%40www.fastmail.com
>  
> .
>  For more options, visit https://groups.google.com/d/optout.


RE: kafka connect stops consuming data when kafka broker goes down

2019-06-05 Thread Srinivas, Kaushik (Nokia - IN/Bangalore)
Hello,
Does anyone have any information on this issue?
Created a critical ticket for the same, since this is a major stability issue 
for connect framework.
https://issues.apache.org/jira/browse/KAFKA-8485?filter=-2

Thanks.
Kaushik,
NOKIA

From: Srinivas, Kaushik (Nokia - IN/Bangalore)
Sent: Monday, June 03, 2019 5:22 PM
To: dev@kafka.apache.org
Cc: Basil Brito, Aldan (Nokia - IN/Bangalore) 
Subject: kafka connect stops consuming data when kafka broker goes down

Hello kafka dev,

We are encountering an issue when kafka connect is running hdfs sink connector 
pulling data from kafka and writing to hdfs location.
In between when the data is flowing in the pipeline from producer -> kafka 
topic -> kafka connect hdfs sink connector -> hdfs,
If even one of the kafka brokers goes down, the connect framework stops 
responding: it stops consuming records and the REST API becomes unresponsive.

Until the kafka connect framework is restarted, it would not pull the data from 
kafka and REST api remains inactive. Nothing is coming in the logs as well.
Checked the topics in kafka used by connect, everything has been reassigned to 
another broker and has the leader available.

Has anyone encountered this issue ? what would be the expected behavior ?

Thanks in advance
Kaushik


Re: Contributor Mailing List

2019-06-05 Thread Matthias J. Sax
It's self-service: https://kafka.apache.org/contact



On 6/4/19 11:01 AM, surya teja wrote:
> Hi Team,
> Could you add me to contributor mailing list?
> 
> 
> Thanks,
> Surya
> 





[VOTE] 2.3.0 RC1

2019-06-05 Thread Colin McCabe
Hi all,

This is the first candidate for the release of Apache Kafka 2.3.0.

This release includes many new features, including:
* Support for incremental cooperative rebalancing.
* An in-memory session store and window store for Kafka Streams.
* An API for allowing users to determine what operations they are authorized to 
perform on topics.
* A new broker start time metric.
* The ability for JMXTool to connect to secured RMI ports.
* A new and improved API for setting topic and broker configurations.
* Support for non-key joining in KTable.
* The ability to track the number of partitions which are under their min ISR 
count.
* The ability for consumers to opt out of automatic topic creation, even when 
it is enabled on the broker.
* The ability to use external configuration stores.
* Improved replica fetcher behavior when errors are encountered.
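The consumer opt-out from automatic topic creation mentioned above is driven
by the new `allow.auto.create.topics` consumer config (KIP-361). A minimal
JDK-only sketch of the relevant properties; the bootstrap address and group id
are illustrative values, not part of the release notes.

```java
import java.util.Properties;

// Sketch of the 2.3 consumer opt-out from automatic topic creation.
// "allow.auto.create.topics" is the new consumer config (KIP-361);
// the bootstrap address and group id below are assumed examples.
public class ConsumerOptOutConfig {

    static Properties consumerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed
        props.setProperty("group.id", "example-group");           // assumed
        // Opt out: subscribing to a missing topic no longer creates it,
        // even when auto.create.topics.enable=true on the broker.
        props.setProperty("allow.auto.create.topics", "false");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(
                consumerProps().getProperty("allow.auto.create.topics"));
    }
}
```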

Check out the release notes for the 2.3.0 release here:
https://home.apache.org/~cmccabe/kafka-2.3.0-rc1/RELEASE_NOTES.html

The vote will go until Friday, June 7th, or until we create another RC.

* Kafka's KEYS file containing PGP keys we use to sign the release can be found 
here:
https://kafka.apache.org/KEYS

* The release artifacts to be voted upon (source and binary) are here:
https://home.apache.org/~cmccabe/kafka-2.3.0-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~cmccabe/kafka-2.3.0-rc1/javadoc/

* The tag to be voted upon (off the 2.3 branch) is the 2.3.0 tag:
https://github.com/apache/kafka/releases/tag/2.3.0-rc1

thanks,
Colin


[jira] [Created] (KAFKA-8485) Kafka connect worker does not respond when kafka broker goes down with data streaming in progress

2019-06-05 Thread kaushik srinivas (JIRA)
kaushik srinivas created KAFKA-8485:
---

 Summary: Kafka connect worker does not respond when kafka broker 
goes down with data streaming in progress
 Key: KAFKA-8485
 URL: https://issues.apache.org/jira/browse/KAFKA-8485
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.2.1
Reporter: kaushik srinivas


Below is the scenario:

Three Kafka brokers are up and running.

A Kafka Connect worker is installed and an HDFS sink connector is added.

Data streaming is started, with data being flushed out of Kafka into HDFS.

The topic is created with 3 partitions, with one partition leader on each of 
the three brokers.

Now, two Kafka brokers are restarted, and partition rebalancing happens.

We then observe that Kafka Connect does not respond; REST API calls keep 
timing out.

Nothing useful appears in the Connect logs either.

Currently the only way out of this situation is to restart the Kafka Connect 
worker, after which things return to normal.

 

The same scenario, when tried without data streaming in progress, works fine; 
the REST API does not get into a timed-out state.

Making this issue a blocker because of the impact a Kafka broker restart can 
have.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)