[jira] [Comment Edited] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder

2019-03-17 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794549#comment-16794549
 ] 

Ted Yu edited comment on KAFKA-3729 at 3/18/19 3:06 AM:


3729.v6.txt shows the progress toward passing StreamsConfig to Store#init().

The assertion in KStreamAggregationDedupIntegrationTest#shouldGroupByKey passes.
The assertion fails without proper Store init.

Note: PR #6461 doesn't require a KIP.


was (Author: yuzhih...@gmail.com):
3729.v6.txt shows the progress toward passing StreamsConfig to Store#init().

The assertion in KStreamAggregationDedupIntegrationTest#shouldGroupByKey passes.
The assertion fails without proper Store init.

>  Auto-configure non-default SerDes passed alongside the topology builder
> 
>
> Key: KAFKA-3729
> URL: https://issues.apache.org/jira/browse/KAFKA-3729
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Fred Patton
>Assignee: Ted Yu
>Priority: Major
>  Labels: api, newbie
> Attachments: 3729.txt, 3729.v6.txt
>
>
> From Guozhang Wang:
> "Only default serdes provided through configs are auto-configured today. But 
> we could auto-configure other serdes passed alongside the topology builder as 
> well."
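For illustration, the quoted proposal amounts to the framework calling configure() on serdes the user passes in alongside the topology, not only on the default serdes from config. Below is a minimal self-contained Java sketch of that idea; ConfigurableSerde and autoConfigure are hypothetical stand-ins, not the actual Kafka Streams API:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for Kafka's Serde#configure(Map, boolean).
interface ConfigurableSerde {
    void configure(Map<String, ?> configs, boolean isKey);
}

public class SerdeAutoConfigDemo {
    // What "auto-configure" would mean: the framework, not the user, calls
    // configure() on every serde handed in alongside the topology builder.
    static int autoConfigure(List<ConfigurableSerde> serdes,
                             Map<String, ?> configs, boolean isKey) {
        int configured = 0;
        for (ConfigurableSerde s : serdes) {
            s.configure(configs, isKey);
            configured++;
        }
        return configured;
    }

    public static void main(String[] args) {
        Map<String, Object> configs = new HashMap<>();
        configs.put("some.serde.config", "value"); // illustrative config entry
        final int[] calls = {0};
        ConfigurableSerde serde = (c, isKey) -> calls[0]++;
        int n = autoConfigure(List.of(serde, serde), configs, true);
        System.out.println(n + " serdes configured, " + calls[0] + " configure() calls");
    }
}
```

Today a user has to make those configure() calls manually for any non-default serde.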



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder

2019-03-17 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794668#comment-16794668
 ] 

ASF GitHub Bot commented on KAFKA-3729:
---

tedyu commented on pull request #6399: [KAFKA-3729] Auto-configure non-default 
SerDes passed alongside the topology builder
URL: https://github.com/apache/kafka/pull/6399
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


>  Auto-configure non-default SerDes passed alongside the topology builder
> 
>
> Key: KAFKA-3729
> URL: https://issues.apache.org/jira/browse/KAFKA-3729
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Fred Patton
>Assignee: Ted Yu
>Priority: Major
>  Labels: api, newbie
> Attachments: 3729.txt, 3729.v6.txt
>
>
> From Guozhang Wang:
> "Only default serdes provided through configs are auto-configured today. But 
> we could auto-configure other serdes passed alongside the topology builder as 
> well."





[jira] [Commented] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder

2019-03-17 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794667#comment-16794667
 ] 

ASF GitHub Bot commented on KAFKA-3729:
---

tedyu commented on pull request #6461: [KAFKA-3729] Auto-configure non-default 
SerDes passed alongside the topology builder
URL: https://github.com/apache/kafka/pull/6461
 
 
   Only default serdes provided through configs are auto-configured today. But 
we could auto-configure other serdes passed alongside the topology builder as 
well.
   
   Added new tests.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 



>  Auto-configure non-default SerDes passed alongside the topology builder
> 
>
> Key: KAFKA-3729
> URL: https://issues.apache.org/jira/browse/KAFKA-3729
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Fred Patton
>Assignee: Ted Yu
>Priority: Major
>  Labels: api, newbie
> Attachments: 3729.txt, 3729.v6.txt
>
>
> From Guozhang Wang:
> "Only default serdes provided through configs are auto-configured today. But 
> we could auto-configure other serdes passed alongside the topology builder as 
> well."





[jira] [Commented] (KAFKA-2480) Handle non-CopycatExceptions from SinkTasks

2019-03-17 Thread Ewen Cheslack-Postava (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-2480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794652#comment-16794652
 ] 

Ewen Cheslack-Postava commented on KAFKA-2480:
--

[~olkuznsmith] I see, you probably got confused because the PR for KAFKA-2481 
accidentally pointed here.

The difficulty is that both semantics might be desired. The intent of 
timeout as implemented is to handle *buffering* connectors. In that case, they 
may *accept* data without necessarily committing it synchronously. If they 
encounter a failure that keeps them from even getting the data sent on the 
network (e.g. downstream system has availability issue), they want to express 
to the framework that they still have work to do, but are having some problem 
accomplishing it, so need to back off; but if *no more data* shows up, they 
don't want to wait indefinitely – they want to express the *maximum* amount of 
time the framework should wait before passing control back to the task to retry 
whatever operation was failing, even if there isn't new data available. But if 
new data becomes available, the connector may want to accept it immediately. It 
may be destined for some location that doesn't have the same issue, or can be 
buffered, etc. For example, this is how the HDFS connector uses this timeout 
functionality.

On the other hand, a connector that, e.g., deals with a rate-limited API may 
know exactly how long it needs to wait before it's worth passing control back 
*at all* (or any other case where you know the issue won't be resolved until 
*at least* some amount of time has passed). This has come up and been discussed 
as a possible improvement to `RetriableException` (since you should be throwing 
that if you can't even buffer the data that's being included in the `put()` 
call). I don't think there's a Jira (at least I'm not finding one), but it was 
probably discussed on the mailing list. There's also KAFKA-3819 on the source 
side, which is another variant of "time management" convenience utilities.
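As a rough illustration of the buffering pattern described above (accept the data, then ask the framework for control back within a bounded time so a failing flush can be retried), here is a self-contained sketch; TaskContext is a hypothetical stand-in for Kafka Connect's SinkTaskContext#timeout(long), and the flush logic is faked:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Hypothetical stand-in for Kafka Connect's SinkTaskContext#timeout(long).
interface TaskContext { void timeout(long millis); }

public class BufferingSinkDemo {
    private final TaskContext context;
    private final List<String> buffer = new ArrayList<>();
    boolean downstreamAvailable = true; // simulated downstream health

    public BufferingSinkDemo(TaskContext context) { this.context = context; }

    // A buffering put(): always *accept* the records; when the downstream
    // system is unavailable, request control back within 5s (even if no new
    // data arrives) rather than rejecting the batch outright.
    public void put(Collection<String> records) {
        buffer.addAll(records);
        if (downstreamAvailable) {
            buffer.clear(); // pretend the flush succeeded
        } else {
            context.timeout(5_000L); // maximum wait before retrying the flush
        }
    }

    public int buffered() { return buffer.size(); }

    public static void main(String[] args) {
        final long[] requested = {0};
        BufferingSinkDemo task = new BufferingSinkDemo(ms -> requested[0] = ms);
        task.downstreamAvailable = false;
        task.put(List.of("a", "b"));
        System.out.println("buffered=" + task.buffered() + " timeout=" + requested[0]);
    }
}
```

A rate-limited connector, by contrast, would want a "wait *at least* this long" contract, which is the improvement discussed above.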

> Handle non-CopycatExceptions from SinkTasks
> ---
>
> Key: KAFKA-2480
> URL: https://issues.apache.org/jira/browse/KAFKA-2480
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Major
> Fix For: 0.9.0.0
>
>
> Currently we catch Throwable in WorkerSinkTask, but we just log the 
> exception. This can lead to data loss because it indicates the messages in 
> the {{put(records)}} call probably were not handled properly. We need to 
> decide what the policy for handling these types of exceptions should be -- 
> try repeating the same records again, risking duplication? or skip them, 
> risking loss? or kill the task immediately and require intervention since 
> it's unclear what happened?
> SourceTasks don't have the same concern -- they can throw other exceptions 
> and as long as we catch them, it is up to the connector to ensure that it 
> does not lose data as a result.





[jira] [Commented] (KAFKA-7813) JmxTool throws NPE when --object-name is omitted

2019-03-17 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794646#comment-16794646
 ] 

ASF GitHub Bot commented on KAFKA-7813:
---

ewencp commented on pull request #6139: KAFKA-7813: JmxTool throws NPE when 
--object-name is omitted
URL: https://github.com/apache/kafka/pull/6139
 
 
   
 



> JmxTool throws NPE when --object-name is omitted
> 
>
> Key: KAFKA-7813
> URL: https://issues.apache.org/jira/browse/KAFKA-7813
> Project: Kafka
>  Issue Type: Bug
>Reporter: Attila Sasvari
>Assignee: huxihx
>Priority: Minor
> Fix For: 2.3.0
>
>
> Running the JMX tool without the --object-name parameter results in a 
> NullPointerException:
> {code}
> $ bin/kafka-run-class.sh kafka.tools.JmxTool  --jmx-url 
> service:jmx:rmi:///jndi/rmi://127.0.0.1:/jmxrmi
> ...
> Exception in thread "main" java.lang.NullPointerException
>   at kafka.tools.JmxTool$$anonfun$3.apply(JmxTool.scala:143)
>   at kafka.tools.JmxTool$$anonfun$3.apply(JmxTool.scala:143)
>   at 
> scala.collection.LinearSeqOptimized$class.exists(LinearSeqOptimized.scala:93)
>   at scala.collection.immutable.List.exists(List.scala:84)
>   at kafka.tools.JmxTool$.main(JmxTool.scala:143)
>   at kafka.tools.JmxTool.main(JmxTool.scala)
> {code} 
> Documentation of the tool says:
> {code}
> --object-name  A JMX object name to use as a query. 
>  
>This can contain wild cards, and   
>  
>this option can be given multiple  
>  
>times to specify more than one 
>  
>query. If no objects are specified 
>  
>all objects will be queried.
> {code}
> Running the tool with {{--object-name ''}} also results in an NPE:
> {code}
> $ bin/kafka-run-class.sh kafka.tools.JmxTool  --jmx-url 
> service:jmx:rmi:///jndi/rmi://127.0.0.1:/jmxrmi --object-name ''
> ...
> Exception in thread "main" java.lang.NullPointerException
>   at kafka.tools.JmxTool$.main(JmxTool.scala:197)
>   at kafka.tools.JmxTool.main(JmxTool.scala)
> {code}
> Running the tool with --object-name but without an argument, the tool fails 
> with OptionMissingRequiredArgumentException:
> {code}
> $ bin/kafka-run-class.sh kafka.tools.JmxTool  --jmx-url 
> service:jmx:rmi:///jndi/rmi://127.0.0.1:/jmxrmi --object-name 
> Exception in thread "main" joptsimple.OptionMissingRequiredArgumentException: 
> Option object-name requires an argument
>   at 
> joptsimple.RequiredArgumentOptionSpec.detectOptionArgument(RequiredArgumentOptionSpec.java:48)
>   at 
> joptsimple.ArgumentAcceptingOptionSpec.handleOption(ArgumentAcceptingOptionSpec.java:257)
>   at joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:513)
>   at 
> joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:56)
>   at joptsimple.OptionParser.parse(OptionParser.java:396)
>   at kafka.tools.JmxTool$.main(JmxTool.scala:104)
>   at kafka.tools.JmxTool.main(JmxTool.scala)
> {code}
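For reference, the documented fallback ("If no objects are specified all objects will be queried") corresponds to passing a null pattern to JMX's queryNames instead of dereferencing the missing option value. A runnable sketch against the local platform MBean server; JmxQueryDemo is a hypothetical name, not the actual fix in PR #6139:

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxQueryDemo {
    // With no --object-name, fall back to a null pattern: queryNames(null, null)
    // matches every registered MBean, avoiding the NPE.
    static Set<ObjectName> query(MBeanServer server, ObjectName pattern) {
        return server.queryNames(pattern, null);
    }

    public static void main(String[] args) {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        Set<ObjectName> all = query(server, null);
        System.out.println("MBeans found: " + all.size());
    }
}
```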





[jira] [Resolved] (KAFKA-7813) JmxTool throws NPE when --object-name is omitted

2019-03-17 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7813.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

Issue resolved by pull request 6139
[https://github.com/apache/kafka/pull/6139]

> JmxTool throws NPE when --object-name is omitted
> 
>
> Key: KAFKA-7813
> URL: https://issues.apache.org/jira/browse/KAFKA-7813
> Project: Kafka
>  Issue Type: Bug
>Reporter: Attila Sasvari
>Assignee: huxihx
>Priority: Minor
> Fix For: 2.3.0
>
>
> Running the JMX tool without the --object-name parameter results in a 
> NullPointerException:
> {code}
> $ bin/kafka-run-class.sh kafka.tools.JmxTool  --jmx-url 
> service:jmx:rmi:///jndi/rmi://127.0.0.1:/jmxrmi
> ...
> Exception in thread "main" java.lang.NullPointerException
>   at kafka.tools.JmxTool$$anonfun$3.apply(JmxTool.scala:143)
>   at kafka.tools.JmxTool$$anonfun$3.apply(JmxTool.scala:143)
>   at 
> scala.collection.LinearSeqOptimized$class.exists(LinearSeqOptimized.scala:93)
>   at scala.collection.immutable.List.exists(List.scala:84)
>   at kafka.tools.JmxTool$.main(JmxTool.scala:143)
>   at kafka.tools.JmxTool.main(JmxTool.scala)
> {code} 
> Documentation of the tool says:
> {code}
> --object-name  A JMX object name to use as a query. 
>  
>This can contain wild cards, and   
>  
>this option can be given multiple  
>  
>times to specify more than one 
>  
>query. If no objects are specified 
>  
>all objects will be queried.
> {code}
> Running the tool with {{--object-name ''}} also results in an NPE:
> {code}
> $ bin/kafka-run-class.sh kafka.tools.JmxTool  --jmx-url 
> service:jmx:rmi:///jndi/rmi://127.0.0.1:/jmxrmi --object-name ''
> ...
> Exception in thread "main" java.lang.NullPointerException
>   at kafka.tools.JmxTool$.main(JmxTool.scala:197)
>   at kafka.tools.JmxTool.main(JmxTool.scala)
> {code}
> Running the tool with --object-name but without an argument, the tool fails 
> with OptionMissingRequiredArgumentException:
> {code}
> $ bin/kafka-run-class.sh kafka.tools.JmxTool  --jmx-url 
> service:jmx:rmi:///jndi/rmi://127.0.0.1:/jmxrmi --object-name 
> Exception in thread "main" joptsimple.OptionMissingRequiredArgumentException: 
> Option object-name requires an argument
>   at 
> joptsimple.RequiredArgumentOptionSpec.detectOptionArgument(RequiredArgumentOptionSpec.java:48)
>   at 
> joptsimple.ArgumentAcceptingOptionSpec.handleOption(ArgumentAcceptingOptionSpec.java:257)
>   at joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:513)
>   at 
> joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:56)
>   at joptsimple.OptionParser.parse(OptionParser.java:396)
>   at kafka.tools.JmxTool$.main(JmxTool.scala:104)
>   at kafka.tools.JmxTool.main(JmxTool.scala)
> {code}





[jira] [Assigned] (KAFKA-7983) supporting replication.throttled.replicas in dynamic broker configuration

2019-03-17 Thread lambdaliu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lambdaliu reassigned KAFKA-7983:


Assignee: (was: lambdaliu)

> supporting replication.throttled.replicas in dynamic broker configuration
> -
>
> Key: KAFKA-7983
> URL: https://issues.apache.org/jira/browse/KAFKA-7983
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Reporter: Jun Rao
>Priority: Major
>
> In 
> [KIP-226|https://cwiki.apache.org/confluence/display/KAFKA/KIP-226+-+Dynamic+Broker+Configuration#KIP-226-DynamicBrokerConfiguration-DefaultTopicconfigs],
>  we added the support to change broker defaults dynamically. However, it 
> didn't support changing leader.replication.throttled.replicas and 
> follower.replication.throttled.replicas. These 2 configs were introduced in 
> [KIP-73|https://cwiki.apache.org/confluence/display/KAFKA/KIP-73+Replication+Quotas]
>  and control the set of topic partitions on which replication throttling 
> will be engaged. One useful case is to be able to set a default value for 
> both configs to * to allow throttling to be engaged for all topic partitions. 
> Currently, the static default values for both configs are ignored for 
> replication throttling; it would be useful to fix that as well.





[jira] [Commented] (KAFKA-2480) Handle non-CopycatExceptions from SinkTasks

2019-03-17 Thread Oleg Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-2480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794616#comment-16794616
 ] 

Oleg Kuznetsov commented on KAFKA-2480:
---

[~ewencp]

I was talking about sink connector where put() method is used.

1) what was this Jira's intention in adding timeout() as an interface method?

2) sleeping as long as I need works, but it is an antipattern if the same can 
be easily delegated to the framework.

> Handle non-CopycatExceptions from SinkTasks
> ---
>
> Key: KAFKA-2480
> URL: https://issues.apache.org/jira/browse/KAFKA-2480
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Major
> Fix For: 0.9.0.0
>
>
> Currently we catch Throwable in WorkerSinkTask, but we just log the 
> exception. This can lead to data loss because it indicates the messages in 
> the {{put(records)}} call probably were not handled properly. We need to 
> decide what the policy for handling these types of exceptions should be -- 
> try repeating the same records again, risking duplication? or skip them, 
> risking loss? or kill the task immediately and require intervention since 
> it's unclear what happened?
> SourceTasks don't have the same concern -- they can throw other exceptions 
> and as long as we catch them, it is up to the connector to ensure that it 
> does not lose data as a result.





[jira] [Assigned] (KAFKA-7197) Release a milestone build for Scala 2.13.0 M3

2019-03-17 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava reassigned KAFKA-7197:


Assignee: Dejan Stojadinović

> Release a milestone build for Scala 2.13.0 M3
> -
>
> Key: KAFKA-7197
> URL: https://issues.apache.org/jira/browse/KAFKA-7197
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Martynas Mickevičius
>Assignee: Dejan Stojadinović
>Priority: Minor
>
> Releasing a milestone version for Scala 2.13.0-M3 (and maybe even for 
> 2.13.0-M4, which has new collections) would be helpful to kickstart Kafka 
> ecosystem adoption for 2.13.0.





[jira] [Commented] (KAFKA-2480) Handle non-CopycatExceptions from SinkTasks

2019-03-17 Thread Ewen Cheslack-Postava (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-2480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794612#comment-16794612
 ] 

Ewen Cheslack-Postava commented on KAFKA-2480:
--

[~olkuznsmith] This doesn't seem to have anything to do with this JIRA, but the 
behavior is intended. If you want the behavior you're describing, it's easy to 
follow up after poll returns by sleeping as long as you need to in order to 
block until you hit your business-logic-timeout. Or, if you want to continue 
fetching, just wrap the poll calls in a loop and accumulate the results (but be 
sure to have some cap, or else you're bound to OOM). The vast majority of users 
would not want this, as it just blocks processing and limits throughput, 
potentially to the point that you can't ever catch up. The interface is 
designed to be low-level enough to implement any of these patterns, and it is a 
common pattern (e.g., going way back to `select`).
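The loop-and-accumulate pattern mentioned above can be sketched as follows; poll is stubbed with a plain Supplier rather than a real consumer, and pollUntil is a hypothetical helper (note it may overshoot the cap by one poll's worth of records):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class PollAccumulateDemo {
    // Keep calling poll (stubbed here with a Supplier) and accumulate the
    // results, but cap the batch so the accumulator stays bounded; stop early
    // if a poll returns nothing.
    static List<String> pollUntil(Supplier<List<String>> poll, int cap) {
        List<String> batch = new ArrayList<>();
        while (batch.size() < cap) {
            List<String> records = poll.get();
            if (records.isEmpty()) break; // nothing more available right now
            batch.addAll(records);
        }
        return batch;
    }

    public static void main(String[] args) {
        // Each stubbed poll returns two records, so the loop runs until the
        // accumulated size reaches the cap of 5 (ending at 6 records).
        Supplier<List<String>> fakePoll = () -> List.of("r1", "r2");
        List<String> batch = pollUntil(fakePoll, 5);
        System.out.println("accumulated " + batch.size() + " records");
    }
}
```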

> Handle non-CopycatExceptions from SinkTasks
> ---
>
> Key: KAFKA-2480
> URL: https://issues.apache.org/jira/browse/KAFKA-2480
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Major
> Fix For: 0.9.0.0
>
>
> Currently we catch Throwable in WorkerSinkTask, but we just log the 
> exception. This can lead to data loss because it indicates the messages in 
> the {{put(records)}} call probably were not handled properly. We need to 
> decide what the policy for handling these types of exceptions should be -- 
> try repeating the same records again, risking duplication? or skip them, 
> risking loss? or kill the task immediately and require intervention since 
> it's unclear what happened?
> SourceTasks don't have the same concern -- they can throw other exceptions 
> and as long as we catch them, it is up to the connector to ensure that it 
> does not lose data as a result.





[jira] [Commented] (KAFKA-8026) Flaky Test RegexSourceIntegrationTest#testRegexMatchesTopicsAWhenDeleted

2019-03-17 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794602#comment-16794602
 ] 

ASF GitHub Bot commented on KAFKA-8026:
---

bbejeck commented on pull request #6459: KAFKA-8026: Fix for flaky 
RegexSourceIntegrationTest
URL: https://github.com/apache/kafka/pull/6459
 
 
   This PR is an attempt to fix the `RegexSourceIntegrationTest` flakiness
   
   1. Delete and create all topics before each test starts.
   2. Give the streams application in each test a unique application ID
   3. Create a `KafkaStreams` instance local to each test.
   
   I ran the entire suite of streams tests.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 



> Flaky Test RegexSourceIntegrationTest#testRegexMatchesTopicsAWhenDeleted
> 
>
> Key: KAFKA-8026
> URL: https://issues.apache.org/jira/browse/KAFKA-8026
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 1.0.2, 1.1.1
>Reporter: Matthias J. Sax
>Assignee: Bill Bejeck
>Priority: Critical
>  Labels: flaky-test
> Fix For: 1.0.3, 1.1.2
>
>
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Stream tasks not updated
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:254)
> at 
> org.apache.kafka.streams.integration.RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenDeleted(RegexSourceIntegrationTest.java:215){quote}
> Happened in 1.0 and 1.1 builds:
> [https://builds.apache.org/blue/organizations/jenkins/kafka-1.0-jdk7/detail/kafka-1.0-jdk7/263/tests/]
> and
> [https://builds.apache.org/blue/organizations/jenkins/kafka-1.1-jdk7/detail/kafka-1.1-jdk7/249/tests/]





[jira] [Updated] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder

2019-03-17 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated KAFKA-3729:
--
Attachment: (was: 3729.v6.txt)

>  Auto-configure non-default SerDes passed alongside the topology builder
> 
>
> Key: KAFKA-3729
> URL: https://issues.apache.org/jira/browse/KAFKA-3729
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Fred Patton
>Assignee: Ted Yu
>Priority: Major
>  Labels: api, newbie
> Attachments: 3729.txt, 3729.v6.txt
>
>
> From Guozhang Wang:
> "Only default serdes provided through configs are auto-configured today. But 
> we could auto-configure other serdes passed alongside the topology builder as 
> well."





[jira] [Comment Edited] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder

2019-03-17 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794549#comment-16794549
 ] 

Ted Yu edited comment on KAFKA-3729 at 3/17/19 6:39 PM:


3729.v6.txt shows the progress toward passing StreamsConfig to Store#init().

The assertion in KStreamAggregationDedupIntegrationTest#shouldGroupByKey passes.
The assertion fails without proper Store init.


was (Author: yuzhih...@gmail.com):
3729.v6.txt shows the progress toward passing StreamsConfig to Store#init().

>  Auto-configure non-default SerDes passed alongside the topology builder
> 
>
> Key: KAFKA-3729
> URL: https://issues.apache.org/jira/browse/KAFKA-3729
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Fred Patton
>Assignee: Ted Yu
>Priority: Major
>  Labels: api, newbie
> Attachments: 3729.txt, 3729.v6.txt
>
>
> From Guozhang Wang:
> "Only default serdes provided through configs are auto-configured today. But 
> we could auto-configure other serdes passed alongside the topology builder as 
> well."





[jira] [Updated] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder

2019-03-17 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated KAFKA-3729:
--
Attachment: 3729.v6.txt

>  Auto-configure non-default SerDes passed alongside the topology builder
> 
>
> Key: KAFKA-3729
> URL: https://issues.apache.org/jira/browse/KAFKA-3729
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Fred Patton
>Assignee: Ted Yu
>Priority: Major
>  Labels: api, newbie
> Attachments: 3729.txt, 3729.v6.txt
>
>
> From Guozhang Wang:
> "Only default serdes provided through configs are auto-configured today. But 
> we could auto-configure other serdes passed alongside the topology builder as 
> well."





[jira] [Commented] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder

2019-03-17 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794549#comment-16794549
 ] 

Ted Yu commented on KAFKA-3729:
---

3729.v6.txt shows the progress toward passing StreamsConfig to Store#init().

>  Auto-configure non-default SerDes passed alongside the topology builder
> 
>
> Key: KAFKA-3729
> URL: https://issues.apache.org/jira/browse/KAFKA-3729
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Fred Patton
>Assignee: Ted Yu
>Priority: Major
>  Labels: api, newbie
> Attachments: 3729.txt, 3729.v6.txt
>
>
> From Guozhang Wang:
> "Only default serdes provided through configs are auto-configured today. But 
> we could auto-configure other serdes passed alongside the topology builder as 
> well."





[jira] [Updated] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder

2019-03-17 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated KAFKA-3729:
--
Attachment: 3729.v6.txt

>  Auto-configure non-default SerDes passed alongside the topology builder
> 
>
> Key: KAFKA-3729
> URL: https://issues.apache.org/jira/browse/KAFKA-3729
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Fred Patton
>Assignee: Ted Yu
>Priority: Major
>  Labels: api, newbie
> Attachments: 3729.txt, 3729.v6.txt
>
>
> From Guozhang Wang:
> "Only default serdes provided through configs are auto-configured today. But 
> we could auto-configure other serdes passed alongside the topology builder as 
> well."





[jira] [Commented] (KAFKA-8119) KafkaConfig listener accessors may fail during dynamic update

2019-03-17 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794539#comment-16794539
 ] 

ASF GitHub Bot commented on KAFKA-8119:
---

rajinisivaram commented on pull request #6457: KAFKA-8119; Ensure KafkaConfig 
listener accessors work during update
URL: https://github.com/apache/kafka/pull/6457
 
 
   Use the same config to obtain all properties used in each KafkaConfig 
accessor method to ensure that validation doesn't fail if config was updated 
during the method.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 



> KafkaConfig listener accessors may fail during dynamic update
> -
>
> Key: KAFKA-8119
> URL: https://issues.apache.org/jira/browse/KAFKA-8119
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.3.0, 2.2.1
>
>
> Noticed a test failure in DynamicBrokerReconfigurationTest where the test 
> accessing `KafkaConfig#listeners` during dynamic update of listeners threw an 
> exception. In general, most dynamic configs can be updated independently, but 
> listeners and listener security protocol map need to be updated together when 
> new listeners that are not in the map are added or an entry is removed from 
> the map along with the listener. We don't expect to see this failure in the 
> implementation code because dynamic config updates are on a single thread and 
> SocketServer processes the full update together and validates the full config 
> prior to applying the changes. But we should ensure that 
> KafkaConfig#listeners, KafkaConfig#advertisedListeners etc. work even if a 
> dynamic update occurs during the call since these methods are used in tests 
> and could potentially be used in implementation code in future from different 
> threads.





[jira] [Commented] (KAFKA-8026) Flaky Test RegexSourceIntegrationTest#testRegexMatchesTopicsAWhenDeleted

2019-03-17 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794535#comment-16794535
 ] 

Matthias J. Sax commented on KAFKA-8026:


Failed again in `1.1` 
[https://builds.apache.org/blue/organizations/jenkins/kafka-1.1-jdk7/detail/kafka-1.1-jdk7/252/tests]

and `1.0`: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-1.0-jdk7/detail/kafka-1.0-jdk7/264/changes/]

> Flaky Test RegexSourceIntegrationTest#testRegexMatchesTopicsAWhenDeleted
> 
>
> Key: KAFKA-8026
> URL: https://issues.apache.org/jira/browse/KAFKA-8026
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 1.0.2, 1.1.1
>Reporter: Matthias J. Sax
>Assignee: Bill Bejeck
>Priority: Critical
>  Labels: flaky-test
> Fix For: 1.0.3, 1.1.2
>
>
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Stream tasks not updated
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:254)
> at 
> org.apache.kafka.streams.integration.RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenDeleted(RegexSourceIntegrationTest.java:215){quote}
> Happened in 1.0 and 1.1 builds:
> [https://builds.apache.org/blue/organizations/jenkins/kafka-1.0-jdk7/detail/kafka-1.0-jdk7/263/tests/]
> and
> [https://builds.apache.org/blue/organizations/jenkins/kafka-1.1-jdk7/detail/kafka-1.1-jdk7/249/tests/]





[jira] [Created] (KAFKA-8119) KafkaConfig listener accessors may fail during dynamic update

2019-03-17 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-8119:
-

 Summary: KafkaConfig listener accessors may fail during dynamic 
update
 Key: KAFKA-8119
 URL: https://issues.apache.org/jira/browse/KAFKA-8119
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.2.0
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 2.3.0, 2.2.1


Noticed a test failure in DynamicBrokerReconfigurationTest where a test 
accessing `KafkaConfig#listeners` during a dynamic update of listeners threw an 
exception. In general, most dynamic configs can be updated independently, but 
listeners and the listener security protocol map need to be updated together when 
new listeners that are not in the map are added, or when an entry is removed from 
the map along with its listener. We don't expect to see this failure in 
implementation code, because dynamic config updates run on a single thread and 
SocketServer processes the full update together, validating the full config 
prior to applying the changes. But we should ensure that KafkaConfig#listeners, 
KafkaConfig#advertisedListeners etc. work even if a dynamic update occurs 
during the call, since these methods are used in tests and could potentially be 
used in implementation code from different threads in the future.
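
To illustrate the coupling described above, here is a minimal, hypothetical 
sketch (not Kafka's actual code) of the invariant tying the two configs 
together: every listener name must have an entry in the security protocol map, 
so reading the two values at different points during a dynamic update can 
observe a pair that violates it.

```java
import java.util.List;
import java.util.Map;

public class ListenerCheck {
    // Throws if any listener name lacks an entry in the security protocol map.
    // This is the invariant that "listeners" and "listener.security.protocol.map"
    // must preserve, which is why they have to be updated together.
    static void validate(List<String> listenerNames, Map<String, String> protocolMap) {
        for (String name : listenerNames) {
            if (!protocolMap.containsKey(name)) {
                throw new IllegalStateException("No security protocol mapped for listener: " + name);
            }
        }
    }

    public static void main(String[] args) {
        // Consistent pair: passes.
        validate(List.of("PLAINTEXT", "INTERNAL"),
                 Map.of("PLAINTEXT", "PLAINTEXT", "INTERNAL", "SSL"));
        // Inconsistent pair, as could be observed mid-update: throws.
        try {
            validate(List.of("PLAINTEXT", "NEW_LISTENER"),
                     Map.of("PLAINTEXT", "PLAINTEXT"));
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```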





[jira] [Commented] (KAFKA-2480) Handle non-CopycatExceptions from SinkTasks

2019-03-17 Thread Oleg Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-2480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794524#comment-16794524
 ] 

Oleg Kuznetsov commented on KAFKA-2480:
---

[~gwenshap] [~ewencp]
Looks like the implementation does not guarantee that any actual waiting will 
happen.

 

The code:
{code:java}
// Javadoc for the poll timeout parameter:
// @param timeout The time, in milliseconds, spent waiting in poll if data is
//                not available in the buffer. If 0, returns immediately with
//                any records that are available currently in the buffer,
//                else returns empty. Must not be negative.
consumer.poll(timeoutMs);
{code}
does not have to wait *timeout* ms before returning if there are records already 
available for consumption in the topic.

Now client code cannot rely on the timeout, for example when trying to meet an 
SLA while accessing external storage.

I propose treating this as a business-logic waiting request, where client code 
expects the call to wait at least *timeoutMs* before returning.
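
A minimal sketch of the proposed behaviour (a hypothetical helper, not part of 
the consumer API): wrap poll so the caller is guaranteed to wait at least 
*timeoutMs* before the call returns, while still collecting any records 
delivered early.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class MinWaitPoll {
    // Calls a poll-like supplier repeatedly until at least minWaitMs have
    // elapsed, collecting the records returned by each call. This is the
    // "wait at least timeoutMs before returning" contract proposed above.
    static <T> List<T> pollAtLeast(Supplier<List<T>> poll, long minWaitMs) {
        long deadline = System.nanoTime() + minWaitMs * 1_000_000L;
        List<T> collected = new ArrayList<>();
        while (true) {
            collected.addAll(poll.get());
            long remainingNanos = deadline - System.nanoTime();
            if (remainingNanos <= 0) {
                return collected;
            }
            try {
                // Back off briefly before polling again; round up so we
                // never undershoot the deadline.
                Thread.sleep(Math.min((remainingNanos + 999_999) / 1_000_000, 10));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return collected; // give up early only if interrupted
            }
        }
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        List<String> records = pollAtLeast(() -> List.of("record"), 50);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000L;
        System.out.println(elapsedMs >= 50);      // waited the full minimum
        System.out.println(!records.isEmpty());   // early records still collected
    }
}
```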

> Handle non-CopycatExceptions from SinkTasks
> ---
>
> Key: KAFKA-2480
> URL: https://issues.apache.org/jira/browse/KAFKA-2480
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Major
> Fix For: 0.9.0.0
>
>
> Currently we catch Throwable in WorkerSinkTask, but we just log the 
> exception. This can lead to data loss because it indicates the messages in 
> the {{put(records)}} call probably were not handled properly. We need to 
> decide what the policy for handling these types of exceptions should be -- 
> try repeating the same records again, risking duplication? or skip them, 
> risking loss? or kill the task immediately and require intervention since 
> it's unclear what happened?
> SourceTasks don't have the same concern -- they can throw other exceptions 
> and as long as we catch them, it is up to the connector to ensure that it 
> does not lose data as a result.
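
The trade-off weighed in the description above can be sketched as three 
policies (names and shape hypothetical, not Connect's actual API):

```java
import java.util.List;
import java.util.function.Consumer;

public class SinkFailureHandling {
    enum Policy { RETRY, SKIP, FAIL_TASK }

    // Applies put(records) and, on failure, acts per policy:
    // RETRY risks duplicates, SKIP risks loss, FAIL_TASK stops the task.
    static <R> void putWithPolicy(List<R> records, Consumer<List<R>> put, Policy policy) {
        try {
            put.accept(records);
        } catch (RuntimeException e) {
            switch (policy) {
                case RETRY:
                    put.accept(records); // a single retry for illustration; risks duplication
                    break;
                case SKIP:
                    break;               // drop the batch; risks data loss
                case FAIL_TASK:
                    throw e;             // surface the failure; requires intervention
            }
        }
    }

    public static void main(String[] args) {
        // A put that fails on the first call and succeeds on the second.
        int[] calls = {0};
        Consumer<List<String>> flaky = rs -> {
            if (calls[0]++ == 0) throw new RuntimeException("transient failure");
        };
        putWithPolicy(List.of("a", "b"), flaky, Policy.RETRY);
        System.out.println("calls=" + calls[0]); // prints calls=2
    }
}
```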





[jira] [Commented] (KAFKA-8118) Ensure that tests close ZooKeeper clients since they can impact subsequent tests

2019-03-17 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794523#comment-16794523
 ] 

ASF GitHub Bot commented on KAFKA-8118:
---

rajinisivaram commented on pull request #6456: KAFKA-8118; Ensure ZooKeeper 
clients are closed in tests, fix verification
URL: https://github.com/apache/kafka/pull/6456
 
 
   We verify that ZK clients are closed in tests, since leftover clients can 
affect subsequent tests and make test failures hard to debug. But because of 
changes to the ZooKeeper client, we are now checking the wrong thread name. The 
thread name used now is `<creatorThreadName>-EventThread`, where 
`creatorThreadName` varies depending on the test. Fixed the verification in 
`ZooKeeperTestHarness` to check this format and fixed tests which were leaving 
ZK clients behind. Also added a test to make sure we can detect changes to the 
thread name when we update ZK clients in the future.
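
The fixed verification could look roughly like this sketch (the harness's real 
method names may differ), matching the `-EventThread` suffix rather than a 
fixed creator name:

```java
import java.util.ArrayList;
import java.util.List;

public class ZkThreadCheck {
    // Returns the names of live threads that look like ZooKeeper client event
    // threads, i.e. those ending with the "-EventThread" suffix the current
    // ZooKeeper client uses ("<creatorThreadName>-EventThread").
    static List<String> leftoverZkEventThreads() {
        List<String> names = new ArrayList<>();
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            if (t.isAlive() && t.getName().endsWith("-EventThread")) {
                names.add(t.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a leftover client thread to show the check firing.
        Thread fake = new Thread(() -> {
            try { Thread.sleep(2000); } catch (InterruptedException ignored) { }
        }, "test-zk-client-EventThread");
        fake.start();
        Thread.sleep(50); // let the thread start running
        System.out.println(leftoverZkEventThreads()); // includes "test-zk-client-EventThread"
        fake.interrupt();
    }
}
```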
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 



> Ensure that tests close ZooKeeper clients since they can impact subsequent 
> tests
> 
>
> Key: KAFKA-8118
> URL: https://issues.apache.org/jira/browse/KAFKA-8118
> Project: Kafka
>  Issue Type: Test
>  Components: core
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.3.0, 2.2.1
>
>
> A while ago, we added a check to verify that tests are closing ZooKeeper 
> clients in ZooKeeperTestHarness, since leftover clients can impact subsequent 
> tests (e.g. by overwriting static JAAS configuration). But we have changed the 
> ZK client since then, and hence the thread name being verified is no longer 
> correct. As a result, we have tests that are leaving ZK clients behind, and we 
> need to fix that (as well as the incorrect verification).





[jira] [Created] (KAFKA-8118) Ensure that tests close ZooKeeper clients since they can impact subsequent tests

2019-03-17 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-8118:
-

 Summary: Ensure that tests close ZooKeeper clients since they can 
impact subsequent tests
 Key: KAFKA-8118
 URL: https://issues.apache.org/jira/browse/KAFKA-8118
 Project: Kafka
  Issue Type: Test
  Components: core
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 2.3.0, 2.2.1


A while ago, we added a check to verify that tests are closing ZooKeeper 
clients in ZooKeeperTestHarness, since leftover clients can impact subsequent 
tests (e.g. by overwriting static JAAS configuration). But we have changed the 
ZK client since then, and hence the thread name being verified is no longer 
correct. As a result, we have tests that are leaving ZK clients behind, and we 
need to fix that (as well as the incorrect verification).


