[jira] [Resolved] (KAFKA-7877) Connect DLQ not used in SinkTask put()

2019-04-28 Thread Arjun Satish (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arjun Satish resolved KAFKA-7877.
-
Resolution: Fixed

Updated KIP.

> Connect DLQ not used in SinkTask put()
> --
>
> Key: KAFKA-7877
> URL: https://issues.apache.org/jira/browse/KAFKA-7877
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0, 2.0.1, 2.1.0
>Reporter: Andrew Bourgeois
>Assignee: Arjun Satish
>Priority: Major
>
> The Dead Letter Queue isn't implemented for the put() operation.
> In 
> [https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java]
>  the "retryWithToleranceOperator" gets used during the conversion and 
> transformation phases, but not when delivering the messages to the sink task.
> According to KIP-298, the Dead Letter Queue should be used as long as we 
> throw org.apache.kafka.connect.errors.RetriableException.
>  
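For context on the retry contract the KIP text clarifies, here is a minimal self-contained sketch (the class names below are stand-ins invented for illustration, not Kafka's actual implementation): when put() throws a RetriableException, the worker redelivers the same batch to the task; the failing records do not pass through the error-tolerance/DLQ path that the conversion and transformation phases use.

```java
import java.util.List;

public class PutRetrySketch {
    // Stand-in for org.apache.kafka.connect.errors.RetriableException
    static class RetriableException extends RuntimeException {
        RetriableException(String msg) { super(msg); }
    }

    // Stand-in sink task that fails transiently on its first attempt
    static class FlakySinkTask {
        private int attempts = 0;
        void put(List<String> records) {
            attempts++;
            if (attempts == 1) {
                throw new RetriableException("transient sink failure");
            }
            // second attempt succeeds
        }
        int attempts() { return attempts; }
    }

    // Simplified model of the worker's delivery loop: on RetriableException
    // the SAME batch is redelivered; the records never reach a dead letter queue.
    static int deliver(FlakySinkTask task, List<String> batch, int maxRetries) {
        for (int retry = 0; retry <= maxRetries; retry++) {
            try {
                task.put(batch);
                return task.attempts();
            } catch (RetriableException e) {
                // retry the same batch (the real worker also backs off)
            }
        }
        throw new RuntimeException("retries exhausted");
    }

    public static void main(String[] args) {
        FlakySinkTask task = new FlakySinkTask();
        int attempts = deliver(task, List.of("record-1"), 3);
        System.out.println(attempts); // 2: one failure, one successful redelivery
    }
}
```

A non-retriable exception from put(), by contrast, fails the task outright, which is exactly the gap the reporter is describing.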



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-7877) Connect DLQ not used in SinkTask put()

2019-04-28 Thread Arjun Satish (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16828808#comment-16828808
 ] 

Arjun Satish edited comment on KAFKA-7877 at 4/28/19 6:23 PM:
--

Resolving this ticket. Feel free to reopen if we need some more clarifying text.


was (Author: wicknicks):
Updated KIP.






[jira] [Commented] (KAFKA-8199) ClassCastException when trying to groupBy after suppress

2019-04-28 Thread Slim Ouertani (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16828807#comment-16828807
 ] 

Slim Ouertani commented on KAFKA-8199:
--

[~JoseLopez] could you please check this initial [pull 
request|https://github.com/apache/kafka/pull/6646]?

> ClassCastException when trying to groupBy after suppress
> 
>
> Key: KAFKA-8199
> URL: https://issues.apache.org/jira/browse/KAFKA-8199
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0
>Reporter: Bill Bejeck
>Assignee: Jose Lopez
>Priority: Major
> Fix For: 2.3.0
>
>
> A topology with a groupBy after a suppress operation results in a 
> ClassCastException
>  The following sample topology
> {noformat}
> Properties properties = new Properties(); 
> properties.put(StreamsConfig.APPLICATION_ID_CONFIG, "appid"); 
> properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,"localhost");
> StreamsBuilder builder = new StreamsBuilder();
>  builder.stream("topic")
> .groupByKey().windowedBy(TimeWindows.of(Duration.ofSeconds(30))).count() 
> .suppress(Suppressed.untilTimeLimit(Duration.ofHours(1), 
> BufferConfig.unbounded())) 
> .groupBy((k, v) -> KeyValue.pair(k,v)).count().toStream(); 
> builder.build(properties);
> {noformat}
> results in this exception:
> {noformat}
> java.lang.ClassCastException: 
> org.apache.kafka.streams.kstream.internals.KTableImpl$$Lambda$4/2084435065 
> cannot be cast to 
> org.apache.kafka.streams.kstream.internals.KTableProcessorSupplier{noformat}
>  
>  





[jira] [Assigned] (KAFKA-7877) Connect DLQ not used in SinkTask put()

2019-04-28 Thread Arjun Satish (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arjun Satish reassigned KAFKA-7877:
---

Assignee: Arjun Satish






[jira] [Commented] (KAFKA-7877) Connect DLQ not used in SinkTask put()

2019-04-28 Thread Arjun Satish (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16828805#comment-16828805
 ] 

Arjun Satish commented on KAFKA-7877:
-

[~Andrew Bourgeois]: updated the KIP. Also added a section to "rejected 
alternatives" indicating that we did not address routing records that fail 
in a sink connector's put() to the DLQ.






[jira] [Commented] (KAFKA-5666) Need feedback to user if consumption fails due to offsets.topic.replication.factor=3

2019-04-28 Thread Rens Groothuijsen (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16828030#comment-16828030
 ] 

Rens Groothuijsen commented on KAFKA-5666:
--

Hi, I'm fairly new to Kafka. Can I pick up this ticket?

> Need feedback to user if consumption fails due to 
> offsets.topic.replication.factor=3
> 
>
> Key: KAFKA-5666
> URL: https://issues.apache.org/jira/browse/KAFKA-5666
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, consumer
>Affects Versions: 0.11.0.0
>Reporter: Yeva Byzek
>Priority: Major
>  Labels: newbie, usability
>
> Introduced in 0.11: The offsets.topic.replication.factor broker config is now 
> enforced upon auto topic creation. Internal auto topic creation will fail 
> with a GROUP_COORDINATOR_NOT_AVAILABLE error until the cluster size meets 
> this replication factor requirement.
> Issue: Default is setting offsets.topic.replication.factor=3, but in 
> development and docker environments where there is only 1 broker, the offsets 
> topic will fail to be created when a consumer tries to consume and no records 
> will be returned.  As a result, the user experience is bad.  The user may 
> have no idea about this setting change and enforcement, and they just see 
> that `kafka-console-consumer` hangs with ZERO output. It is true that the 
> broker log file will provide a message (e.g. {{ERROR [KafkaApi-1] Number of 
> alive brokers '1' does not meet the required replication factor '3' for the 
> offsets topic (configured via 'offsets.topic.replication.factor'). This error 
> can be ignored if the cluster is starting up and not all brokers are up yet. 
> (kafka.server.KafkaApis)}}) but many users do not have access to the log 
> files or know how to get them.
> Suggestion: give feedback to the user/app if the offsets topic cannot be 
> created, for example after some timeout.
> Workaround:
> Set offsets.topic.replication.factor=1
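For single-broker development or docker setups, the workaround above amounts to a one-line broker config change (a sketch; the file path follows the usual Kafka layout):

```properties
# server.properties on a one-broker development cluster:
# let the internal __consumer_offsets topic be created with a single replica
offsets.topic.replication.factor=1
```

With the default of 3, a lone broker can never satisfy the replication factor, so the offsets topic is never created and consumers hang as described.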





[jira] [Commented] (KAFKA-6455) Improve timestamp propagation at DSL level

2019-04-28 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827963#comment-16827963
 ] 

ASF GitHub Bot commented on KAFKA-6455:
---

mjsax commented on pull request #6645: KAFKA-6455: Session Aggregation should 
use window-end-time as record timestamp
URL: https://github.com/apache/kafka/pull/6645
 
 
   For session-windows, the result record should have the window-end timestamp 
as record timestamp.
   
 



> Improve timestamp propagation at DSL level
> --
>
> Key: KAFKA-6455
> URL: https://issues.apache.org/jira/browse/KAFKA-6455
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Major
>  Labels: needs-kip
>
> At DSL level, we inherit the timestamp propagation "contract" from the 
> Processor API. This contract in not optimal at DSL level, and we should 
> define a DSL level contract that matches the semantics of the corresponding 
> DSL operator.





[jira] [Created] (KAFKA-8302) consumer group error message

2019-04-28 Thread sijifeng (JIRA)
sijifeng created KAFKA-8302:
---

 Summary: consumer group error message
 Key: KAFKA-8302
 URL: https://issues.apache.org/jira/browse/KAFKA-8302
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 1.0.0
Reporter: sijifeng


Hi

Today I hit an error in production. To fix it, I needed to reset an offset, but 
in the process I accidentally swapped the partition and offset values before 
calling commitAsync. After that, the command "bin/kafka-consumer-groups 
--bootstrap-server Kafka01:9092 --group my_consumer_name --describe" just hangs.

I printed the committed offsets using "adminClient.listGroupOffsets", and found 
an entry like the one in the log below.

In this case, what should I do? I want to delete the wrong entry from the 
consumer group, but I haven't found the right way. My Kafka version is 1.0.0.

 


```
AdminClient adminClient =
    AdminClient.createSimplePlaintext("bootstrap_servers");
scala.collection.immutable.Map offsets =
    adminClient.listGroupOffsets("my_consumer_name");
JavaConversions.asJavaCollection(offsets.keySet())
    .forEach(tp -> System.out.println(tp));
```

The log looks like:

```
MyTopic, 78908765
```
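One way to overwrite a wrongly committed offset without touching the internal topic (committed offsets generally cannot be deleted outright in 1.0.0, only replaced or left to expire) is the reset-offsets mode of the same command-line tool, available since 0.11. A sketch, reusing the broker, group, and topic names from the post; the group's consumers must be stopped first:

```
# stop all consumers in the group, then overwrite the bad offset
bin/kafka-consumer-groups.sh --bootstrap-server Kafka01:9092 \
  --group my_consumer_name \
  --topic MyTopic \
  --reset-offsets --to-latest --execute
```

Running it without --execute first performs a dry run that prints the offsets that would be committed.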





[jira] [Created] (KAFKA-8301) Non-existent topic makes the process of describing a consumer group blocked

2019-04-28 Thread Jingxin Xu (JIRA)
Jingxin Xu created KAFKA-8301:
-

 Summary: Non-existent topic makes the process of describing a 
consumer group blocked
 Key: KAFKA-8301
 URL: https://issues.apache.org/jira/browse/KAFKA-8301
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 1.1.1
Reporter: Jingxin Xu


I had some topics with non-existent partitions in a consumer group (I forget 
when these topics were added to my consumer group). When I ran 
`kafka-consumer-groups --group my_group --describe`, the command hung and 
returned nothing.


