[jira] [Comment Edited] (KAFKA-8289) KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed

2019-04-25 Thread Xiaolin Jia (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826634#comment-16826634
 ] 

Xiaolin Jia edited comment on KAFKA-8289 at 4/26/19 4:45 AM:
-

[~mjsax], thank you for replying. In KAFKA-7895, John was using TimeWindows, 
but the window I used is a SessionWindow; I am not sure if that is the cause. 
I have tried both 2.2.1-SNAPSHOT and trunk, with no effect at all. As the log 
shows, my SessionWindow's endMs keeps increasing, which is expected, but I 
don't think it should be printed out. Is my understanding correct? The A and B 
windows' info seems to be printed at the same time. I will try to read the 
source code.


was (Author: xiaoxiaoliner):
[~mjsax], thank you for replying. In KAFKA-7895, John was using TimeWindows, 
but the window I used is a SessionWindow; I am not sure if that is the cause. 
I have tried both 2.2.1-SNAPSHOT and trunk, with no effect at all. As the log 
shows, my SessionWindow's endMs keeps increasing, which is expected, but I 
think it shouldn't be printed out. Is my understanding correct? The A and B 
windows' info seems to be printed at the same time. I will try to read the 
source code.

> KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Broker on a Linux, stream app on my win10 laptop. 
> I add one row log.message.timestamp.type=LogAppendTime to my broker's 
> server.properties. stream app all default config.
>Reporter: Xiaolin Jia
>Priority: Blocker
>
> I wrote a simple streams app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results], 
> but I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31] 
> record from a single session window.
> Time ticker A sends (4, A) every 25 s and time ticker B sends (4, B) every 
> 25 s, both to the same topic. 
> Below is my stream app code:
> {code:java}
> kstreams[0]
>     .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
>     .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
>     .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
>     .count()
>     .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
>     .toStream()
>     .peek((k, v) -> log.info("window={},k={},v={}", k.window(), k.key(), v));
> {code}
> Here is my log output:
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- 
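The suppression contract at issue here ({{untilWindowCloses}} should emit exactly one final result per window, only once stream time passes windowEnd plus the grace period) can be modeled without Kafka. The sketch below only illustrates those intended semantics; it is not Kafka's actual suppression buffer, and every name in it is made up:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of Suppressed.untilWindowCloses: keep only the latest value per
// window and release it once observed stream time passes windowEnd + grace.
final class FinalResultsBuffer {
    private final long graceMs;
    private final Map<Long, String> pendingByWindowEnd = new LinkedHashMap<>();
    private long streamTimeMs = Long.MIN_VALUE;

    FinalResultsBuffer(long graceMs) {
        this.graceMs = graceMs;
    }

    // Feed one windowed update; returns the results whose windows have closed.
    List<String> update(long windowEndMs, long recordTimeMs, String value) {
        streamTimeMs = Math.max(streamTimeMs, recordTimeMs);
        pendingByWindowEnd.put(windowEndMs, value); // later update replaces earlier one
        List<String> closed = new ArrayList<>();
        pendingByWindowEnd.entrySet().removeIf(e -> {
            if (e.getKey() + graceMs < streamTimeMs) { // window end + grace has passed
                closed.add(e.getValue());
                return true;
            }
            return false;
        });
        return closed;
    }
}
```

Under this model a window's count is emitted once, when a later record advances stream time past its close; a session being emitted repeatedly with a growing endMs, as in the log above, would mean intermediate results are escaping the buffer.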


[jira] [Commented] (KAFKA-8289) KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed

2019-04-25 Thread Xiaolin Jia (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826634#comment-16826634
 ] 

Xiaolin Jia commented on KAFKA-8289:


[~mjsax], thank you for replying. In KAFKA-7895, John was using TimeWindows, 
but the window I used is a SessionWindow; I am not sure if that is the cause. 
I have tried both 2.2.1-SNAPSHOT and trunk, with no effect at all. As the log 
shows, my SessionWindow's endMs keeps increasing, which is expected, but I 
think it shouldn't be printed out. Is my understanding correct? The A and B 
windows' info was printed at the same time. I will try to read the source code.

> KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Broker on a Linux, stream app on my win10 laptop. 
> I add one row log.message.timestamp.type=LogAppendTime to my broker's 
> server.properties. stream app all default config.
>Reporter: Xiaolin Jia
>Priority: Blocker
>
> I wrote a simple streams app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results], 
> but I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31] 
> record from a single session window.
> Time ticker A sends (4, A) every 25 s and time ticker B sends (4, B) every 
> 25 s, both to the same topic. 
> Below is my stream app code:
> {code:java}
> kstreams[0]
>     .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
>     .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
>     .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
>     .count()
>     .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
>     .toStream()
>     .peek((k, v) -> log.info("window={},k={},v={}", k.window(), k.key(), v));
> {code}
> Here is my log output:
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams 

[jira] [Resolved] (KAFKA-8287) JVM global map to fence duplicate client id

2019-04-25 Thread Boyang Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen resolved KAFKA-8287.

Resolution: Invalid

We don't want to do this for now because there are existing use cases where 
`client.id` is intentionally duplicated across different stream instances for 
request-throttling purposes.

> JVM global map to fence duplicate client id
> ---
>
> Key: KAFKA-8287
> URL: https://issues.apache.org/jira/browse/KAFKA-8287
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> After the change in https://issues.apache.org/jira/browse/KAFKA-8285, two 
> stream instances scheduled on the same JVM will affect each other if they 
> accidentally use the same client.id, since the thread-id is now local. 
> The proposed solution is a JVM-global concurrent map that resolves the 
> conflict when two threads happen to use the same client.id.
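The idea described above could be sketched with a JVM-wide {{ConcurrentHashMap}}. This is only a hypothetical illustration of the fencing; none of these class or method names exist in Kafka:

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a process-wide registry that fences duplicate
// client.id values across KafkaStreams instances in the same JVM.
final class ClientIdRegistry {
    private static final ConcurrentHashMap<String, Boolean> IN_USE =
            new ConcurrentHashMap<>();

    // Returns true if the id was free and is now claimed by the caller.
    static boolean claim(String clientId) {
        return IN_USE.putIfAbsent(clientId, Boolean.TRUE) == null;
    }

    // Called on instance close so the id can be reused.
    static void release(String clientId) {
        IN_USE.remove(clientId);
    }
}
```

As the resolution above notes, such fencing would break the legitimate pattern of sharing one client.id across instances for request throttling, which is why the ticket was closed as Invalid.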



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8288) KStream.through consumer does not use custom TimestampExtractor

2019-04-25 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826538#comment-16826538
 ] 

Matthias J. Sax commented on KAFKA-8288:


No problem :)

I actually opened a PR to improve the JavaDocs: 
[https://github.com/apache/kafka/pull/6639]

> KStream.through consumer does not use custom TimestampExtractor
> ---
>
> Key: KAFKA-8288
> URL: https://issues.apache.org/jira/browse/KAFKA-8288
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
>Reporter: Maarten
>Priority: Minor
>
> The Kafka consumer created by {{KStream.through}} does not seem to be using 
> the custom TimestampExtractor set in Kafka Streams properties.
> The documentation of {{through}} states the following
> {code:java}
> ...
> This is equivalent to calling to(someTopic, Produced.with(keySerde, 
> valueSerde)) and StreamsBuilder#stream(someTopicName, Consumed.with(keySerde, 
> valueSerde)).
> {code}
> However when I use the pattern above, the custom TimestampExtractor _is_ 
> called.
> I have verified that the streams app is reading from the specified topic and 
> that the timestamp extractor is called for other topics.





[jira] [Commented] (KAFKA-7903) Replace OffsetCommit request/response with automated protocol

2019-04-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826536#comment-16826536
 ] 

ASF GitHub Bot commented on KAFKA-7903:
---

cmccabe commented on pull request #6583: KAFKA-7903 : use automated protocol 
for offset commit request
URL: https://github.com/apache/kafka/pull/6583
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Replace OffsetCommit request/response with automated protocol
> -
>
> Key: KAFKA-7903
> URL: https://issues.apache.org/jira/browse/KAFKA-7903
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>






[jira] [Commented] (KAFKA-8282) Missing JMX bandwidth quota metrics for Produce and Fetch

2019-04-25 Thread James Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826524#comment-16826524
 ] 

James Cheng commented on KAFKA-8282:


In 1.0.0, the behavior was changed such that these metrics are only collected 
if client quotas are enabled.

The change was made in https://issues.apache.org/jira/browse/KAFKA-5402

[~rsivaram] mentioned that if you want these metrics, but don't want to enforce 
quotas, you can set your quota to something really high (she recommends 
Long.MAX_VALUE - 1). 
https://issues.apache.org/jira/browse/KAFKA-5402?focusedCommentId=16044100=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16044100

Note, it can't be Long.MAX_VALUE, because the code actually treats that as 
"disable quotas". See 
[https://github.com/apache/kafka/pull/3303/files#diff-ccd0deee5adb38987e4f009b749fd11cR141]
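Concretely, the workaround could look like the following. This is a sketch only: the entity flags and ZooKeeper address depend on your deployment, and it assumes 2.1.x-era tooling, where client quotas are altered via ZooKeeper.

```shell
# Sketch: enable client quotas at a value that never throttles, so the
# Produce/Fetch bandwidth quota metrics get registered again.
# 9223372036854775806 = Long.MAX_VALUE - 1; MAX_VALUE itself means "no quota".
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type clients --entity-default \
  --add-config 'producer_byte_rate=9223372036854775806,consumer_byte_rate=9223372036854775806'
```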

 

 

 

> Missing JMX bandwidth quota metrics for Produce and Fetch
> -
>
> Key: KAFKA-8282
> URL: https://issues.apache.org/jira/browse/KAFKA-8282
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: JMVM
>Priority: Major
> Attachments: Screen Shot 2019-04-23 at 20.59.21.png
>
>
> Recently I performed several *rolling upgrades following the official steps* for 
> our Kafka brokers *from 0.11.0.1 to newer versions in different 
> environments*, and in all cases everything apparently worked fine from a 
> functional point of view: *producers and consumers working as expected*. 
> Specifically, I upgraded:
>  # From *0.11.0.1 to 1.0.0*, then from *1.0.0 to 2.0.0*, and then *to 2.1.1*
>  # *From 0.11.0.1 directly to 2.1.1*
> However, in all cases the *JMX bandwidth quota metrics for Fetch and Produce*, 
> which used to show *all producers and consumers working with the brokers*, are 
> gone; only queue-size is shown in our *JMX monitoring client, specifically 
> Introscope Wily*, with the *same configuration kept* (see attached image).
> In fact, I removed the Wily filter configuration for JMX in order to *show all 
> possible metrics, and both Fetch and Produce are still gone*.
> Note that I checked for the proper version after the rolling upgrade, for 
> example for *2.1.1*, and it is as expected:
> *ll /opt/kafka/libs/*
> *total 54032*
> *-rw-r--r-- 1 kafka kafka    69409 Jan  4 08:42 activation-1.1.1.jar*
> *-rw-r--r-- 1 kafka kafka    14768 Jan  4 08:42 
> aopalliance-repackaged-2.5.0-b42.jar*
> *-rw-r--r-- 1 kafka kafka    90347 Jan  4 08:42 argparse4j-0.7.0.jar*
> *-rw-r--r-- 1 kafka kafka    20437 Jan  4 08:40 
> audience-annotations-0.5.0.jar*
> *-rw-r--r-- 1 kafka kafka   501879 Jan  4 08:43 commons-lang3-3.8.1.jar*
> *-rw-r--r-- 1 kafka kafka    96801 Feb  8 18:32 connect-api-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka    18265 Feb  8 18:32 
> connect-basic-auth-extension-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka    20509 Feb  8 18:32 connect-file-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka    45489 Feb  8 18:32 connect-json-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka   466588 Feb  8 18:32 connect-runtime-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka    90358 Feb  8 18:32 connect-transforms-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka  2442625 Jan  4 08:43 guava-20.0.jar*
> *-rw-r--r-- 1 kafka kafka   186763 Jan  4 08:42 hk2-api-2.5.0-b42.jar*
> *-rw-r--r-- 1 kafka kafka   189454 Jan  4 08:42 hk2-locator-2.5.0-b42.jar*
> *-rw-r--r-- 1 kafka kafka   135317 Jan  4 08:42 hk2-utils-2.5.0-b42.jar*
> *-rw-r--r-- 1 kafka kafka    66894 Jan 11 21:28 jackson-annotations-2.9.8.jar*
> *-rw-r--r-- 1 kafka kafka   325619 Jan 11 21:27 jackson-core-2.9.8.jar*
> *-rw-r--r-- 1 kafka kafka  1347236 Jan 11 21:27 jackson-databind-2.9.8.jar*
> *-rw-r--r-- 1 kafka kafka    32373 Jan 11 21:28 jackson-jaxrs-base-2.9.8.jar*
> *-rw-r--r-- 1 kafka kafka    15861 Jan 11 21:28 
> jackson-jaxrs-json-provider-2.9.8.jar*
> *-rw-r--r-- 1 kafka kafka    32627 Jan 11 21:28 
> jackson-module-jaxb-annotations-2.9.8.jar*
> *-rw-r--r-- 1 kafka kafka   737884 Jan  4 08:43 javassist-3.22.0-CR2.jar*
> *-rw-r--r-- 1 kafka kafka    26366 Jan  4 08:42 javax.annotation-api-1.2.jar*
> *-rw-r--r-- 1 kafka kafka 2497 Jan  4 08:42 javax.inject-1.jar*
> *-rw-r--r-- 1 kafka kafka 5951 Jan  4 08:42 javax.inject-2.5.0-b42.jar*
> *-rw-r--r-- 1 kafka kafka    95806 Jan  4 08:42 javax.servlet-api-3.1.0.jar*
> *-rw-r--r-- 1 kafka kafka   126898 Jan  4 08:42 javax.ws.rs-api-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka   127509 Jan  4 08:42 javax.ws.rs-api-2.1.jar*
> *-rw-r--r-- 1 kafka kafka   125632 Jan  4 08:42 jaxb-api-2.3.0.jar*
> *-rw-r--r-- 1 kafka kafka   181563 Jan  4 08:42 jersey-client-2.27.jar*
> *-rw-r--r-- 1 kafka kafka  1140395 Jan  4 08:43 jersey-common-2.27.jar*
> *-rw-r--r-- 1 kafka kafka    18085 Jan  4 08:42 
> jersey-container-servlet-2.27.jar*
> *-rw-r--r-- 1 kafka kafka    59332 Jan  4 08:42 
> 

[jira] [Commented] (KAFKA-5998) /.checkpoint.tmp Not found exception

2019-04-25 Thread Patrik Kleindl (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826493#comment-16826493
 ] 

Patrik Kleindl commented on KAFKA-5998:
---

Found a new log, again starting with the message after the state-cleaner ran.

Filtered on task 1_1, there was no rebalance or anything in the time from 19:30 
to 21:03

April 25th 2019, 21:07:51.658 2019-04-25 21:07:51,658 WARN 
[org.apache.kafka.streams.processor.internals.ProcessorStateManager] 
(application-fde85da6-9d2f-4457-8bdb-ea1c78c8c1e2-StreamThread-1) - 
[short-component-name:; transaction-id:; user-id:; creation-time:] task [1_1] 
Failed to write offset checkpoint file to 
/opt/app/wildfly/standalone/tmp/application-streamapp.v1/1_1/.checkpoint: {}: 
java.io.FileNotFoundException: 
/opt/app/wildfly/standalone/tmp/application-streamapp.v1/1_1/.checkpoint.tmp 
(No such file or directory)
 at java.io.FileOutputStream.open0(Native Method)
 at java.io.FileOutputStream.open(FileOutputStream.java:270)

April 25th 2019, 21:03:49.332 2019-04-25 21:03:49,332 INFO 
[org.apache.kafka.streams.processor.internals.StateDirectory] 
(application-fde85da6-9d2f-4457-8bdb-ea1c78c8c1e2-CleanupThread) - 
[short-component-name:; transaction-id:; user-id:; creation-time:] 
stream-thread [application-fde85da6-9d2f-4457-8bdb-ea1c78c8c1e2-CleanupThread] 
Deleting obsolete state directory 1_1 for task 1_1 as 813332ms has elapsed 
(cleanup delay is 60ms).

April 25th 2019, 19:30:52.902 2019-04-25 19:30:52,902 INFO 
[org.apache.kafka.streams.processor.internals.StreamThread] 
(application-fde85da6-9d2f-4457-8bdb-ea1c78c8c1e2-StreamThread-1) - 
[short-component-name:; transaction-id:; user-id:; creation-time:] 
stream-thread [application-fde85da6-9d2f-4457-8bdb-ea1c78c8c1e2-StreamThread-1] 
partition assignment took 80 ms.
 current active tasks: [1_0, 0_1, 1_1, 0_3, 2_1, 1_3, 1_4, 0_5, 2_3, 1_5, 1_6, 
0_7, 2_5, 0_11, 2_9, 1_11, 2_10, 2_11]
 current standby tasks: []
 previous active tasks: [1_0, 0_1, 1_1, 0_3, 2_1, 1_3, 1_4, 2_3, 0_5, 1_5, 1_6, 
0_7, 2_5, 0_11, 2_9, 1_11, 2_10, 2_11]

April 25th 2019, 19:30:52.713 2019-04-25 19:30:52,713 INFO 
[org.apache.kafka.streams.processor.internals.StreamThread] 
(application-fde85da6-9d2f-4457-8bdb-ea1c78c8c1e2-StreamThread-1) - 
[short-component-name:; transaction-id:; user-id:; creation-time:] 
stream-thread [application-fde85da6-9d2f-4457-8bdb-ea1c78c8c1e2-StreamThread-1] 
partition revocation took 764 ms.
 suspended active tasks: [1_0, 0_1, 1_1, 0_3, 2_1, 1_3, 1_4, 2_3, 0_5, 1_5, 
1_6, 0_7, 2_5, 0_11, 2_9, 1_11, 2_10, 2_11]
 suspended standby tasks: []

April 25th 2019, 19:30:39.144 2019-04-25 19:30:39,144 INFO 
[org.apache.kafka.streams.processor.internals.StreamThread] 
(application-fde85da6-9d2f-4457-8bdb-ea1c78c8c1e2-StreamThread-1) - 
[short-component-name:; transaction-id:; user-id:; creation-time:] 
stream-thread [application-fde85da6-9d2f-4457-8bdb-ea1c78c8c1e2-StreamThread-1] 
partition assignment took 29 ms.
 current active tasks: [1_0, 0_1, 1_1, 0_3, 2_1, 1_3, 1_4, 2_3, 0_5, 1_5, 1_6, 
0_7, 2_5, 0_11, 2_9, 1_11, 2_10, 2_11]
 current standby tasks: []
 previous active tasks: [1_0, 1_1, 1_3, 1_4, 2_3, 1_5, 1_6, 0_7, 2_5]

April 25th 2019, 19:30:29.619 2019-04-25 19:30:29,619 INFO 
[org.apache.kafka.streams.processor.internals.StreamThread] 
(application-fde85da6-9d2f-4457-8bdb-ea1c78c8c1e2-StreamThread-1) - 
[short-component-name:; transaction-id:; user-id:; creation-time:] 
stream-thread [application-fde85da6-9d2f-4457-8bdb-ea1c78c8c1e2-StreamThread-1] 
partition revocation took 2254 ms.
 suspended active tasks: [1_0, 1_1, 1_3, 1_4, 2_3, 1_5, 1_6, 0_7, 2_5]
 suspended standby tasks: []

April 25th 2019, 19:30:17.158 2019-04-25 19:30:17,158 INFO 
[org.apache.kafka.streams.processor.internals.StreamThread] 
(application-fde85da6-9d2f-4457-8bdb-ea1c78c8c1e2-StreamThread-1) - 
[short-component-name:; transaction-id:; user-id:; creation-time:] 
stream-thread [application-fde85da6-9d2f-4457-8bdb-ea1c78c8c1e2-StreamThread-1] 
partition assignment took 935 ms.
 current active tasks: [0_0, 1_0, 0_1, 1_1, 2_0, 1_2, 0_3, 2_1, 1_3, 2_2, 1_4, 
0_5, 2_3, 1_5, 0_6, 1_6, 0_7, 2_5]
 current standby tasks: []
 previous active tasks: [0_0, 1_0, 0_1, 1_1, 2_0, 1_2, 0_3, 2_1, 1_3, 2_2, 1_4, 
0_5, 2_3, 1_5, 0_6, 1_6, 0_7, 2_5, 1_7, 0_8, 2_6, 1_8, 0_9, 2_7, 1_9, 2_8, 
1_10, 0_11, 2_9, 1_11, 2_10, 2_11]
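The failure sequence in these logs (the CleanupThread deletes task directory 1_1 at 21:03, then the stream thread tries to write {{.checkpoint.tmp}} inside it at 21:07) reproduces in isolation with plain {{java.io}}. This sketch only illustrates the mechanism; it is not Kafka's actual OffsetCheckpoint code:

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

final class CheckpointRepro {
    // Writing a tmp file inside a directory deleted out from under us fails
    // exactly like the WARN above: FileNotFoundException, "No such file or
    // directory", because FileOutputStream cannot create the missing parent.
    static boolean writeFailsAfterDirDeleted() {
        try {
            Path taskDir = Files.createTempDirectory("task-1_1-");
            Files.delete(taskDir); // plays the role of the state cleaner
            File tmp = new File(taskDir.toFile(), ".checkpoint.tmp");
            try (FileOutputStream out = new FileOutputStream(tmp)) {
                out.write(0); // never reached
                return false;
            } catch (FileNotFoundException expected) {
                return true;
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

This supports the reading that the checkpoint write and the directory cleanup are racing on the same task directory.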

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1, 2.1.1
>Reporter: Yogesh BG
>Priority: Critical
> Attachments: 5998.v1.txt, 5998.v2.txt, Topology.txt, exc.txt, 
> props.txt, streams.txt
>
>
> I have one kafka broker and one kafka stream running... I am 

[jira] [Assigned] (KAFKA-6498) Add RocksDB statistics via Streams metrics

2019-04-25 Thread Bruno Cadonna (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna reassigned KAFKA-6498:


Assignee: Bruno Cadonna  (was: james chien)

> Add RocksDB statistics via Streams metrics
> --
>
> Key: KAFKA-6498
> URL: https://issues.apache.org/jira/browse/KAFKA-6498
> Project: Kafka
>  Issue Type: Improvement
>  Components: metrics, streams
>Reporter: Guozhang Wang
>Assignee: Bruno Cadonna
>Priority: Major
>  Labels: needs-kip
>
> RocksDB's own stats can be programmatically exposed via 
> {{Options.statistics()}} and the JNI `Statistics` has indeed implemented many 
> useful settings already. However these stats are not exposed directly via 
> Streams today and hence for any users who wants to get access to them they 
> have to manually interact with the underlying RocksDB directly, not through 
> Streams.
> We should expose such stats via Streams metrics programmatically for users to 
> investigate them without trying to access the rocksDB directly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8291) System test consumer_test.py failed on trunk

2019-04-25 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-8291.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

> System test consumer_test.py failed on trunk
> 
>
> Key: KAFKA-8291
> URL: https://issues.apache.org/jira/browse/KAFKA-8291
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
> Fix For: 2.3.0
>
>
> Looks like trunk is failing as for now 
> [https://jenkins.confluent.io/job/system-test-kafka-branch-builder/2537/]
> Potentially due to this PR: 
> [https://github.com/apache/kafka/commit/409fabc5610443f36574bdea2e2994b6c20e2829]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8292) Add support for --version parameter to command line tools

2019-04-25 Thread JIRA
Sönke Liebau created KAFKA-8292:
---

 Summary: Add support for --version parameter to command line tools
 Key: KAFKA-8292
 URL: https://issues.apache.org/jira/browse/KAFKA-8292
 Project: Kafka
  Issue Type: Improvement
Reporter: Sönke Liebau


During the implementation of 
[KAFKA-8131|https://issues.apache.org/jira/browse/KAFKA-8131] we noticed that 
command line tools implement parsing of parameters in different ways.
For most of the tools the --version parameter was correctly implemented in that 
issue; for the following tools this still remains to be done:
* ConnectDistributed
* ConnectStandalone
* ProducerPerformance
* VerifiableConsumer
* VerifiableProducer
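The requested behavior can be sketched in a few lines; this is an illustrative stand-alone model, not the actual tool code (the real tools read the version string from build metadata via AppInfoParser.getVersion()):

```java
public class VersionFlagSketch {
    // Hypothetical version string; real tools obtain it from build metadata.
    static final String VERSION = "x.y.z";

    // --version should be recognized before any other argument parsing,
    // mirroring the tools already fixed in KAFKA-8131.
    static boolean isVersionRequest(String[] args) {
        for (String arg : args) {
            if (arg.equals("--version")) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isVersionRequest(new String[]{"--version"}));
        System.out.println(isVersionRequest(new String[]{"--help"}));
    }
}
```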



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7866) Duplicate offsets after transaction index append failure

2019-04-25 Thread Andrew Olson (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826431#comment-16826431
 ] 

Andrew Olson commented on KAFKA-7866:
-

[~hachikuji] Do you have an estimated release date for 2.2.1?

> Duplicate offsets after transaction index append failure
> 
>
> Key: KAFKA-7866
> URL: https://issues.apache.org/jira/browse/KAFKA-7866
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.0.2, 2.1.2, 2.2.1
>
>
> We have encountered a situation in which an ABORT marker was written 
> successfully to the log, but failed to be written to the transaction index. 
> This prevented the log end offset from being incremented. This resulted in 
> duplicate offsets when the next append was attempted. The broker was using 
> JBOD and we would normally expect IOExceptions to cause the log directory to 
> be failed. That did not seem to happen here and the duplicates continued for 
> several hours.
> Unfortunately, we are not sure what the cause of the failure was. 
> Significantly, the first duplicate was also the first ABORT marker in the 
> log. Unlike the offset and timestamp index, the transaction index is created 
> on demand after the first aborted transaction. It is likely that the attempt 
> to create and open the transaction index failed. There is some suggestion 
> that the process may have bumped into the open file limit. Whatever the 
> problem was, it also prevented log collection, so we cannot confirm our 
> guesses. 
> Without knowing the underlying cause, we can still consider some potential 
> improvements:
> 1. We probably should be catching non-IO exceptions in the append process. If 
> the append to one of the indexes fails, we could truncate the log or 
> re-throw it as an IOException to ensure that the log directory is no longer 
> used.
> 2. Even without the unexpected exception, there is a small window during 
> which even an IOException could lead to duplicate offsets. Marking a log 
> directory offline is an asynchronous operation and there is no guarantee that 
> another append cannot happen first. Given this, we probably need to detect 
> and truncate duplicates during the log recovery process.
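Improvement (1) above can be sketched as wrapping any unexpected non-IO exception from an index append as an IOException, so the existing log-directory failure path always triggers. A minimal, hypothetical model (`IndexAppend` and `safeAppend` are illustrative names, not Kafka's actual internals):

```java
import java.io.IOException;

public class IndexAppendGuard {
    // Hypothetical index append that may throw an unexpected runtime error,
    // e.g. while lazily creating the transaction index file.
    interface IndexAppend {
        void append(long offset) throws IOException;
    }

    // Wrap non-IO exceptions as IOException so the caller's log-directory
    // failure handling (which reacts to IOException) is always reached.
    static void safeAppend(IndexAppend index, long offset) throws IOException {
        try {
            index.append(offset);
        } catch (IOException e) {
            throw e; // already handled by the log-dir failure path
        } catch (RuntimeException e) {
            throw new IOException("Unexpected error appending to index", e);
        }
    }

    public static void main(String[] args) {
        boolean caught = false;
        try {
            safeAppend(offset -> { throw new IllegalStateException("boom"); }, 42L);
        } catch (IOException e) {
            caught = true;
        }
        System.out.println(caught ? "wrapped-as-io" : "not-wrapped");
    }
}
```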





[jira] [Commented] (KAFKA-8288) KStream.through consumer does not use custom TimestampExtractor

2019-04-25 Thread Maarten (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826410#comment-16826410
 ] 

Maarten commented on KAFKA-8288:


That explains it, I was under the impression that through would produce and 
consume records to and from a non-internal topic and that this would be one of 
the use cases for it.

The docs read as if {{through()}} is just sugar for {{to()}} and 
{{stream()}}, but I can see that it makes sense to optimize this further. You 
can't summarize all behavior in a couple of lines, so I'm fine with the docs not 
being 100% accurate.

In retrospect I should have done a more thorough search of Jira, sorry for 
wasting your time.

> KStream.through consumer does not use custom TimestampExtractor
> ---
>
> Key: KAFKA-8288
> URL: https://issues.apache.org/jira/browse/KAFKA-8288
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
>Reporter: Maarten
>Priority: Minor
>
> The Kafka consumer created by {{KStream.through}} does not seem to be using 
> the custom TimestampExtractor set in Kafka Streams properties.
> The documentation of {{through}} states the following
> {code:java}
> ...
> This is equivalent to calling to(someTopic, Produced.with(keySerde, 
> valueSerde)) and StreamsBuilder#stream(someTopicName, Consumed.with(keySerde, 
> valueSerde)).
> {code}
> However when I use the pattern above, the custom TimestampExtractor _is_ 
> called.
> I have verified that the streams app is reading from the specified topic and 
> that the timestamp extractor is called for other topics.





[jira] [Resolved] (KAFKA-8288) KStream.through consumer does not use custom TimestampExtractor

2019-04-25 Thread Maarten (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maarten resolved KAFKA-8288.

Resolution: Invalid

> KStream.through consumer does not use custom TimestampExtractor
> ---
>
> Key: KAFKA-8288
> URL: https://issues.apache.org/jira/browse/KAFKA-8288
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
>Reporter: Maarten
>Priority: Minor
>
> The Kafka consumer created by {{KStream.through}} does not seem to be using 
> the custom TimestampExtractor set in Kafka Streams properties.
> The documentation of {{through}} states the following
> {code:java}
> ...
> This is equivalent to calling to(someTopic, Produced.with(keySerde, 
> valueSerde)) and StreamsBuilder#stream(someTopicName, Consumed.with(keySerde, 
> valueSerde)).
> {code}
> However when I use the pattern above, the custom TimestampExtractor _is_ 
> called.
> I have verified that the streams app is reading from the specified topic and 
> that the timestamp extractor is called for other topics.





[jira] [Commented] (KAFKA-8051) remove KafkaMbean when network close

2019-04-25 Thread Andrew Olson (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826387#comment-16826387
 ] 

Andrew Olson commented on KAFKA-8051:
-

[~monty] It looks like you created 5 duplicate issues, KAFKA-8047 through 
KAFKA-8051.



> remove KafkaMbean when network close
> 
>
> Key: KAFKA-8051
> URL: https://issues.apache.org/jira/browse/KAFKA-8051
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: limeng
>Priority: Critical
> Fix For: 2.2.1
>
>
> the broker server will run out of memory when 
>  * a large number of clients frequently close and reconnect
>  * the clientId changes on every reconnect, which gives rise to too many 
> KafkaMbeans in the broker
> the reason is that the broker forgets to remove the KafkaMbean when it 
> detects that the connection has closed.





[jira] [Updated] (KAFKA-8291) System test consumer_test.py failed on trunk

2019-04-25 Thread Boyang Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen updated KAFKA-8291:
---
Description: 
Looks like trunk is failing as for now 
[https://jenkins.confluent.io/job/system-test-kafka-branch-builder/2537/]

Potentially due to this PR: 
[https://github.com/apache/kafka/commit/409fabc5610443f36574bdea2e2994b6c20e2829]

  was:Looks like trunk is failing as for now 
[https://jenkins.confluent.io/job/system-test-kafka-branch-builder/2537/]


> System test consumer_test.py failed on trunk
> 
>
> Key: KAFKA-8291
> URL: https://issues.apache.org/jira/browse/KAFKA-8291
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> Looks like trunk is failing as for now 
> [https://jenkins.confluent.io/job/system-test-kafka-branch-builder/2537/]
> Potentially due to this PR: 
> [https://github.com/apache/kafka/commit/409fabc5610443f36574bdea2e2994b6c20e2829]





[jira] [Commented] (KAFKA-8291) System test consumer_test.py failed on trunk

2019-04-25 Thread Boyang Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826341#comment-16826341
 ] 

Boyang Chen commented on KAFKA-8291:


One learning: consider using the open source version by default when triggering 
a build: 
[https://jenkins.confluent.io/job/system-test-kafka-branch-builder/build?delay=0sec]

> System test consumer_test.py failed on trunk
> 
>
> Key: KAFKA-8291
> URL: https://issues.apache.org/jira/browse/KAFKA-8291
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> Looks like trunk is failing as for now 
> [https://jenkins.confluent.io/job/system-test-kafka-branch-builder/2537/]





[jira] [Commented] (KAFKA-8290) Streams Not Closing Fenced Producer On Task Close

2019-04-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826326#comment-16826326
 ] 

ASF GitHub Bot commented on KAFKA-8290:
---

bbejeck commented on pull request #6636: KAFKA-8290: Close producer for zombie 
task
URL: https://github.com/apache/kafka/pull/6636
 
 
   When we close a task and EOS is enabled, we should always close the producer 
regardless of whether the task is in a zombie state (the broker fenced the 
producer) or not.
   
   I've added tests that fail without this change.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Streams Not Closing Fenced Producer On Task Close
> -
>
> Key: KAFKA-8290
> URL: https://issues.apache.org/jira/browse/KAFKA-8290
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
>
> When a producer is fenced during processing a rebalance is triggered and 
> Kafka Streams closes the (zombie) task, but not the producer.  When EOS is 
> enabled and we close a task the producer should always be closed regardless 
> if it was fenced or not.
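The intended fix can be sketched with a toy, stdlib-only model (the class and method names are illustrative, not the actual Streams internals): under EOS, the producer is closed unconditionally on task close, whether or not the broker fenced it.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class TaskCloseSketch {
    // Toy stand-in for a producer whose close() must always run under EOS.
    static class Producer {
        final AtomicBoolean closed = new AtomicBoolean(false);
        void close() { closed.set(true); }
    }

    // Toy task: the bug was that a zombie (fenced) task skipped producer.close();
    // the fix closes the producer unconditionally when EOS is enabled.
    static class Task {
        final Producer producer = new Producer();
        void close(boolean eosEnabled, boolean zombie) {
            // ... suspend state stores, flush, etc. ...
            if (eosEnabled) {
                producer.close(); // always close, fenced or not
            }
        }
    }

    public static void main(String[] args) {
        Task zombieTask = new Task();
        zombieTask.close(true, true); // broker fenced the producer
        System.out.println("producerClosed=" + zombieTask.producer.closed.get());
    }
}
```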





[jira] [Updated] (KAFKA-8291) System test consumer_test.py failed on trunk

2019-04-25 Thread Boyang Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen updated KAFKA-8291:
---
Component/s: core
 consumer

> System test consumer_test.py failed on trunk
> 
>
> Key: KAFKA-8291
> URL: https://issues.apache.org/jira/browse/KAFKA-8291
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> Looks like trunk is failing as for now 
> [https://jenkins.confluent.io/job/system-test-kafka-branch-builder/2537/]





[jira] [Updated] (KAFKA-8290) Streams Not Closing Fenced Producer On Task Close

2019-04-25 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8290:
---
Description: When a producer is fenced during processing a rebalance is 
triggered and Kafka Streams closes the (zombie) task, but not the producer.  
When EOS is enabled and we close a task the producer should always be closed 
regardless if it was fenced or not.  (was: When a producer is fenced during 
processing and a rebalance is triggered, Kafka Streams closes the(zombie) task 
closed, but the producer is not.  When EOS is enabled and we close a task the 
producer should always be closed regardless if it was fenced or not.)

> Streams Not Closing Fenced Producer On Task Close
> -
>
> Key: KAFKA-8290
> URL: https://issues.apache.org/jira/browse/KAFKA-8290
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
>
> When a producer is fenced during processing a rebalance is triggered and 
> Kafka Streams closes the (zombie) task, but not the producer.  When EOS is 
> enabled and we close a task the producer should always be closed regardless 
> if it was fenced or not.





[jira] [Updated] (KAFKA-8290) Streams Not Closing Fenced Producer On Task Close

2019-04-25 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8290:
---
Description: When a producer is fenced during processing and a rebalance is 
triggered, Kafka Streams closes the(zombie) task closed, but the producer is 
not.  When EOS is enabled and we close a task the producer should always be 
closed regardless if it was fenced or not.  (was: When a producer is fenced 
during processing and a rebalance is triggered, streams closes the(zombie) task 
but the producer is not.  When EOS is enabled and we close a task the producer 
should always be closed regardless if it was fenced or not.)

> Streams Not Closing Fenced Producer On Task Close
> -
>
> Key: KAFKA-8290
> URL: https://issues.apache.org/jira/browse/KAFKA-8290
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
>
> When a producer is fenced during processing and a rebalance is triggered, 
> Kafka Streams closes the(zombie) task closed, but the producer is not.  When 
> EOS is enabled and we close a task the producer should always be closed 
> regardless if it was fenced or not.





[jira] [Created] (KAFKA-8291) System test consumer_test.py failed on trunk

2019-04-25 Thread Boyang Chen (JIRA)
Boyang Chen created KAFKA-8291:
--

 Summary: System test consumer_test.py failed on trunk
 Key: KAFKA-8291
 URL: https://issues.apache.org/jira/browse/KAFKA-8291
 Project: Kafka
  Issue Type: Bug
Reporter: Boyang Chen
Assignee: Boyang Chen


Looks like trunk is failing as for now 
[https://jenkins.confluent.io/job/system-test-kafka-branch-builder/2537/]





[jira] [Updated] (KAFKA-8290) Streams Not Closing Fenced Producer On Task Close

2019-04-25 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8290:
---
Description: When a producer is fenced during processing and a rebalance is 
triggered, streams closes the(zombie) task but the producer is not.  When EOS 
is enabled and we close a task the producer should always be closed regardless 
if it was fenced or not.  (was: When a producer is fenced during processing and 
a rebalance is triggered, the task closed, but the (zombie) producer is not.  
When EOS is enabled and we close a task the producer should always be closed 
regardless if it was fenced or not.)

> Streams Not Closing Fenced Producer On Task Close
> -
>
> Key: KAFKA-8290
> URL: https://issues.apache.org/jira/browse/KAFKA-8290
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
>
> When a producer is fenced during processing and a rebalance is triggered, 
> streams closes the(zombie) task but the producer is not.  When EOS is enabled 
> and we close a task the producer should always be closed regardless if it was 
> fenced or not.





[jira] [Created] (KAFKA-8290) Streams Not Closing Fenced Producer On Task Close

2019-04-25 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-8290:
--

 Summary: Streams Not Closing Fenced Producer On Task Close
 Key: KAFKA-8290
 URL: https://issues.apache.org/jira/browse/KAFKA-8290
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Bill Bejeck
Assignee: Bill Bejeck


When a producer is fenced during processing and a rebalance is triggered, the 
task closed, but the (zombie) producer is not.  When EOS is enabled and we 
close a task the producer should always be closed regardless if it was fenced 
or not.





[jira] [Commented] (KAFKA-7631) NullPointerException when SCRAM is allowed bu ScramLoginModule is not in broker's jaas.conf

2019-04-25 Thread koert kuipers (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826229#comment-16826229
 ] 

koert kuipers commented on KAFKA-7631:
--

I think I ran into this. Brokers are Kafka 2.2.0.

My brokers use GSSAPI/Kerberos, but I also have SCRAM enabled for clients 
that use delegation tokens:
 sasl.mechanism.inter.broker.protocol=GSSAPI
 sasl.enabled.mechanisms=GSSAPI,SCRAM-SHA-256,SCRAM-SHA-512

My jaas.conf for the brokers has com.sun.security.auth.module.Krb5LoginModule for 
KafkaClient.

The Kafka server log shows:
{code}
[2019-04-25 12:23:48,108] WARN [SocketServer brokerId=xx] Unexpected error from 
/x.x.x.x; closing connection (org.apache.kafka.common.network.Selector)
java.lang.NullPointerException
    at 
org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:450)
    at 
org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:290)
    at 
org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:173)
    at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:536)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:472)
    at kafka.network.Processor.poll(SocketServer.scala:830)
    at kafka.network.Processor.run(SocketServer.scala:730)
    at java.lang.Thread.run(Thread.java:748)
 {code}
My client is a Spark Structured Streaming driver, which in Spark 3 has Kafka 
delegation token support, which is what I am testing. I see here:
{code}
2019-04-25 12:23:48 DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator: Set 
SASL client state to SEND_HANDSHAKE_REQUEST
2019-04-25 12:23:48 DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator: Set 
SASL client state to RECEIVE_HANDSHAKE_RESPONSE
2019-04-25 12:23:48 DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator: Set 
SASL client state to INITIAL
2019-04-25 12:23:48 DEBUG 
org.apache.kafka.common.security.scram.internals.ScramSaslClient: Setting 
SASL/SCRAM_SHA_512 client state to RECEIVE_SERVER_FIRST_MESSAGE
2019-04-25 12:23:48 DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator: Set 
SASL client state to INTERMEDIATE
2019-04-25 12:23:48 DEBUG org.apache.kafka.common.network.Selector: [Consumer 
clientId=x, groupId=x] Connection with x/x.x.x.x disconnected
java.io.EOFException
at 
org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96)
at 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.receiveResponseOrToken(SaslClientAuthenticator.java:407)
at 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.receiveKafkaResponse(SaslClientAuthenticator.java:497)
at 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.receiveToken(SaslClientAuthenticator.java:435)
at 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:259)
at 
org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:173)
at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:536)
at org.apache.kafka.common.network.Selector.poll(Selector.java:472)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:227)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:161)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:245)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:317)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1226)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1195)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1135)
at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.$anonfun$fetchLatestOffsets$2(KafkaOffsetReader.scala:217)
at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.$anonfun$withRetriesWithoutInterrupt$1(KafkaOffsetReader.scala:358)
at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at 
org.apache.spark.util.UninterruptibleThread.runUninterruptibly(UninterruptibleThread.scala:77)
at 

[jira] [Commented] (KAFKA-5998) /.checkpoint.tmp Not found exception

2019-04-25 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826188#comment-16826188
 ] 

Matthias J. Sax commented on KAFKA-5998:


In older releases, a checkpoint file is written for all tasks. If there is no 
state, the file would just be empty. It's fixed since the 2.0.0 release though (cf. 
https://issues.apache.org/jira/browse/KAFKA-6499) 
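For context, the checkpoint write in the stack traces below follows the usual write-to-tmp-then-rename pattern; the FileNotFoundException occurs when the parent task directory disappears before the .tmp file can be opened. A rough, stdlib-only sketch of that pattern (this models the idea only and is not the actual OffsetCheckpoint code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CheckpointWriteSketch {
    // Write to a .tmp sibling, then atomically rename over the real file.
    // Ensuring the parent directory exists guards against the case where the
    // task directory was removed out from under the writer.
    static void writeCheckpoint(Path checkpoint, String content) throws IOException {
        Files.createDirectories(checkpoint.getParent()); // guard against a deleted task dir
        Path tmp = checkpoint.resolveSibling(checkpoint.getFileName() + ".tmp");
        Files.write(tmp, content.getBytes());
        Files.move(tmp, checkpoint,
                StandardCopyOption.REPLACE_EXISTING,
                StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("ckpt-demo");
        Path checkpoint = dir.resolve("0_1").resolve(".checkpoint");
        writeCheckpoint(checkpoint, "0 1\n");
        // The checkpoint exists and the temporary file is gone after the rename.
        System.out.println(Files.exists(checkpoint)
                && !Files.exists(checkpoint.resolveSibling(".checkpoint.tmp")));
    }
}
```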

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1, 2.1.1
>Reporter: Yogesh BG
>Priority: Critical
> Attachments: 5998.v1.txt, 5998.v2.txt, Topology.txt, exc.txt, 
> props.txt, streams.txt
>
>
> I have one kafka broker and one kafka stream running... I am running its 
> since two days under load of around 2500 msgs per second.. On third day am 
> getting below exception for some of the partitions, I have 16 partitions only 
> 0_0 and 0_1 gives this error
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  

[jira] [Commented] (KAFKA-8287) JVM global map to fence duplicate client id

2019-04-25 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826183#comment-16826183
 ] 

Matthias J. Sax commented on KAFKA-8287:


Can you elaborate on this proposal? I am not sure it's Kafka Streams' 
responsibility to check that the configuration is correct. Also, it does not 
solve the problem across JVMs. Hence, I am wondering if we should address it at 
all. Thoughts?

> JVM global map to fence duplicate client id
> ---
>
> Key: KAFKA-8287
> URL: https://issues.apache.org/jira/browse/KAFKA-8287
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> After the change in https://issues.apache.org/jira/browse/KAFKA-8285, two 
> stream instances scheduled on the same JVM will affect each other if they 
> accidentally use the same client.id, since the thread-id is now local. 
> The solution is to build a JVM-global concurrent map to resolve the conflict 
> when two threads happen to have the same client.id.
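A minimal sketch of the proposed JVM-global map, using atomic putIfAbsent so two threads racing on the same id cannot both win (names like `ClientIdFence` and `register` are illustrative, not a proposed Streams API):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ClientIdFence {
    // JVM-global registry of client.ids in use. putIfAbsent is atomic, so
    // only one of two racing threads can reserve a given id.
    private static final ConcurrentMap<String, Boolean> IN_USE = new ConcurrentHashMap<>();

    // Returns true if the id was free and is now reserved for this instance.
    static boolean register(String clientId) {
        return IN_USE.putIfAbsent(clientId, Boolean.TRUE) == null;
    }

    // Release the id when the instance shuts down.
    static void unregister(String clientId) {
        IN_USE.remove(clientId);
    }

    public static void main(String[] args) {
        System.out.println(register("app-client-1")); // first instance wins
        System.out.println(register("app-client-1")); // duplicate is fenced
        unregister("app-client-1");
        System.out.println(register("app-client-1")); // free again after release
    }
}
```

As the comment above notes, this only fences duplicates within a single JVM; instances in different JVMs would still collide.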





[jira] [Updated] (KAFKA-8287) JVM global map to fence duplicate client id

2019-04-25 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8287:
---
Component/s: streams

> JVM global map to fence duplicate client id
> ---
>
> Key: KAFKA-8287
> URL: https://issues.apache.org/jira/browse/KAFKA-8287
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> After the change in https://issues.apache.org/jira/browse/KAFKA-8285, two 
> stream instances scheduled on the same JVM will affect each other if they 
> accidentally use the same client.id, since the thread-id is now local. 
> The solution is to build a JVM-global concurrent map to resolve the conflict 
> when two threads happen to have the same client.id.





[jira] [Updated] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder

2019-04-25 Thread Richard Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Yu updated KAFKA-3729:
--
Labels: api needs-kip newbie  (was: api newbie)

>  Auto-configure non-default SerDes passed alongside the topology builder
> 
>
> Key: KAFKA-3729
> URL: https://issues.apache.org/jira/browse/KAFKA-3729
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Fred Patton
>Assignee: Ted Yu
>Priority: Major
>  Labels: api, needs-kip, newbie
> Attachments: 3729.txt, 3729.v6.txt
>
>
> From Guozhang Wang:
>  "Only default serdes provided through configs are auto-configured today. But 
> we could auto-configure other serdes passed alongside the topology builder as 
> well."
> After the PR was merged, we realized that the current approach to implementing 
> this feature is actually not backward compatible. Thus, we need to revert 
> the commit for now to not break backward compatibility in the 2.3 release. After 
> some more thinking, it seems that this feature is actually more complicated 
> to get right than it seems on the surface and hence it would require a proper 
> KIP.
> The following issues are identified:
>  * in the new code, configure() would be called twice, one in user code (if 
> people don't rewrite existing applications) and later via Kafka Streams – the 
> second call could "reconfigure" the Serde and overwrite the correct 
> configuration from the first call done by the user
>  * if there are multiple Serdes using the same configuration parameter 
> names, it's only possible to specify this parameter name once in the global 
> StreamsConfig; hence, it's not possible for users to configure both Serdes 
> differently
>  * basically, the global StreamsConfig needs to contain all configuration 
> parameters over all used Serdes to make a potential second call to 
> `configure()` idempotent
> To address the issues, some ideas would be:
>  * pass in the configuration via the constructor and deprecate `configure()` 
> method
>  * add a new method `isConfigured()` that would allow skipping the second 
> configuration call within the Kafka Streams runtime
> There might be other ways to address this, and the different options should 
> be discussed on the KIP.
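The `isConfigured()` idea can be sketched as follows (a hypothetical illustration of the proposal, not an actual Kafka API; `TrackingSerde` is an invented name):

```java
import java.util.Map;

// Sketch of the proposed isConfigured() guard: the first configure() call
// (typically done by the user) wins, and the Streams runtime can check the
// flag to avoid overwriting that configuration with a second call.
class TrackingSerde {
    private boolean configured = false;
    private Map<String, ?> configs;

    void configure(Map<String, ?> configs) {
        if (!this.configured) {        // first call wins
            this.configs = configs;
            this.configured = true;
        }
    }

    // Proposed new method: lets the runtime detect an already-configured
    // Serde and skip the redundant configure() call.
    boolean isConfigured() {
        return configured;
    }

    Map<String, ?> configs() {
        return configs;
    }
}
```

The runtime side would then do something like `if (!serde.isConfigured()) serde.configure(globalConfigs);` instead of calling `configure()` unconditionally.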





[jira] [Updated] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder

2019-04-25 Thread Richard Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Yu updated KAFKA-3729:
--
Labels: api needs-dicussion needs-kip newbie  (was: api needs-kip newbie)

>  Auto-configure non-default SerDes passed alongside the topology builder
> 
>
> Key: KAFKA-3729
> URL: https://issues.apache.org/jira/browse/KAFKA-3729
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Fred Patton
>Assignee: Ted Yu
>Priority: Major
>  Labels: api, needs-dicussion, needs-kip, newbie
> Attachments: 3729.txt, 3729.v6.txt
>
>
> From Guozhang Wang:
>  "Only default serdes provided through configs are auto-configured today. But 
> we could auto-configure other serdes passed alongside the topology builder as 
> well."
> After the PR was merged, we realized that the current approach to implementing 
> this feature is actually not backward compatible. Thus, we need to revert 
> the commit for now to not break backward compatibility in the 2.3 release. After 
> some more thinking, it seems that this feature is actually more complicated 
> to get right than it seems on the surface and hence it requires a proper 
> KIP.
> The following issues are identified:
>  * in the new code, configure() would be called twice, one in user code (if 
> people don't rewrite existing applications) and later via Kafka Streams – the 
> second call could "reconfigure" the Serde and overwrite the correct 
> configuration from the first call done by the user
>  * if there are multiple Serdes using the same configuration parameter 
> names, it's only possible to specify this parameter name once in the global 
> StreamsConfig; hence, it's not possible for users to configure both Serdes 
> differently
>  * basically, the global StreamsConfig needs to contain all configuration 
> parameters over all used Serdes to make a potential second call to 
> `configure()` idempotent
> To address the issues, some ideas would be:
>  * pass in the configuration via the constructor and deprecate `configure()` 
> method
>  * add a new method `isConfigured()` that would allow skipping the second 
> configuration call within the Kafka Streams runtime
> There might be other ways to address this, and the different options should 
> be discussed on the KIP.





[jira] [Commented] (KAFKA-7895) Ktable supress operator emitting more than one record for the same key per window

2019-04-25 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826169#comment-16826169
 ] 

Matthias J. Sax commented on KAFKA-7895:


2.2.1 release was proposed today. Cf: 
[https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.2.1]

> Ktable supress operator emitting more than one record for the same key per 
> window
> -
>
> Key: KAFKA-7895
> URL: https://issues.apache.org/jira/browse/KAFKA-7895
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
>Reporter: prasanthi
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.1.2, 2.2.1
>
>
> Hi, we are using Kafka Streams to get the aggregated counts per vendor (key) 
> within a specified window.
> Here's how we configured the suppress operator to emit one final record per 
> key/window.
> {code:java}
> KTable<Windowed<Integer>, Long> windowedCount = groupedStream
>  .windowedBy(TimeWindows.of(Duration.ofMinutes(1)).grace(ofMillis(5L)))
>  .count(Materialized.with(Serdes.Integer(),Serdes.Long()))
>  .suppress(Suppressed.untilWindowCloses(unbounded()));
> {code}
> But we are getting more than one record for the same key/window as shown 
> below.
> {code:java}
> [KTABLE-TOSTREAM-10]: [131@154906704/154906710], 1039
> [KTABLE-TOSTREAM-10]: [131@154906704/154906710], 1162
> [KTABLE-TOSTREAM-10]: [9@154906704/154906710], 6584
> [KTABLE-TOSTREAM-10]: [88@154906704/154906710], 107
> [KTABLE-TOSTREAM-10]: [108@154906704/154906710], 315
> [KTABLE-TOSTREAM-10]: [119@154906704/154906710], 119
> [KTABLE-TOSTREAM-10]: [154@154906704/154906710], 746
> [KTABLE-TOSTREAM-10]: [154@154906704/154906710], 809{code}
> Could you please take a look?
> Thanks
>  
>  
> Added by John:
> Acceptance Criteria:
>  * add suppress to system tests, such that it's exercised with crash/shutdown 
> recovery, rebalance, etc.
>  ** [https://github.com/apache/kafka/pull/6278]
>  * make sure that there's some system test coverage with caching disabled.
>  ** Follow-on ticket: https://issues.apache.org/jira/browse/KAFKA-7943
>  * test with tighter time bounds with windows of say 30 seconds and use 
> system time without adding any extra time for verification
>  ** Follow-on ticket: https://issues.apache.org/jira/browse/KAFKA-7944





[jira] [Comment Edited] (KAFKA-8289) KTable<Windowed<String>, Long> can't be suppressed

2019-04-25 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826168#comment-16826168
 ] 

Matthias J. Sax edited comment on KAFKA-8289 at 4/25/19 3:33 PM:
-

To what extent does this ticket differ from KAFKA-7895? Can you elaborate? Could 
you also try out the fix for KAFKA-7895 to verify if it solves your problem?

Btw: the 2.2.1 release should be out soon. Cf. 
[https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.2.1]


was (Author: mjsax):
To what extent does this ticket differ from KAFKA-7895? Can you elaborate? Could 
you also try out the fix for KAFKA-7895 to verify if it solves your problem?

> KTable<Windowed<String>, Long> can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Broker on Linux; stream app on my Windows 10 laptop. 
> I added one line, log.message.timestamp.type=LogAppendTime, to my broker's 
> server.properties. The stream app uses all default configs.
>Reporter: Xiaolin Jia
>Priority: Blocker
>
> I wrote a simple stream app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
>  but I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
>  from a session window.
> Time ticker A sends (4, A) every 25 s and time ticker B sends (4, B) every 
> 25 s, both to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> Here is my log output:
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 

[jira] [Commented] (KAFKA-8289) KTable<Windowed<String>, Long> can't be suppressed

2019-04-25 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826168#comment-16826168
 ] 

Matthias J. Sax commented on KAFKA-8289:


To what extent does this ticket differ from KAFKA-7895? Can you elaborate? Could 
you also try out the fix for KAFKA-7895 to verify if it solves your problem?

> KTable<Windowed<String>, Long> can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Broker on Linux; stream app on my Windows 10 laptop. 
> I added one line, log.message.timestamp.type=LogAppendTime, to my broker's 
> server.properties. The stream app uses all default configs.
>Reporter: Xiaolin Jia
>Priority: Blocker
>
> I wrote a simple stream app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
>  but I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
>  from a session window.
> Time ticker A sends (4, A) every 25 s and time ticker B sends (4, B) every 
> 25 s, both to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> Here is my log output:
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107476398},k=B,v=11
> 2019-04-24 20:05:20.047  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107501398},k=B,v=12
> 2019-04-24 

[jira] [Comment Edited] (KAFKA-8289) KTable<Windowed<String>, Long> can't be suppressed

2019-04-25 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826168#comment-16826168
 ] 

Matthias J. Sax edited comment on KAFKA-8289 at 4/25/19 3:32 PM:
-

To what extent does this ticket differ from KAFKA-7895? Can you elaborate? Could 
you also try out the fix for KAFKA-7895 to verify if it solves your problem?


was (Author: mjsax):
To what extent does this ticket differ from KAFKA-7895? Can you elaborate? Could 
you also try out the fix for KAFKA-7895 to verify is it solves your problem?

> KTable<Windowed<String>, Long> can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Broker on Linux; stream app on my Windows 10 laptop. 
> I added one line, log.message.timestamp.type=LogAppendTime, to my broker's 
> server.properties. The stream app uses all default configs.
>Reporter: Xiaolin Jia
>Priority: Blocker
>
> I wrote a simple stream app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
>  but I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
>  from a session window.
> Time ticker A sends (4, A) every 25 s and time ticker B sends (4, B) every 
> 25 s, both to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> Here is my log output:
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : 

[jira] [Commented] (KAFKA-8288) KStream.through consumer does not use custom TimestampExtractor

2019-04-25 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826166#comment-16826166
 ] 

Matthias J. Sax commented on KAFKA-8288:


This behavior is by design. Compare: 
https://issues.apache.org/jira/browse/KAFKA-4785

I agree that the docs are a little bit misleading. Do you face a particular 
issue with this behavior, or are you just surprised by it because the docs 
are not 100% accurate?

> KStream.through consumer does not use custom TimestampExtractor
> ---
>
> Key: KAFKA-8288
> URL: https://issues.apache.org/jira/browse/KAFKA-8288
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
>Reporter: Maarten
>Priority: Minor
>
> The Kafka consumer created by {{KStream.through}} does not seem to be using 
> the custom TimestampExtractor set in Kafka Streams properties.
> The documentation of {{through}} states the following
> {code:java}
> ...
> This is equivalent to calling to(someTopic, Produced.with(keySerde, 
> valueSerde)) and StreamsBuilder#stream(someTopicName, Consumed.with(keySerde, 
> valueSerde)).
> {code}
> However, when I use the pattern above, the custom TimestampExtractor _is_ 
> called.
> I have verified that the streams app is reading from the specified topic and 
> that the timestamp extractor is called for other topics.





[jira] [Updated] (KAFKA-8289) KTable<Windowed<String>, Long> can't be suppressed

2019-04-25 Thread Xiaolin Jia (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Jia updated KAFKA-8289:
---
Description: 
I wrote a simple stream app following the official developer guide [Stream 
DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
 but I got more than one [Window Final 
Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
 from a session window.

Time ticker A sends (4, A) every 25 s and time ticker B sends (4, B) every 
25 s, both to the same topic.

Below is my stream app code:
{code:java}
kstreams[0]
.peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
.groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
.count()
.suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
.toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), k.key(), 
v));
{code}
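For intuition, the single final result per window that suppress(untilWindowCloses(...)) above should produce can be modeled in plain Java (a simplified sketch for illustration only, not the actual Kafka Streams buffer; it tracks a single key and identifies windows by their end timestamp):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal model of "emit final result only": hold the latest count per
// window and emit it exactly once, after stream time passes endMs + grace.
class SuppressBuffer {
    private final long graceMs;
    // windowEndMs -> latest count for that window (insertion-ordered)
    private final Map<Long, Long> buffer = new LinkedHashMap<>();

    SuppressBuffer(long graceMs) { this.graceMs = graceMs; }

    // Update the window's count, then flush every window whose close
    // time (endMs + grace) has passed the current stream time.
    List<String> update(long windowEndMs, long count, long streamTimeMs) {
        buffer.put(windowEndMs, count);
        List<String> finalResults = new ArrayList<>();
        buffer.entrySet().removeIf(e -> {
            if (e.getKey() + graceMs < streamTimeMs) {
                finalResults.add("endMs=" + e.getKey() + ",count=" + e.getValue());
                return true;  // emitted once, then dropped from the buffer
            }
            return false;
        });
        return finalResults;
    }
}
```

Under this model, intermediate updates to an open window produce nothing; the log above instead shows multiple emissions per window, which is what the ticket reports.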
Here is my log output:
{noformat}
2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556106587744, endMs=1556107129191},k=A,v=20
2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107426409},k=B,v=9
2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107445012, endMs=1556107445012},k=A,v=1
2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107451397},k=B,v=10
2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107445012, endMs=1556107469935},k=A,v=2
2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107476398},k=B,v=11
2019-04-24 20:05:20.047  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107501398},k=B,v=12
2019-04-24 20:05:26.075  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:05:44.598  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:05:50.399  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107445012, endMs=1556107519930},k=A,v=4
2019-04-24 20:05:50.400  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107526405},k=B,v=13
2019-04-24 20:05:51.067  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:06:09.595  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:06:16.089  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:06:20.765  INFO --- [-StreamThread-1] 

[jira] [Updated] (KAFKA-8289) KTable<Windowed<String>, Long> can't be suppressed

2019-04-25 Thread Xiaolin Jia (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Jia updated KAFKA-8289:
---
Description: 
I wrote a simple stream app following the official developer guide [Stream 
DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
 but I got more than one [Window Final 
Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
 from a session window.

Time ticker A sends (4, A) every 25 s and time ticker B sends (4, B) every 
25 s, both to the same topic.

Below is my stream app code:
{code:java}
kstreams[0]
.peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
.groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
.count()
.suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
.toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), k.key(), 
v));
{code}
Here is my log output:
{noformat}
2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556106587744, endMs=1556107129191},k=A,v=20
2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107426409},k=B,v=9
2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107445012, endMs=1556107445012},k=A,v=1
2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107451397},k=B,v=10
2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107445012, endMs=1556107469935},k=A,v=2
2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107476398},k=B,v=11
2019-04-24 20:05:20.047  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107501398},k=B,v=12
2019-04-24 20:05:26.075  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:05:44.598  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:05:50.399  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107445012, endMs=1556107519930},k=A,v=4
2019-04-24 20:05:50.400  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107526405},k=B,v=13
2019-04-24 20:05:51.067  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:06:09.595  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:06:16.089  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:06:20.765  INFO --- [-StreamThread-1] 

[jira] [Issue Comment Deleted] (KAFKA-8289) KTable<Windowed<String>, Long> can't be suppressed

2019-04-25 Thread Xiaolin Jia (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Jia updated KAFKA-8289:
---
Comment: was deleted

(was: Can a [SessionWindowedKStream] be suppressed after a count operation? It 
seems that a record of the latest type triggers a 'Window Final 
Results' record for the previous type. I just want to get exactly one [Window Final 
Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31].
 

When I started just one time ticker, the log output looked fine; once I started 
the second one, the window-info lines began to appear. )

> KTable<Windowed<String>, Long> can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Broker on Linux; stream app on my Windows 10 laptop. 
> I added one line, log.message.timestamp.type=LogAppendTime, to my broker's 
> server.properties. The stream app uses all default configs.
>Reporter: Xiaolin Jia
>Priority: Blocker
>
> I wrote a simple stream app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
>  but I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
>  from a session window.
> Time ticker A sends (4, A) every 25 s and time ticker B sends (4, B) every 
> 25 s, both to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> Here is my log output:
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : 

[jira] [Updated] (KAFKA-8289) KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed

2019-04-25 Thread Xiaolin Jia (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Jia updated KAFKA-8289:
---
Description: 
I wrote a simple stream app following the official developer guide [Stream 
DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
but I got more than one [Window Final 
Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
from a session time window.

Time ticker A sends (4, A) every 25s, and time ticker B sends (4, B) every 25s; 
both send to the same topic.

Below is my stream app code:
{code:java}
kstreams[0]
.peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
.groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
.count()
.suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
.toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), k.key(), 
v));
{code}
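(Editorial note, not part of the original report: the ever-growing endMs values in the log below are consistent with session-window semantics. A SessionWindows.with(gap) window merges every record that arrives within `gap` of the session's current end, extending endMs. The following is a minimal, non-Kafka sketch of that merge rule under that assumption; the class and method names are illustrative, not Kafka API.)

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of SessionWindows.with(gap) merging for a single key:
// a record within `gap` of the open session extends its endMs,
// otherwise a new session starts. This is why endMs keeps growing
// while a ticker keeps firing every 25s with a 100s gap.
public class SessionMergeSketch {
    static final class Session {
        long startMs, endMs;
        int count;
        Session(long ts) { startMs = ts; endMs = ts; count = 1; }
    }

    // Assumes timestamps arrive in ascending order for one key.
    static List<Session> sessions(long gapMs, long... timestamps) {
        List<Session> out = new ArrayList<>();
        Session current = null;
        for (long ts : timestamps) {
            if (current != null && ts - current.endMs <= gapMs) {
                current.endMs = ts;        // within the gap: extend the session
                current.count++;
            } else {
                current = new Session(ts); // gap exceeded: open a new session
                out.add(current);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Ticks every 25s with a 100s inactivity gap: one ever-growing session.
        List<Session> s = sessions(100_000, 0, 25_000, 50_000, 75_000);
        System.out.println(s.size() + " session(s), endMs=" + s.get(0).endMs);
        // prints: 1 session(s), endMs=75000
    }
}
```

So a growing endMs by itself is expected; the bug report is about these intermediate results being emitted despite suppress().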
Here is my log output:
{noformat}
2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556106587744, endMs=1556107129191},k=A,v=20
2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107426409},k=B,v=9
2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107445012, endMs=1556107445012},k=A,v=1
2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107451397},k=B,v=10
2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107445012, endMs=1556107469935},k=A,v=2
2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107476398},k=B,v=11
2019-04-24 20:05:20.047  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107501398},k=B,v=12
2019-04-24 20:05:26.075  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:05:44.598  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:05:50.399  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107445012, endMs=1556107519930},k=A,v=4
2019-04-24 20:05:50.400  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107526405},k=B,v=13
2019-04-24 20:05:51.067  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:06:09.595  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:06:16.089  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:06:20.765  INFO --- [-StreamThread-1] 

[jira] [Updated] (KAFKA-8289) KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed

2019-04-25 Thread Xiaolin Jia (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Jia updated KAFKA-8289:
---
Description: 
I wrote a simple stream app following the official developer guide [Stream 
DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results].
Yesterday I got more than one [Window Final 
Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
from a session time window.

Time ticker A sends (4, A) every 25s, and time ticker B sends (4, B) every 25s; 
both send to the same topic.

Below is my stream app code:
{code:java}
kstreams[0]
.peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
.groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
.count()
.suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
.toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), k.key(), 
v));
{code}
Here is my log output:
{noformat}
2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556106587744, endMs=1556107129191},k=A,v=20
2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107426409},k=B,v=9
2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107445012, endMs=1556107445012},k=A,v=1
2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107451397},k=B,v=10
2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107445012, endMs=1556107469935},k=A,v=2
2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107476398},k=B,v=11
2019-04-24 20:05:20.047  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107501398},k=B,v=12
2019-04-24 20:05:26.075  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:05:44.598  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:05:50.399  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107445012, endMs=1556107519930},k=A,v=4
2019-04-24 20:05:50.400  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107526405},k=B,v=13
2019-04-24 20:05:51.067  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:06:09.595  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:06:16.089  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:06:20.765  INFO --- 

[jira] [Comment Edited] (KAFKA-8289) KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed

2019-04-25 Thread Xiaolin Jia (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825833#comment-16825833
 ] 

Xiaolin Jia edited comment on KAFKA-8289 at 4/25/19 8:06 AM:
-

Can a [SessionWindowedKStream] be suppressed after a count operation? It seems 
that a record of the latest key causes a premature 'Window Final Results' to be 
emitted for the previous key. I just want to get exactly one [Window Final 
Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31].

At first I started only one time ticker and the log output looked fine; once I 
started the second one, the window info log lines appeared. 
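(Editorial note, not part of the original comment: for readers unfamiliar with what `suppress(untilWindowCloses(...))` is contractually supposed to do here, the sketch below models the intended behavior as described in the Streams docs: buffer each window's updates and emit only the latest one once stream time has passed windowEnd + grace. This is a simplified standalone model, not Kafka internals; `SuppressSketch` and its window-identity scheme are illustrative assumptions.)

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Model of suppress(untilWindowCloses(...)): updates for a window are
// buffered (later updates overwrite earlier ones), and a window is
// emitted exactly once, after stream time passes windowEnd + grace.
// Window identity here is simplified to startMs only.
public class SuppressSketch {
    final long graceMs;
    long streamTimeMs = Long.MIN_VALUE;
    // windowStart -> {windowEnd, latestCount}
    final Map<Long, long[]> buffer = new HashMap<>();

    SuppressSketch(long graceMs) { this.graceMs = graceMs; }

    // Feed one upstream update; returns {start, end, count} for any
    // windows that just became final.
    List<long[]> update(long windowStart, long windowEnd, long count, long recordTs) {
        streamTimeMs = Math.max(streamTimeMs, recordTs); // stream time only advances
        buffer.put(windowStart, new long[]{windowEnd, count});
        List<long[]> closed = new ArrayList<>();
        buffer.entrySet().removeIf(e -> {
            if (streamTimeMs > e.getValue()[0] + graceMs) {
                closed.add(new long[]{e.getKey(), e.getValue()[0], e.getValue()[1]});
                return true; // emitted once, then dropped from the buffer
            }
            return false;
        });
        return closed;
    }
}
```

Under this model, a session that keeps being extended never closes while its key keeps ticking within the gap, so repeated intermediate emissions like the ones in the log would not be expected.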


was (Author: xiaoxiaoliner):
Can a [SessionWindowedKStream] be suppressed after a count operation? It seems 
that a record of the latest key causes a premature 'Window Final Results' to be 
emitted for the previous key. I just want to get exactly one [Window Final 
Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31].
 

> KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Broker on Linux, stream app on my Win10 laptop. 
> I added one line, log.message.timestamp.type=LogAppendTime, to my broker's 
> server.properties. The stream app uses all default configs.
>Reporter: Xiaolin Jia
>Priority: Blocker
>
> I encountered a problem yesterday: I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
> from a session time window.
> Time ticker A sends (4, A) every 25s, and time ticker B sends (4, B) every 25s; 
> both send to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> Here is my log output:
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 

[jira] [Comment Edited] (KAFKA-8289) KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed

2019-04-25 Thread Xiaolin Jia (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825833#comment-16825833
 ] 

Xiaolin Jia edited comment on KAFKA-8289 at 4/25/19 7:58 AM:
-

Can a [SessionWindowedKStream] be suppressed after a count operation? It seems 
that a record of the latest key causes a premature 'Window Final Results' to be 
emitted for the previous key. I just want to get exactly one [Window Final 
Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31].
 


was (Author: xiaoxiaoliner):
Can a [SessionWindowedKStream] be suppressed after a count operation? It seems 
that a record of the latest key causes a premature 'Window Final Results' to be 
emitted for the previous key.

> KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Broker on Linux, stream app on my Win10 laptop. 
> I added one line, log.message.timestamp.type=LogAppendTime, to my broker's 
> server.properties. The stream app uses all default configs.
>Reporter: Xiaolin Jia
>Priority: Blocker
>
> I encountered a problem yesterday: I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
> from a session time window.
> Time ticker A sends (4, A) every 25s, and time ticker B sends (4, B) every 25s; 
> both send to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> Here is my log output:
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> 

[jira] [Comment Edited] (KAFKA-8289) KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed

2019-04-25 Thread Xiaolin Jia (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825833#comment-16825833
 ] 

Xiaolin Jia edited comment on KAFKA-8289 at 4/25/19 7:57 AM:
-

Can a [SessionWindowedKStream] be suppressed after a count operation? It seems 
that a record of the latest key causes a premature 'Window Final Results' to be 
emitted for the previous key.


was (Author: xiaoxiaoliner):
Can a [SessionWindowedKStream] be suppressed after a count operation? 

> KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Broker on Linux, stream app on my Win10 laptop. 
> I added one line, log.message.timestamp.type=LogAppendTime, to my broker's 
> server.properties. The stream app uses all default configs.
>Reporter: Xiaolin Jia
>Priority: Blocker
>
> I encountered a problem yesterday: I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
> from a session time window.
> Time ticker A sends (4, A) every 25s, and time ticker B sends (4, B) every 25s; 
> both send to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> Here is my log output:
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107476398},k=B,v=11
> 2019-04-24 20:05:20.047  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107501398},k=B,v=12
> 2019-04-24 20:05:26.075  INFO 

[jira] [Comment Edited] (KAFKA-8289) KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed

2019-04-25 Thread Xiaolin Jia (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825833#comment-16825833
 ] 

Xiaolin Jia edited comment on KAFKA-8289 at 4/25/19 7:56 AM:
-

Can a [SessionWindowedKStream] be suppressed after a count operation? 


was (Author: xiaoxiaoliner):
Can a [SessionWindowedKStream] be suppressed after a count operation? 

> KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Broker on Linux, stream app on my Win10 laptop. 
> I added one line, log.message.timestamp.type=LogAppendTime, to my broker's 
> server.properties. The stream app uses all default configs.
>Reporter: Xiaolin Jia
>Priority: Blocker
>
> I encountered a problem yesterday: I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
> from a session time window.
> Time ticker A sends (4, A) every 25s, and time ticker B sends (4, B) every 25s; 
> both send to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> Here is my log output:
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
> 

[jira] [Commented] (KAFKA-8289) KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed

2019-04-25 Thread Xiaolin Jia (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825833#comment-16825833
 ] 

Xiaolin Jia commented on KAFKA-8289:


Can a [SessionWindowedKStream] be suppressed after a count operation? 

> KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Broker on Linux, stream app on my Win10 laptop. 
> I added one line, log.message.timestamp.type=LogAppendTime, to my broker's 
> server.properties. The stream app uses all default configs.
>Reporter: Xiaolin Jia
>Priority: Blocker
>
> I encountered a problem yesterday: I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
> from a session time window.
> Time ticker A sends (4, A) every 25s, and time ticker B sends (4, B) every 25s; 
> both send to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> Here is my log output:
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107476398},k=B,v=11
> 2019-04-24 20:05:20.047  INFO --- [-StreamThread-1] 
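The growing endMs values for the same startMs in the log above are expected for session windows: each new record within the inactivity gap extends the session's end. As a toy model (not Kafka's actual implementation; GAP_MS mirrors the 100-second gap in the reporter's code), session merging can be sketched as:

```java
import java.util.ArrayList;
import java.util.List;

public class SessionWindowSketch {
    static final long GAP_MS = 100_000; // mirrors SessionWindows.with(Duration.ofSeconds(100))

    // Each long[] is {startMs, endMs} of one session, for sorted event times.
    static List<long[]> sessions(long[] eventTimes) {
        List<long[]> out = new ArrayList<>();
        long[] current = null;
        for (long t : eventTimes) {
            if (current == null || t - current[1] > GAP_MS) {
                current = new long[]{t, t};           // gap exceeded: new session
                out.add(current);
            } else {
                current[1] = Math.max(current[1], t); // within gap: extend endMs
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Records every 25s stay inside the 100s gap: one session, growing endMs.
        List<long[]> s = sessions(new long[]{0, 25_000, 50_000, 75_000});
        System.out.println(s.size());    // 1
        System.out.println(s.get(0)[1]); // 75000
        // A record 200s after the last one exceeds the gap and opens a new session.
        List<long[]> s2 = sessions(new long[]{0, 25_000, 225_000});
        System.out.println(s2.size());   // 2
    }
}
```

The question in the ticket is not that endMs grows, but that suppress(untilWindowCloses) emits intermediate updates instead of only the final one per closed session.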

[jira] [Updated] (KAFKA-8289) KTable<Windowed<String>, Long> can't be suppressed

2019-04-25 Thread Xiaolin Jia (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Jia updated KAFKA-8289:
---
Priority: Blocker  (was: Major)

> KTable<Windowed<String>, Long> can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Broker on Linux, stream app on my Win10 laptop. 
> I added one line, log.message.timestamp.type=LogAppendTime, to my broker's 
> server.properties. The stream app uses all default config.
>Reporter: Xiaolin Jia
>Priority: Blocker
>
> I encountered a problem yesterday: I got more than one [Window Final 
> Result|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
>  from a single session window.
> Two time tickers send to the same topic every 25s: ticker A sends (4, A) and 
> ticker B sends (4, B).
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> Here is my log output:
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107476398},k=B,v=11
> 2019-04-24 20:05:20.047  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107501398},k=B,v=12
> 2019-04-24 20:05:26.075  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:44.598  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:50.399  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : 

[jira] [Created] (KAFKA-8289) KTable<Windowed<String>, Long> can't be suppressed

2019-04-25 Thread Xiaolin Jia (JIRA)
Xiaolin Jia created KAFKA-8289:
--

 Summary: KTable<Windowed<String>, Long> can't be suppressed
 Key: KAFKA-8289
 URL: https://issues.apache.org/jira/browse/KAFKA-8289
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.2.0
 Environment: Broker on Linux, stream app on my Win10 laptop. 
I added one line, log.message.timestamp.type=LogAppendTime, to my broker's 
server.properties. The stream app uses all default config.
Reporter: Xiaolin Jia


I encountered a problem yesterday: I got more than one [Window Final 
Result|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
 from a single session window.

Two time tickers send to the same topic every 25s: ticker A sends (4, A) and 
ticker B sends (4, B).

Below is my stream app code:
{code:java}
kstreams[0]
.peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
.groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
.count()
.suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
.toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), k.key(), 
v));
{code}
Here is my log output:
{noformat}
2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556106587744, endMs=1556107129191},k=A,v=20
2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107426409},k=B,v=9
2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107445012, endMs=1556107445012},k=A,v=1
2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107451397},k=B,v=10
2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107445012, endMs=1556107469935},k=A,v=2
2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107476398},k=B,v=11
2019-04-24 20:05:20.047  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107501398},k=B,v=12
2019-04-24 20:05:26.075  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:05:44.598  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=A
2019-04-24 20:05:50.399  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107445012, endMs=1556107519930},k=A,v=4
2019-04-24 20:05:50.400  INFO --- [-StreamThread-1] c.g.k.AppStreams
: window=Window{startMs=1556107226473, endMs=1556107526405},k=B,v=13
2019-04-24 20:05:51.067  INFO --- [-StreamThread-1] c.g.k.AppStreams
: --> ping, k=4,v=B
2019-04-24 20:06:09.595  INFO --- [-StreamThread-1] c.g.k.AppStreams  

[jira] [Commented] (KAFKA-5998) /.checkpoint.tmp Not found exception

2019-04-25 Thread Patrik Kleindl (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825821#comment-16825821
 ] 

Patrik Kleindl commented on KAFKA-5998:
---

This issue is what led me to https://issues.apache.org/jira/browse/KAFKA-7672, 
because issues with the checkpoint file are also mentioned there.

I don't know if anyone on this thread has reported the problem with 2.2 or 
trunk; we are just working on the upgrade.

One way I could imagine this not being a problem is if the task has no RocksDB 
store but tries to write the checkpoint file anyway, but I don't see in which 
case that would be possible.
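For context on the failure mode quoted below: the checkpoint is written to a `.tmp` file in the task directory and then renamed into place, so the FileNotFoundException fires when the task directory itself has disappeared underneath the writer. A simplified sketch of that pattern (not the real OffsetCheckpoint class, which also writes a version header and an offsets map):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CheckpointWriteSketch {
    static void writeCheckpoint(Path checkpoint, String contents) throws IOException {
        // This write fails with "No such file or directory" if the parent task
        // directory (e.g. .../0_1/) has been deleted, as in the stack traces below.
        Path tmp = checkpoint.resolveSibling(checkpoint.getFileName() + ".tmp");
        Files.write(tmp, contents.getBytes(StandardCharsets.UTF_8));
        // Atomic rename so readers never observe a half-written checkpoint.
        Files.move(tmp, checkpoint, StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("state-0_1");
        Path cp = dir.resolve(".checkpoint");
        writeCheckpoint(cp, "0\n1\nsome-topic 0 42\n");
        System.out.println(Files.exists(cp)); // true
    }
}
```

This is why explanations for this ticket tend to center on the task directory being removed (e.g. by the state cleaner) between creation of the directory and the commit that writes the checkpoint.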

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1, 2.1.1
>Reporter: Yogesh BG
>Priority: Critical
> Attachments: 5998.v1.txt, 5998.v2.txt, Topology.txt, exc.txt, 
> props.txt, streams.txt
>
>
> I have one Kafka broker and one Kafka Streams app running. It has been running 
> for two days under a load of around 2500 msgs per second. On the third day I 
> started getting the below exception for some of the partitions; I have 16 
> partitions, and only 0_0 and 0_1 give this error.
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> 

[jira] [Created] (KAFKA-8288) KStream.through consumer does not use custom TimestampExtractor

2019-04-25 Thread Maarten (JIRA)
Maarten created KAFKA-8288:
--

 Summary: KStream.through consumer does not use custom 
TimestampExtractor
 Key: KAFKA-8288
 URL: https://issues.apache.org/jira/browse/KAFKA-8288
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.2.0
Reporter: Maarten


The Kafka consumer created by {{KStream.through}} does not seem to be using the 
custom TimestampExtractor set in Kafka Streams properties.

The documentation of {{through}} states the following

{code:java}
...
This is equivalent to calling to(someTopic, Produced.with(keySerde, valueSerde)) 
and StreamsBuilder#stream(someTopicName, Consumed.with(keySerde, valueSerde)).
{code}

However, when I use the pattern above, the custom TimestampExtractor _is_ called.

I have verified that the streams app is reading from the specified topic and 
that the timestamp extractor is called for other topics.
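The explicit pattern from the javadoc also suggests a workaround for this ticket: pass the extractor per source via Consumed instead of relying on the global default.timestamp.extractor config. A hedged sketch (topic names, serdes, and MyExtractor are placeholders, not from the report):

```java
// Instead of: upstream.through("intermediate-topic")
KStream<String, String> upstream = builder.stream("input-topic",
        Consumed.with(Serdes.String(), Serdes.String())
                .withTimestampExtractor(new MyExtractor()));

upstream.to("intermediate-topic", Produced.with(Serdes.String(), Serdes.String()));

// Re-consume explicitly so the extractor is attached to this source too.
KStream<String, String> downstream = builder.stream("intermediate-topic",
        Consumed.with(Serdes.String(), Serdes.String())
                .withTimestampExtractor(new MyExtractor()));
```

This is a topology fragment only; it assumes a kafka-streams dependency on the classpath and a MyExtractor implementing org.apache.kafka.streams.processor.TimestampExtractor.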



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8173) Kafka Errors after version upgrade from 0.10.2.2 to 2.1.1

2019-04-25 Thread Arpit Khare (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825808#comment-16825808
 ] 

Arpit Khare commented on KAFKA-8173:


[~regexcracker] This issue occurs because of corrupt index files, which are a 
result of an unclean shutdown of the brokers.  
To resolve it, first take a backup of the index files mentioned in the error 
log. Then remove the index files manually from the Kafka log.dirs directory. 
Finally, restart the Kafka broker. The index files will be recreated on broker 
restart.
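The backup-then-remove steps above can be sketched as a small shell function (a hedged example only; the directory arguments are placeholders for your own log.dirs and backup paths, and you should stop the broker first):

```shell
#!/bin/sh
# Back up every .timeindex file under a Kafka log directory, then remove the
# originals so the broker rebuilds them on restart.
backup_and_remove_timeindex() {
    log_dir="$1"
    backup_dir="$2"
    mkdir -p "$backup_dir"
    find "$log_dir" -name '*.timeindex' | while read -r idx; do
        # Prefix the backup copy with the partition directory name to avoid clashes.
        part=$(basename "$(dirname "$idx")")
        cp "$idx" "$backup_dir/${part}_$(basename "$idx")"
        rm "$idx"
    done
}

# Example invocation (paths are illustrative):
# backup_and_remove_timeindex /apps/kafka/data /tmp/kafka-index-backup
```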

> Kafka Errors after version upgrade from 0.10.2.2 to 2.1.1 
> --
>
> Key: KAFKA-8173
> URL: https://issues.apache.org/jira/browse/KAFKA-8173
> Project: Kafka
>  Issue Type: Improvement
>  Components: offset manager
>Reporter: Amit Anand
>Priority: Major
>
> After the Kafka version upgrade from 0.10.2.2 to 2.1.1, warnings start coming 
> for all the topics: "due to Corrupt time index found, time index file".
> {code:java}
> [2019-03-28 17:23:55.877+] WARN [Log partition=FirstTopic-6, 
> dir=/apps/kafka/data] Found a corrupted index file corresponding to log file 
> /apps/kafka/data/FirstTopic-6/0494.log due to Corrupt time 
> index found, time index file 
> (/apps/kafka/data/FirstTopic-6/0494.timeindex) has non-zero 
> size but the last timestamp is 0 which is less than the first timestamp 
> 1553720469480}, recovering segment and rebuilding index files... 
> (kafka.log.Log) }}
> {{[2019-03-28 17:23:55.877+] WARN [Log partition=NewTopic-3, 
> dir=/apps/kafka/data] Found a corrupted index file corresponding to log file 
> /apps/kafka/data/NewTopic-3/0494.log due to Corrupt time 
> index found, time index file 
> (/apps/kafka/data/NewTopic-3/0494.timeindex) has non-zero 
> size but the last timestamp is 0 which is less than the first timestamp 
> 1553720469480}, recovering segment and rebuilding index files... 
> (kafka.log.Log) [2019-03-28 17:23:55.877+] WARN [Log 
> partition=SecondTopic-3, dir=/apps/kafka/data] Found a corrupted index file 
> corresponding to log file 
> /apps/kafka/data/SecondTopic-3/0494.log due to Corrupt time 
> index found, time index file 
> (/apps/kafka/data/SecondTopic-3/0494.timeindex) has non-zero 
> size but the last timestamp is 0 which is less than the first timestamp 
> 1553720469480}, recovering segment and rebuilding index files... 
> (kafka.log.Log)
>  
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)