[jira] [Assigned] (KAFKA-15964) Flaky test: testHighAvailabilityTaskAssignorLargeNumConsumers – org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest

2024-03-06 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-15964:
---

Assignee: Matthias J. Sax

> Flaky test: testHighAvailabilityTaskAssignorLargeNumConsumers – 
> org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest
> ---
>
> Key: KAFKA-15964
> URL: https://issues.apache.org/jira/browse/KAFKA-15964
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Apoorv Mittal
>    Assignee: Matthias J. Sax
>Priority: Major
>  Labels: flaky-test
>
> PR build: 
> [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14767/15/tests/]
>  
> {code:java}
> java.lang.AssertionError: The first assignment took too long to complete at 94250ms.
>
> Stacktrace:
> java.lang.AssertionError: The first assignment took too long to complete at 94250ms.
>     at org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest.completeLargeAssignment(StreamsAssignmentScaleTest.java:220)
>     at org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest.testHighAvailabilityTaskAssignorLargeNumConsumers(StreamsAssignmentScaleTest.java:85)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:568)
>     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
>     at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16343) Improve tests of streams foreignkeyjoin package

2024-03-06 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16343:

Component/s: unit tests

> Improve tests of streams foreignkeyjoin package
> ---
>
> Key: KAFKA-16343
> URL: https://issues.apache.org/jira/browse/KAFKA-16343
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, unit tests
>Affects Versions: 3.7.0
>Reporter: Ayoub Omari
>Assignee: Ayoub Omari
>Priority: Major
>
> Some classes in the streams foreignkeyjoin package are not tested, such as 
> SubscriptionSendProcessorSupplier and ForeignTableJoinProcessorSupplier. 
> Corresponding tests should be added.
> The class ForeignTableJoinProcessorSupplierTest should be renamed, as it does 
> not test ForeignTableJoinProcessor, but rather 
> SubscriptionJoinProcessorSupplier.
>  
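
As a starting point for these tests, a smoke test of the FK-join path can also be written against the public TopologyTestDriver rather than the internal suppliers. The sketch below is illustrative only: the topic names, the "<customerId>|<amount>" value format, and the joiner are assumptions, not taken from the ticket.
{code:java}
import java.util.Properties;
import java.util.function.Function;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.TestOutputTopic;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

public class ForeignKeyJoinSmokeTest {

    @Test
    public void shouldJoinOnForeignKey() {
        final StreamsBuilder builder = new StreamsBuilder();
        final KTable<String, String> orders =
            builder.table("orders", Consumed.with(Serdes.String(), Serdes.String()));
        final KTable<String, String> customers =
            builder.table("customers", Consumed.with(Serdes.String(), Serdes.String()));

        // Foreign-key extractor: the order value is assumed to be "<customerId>|<amount>".
        final Function<String, String> foreignKeyExtractor = value -> value.split("\\|")[0];

        orders.join(customers, foreignKeyExtractor, (order, customer) -> order + "->" + customer)
              .toStream()
              .to("output", Produced.with(Serdes.String(), Serdes.String()));

        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "fk-join-smoke-test");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.StringSerde.class);
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.StringSerde.class);

        try (final TopologyTestDriver driver = new TopologyTestDriver(builder.build(), props)) {
            final TestInputTopic<String, String> customersIn =
                driver.createInputTopic("customers", new StringSerializer(), new StringSerializer());
            final TestInputTopic<String, String> ordersIn =
                driver.createInputTopic("orders", new StringSerializer(), new StringSerializer());
            final TestOutputTopic<String, String> output =
                driver.createOutputTopic("output", new StringDeserializer(), new StringDeserializer());

            customersIn.pipeInput("c1", "alice");
            ordersIn.pipeInput("o1", "c1|42");

            assertEquals("c1|42->alice", output.readKeyValue().value);
        }
    }
}
{code}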



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16335) Remove Deprecated method on StreamPartitioner

2024-03-06 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824137#comment-17824137
 ] 

Matthias J. Sax commented on KAFKA-16335:
-

Thanks for your interest in working on this ticket. Note that the next release 
will be 3.8, so we cannot complete this ticket just yet.

> Remove Deprecated method on StreamPartitioner
> -
>
> Key: KAFKA-16335
> URL: https://issues.apache.org/jira/browse/KAFKA-16335
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>            Reporter: Matthias J. Sax
>Assignee: Caio César
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Deprecated in 3.4 release via 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=211883356]
>  * StreamPartitioner#partition (singular)
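
For anyone picking this up on the user side, the removal means implementations have to switch from the single-partition callback to the multi-partition one added in 3.4. A hedged sketch follows; the class name and hashing logic are made up, and the exact shape of the partitions() signature should be double-checked against the KIP.
{code:java}
import java.util.Collections;
import java.util.Optional;
import java.util.Set;

import org.apache.kafka.streams.processor.StreamPartitioner;

// Illustrative partitioner; class name and key/value types are made up for this sketch.
public class KeyHashPartitioner implements StreamPartitioner<String, String> {

    // Deprecated single-partition callback that this ticket removes.
    @Deprecated
    @Override
    public Integer partition(final String topic, final String key, final String value, final int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    // 3.4 replacement: may return one partition, several partitions (multicast),
    // or Optional.empty() to fall back to default partitioning.
    @Override
    public Optional<Set<Integer>> partitions(final String topic, final String key, final String value, final int numPartitions) {
        return Optional.of(Collections.singleton(Math.floorMod(key.hashCode(), numPartitions)));
    }
}
{code}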



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15417) JoinWindow does not seem to work properly with a KStream - KStream - LeftJoin()

2024-03-05 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-15417.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> JoinWindow does not seem to work properly with a KStream - KStream - 
> LeftJoin()
> 
>
> Key: KAFKA-15417
> URL: https://issues.apache.org/jira/browse/KAFKA-15417
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.4.0
>Reporter: Victor van den Hoven
>Assignee: Victor van den Hoven
>Priority: Major
> Fix For: 3.8.0
>
> Attachments: Afbeelding 1-1.png, Afbeelding 1.png, 
> SimpleStreamTopology.java, SimpleStreamTopologyTest.java
>
>
> In Kafka Streams 3.4.0:
> According to the javadoc of JoinWindows:
> _There are three different window configurations supported:_
>  * _before = after = time-difference_
>  * _before = 0 and after = time-difference_
>  * _*before = time-difference and after = 0*_
>  
> However, if I use a join window with *before = time-difference and after = 0* 
> on a kstream-kstream-leftjoin, the *after=0* part does not seem to work.
> When using _stream1.leftJoin(stream2, joinWindow)_ with 
> {_}joinWindow{_}.{_}after=0 and joinWindow.before=30s{_}, any new message on 
> stream1 that cannot be joined with any message on stream2 should be joined 
> with a null-record once _joinWindow.after_ has passed and a new message 
> has arrived on stream1.
> It is not.
> Only when the new message arrives after the value of _joinWindow.before_ is 
> the previous message joined with a null-record.
>  
> Attached you can find two files with a TopologyTestDriver unit test to 
> reproduce the issue:
> topology: stream1.leftJoin(stream2, joiner, joinWindow)
> joinWindow has before=5000ms and after=0
> message1(key1) -> stream1
> after 4000ms, message2(key2) -> stream1 -> NO null-record join was made, 
> although the after period had expired.
> after 4900ms, message2(key2) -> stream1 -> NO null-record join was made, 
> although the after period had expired.
> after 5000ms, message2(key2) -> stream1 -> a null-record join was made, 
> because the before period had expired.
> after 6000ms, message2(key2) -> stream1 -> a null-record join was made, 
> because the before period had expired.
>  
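
For reference, the asymmetric window described above (before = 30s, after = 0) can be expressed with the non-deprecated builders roughly as in the sketch below. Topic names, serdes, and the joiner are illustrative; the sketch only builds the topology and does not reproduce the reported behavior.
{code:java}
import java.time.Duration;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.StreamJoined;

public class AsymmetricLeftJoinExample {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        final KStream<String, String> stream1 =
            builder.stream("input-1", Consumed.with(Serdes.String(), Serdes.String()));
        final KStream<String, String> stream2 =
            builder.stream("input-2", Consumed.with(Serdes.String(), Serdes.String()));

        // before = 30s, after = 0: a stream1 record only joins stream2 records
        // whose timestamps are up to 30 seconds older than its own.
        final JoinWindows joinWindow = JoinWindows
            .ofTimeDifferenceWithNoGrace(Duration.ofSeconds(30))
            .after(Duration.ZERO);

        stream1.leftJoin(
                stream2,
                (v1, v2) -> v1 + "/" + v2,   // v2 is null for unmatched stream1 records
                joinWindow,
                StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()))
            .to("output", Produced.with(Serdes.String(), Serdes.String()));

        System.out.println(builder.build().describe());
    }
}
{code}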



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15625) Do not flush global state store at each commit

2024-03-05 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-15625:

Fix Version/s: 3.8.0

> Do not flush global state store at each commit
> --
>
> Key: KAFKA-15625
> URL: https://issues.apache.org/jira/browse/KAFKA-15625
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Ayoub Omari
>Priority: Major
> Fix For: 3.8.0
>
>
> Global state stores are flushed at each commit. While that is not a big issue 
> with at-least-once processing, where the commit interval defaults to 30s, it 
> might become an issue with EOS, where the commit interval defaults to 200ms.
> One option would be to flush and checkpoint global state stores only when the 
> delta of their content exceeds a given threshold, as we do for other stores. See 
> https://github.com/apache/kafka/blob/a1f3c6d16061566a4f53c72a95e2679b8ee229e0/streams/src/main/java/org/apache/kafka/streams/processor/internals/AbstractTask.java#L97
>  
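
A hedged sketch of the kind of policy the ticket asks for is shown below. All names are hypothetical and do not refer to actual Kafka Streams internals; in particular, whether the delta should be measured in bytes, records, or changelog offsets is exactly what the ticket leaves open.
{code:java}
// Hypothetical delta-based flush policy for the global thread; names are made up for this sketch.
public final class DeltaBasedFlushPolicy {

    private final long flushThresholdBytes;
    private long bytesWrittenSinceLastFlush = 0L;

    public DeltaBasedFlushPolicy(final long flushThresholdBytes) {
        this.flushThresholdBytes = flushThresholdBytes;
    }

    /** Called whenever a record is written to the global store. */
    public void onRecordWritten(final long approximateRecordSizeBytes) {
        bytesWrittenSinceLastFlush += approximateRecordSizeBytes;
    }

    /** Called at commit time instead of flushing unconditionally. */
    public boolean shouldFlushAndCheckpoint() {
        return bytesWrittenSinceLastFlush >= flushThresholdBytes;
    }

    /** Reset the counter after a successful flush and checkpoint. */
    public void onFlushed() {
        bytesWrittenSinceLastFlush = 0L;
    }
}
{code}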



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14747) FK join should record discarded subscription responses

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-14747.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> FK join should record discarded subscription responses
> --
>
> Key: KAFKA-14747
> URL: https://issues.apache.org/jira/browse/KAFKA-14747
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>            Reporter: Matthias J. Sax
>Assignee: Ayoub Omari
>Priority: Minor
>  Labels: beginner, newbie
> Fix For: 3.8.0
>
>
> FK-joins are subject to a race condition: if the left-hand side record is 
> updated, a subscription is sent to the right-hand side (including a hash 
> value of the left-hand side record), and the right-hand side might send back 
> join responses (also including the original hash). The left-hand side only 
> processes a response if the returned hash matches the current hash of the 
> left-hand side record, because a different hash implies that the left-hand 
> side record was updated in the meantime (including sending a new 
> subscription to the right-hand side), and thus the data is stale and the 
> response should not be processed (joining the response to the new record 
> could lead to incorrect results).
> A similar thing can happen on a right-hand side update that triggers a 
> response, which might be dropped if the left-hand side record was updated in 
> parallel.
> While the behavior is correct, we don't record if this happens. We should 
> consider recording this using the existing "dropped record" sensor, or maybe 
> add a new sensor.
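
A hedged, self-contained sketch of the hash check described above: all class and field names are illustrative and do not mirror the actual FK-join internals, and the Runnable stands in for whatever sensor ends up recording the drop.
{code:java}
import java.util.Arrays;
import java.util.Map;

// Illustrative filter for subscription responses; names do not match Kafka Streams internals.
public final class SubscriptionResponseFilter {

    private final Map<String, long[]> currentLeftHandHashes; // hash of the latest LHS record per key
    private final Runnable recordDroppedResponse;            // e.g. hook into a "dropped records" sensor

    public SubscriptionResponseFilter(final Map<String, long[]> currentLeftHandHashes,
                                      final Runnable recordDroppedResponse) {
        this.currentLeftHandHashes = currentLeftHandHashes;
        this.recordDroppedResponse = recordDroppedResponse;
    }

    /** Returns true if the response is current and may be joined; otherwise records the drop. */
    public boolean accept(final String key, final long[] hashCarriedByResponse) {
        final long[] currentHash = currentLeftHandHashes.get(key);
        if (currentHash != null && Arrays.equals(currentHash, hashCarriedByResponse)) {
            return true;
        }
        // Stale response from an older LHS version: correct to drop, but (the ask of this
        // ticket) the drop should be made observable instead of silently discarded.
        recordDroppedResponse.run();
        return false;
    }
}
{code}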



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16338) Removed Deprecated configs from StreamsConfig

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16338:

Description: 
* "buffered.records.per.partition" were deprecated via 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390] 
(KIP not fully implemented yet, so move this from the 4.0 into this 5.0 ticket)
 * "default.store.config" was deprecated 

  was:
* "buffered.records.per.partition" was deprecated in 3.4 via 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390], 
however the KIP is not fully implemented yet, so we move this from the 4.0 into 
this 5.0 ticket
 * "default.dsl.store" was deprecated in 3.7 via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-954%3A+expand+default+DSL+store+configuration+to+custom+types]
 


> Removed Deprecated configs from StreamsConfig
> -
>
> Key: KAFKA-16338
> URL: https://issues.apache.org/jira/browse/KAFKA-16338
> Project: Kafka
>  Issue Type: Sub-task
>      Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 5.0.0
>
>
> * "buffered.records.per.partition" were deprecated via 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390] 
> (KIP not fully implemented yet, so move this from the 4.0 into this 5.0 
> ticket)
>  * "default.store.config" was deprecated 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16338) Removed Deprecated configs from StreamsConfig

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16338:

Description: 
* "buffered.records.per.partition" was deprecated in 3.4 via 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390], 
however the KIP is not fully implemented yet, so we move this from the 4.0 into 
this 5.0 ticket
 * "default.dsl.store" was deprecated in 3.7 via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-954%3A+expand+default+DSL+store+configuration+to+custom+types]
 

  was:* "buffered.records.per.partition" were deprecated via 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390] 
(KIP not fully implemented yet, so move this from the 4.0 into this 5.0 ticket)


> Removed Deprecated configs from StreamsConfig
> -
>
> Key: KAFKA-16338
> URL: https://issues.apache.org/jira/browse/KAFKA-16338
> Project: Kafka
>  Issue Type: Sub-task
>      Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 5.0.0
>
>
> * "buffered.records.per.partition" was deprecated in 3.4 via 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390], 
> however the KIP is not fully implemented yet, so we move this from the 4.0 
> into this 5.0 ticket
>  * "default.dsl.store" was deprecated in 3.7 via 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-954%3A+expand+default+DSL+store+configuration+to+custom+types]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-12829) Remove Deprecated methods and classes of old Processor API

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12829:

Description: 
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
`ProcessorSupplier`)
 * org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

The following classes were deprecated:
 * org.apache.kafka.streams.processor.\{Processor, ProcessorSupplier, 
ProcessorContext} (note that `ProcessorContext` is also used by `Transformer`, 
which was also deprecated in 3.3 and can be removed, too)
 * Cf https://issues.apache.org/jira/browse/KAFKA-16339 – both tickets should 
be worked on together

Also cf related tickets:
 * https://issues.apache.org/jira/browse/KAFKA-12832
 * https://issues.apache.org/jira/browse/KAFKA-12833

See KAFKA-10605 and KIP-478.

  was:
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
`ProcessorSupplier`)
 * org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

The following classes were deprecated:
 * org.apache.kafka.streams.processor.\{Processor, ProcessorSupplier, 
ProcessorContext} (note: that `ProcessorContext` is also use by `Transformer` 
which was also deprecated in 3.3 and can be removed, too)
 * Cf https://issues.apache.org/jira/browse/KAFKA-16339 – both tickets should 
be worked on together

 

See KAFKA-10605 and KIP-478.


> Remove Deprecated methods and classes of old Processor API
> --
>
> Key: KAFKA-12829
> URL: https://issues.apache.org/jira/browse/KAFKA-12829
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The following methods were deprecated in version 2.7:
>  * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
> org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.la

[jira] [Resolved] (KAFKA-10603) Re-design KStream.process() and K*.transform*() operations

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-10603.
-
Resolution: Fixed

> Re-design KStream.process() and K*.transform*() operations
> --
>
> Key: KAFKA-10603
> URL: https://issues.apache.org/jira/browse/KAFKA-10603
> Project: Kafka
>  Issue Type: New Feature
>Reporter: John Roesler
>Priority: Major
>  Labels: needs-kip
>
> After the implementation of KIP-478, we have the ability to reconsider all 
> these APIs, and maybe just replace them with
> {code:java}
> // KStream
> KStream process(ProcessorSupplier) 
> // KTable
> KTable process(ProcessorSupplier){code}
>  
> but it needs more thought and a KIP for sure.
>  
> This ticket probably supersedes 
> https://issues.apache.org/jira/browse/KAFKA-8396
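
For context, the typed Processor API from KIP-478, combined with the KStream#process signature that later landed via KIP-820 (it returns a KStream), looks roughly like the sketch below; topic names and the upper-casing logic are illustrative.
{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.ProcessorSupplier;
import org.apache.kafka.streams.processor.api.Record;

public class TypedProcessorExample {

    // KIP-478 processor: typed on both the input and the output key/value.
    static class UpperCaseProcessor implements Processor<String, String, String, String> {
        private ProcessorContext<String, String> context;

        @Override
        public void init(final ProcessorContext<String, String> context) {
            this.context = context;
        }

        @Override
        public void process(final Record<String, String> record) {
            // Forwarding is type-checked against the declared output types.
            context.forward(record.withValue(record.value().toUpperCase()));
        }
    }

    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        final ProcessorSupplier<String, String, String, String> supplier = UpperCaseProcessor::new;
        builder.stream("input", Consumed.with(Serdes.String(), Serdes.String()))
               .process(supplier)
               .to("output", Produced.with(Serdes.String(), Serdes.String()));
        System.out.println(builder.build().describe());
    }
}
{code}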



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16339) Remove Deprecated "transformer" methods and classes

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16339:
---

 Summary: Remove Deprecated "transformer" methods and classes
 Key: KAFKA-16339
 URL: https://issues.apache.org/jira/browse/KAFKA-16339
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


Cf 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-820%3A+Extend+KStream+process+with+new+Processor+API]
 * KStream#transform
 * KStream#flatTransform
 * KStream#transformValues
 * KStream#flatTransformValues
 * and the corresponding Scala methods

Related to https://issues.apache.org/jira/browse/KAFKA-12829, and both tickets 
should be worked on together.
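
For the record, the value transformers listed above map onto KStream#processValues with the fixed-key Processor API from KIP-820, while transform/flatTransform map onto process() with the typed Processor API. A hedged sketch, with illustrative topic names and logic:
{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.processor.api.FixedKeyProcessor;
import org.apache.kafka.streams.processor.api.FixedKeyProcessorContext;
import org.apache.kafka.streams.processor.api.FixedKeyProcessorSupplier;
import org.apache.kafka.streams.processor.api.FixedKeyRecord;

public class ProcessValuesExample {

    // Replaces a ValueTransformer: the key cannot be changed, only the value.
    static class TrimProcessor implements FixedKeyProcessor<String, String, String> {
        private FixedKeyProcessorContext<String, String> context;

        @Override
        public void init(final FixedKeyProcessorContext<String, String> context) {
            this.context = context;
        }

        @Override
        public void process(final FixedKeyRecord<String, String> record) {
            context.forward(record.withValue(record.value().trim()));
        }
    }

    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        final FixedKeyProcessorSupplier<String, String, String> supplier = TrimProcessor::new;
        builder.stream("input", Consumed.with(Serdes.String(), Serdes.String()))
               .processValues(supplier)   // replaces transformValues()
               .to("output", Produced.with(Serdes.String(), Serdes.String()));
        System.out.println(builder.build().describe());
    }
}
{code}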



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-12829) Remove Deprecated methods and classes of old Processor API

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12829:

Description: 
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
`ProcessorSupplier`)
 * org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

The following classes were deprecated:
 * org.apache.kafka.streams.processor.\{Processor, ProcessorSupplier, 
ProcessorContext} (note that `ProcessorContext` is also used by `Transformer`, 
which was also deprecated in 3.3 and can be removed, too)
 * Cf https://issues.apache.org/jira/browse/KAFKA-16339 – both tickets should 
be worked on together

 

See KAFKA-10605 and KIP-478.

  was:
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
`ProcessorSupplier`)
 * org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

The following classes were deprecated:
 * org.apache.kafka.streams.processor.\{Processor, ProcessorSupplier, 
ProcessorContext} (note: that `ProcessorContext` is also use by `Transformer` 
which was also deprecated in 3.3 and can be removed, too)

 

See KAFKA-10605 and KIP-478.


> Remove Deprecated methods and classes of old Processor API
> --
>
> Key: KAFKA-12829
> URL: https://issues.apache.org/jira/browse/KAFKA-12829
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The following methods were deprecated in version 2.7:
>  * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
> org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.processor.T

[jira] [Updated] (KAFKA-12829) Remove Deprecated methods and classes of old Processor API

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12829:

Description: 
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
`ProcessorSupplier`)
 * org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

The following classes were deprecated:
 * org.apache.kafka.streams.processor.\{Processor, ProcessorSupplier, 
ProcessorContext} (note that `ProcessorContext` is also used by `Transformer`, 
which was also deprecated in 3.3 and can be removed, too)

 

See KAFKA-10605 and KIP-478.

  was:
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

 * org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

The following classes were deprecated:
 * org.apache.kafka.streams.processor.\{Processor, ProcessorSupplier, 
ProcessorContext} (note: that `ProcessorContext` is also use by `Transformer` 
which was also deprecated in 3.3 and can be removed, too)

 

See KAFKA-10605 and KIP-478.


> Remove Deprecated methods and classes of old Processor API
> --
>
> Key: KAFKA-12829
> URL: https://issues.apache.org/jira/browse/KAFKA-12829
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The following methods were deprecated in version 2.7:
>  * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
> org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
> org.apache.kafka.common.serialization.Deserializer, 
> org.apache.

[jira] [Updated] (KAFKA-12822) Remove Deprecated APIs of Kafka Streams in 4.0

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12822:

Component/s: streams-test-utils

> Remove Deprecated APIs of Kafka Streams in 4.0
> --
>
> Key: KAFKA-12822
> URL: https://issues.apache.org/jira/browse/KAFKA-12822
> Project: Kafka
>  Issue Type: Task
>  Components: streams, streams-test-utils
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> This is an umbrella ticket that collects all Kafka Streams APIs that were 
> deprecated after 2.5 and up to 3.5 (given the current release plan to ship 
> 4.0 after 3.8, we don't want to include changes introduced in 3.6 or later 
> releases).
> Each subtask will focus on a specific API, so it's easy to discuss whether it 
> should be removed in 4.0.0 or maybe even at a later point.
> While KIP-770 was approved for 3.4 and is partially implemented, it has not 
> been completed yet, and thus should not be subject to the 4.0 release: 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390] 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16338) Removed Deprecated configs from StreamsConfig

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16338:
---

 Summary: Removed Deprecated configs from StreamsConfig
 Key: KAFKA-16338
 URL: https://issues.apache.org/jira/browse/KAFKA-16338
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 5.0.0


* "buffered.records.per.partition" were deprecated via 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390] 
(KIP not fully implemented yet, so move this from the 4.0 into this 5.0 ticket)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16337) Remove Deprecated APIs of Kafka Streams in 5.0

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16337:
---

 Summary: Remove Deprecated APIs of Kafka Streams in 5.0
 Key: KAFKA-16337
 URL: https://issues.apache.org/jira/browse/KAFKA-16337
 Project: Kafka
  Issue Type: Task
  Components: streams, streams-test-utils
Reporter: Matthias J. Sax
 Fix For: 5.0.0


This is an umbrella ticket that collects all Kafka Streams APIs that were 
deprecated in 3.6 or later. When the release schedule for 5.0 is set, we might 
need to remove sub-tasks if they don't hit the 1-year threshold.

Each subtask will focus on a specific API, so it's easy to discuss whether it 
should be removed in 5.0.0 or maybe even at a later point.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-12822) Remove Deprecated APIs of Kafka Streams in 4.0

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12822:

Description: 
This is an umbrella ticket that collects all Kafka Streams APIs that were 
deprecated after 2.5 and up to 3.5 (given the current release plan to ship 4.0 
after 3.8, we don't want to include changes introduced in 3.6 or later 
releases).

Each subtask will focus on a specific API, so it's easy to discuss whether it 
should be removed in 4.0.0 or maybe even at a later point.

While KIP-770 was approved for 3.4 and is partially implemented, it has not 
been completed yet, and thus should not be subject to the 4.0 release: 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390] 

 

  was:
This is an umbrella ticket that tries to collect all APIs under Kafka Streams 
that were deprecated after 2.5 and up-to 3.5 (given the current release plan to 
ship 4.0 after 3.8 we don't want to include changes introduced in 3.6 or later 
releases).

Each subtask will de focusing on a specific API, so it's easy to discuss if it 
should be removed by 4.0.0 or maybe even at a later point.

 


> Remove Deprecated APIs of Kafka Streams in 4.0
> --
>
> Key: KAFKA-12822
> URL: https://issues.apache.org/jira/browse/KAFKA-12822
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> This is an umbrella ticket that collects all Kafka Streams APIs that were 
> deprecated after 2.5 and up to 3.5 (given the current release plan to ship 
> 4.0 after 3.8, we don't want to include changes introduced in 3.6 or later 
> releases).
> Each subtask will focus on a specific API, so it's easy to discuss whether it 
> should be removed in 4.0.0 or maybe even at a later point.
> While KIP-770 was approved for 3.4 and is partially implemented, it has not 
> been completed yet, and thus should not be subject to the 4.0 release: 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390] 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16336) Remove Deprecated metric standby-process-ratio

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16336:
---

 Summary: Remove Deprecated metric standby-process-ratio
 Key: KAFKA-16336
 URL: https://issues.apache.org/jira/browse/KAFKA-16336
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


Metric "standby-process-ratio" was deprecated in 3.5 release via 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-869%3A+Improve+Streams+State+Restoration+Visibility



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-12822) Remove Deprecated APIs of Kafka Streams in 4.0

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12822:

Description: 
This is an umbrella ticket that collects all Kafka Streams APIs that were 
deprecated after 2.5 and up to 3.5 (given the current release plan to ship 4.0 
after 3.8, we don't want to include changes introduced in 3.6 or later 
releases).

Each subtask will focus on a specific API, so it's easy to discuss whether it 
should be removed in 4.0.0 or maybe even at a later point.

 

  was:
This is an umbrella ticket that tries to collect all APIs under Kafka Streams 
that were deprecated after 2.5 and up-to 3.4 (given the current release plan to 
ship 4.0 after 3.8 we don't want to include changes introduced in 3.5 release).

Each subtask will de focusing on a specific API, so it's easy to discuss if it 
should be removed by 4.0.0 or maybe even at a later point.

 


> Remove Deprecated APIs of Kafka Streams in 4.0
> --
>
> Key: KAFKA-12822
> URL: https://issues.apache.org/jira/browse/KAFKA-12822
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> This is an umbrella ticket that collects all Kafka Streams APIs that were 
> deprecated after 2.5 and up to 3.5 (given the current release plan to ship 
> 4.0 after 3.8, we don't want to include changes introduced in 3.6 or later 
> releases).
> Each subtask will focus on a specific API, so it's easy to discuss whether it 
> should be removed in 4.0.0 or maybe even at a later point.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-12689) Remove deprecated EOS configs

2024-03-04 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823353#comment-17823353
 ] 

Matthias J. Sax commented on KAFKA-12689:
-

As commented on https://issues.apache.org/jira/browse/KAFKA-16331, I am not 
sure if we should really remove EOSv1 already.

> Remove deprecated EOS configs
> -
>
> Key: KAFKA-12689
> URL: https://issues.apache.org/jira/browse/KAFKA-12689
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Priority: Blocker
> Fix For: 4.0.0
>
>
> In 
> [KIP-732|https://cwiki.apache.org/confluence/display/KAFKA/KIP-732%3A+Deprecate+eos-alpha+and+replace+eos-beta+with+eos-v2]
>  we deprecated the EXACTLY_ONCE and EXACTLY_ONCE_BETA configs in 
> StreamsConfig, to be removed in 4.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-12822) Remove Deprecated APIs of Kafka Streams in 4.0

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12822:

Description: 
This is an umbrella ticket that collects all Kafka Streams APIs that were 
deprecated after 2.5 and up to 3.4 (given the current release plan to ship 4.0 
after 3.8, we don't want to include changes introduced in the 3.5 release).

Each subtask will focus on a specific API, so it's easy to discuss whether it 
should be removed in 4.0.0 or maybe even at a later point.

 

  was:
This is an umbrella ticket that tries to collect all APIs under Kafka Streams 
that were deprecated after 2.5 (the current threshold for being removed in 
version 3.0.0).

Each subtask will de focusing on a specific API, so it's easy to discuss if it 
should be removed by 4.0.0 or maybe even at a later point.

 


> Remove Deprecated APIs of Kafka Streams in 4.0
> --
>
> Key: KAFKA-12822
> URL: https://issues.apache.org/jira/browse/KAFKA-12822
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> This is an umbrella ticket that collects all Kafka Streams APIs that were 
> deprecated after 2.5 and up to 3.4 (given the current release plan to ship 
> 4.0 after 3.8, we don't want to include changes introduced in the 3.5 
> release).
> Each subtask will focus on a specific API, so it's easy to discuss whether it 
> should be removed in 4.0.0 or maybe even at a later point.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16335) Remove Deprecated method on StreamPartitioner

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16335:
---

 Summary: Remove Deprecated method on StreamPartitioner
 Key: KAFKA-16335
 URL: https://issues.apache.org/jira/browse/KAFKA-16335
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


Deprecated in 3.4 release via 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=211883356]
 * StreamPartitioner#partition (singular)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16334) Remove Deprecated command line option from reset tool

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16334:
---

 Summary: Remove Deprecated command line option from reset tool
 Key: KAFKA-16334
 URL: https://issues.apache.org/jira/browse/KAFKA-16334
 Project: Kafka
  Issue Type: Sub-task
  Components: streams, tools
Reporter: Matthias J. Sax
 Fix For: 4.0.0


--bootstrap-servers (plural) was deprecated in the 3.4 release via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-865%3A+Support+--bootstrap-server+in+kafka-streams-application-reset]
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16333) Removed Deprecated methods KTable#join

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16333:
---

 Summary: Removed Deprecated methods KTable#join
 Key: KAFKA-16333
 URL: https://issues.apache.org/jira/browse/KAFKA-16333
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


KTable#join() methods taking a `Named` parameter got deprecated in 3.1 release 
via https://issues.apache.org/jira/browse/KAFKA-13813 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16332) Remove Deprecated builder methods for Time/Session/Join/SlidingWindows

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16332:

Component/s: streams

> Remove Deprecated builder methods for Time/Session/Join/SlidingWindows
> --
>
> Key: KAFKA-16332
> URL: https://issues.apache.org/jira/browse/KAFKA-16332
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>            Reporter: Matthias J. Sax
>Priority: Blocker
>
> Deprecated in 3.0: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Deprecate+24-hour+Default+Grace+Period+for+Windowed+Operations+in+Streams]
>  
>  * TimeWindows#of
>  * TimeWindows#grace
>  * SessionWindows#with
>  * SessionWindows#grace
>  * JoinWindows#of
>  * JoinWindows#grace
>  * SlidingWindows#withTimeDifferenceAndGrace
> We might want to hold off on cleaning up JoinWindows due to 
> https://issues.apache.org/jira/browse/KAFKA-13813 (open for discussion)
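
For reference, a hedged sketch of the KIP-633 replacements for the deprecated builders listed above; the window sizes and grace periods are arbitrary examples.
{code:java}
import java.time.Duration;

import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.SessionWindows;
import org.apache.kafka.streams.kstream.SlidingWindows;
import org.apache.kafka.streams.kstream.TimeWindows;

public class Kip633WindowExamples {
    public static void main(final String[] args) {
        // Old: TimeWindows.of(Duration.ofMinutes(5)).grace(Duration.ofMinutes(1))
        final TimeWindows hopping =
            TimeWindows.ofSizeAndGrace(Duration.ofMinutes(5), Duration.ofMinutes(1));

        // Old: SessionWindows.with(Duration.ofMinutes(10))
        final SessionWindows sessions =
            SessionWindows.ofInactivityGapWithNoGrace(Duration.ofMinutes(10));

        // Old: JoinWindows.of(Duration.ofSeconds(30))
        final JoinWindows join =
            JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofSeconds(30));

        // Old: SlidingWindows.withTimeDifferenceAndGrace(Duration.ofSeconds(10), Duration.ofSeconds(1))
        final SlidingWindows sliding =
            SlidingWindows.ofTimeDifferenceAndGrace(Duration.ofSeconds(10), Duration.ofSeconds(1));

        System.out.println(hopping + " " + sessions + " " + join + " " + sliding);
    }
}
{code}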



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16332) Remove Deprecated builder methods for Time/Session/Join/SlidingWindows

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16332:

Priority: Blocker  (was: Major)

> Remove Deprecated builder methods for Time/Session/Join/SlidingWindows
> --
>
> Key: KAFKA-16332
> URL: https://issues.apache.org/jira/browse/KAFKA-16332
> Project: Kafka
>  Issue Type: Sub-task
>        Reporter: Matthias J. Sax
>Priority: Blocker
>
> Deprecated in 3.0: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Deprecate+24-hour+Default+Grace+Period+for+Windowed+Operations+in+Streams]
>  
>  * TimeWindows#of
>  * TimeWindows#grace
>  * SessionWindows#with
>  * SessionWindows#grace
>  * JoinWindows#of
>  * JoinWindows#grace
>  * SlidingWindows#withTimeDifferenceAndGrace
> We might want to hold off on cleaning up JoinWindows due to 
> https://issues.apache.org/jira/browse/KAFKA-13813 (open for discussion)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16332) Remove Deprecated builder methods for Time/Session/Join/SlidingWindows

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16332:
---

 Summary: Remove Deprecated builder methods for 
Time/Session/Join/SlidingWindows
 Key: KAFKA-16332
 URL: https://issues.apache.org/jira/browse/KAFKA-16332
 Project: Kafka
  Issue Type: Sub-task
Reporter: Matthias J. Sax


Deprecated in 3.0: 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Deprecate+24-hour+Default+Grace+Period+for+Windowed+Operations+in+Streams]
 
 * TimeWindows#of
 * TimeWindows#grace
 * SessionWindows#with
 * SessionWindows#grace
 * JoinWindows#of
 * JoinWindows#grace
 * SlidingWindows#withTimeDifferenceAndGrace

We might want to hold off on cleaning up JoinWindows due to 
https://issues.apache.org/jira/browse/KAFKA-13813 (open for discussion)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16332) Remove Deprecated builder methods for Time/Session/Join/SlidingWindows

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16332:

Fix Version/s: 4.0.0

> Remove Deprecated builder methods for Time/Session/Join/SlidingWindows
> --
>
> Key: KAFKA-16332
> URL: https://issues.apache.org/jira/browse/KAFKA-16332
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>            Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Deprecated in 3.0: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Deprecate+24-hour+Default+Grace+Period+for+Windowed+Operations+in+Streams]
>  
>  * TimeWindows#of
>  * TimeWindows#grace
>  * SessionWindows#with
>  * SessionWindows#grace
>  * JoinWindows#of
>  * JoinWindows#grace
>  * SlidingWindows#withTimeDifferenceAndGrace
> We might want to hold off on cleaning up JoinWindows due to 
> https://issues.apache.org/jira/browse/KAFKA-13813 (open for discussion)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16331) Remove Deprecated EOSv1

2024-03-04 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823337#comment-17823337
 ] 

Matthias J. Sax commented on KAFKA-16331:
-

While I think we should for sure remove Producer#sendOffsetsToTransaction, I 
am not sure if we should remove EOSv1 already. We might still want to add 
"multi-cluster support" and might need EOSv1 for this case.

> Remove Deprecated EOSv1
> ---
>
> Key: KAFKA-16331
> URL: https://issues.apache.org/jira/browse/KAFKA-16331
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> EOSv1 was deprecated in AK 3.0 via 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-732%3A+Deprecate+eos-alpha+and+replace+eos-beta+with+eos-v2]
>  * remove config
>  * remove Producer#sendOffsetsToTransaction
>  * cleanup code



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16331) Remove Deprecated EOSv1

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16331:
---

 Summary: Remove Deprecated EOSv1
 Key: KAFKA-16331
 URL: https://issues.apache.org/jira/browse/KAFKA-16331
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


EOSv1 was deprecated in AK 3.0 via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-732%3A+Deprecate+eos-alpha+and+replace+eos-beta+with+eos-v2]
 * remove config
 * remove Producer#sendOffsetsToTransaction
 * cleanup code



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16328) Remove Deprecated config from StreamsConfig

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16328:

Description: 
* #retries was deprecated in AK 2.7 – already unused – via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-572%3A+Improve+timeouts+and+retries+in+Kafka+Streams]
 * "inner window serde" configs were deprecated in AK 3.0 via 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=177047930] 

  was:Deprecated in AK 2.7 – already unused – via 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-572%3A+Improve+timeouts+and+retries+in+Kafka+Streams


> Remove Deprecated config from StreamsConfig
> ---
>
> Key: KAFKA-16328
> URL: https://issues.apache.org/jira/browse/KAFKA-16328
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> * #retries was deprecated in AK 2.7 – already unused – via 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-572%3A+Improve+timeouts+and+retries+in+Kafka+Streams]
>  * "inner window serde" configs were deprecated in AK 3.0 via 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=177047930] 
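A minimal sketch of the KIP-572 side of this cleanup: the already-ignored `retries` config goes away, and `task.timeout.ms` is the knob that replaced it; the values shown are placeholders.

{code:java}
import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;

public class RetriesMigrationSketch {
    public static Properties streamsProps() {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Deprecated and ignored since 2.7 (KIP-572):
        // props.put(StreamsConfig.RETRIES_CONFIG, 5);
        // Replacement: bound how long a task may retry timed-out broker interactions (placeholder value).
        props.put(StreamsConfig.TASK_TIMEOUT_MS_CONFIG, 300_000L);
        return props;
    }
}
{code}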



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16328) Remove Deprecated config from StreamsConfig

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16328:

Summary: Remove Deprecated config from StreamsConfig  (was: Remove 
Deprecated config StreamsConfig#retries)

> Remove Deprecated config from StreamsConfig
> ---
>
> Key: KAFKA-16328
> URL: https://issues.apache.org/jira/browse/KAFKA-16328
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>            Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Deprecated in AK 2.7 – already unused – via 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-572%3A+Improve+timeouts+and+retries+in+Kafka+Streams



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16330) Remove Deprecated methods/variables from TaskId

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16330:

Description: 
Cf [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=181306557]

 

This ticket relates to https://issues.apache.org/jira/browse/KAFKA-16329 and 
both should be worked on together.

  was:Cf 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=181306557]


> Remove Deprecated methods/variables from TaskId
> ---
>
> Key: KAFKA-16330
> URL: https://issues.apache.org/jira/browse/KAFKA-16330
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>            Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Cf 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=181306557]
>  
> This ticket relates to https://issues.apache.org/jira/browse/KAFKA-16329 and 
> both should be worked on together.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16329) Remove Deprecated Task/ThreadMetadata classes and related methods

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16329:

Description: 
Deprecated in AK 3.0 via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-744%3A+Migrate+TaskMetadata+and+ThreadMetadata+to+an+interface+with+internal+implementation]
 
 * 
org.apache.kafka.streams.processor.TaskMetadata
 * org.apache.kafka.streams.processor.ThreadMetadata
 * org.apache.kafka.streams.KafkaStreams#localThreadsMetadata
 * org.apache.kafka.streams.state.StreamsMetadata
 * org.apache.kafka.streams.KafkaStreams#allMetadata
 * org.apache.kafka.streams.KafkaStreams#allMetadataForStore

This is related to https://issues.apache.org/jira/browse/KAFKA-16330 and both 
tickets should be worked on together.

  was:
Deprecated in AK 3.0 via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-744%3A+Migrate+TaskMetadata+and+ThreadMetadata+to+an+interface+with+internal+implementation]
 
 * 
org.apache.kafka.streams.processor.TaskMetadata
 * org.apache.kafka.streams.processor.ThreadMetadata
 * org.apache.kafka.streams.KafkaStreams#localThreadsMetadata
 * org.apache.kafka.streams.state.StreamsMetadata
 * org.apache.kafka.streams.KafkaStreams#allMetadata
 * org.apache.kafka.streams.KafkaStreams#allMetadataForStore

 


> Remove Deprecated Task/ThreadMetadata classes and related methods
> -
>
> Key: KAFKA-16329
> URL: https://issues.apache.org/jira/browse/KAFKA-16329
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>            Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Deprecated in AK 3.0 via 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-744%3A+Migrate+TaskMetadata+and+ThreadMetadata+to+an+interface+with+internal+implementation]
>  
>  * 
> org.apache.kafka.streams.processor.TaskMetadata
>  * org.apache.kafka.streams.processor.ThreadMetadata
>  * org.apache.kafka.streams.KafkaStreams#localThreadsMetadata
>  * org.apache.kafka.streams.state.StreamsMetadata
>  * org.apache.kafka.streams.KafkaStreams#allMetadata
>  * org.apache.kafka.streams.KafkaStreams#allMetadataForStore
> This is related to https://issues.apache.org/jira/browse/KAFKA-16330 and both 
> tickets should be worked on together.
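For reference, a minimal call-site migration sketch for the KIP-744 replacements on KafkaStreams; the `streams` instance and the store name are placeholders.

{code:java}
import java.util.Collection;
import java.util.Set;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsMetadata;
import org.apache.kafka.streams.ThreadMetadata;

public class MetadataMigrationSketch {
    static void inspect(final KafkaStreams streams) {
        // Deprecated: streams.localThreadsMetadata() returning the old processor.ThreadMetadata
        final Set<ThreadMetadata> threads = streams.metadataForLocalThreads();

        // Deprecated: streams.allMetadata() and streams.allMetadataForStore("store-name")
        final Collection<StreamsMetadata> clients = streams.metadataForAllStreamsClients();
        final Collection<StreamsMetadata> storeHosts = streams.streamsMetadataForStore("store-name");

        threads.forEach(t -> System.out.println(t.threadName()));
        clients.forEach(m -> System.out.println(m.hostInfo()));
        storeHosts.forEach(m -> System.out.println(m.stateStoreNames()));
    }
}
{code}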



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16330) Remove Deprecated methods/variables from TaskId

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16330:
---

 Summary: Remove Deprecated methods/variables from TaskId
 Key: KAFKA-16330
 URL: https://issues.apache.org/jira/browse/KAFKA-16330
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


Cf [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=181306557]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16329) Remove Deprecated Task/ThreadMetadata classes and related methods

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16329:

Description: 
Deprecated in AK 3.0 via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-744%3A+Migrate+TaskMetadata+and+ThreadMetadata+to+an+interface+with+internal+implementation]
 
 * 
org.apache.kafka.streams.processor.TaskMetadata
 * org.apache.kafka.streams.processor.ThreadMetadata
 * org.apache.kafka.streams.KafkaStreams#localThreadsMetadata
 * org.apache.kafka.streams.state.StreamsMetadata
 * org.apache.kafka.streams.KafkaStreams#allMetadata
 * org.apache.kafka.streams.KafkaStreams#allMetadataForStore

 

  was:Deprecated in AK 3.0 via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-744%3A+Migrate+TaskMetadata+and+ThreadMetadata+to+an+interface+with+internal+implementation]
 


> Remove Deprecated Task/ThreadMetadata classes and related methods
> -
>
> Key: KAFKA-16329
> URL: https://issues.apache.org/jira/browse/KAFKA-16329
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>            Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Deprecated in AK 3.0 via 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-744%3A+Migrate+TaskMetadata+and+ThreadMetadata+to+an+interface+with+internal+implementation]
>  
>  * 
> org.apache.kafka.streams.processor.TaskMetadata
>  * org.apache.kafka.streams.processor.ThreadMetadata
>  * org.apache.kafka.streams.KafkaStreams#localThreadsMetadata
>  * org.apache.kafka.streams.state.StreamsMetadata
>  * org.apache.kafka.streams.KafkaStreams#allMetadata
>  * org.apache.kafka.streams.KafkaStreams#allMetadataForStore
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16329) Remove Deprecated Task/ThreadMetadata classes and related methods

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16329:
---

 Summary: Remove Deprecated Task/ThreadMetadata classes and related 
methods
 Key: KAFKA-16329
 URL: https://issues.apache.org/jira/browse/KAFKA-16329
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


Deprecated in AK 3.0 via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-744%3A+Migrate+TaskMetadata+and+ThreadMetadata+to+an+interface+with+internal+implementation]
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-12829) Remove Deprecated methods and classes of old Processor API

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12829:

Description: 
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

 * org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

The following classes were deprecated:
 * org.apache.kafka.streams.processor.\{Processor, ProcessorSupplier, 
ProcessorContext} (note that `ProcessorContext` is also used by `Transformer`, 
which was also deprecated in 3.3 and can be removed, too)

 

See KAFKA-10605 and KIP-478.
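As an illustration, a minimal sketch of the KIP-478 typed Processor API that replaces the deprecated classes above; the processor logic itself is a placeholder.

{code:java}
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

// Old API: org.apache.kafka.streams.processor.Processor<K, V> with process(K key, V value).
// New API: separate in/out types, and a Record carrying key, value, and timestamp.
public class UppercaseProcessor implements Processor<String, String, String, String> {

    private ProcessorContext<String, String> context;

    @Override
    public void init(final ProcessorContext<String, String> context) {
        this.context = context;
    }

    @Override
    public void process(final Record<String, String> record) {
        // Forward a copy with the transformed value; timestamp and headers are preserved.
        context.forward(record.withValue(record.value().toUpperCase()));
    }
}
{code}

The matching Topology#addProcessor and StreamsBuilder#addGlobalStore overloads take an org.apache.kafka.streams.processor.api.ProcessorSupplier, so the wiring stays the same apart from the generics.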

  was:
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

 * 
org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

The following classes were deprecated:
 * org.apache.kafka.streams.processor.\{Processor, ProcessorSupplier, 
ProcessorContext}

 

See KAFKA-10605 and KIP-478.


> Remove Deprecated methods and classes of old Processor API
> --
>
> Key: KAFKA-12829
> URL: https://issues.apache.org/jira/browse/KAFKA-12829
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The following methods were deprecated in version 2.7:
>  * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
> org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
> org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.

[jira] [Updated] (KAFKA-12829) Remove Deprecated methods and classes of old Processor API

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12829:

Description: 
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

 * 
org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

The following classes were deprecated:
 * org.apache.kafka.streams.processor.\{Processor, ProcessorSupplier, 
ProcessorContext}

 

See KAFKA-10605 and KIP-478.

  was:
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)

 

See KAFKA-10605 and KIP-478.


> Remove Deprecated methods and classes of old Processor API
> --
>
> Key: KAFKA-12829
> URL: https://issues.apache.org/jira/browse/KAFKA-12829
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The following methods were deprecated in version 2.7:
>  * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
> org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
> org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
> org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
>  org.apache.kafka.streams.processor.StateStore)
>  * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
> `ProcessorSupplier`)
>  * 
> org.apache.kafka.streams.scala.KStream.process (multiple, takin

[jira] [Resolved] (KAFKA-12831) Remove Deprecated method StateStore#init

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-12831.
-
Resolution: Fixed

> Remove Deprecated method StateStore#init
> 
>
> Key: KAFKA-12831
> URL: https://issues.apache.org/jira/browse/KAFKA-12831
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The method 
> org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
>  org.apache.kafka.streams.processor.StateStore) was deprecated in version 2.7
>  
> See KAFKA-10562 and KIP-478
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-12829) Remove Deprecated methods and classes of old Processor API

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12829:

Description: 
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)

 

See KAFKA-10605 and KIP-478.

  was:
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...) 
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier) 

 

See KAFKA-10605 and KIP-478.


> Remove Deprecated methods and classes of old Processor API
> --
>
> Key: KAFKA-12829
> URL: https://issues.apache.org/jira/browse/KAFKA-12829
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The following methods were deprecated in version 2.7:
>  * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
> org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
> org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
> org.apache.kafka.streams.processor.ProcessorSupplier)
>  
> See KAFKA-10605 and KIP-478.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-12825) Remove Deprecated method StreamsBuilder#addGlobalStore

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-12825.
-
Resolution: Fixed

> Remove Deprecated method StreamsBuilder#addGlobalStore
> --
>
> Key: KAFKA-12825
> URL: https://issues.apache.org/jira/browse/KAFKA-12825
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Methods:
> org.apache.kafka.streams.scala.StreamsBuilder#addGlobalStore
> org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
> org.apache.kafka.streams.processor.ProcessorSupplier)
> were deprecated in 2.7
>  
> See KAFKA-10379 and KIP-478



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-12829) Remove Deprecated methods and classes of old Processor API

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12829:

Summary: Remove Deprecated methods and classes of old Processor API  (was: 
Remove Deprecated methods under Topology)

> Remove Deprecated methods and classes of old Processor API
> --
>
> Key: KAFKA-12829
> URL: https://issues.apache.org/jira/browse/KAFKA-12829
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The following methods were deprecated in version 2.7:
>  * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
> org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...) 
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
> org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier) 
>  
> See KAFKA-10605 and KIP-478.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16328) Remove Deprecated config StreamsConfig#retries

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16328:

Summary: Remove Deprecated config StreamsConfig#retries  (was: Remove 
deprecated config StreamsConfig#retries)

> Remove Deprecated config StreamsConfig#retries
> --
>
> Key: KAFKA-16328
> URL: https://issues.apache.org/jira/browse/KAFKA-16328
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>            Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Deprecated in AK 2.7 – already unused – via 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-572%3A+Improve+timeouts+and+retries+in+Kafka+Streams



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16328) Remove deprecated config StreamsConfig#retries

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16328:
---

 Summary: Remove deprecated config StreamsConfig#retries
 Key: KAFKA-16328
 URL: https://issues.apache.org/jira/browse/KAFKA-16328
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


Deprecated in AK 2.7 – already unused – via 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-572%3A+Improve+timeouts+and+retries+in+Kafka+Streams



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16327) Remove Deprecated variable StreamsConfig#TOPOLOGY_OPTIMIZATION

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16327:
---

 Summary: Remove Deprecated variable 
StreamsConfig#TOPOLOGY_OPTIMIZATION 
 Key: KAFKA-16327
 URL: https://issues.apache.org/jira/browse/KAFKA-16327
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax


Deprecated in 2.7 release via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-626%3A+Rename+StreamsConfig+config+variable+name]
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16327) Remove Deprecated variable StreamsConfig#TOPOLOGY_OPTIMIZATION

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16327:

Priority: Blocker  (was: Minor)

> Remove Deprecated variable StreamsConfig#TOPOLOGY_OPTIMIZATION 
> ---
>
> Key: KAFKA-16327
> URL: https://issues.apache.org/jira/browse/KAFKA-16327
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>            Reporter: Matthias J. Sax
>Priority: Blocker
>
> Deprecated in 2.7 release via 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-626%3A+Rename+StreamsConfig+config+variable+name]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16327) Remove Deprecated variable StreamsConfig#TOPOLOGY_OPTIMIZATION

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16327:

Fix Version/s: 4.0.0

> Remove Deprecated variable StreamsConfig#TOPOLOGY_OPTIMIZATION 
> ---
>
> Key: KAFKA-16327
> URL: https://issues.apache.org/jira/browse/KAFKA-16327
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>            Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Deprecated in 2.7 release via 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-626%3A+Rename+StreamsConfig+config+variable+name]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-1024: Make the restore behavior of GlobalKTables with custom processors configurable

2024-03-01 Thread Matthias J. Sax

Thanks for the KIP Walker.

Fixing this issue, and providing users some flexibility to opt in or out of 
"restore reprocessing", is certainly a good improvement.


From an API design POV, I like the idea of not requiring a 
ProcessorSupplier to be passed in to begin with. Given the current 
implementation of the restore process, this might have been the better 
API from the beginning... Well, better late than never :)
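For readers following the thread, the API under discussion roughly looks as follows today (a sketch assuming the KIP-478 overload of addGlobalStore; the store name, topic, and processor class are placeholders):

{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

public class GlobalStoreSketch {

    // Minimal state-update processor: every record read from the topic is written into the store.
    static class GlobalStoreUpdateProcessor implements Processor<String, String, Void, Void> {
        private KeyValueStore<String, String> store;

        @Override
        public void init(final ProcessorContext<Void, Void> context) {
            store = context.getStateStore("global-store");
        }

        @Override
        public void process(final Record<String, String> record) {
            store.put(record.key(), record.value());
        }
    }

    public static StreamsBuilder build() {
        final StreamsBuilder builder = new StreamsBuilder();
        builder.addGlobalStore(
            Stores.keyValueStoreBuilder(
                    Stores.inMemoryKeyValueStore("global-store"),    // placeholder store name
                    Serdes.String(), Serdes.String())
                .withLoggingDisabled(),                              // global stores are not changelogged
            "global-topic",                                          // placeholder topic
            Consumed.with(Serdes.String(), Serdes.String()),
            GlobalStoreUpdateProcessor::new);                        // user-supplied processor
        return builder;
    }
}
{code}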


For this new method w/o a supplier, I am wondering if we want to keep 
`addGlobalStore` or name it `addGlobalReadOnlyStore` -- we do a similar 
thing via KIP-813. Just an idea.


However, I am not convinced that adding a new boolean parameter is the 
best way to design the API. Unfortunately, I don't have any elegant 
proposal myself. Just a somewhat crazy idea to do a larger API change:


Taking a step back, a global store is by definition a terminal node -- 
we don't support adding child nodes. Hence, while we expose a full 
`ProcessorContext` interface, we actually limit what functionality it 
supports. Thus, I am wondering if we should stop using the generic 
`Processor` interface to begin with, and instead design a new one that 
is tailored to the needs of global stores? -- This would of course be 
of much larger scope than originally intended by this KIP, but it might 
be a great opportunity to kill two birds with one stone?


The only other question to consider is: do we believe that global stores 
will never have child nodes, or could we actually allow for child nodes 
in the future? If yes, it might not be smart to move off the `Processor` 
interface... In general, I could imagine, especially as we now want to 
support "process on restore", allowing simple stateless operators like 
`map()` or `filter()` on a `GlobalKTable` (or allowing users to add 
custom global processors) at some point in the future?


Just wanted to put this out to see what people think...


-Matthias


On 2/29/24 1:26 PM, Walker Carlson wrote:

Hello everybody,

I wanted to propose a change to our addGlobalStore methods so that the
restore behavior can be controlled on a per-processor level. This should
help Kafka Streams users better tune global stores added with the
Processor API to fit their needs.

Details are in the kip here: https://cwiki.apache.org/confluence/x/E4t3EQ

Thanks,
Walker



Re: [DISCUSS] KIP-1020: Deprecate `window.size.ms` in `StreamsConfig`

2024-03-01 Thread Matthias J. Sax

One more thought after I did some more digging on the related PR.

Should we do the same thing for `windowed.inner.serde.class`?


Both configs belong to the windowed serdes (which KS provides), but the 
KS code itself never uses them (and in fact disallows using them and 
would throw an error if used). Both are intended for plain consumer use 
cases for which the windowed serdes are used.
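For illustration, a sketch of that plain-consumer use case; it configures the windowed deserializer programmatically (the config-based path is where window.size.ms would come in), and the broker, group, and window size are placeholders.

{code:java}
import java.time.Duration;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.streams.kstream.TimeWindowedDeserializer;
import org.apache.kafka.streams.kstream.Windowed;

public class WindowedConsumerSketch {
    public static KafkaConsumer<Windowed<String>, Long> create() {
        // Window size passed via the constructor instead of the window.size.ms config.
        final Deserializer<Windowed<String>> keyDeserializer =
            new TimeWindowedDeserializer<>(new StringDeserializer(), Duration.ofMinutes(5).toMillis());

        final Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "windowed-reader");          // placeholder

        return new KafkaConsumer<>(props, keyDeserializer, new LongDeserializer());
    }
}
{code}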


The question to me is, should we add them back somewhere else? They do 
not really belong in `ConsumerConfig` either, but maybe we could add 
them to the corresponding serde or (de)serializer classes?



-Matthias


On 2/21/24 2:41 PM, Matthias J. Sax wrote:

Thanks for the KIP. Sounds like a nice cleanup.

window.size.ms  is not a true KafkaStreams config, and results in an 
error when set from a KStreams application


What error?


Given that the config is used by `TimeWindowedDeserializer`, I am 
wondering if we should additionally add


public class TimeWindowedDeserializer {

     public static final String WINDOW_SIZE_MS_CONFIG = "window.size.ms";
}



-Matthias


On 2/15/24 6:32 AM, Lucia Cerchie wrote:

Hey everyone,

I'd like to discuss KIP-1020
<https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=290982804>,
which would move to deprecate `window.size.ms` in `StreamsConfig` since `
window.size.ms` is a client config.

Thanks in advance!

Lucia Cerchie



[jira] [Assigned] (KAFKA-15797) Flaky test EosV2UpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosV2[true]

2024-02-29 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-15797:
---

Assignee: Matthias J. Sax

> Flaky test EosV2UpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosV2[true] 
> --
>
> Key: KAFKA-15797
> URL: https://issues.apache.org/jira/browse/KAFKA-15797
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Justine Olshan
>        Assignee: Matthias J. Sax
>Priority: Major
>  Labels: flaky-test
>
> I found two recent failures:
> [https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14629/22/testReport/junit/org.apache.kafka.streams.integration/EosV2UpgradeIntegrationTest/Build___JDK_8_and_Scala_2_12___shouldUpgradeFromEosAlphaToEosV2_true_/]
> [https://ci-builds.apache.org/job/Kafka/job/kafka/job/trunk/2365/testReport/junit/org.apache.kafka.streams.integration/EosV2UpgradeIntegrationTest/Build___JDK_21_and_Scala_2_13___shouldUpgradeFromEosAlphaToEosV2_true__2/]
>  
> Output generally looks like:
> {code:java}
> java.lang.AssertionError: Did not receive all 138 records from topic 
> multiPartitionOutputTopic within 6 ms, currently accumulated data is 
> [KeyValue(0, 0), KeyValue(0, 1), KeyValue(0, 3), KeyValue(0, 6), KeyValue(0, 
> 10), KeyValue(0, 15), KeyValue(0, 21), KeyValue(0, 28), KeyValue(0, 36), 
> KeyValue(0, 45), KeyValue(0, 55), KeyValue(0, 66), KeyValue(0, 78), 
> KeyValue(0, 91), KeyValue(0, 55), KeyValue(0, 66), KeyValue(0, 78), 
> KeyValue(0, 91), KeyValue(0, 105), KeyValue(0, 120), KeyValue(0, 136), 
> KeyValue(0, 153), KeyValue(0, 171), KeyValue(0, 190), KeyValue(3, 0), 
> KeyValue(3, 1), KeyValue(3, 3), KeyValue(3, 6), KeyValue(3, 10), KeyValue(3, 
> 15), KeyValue(3, 21), KeyValue(3, 28), KeyValue(3, 36), KeyValue(3, 45), 
> KeyValue(3, 55), KeyValue(3, 66), KeyValue(3, 78), KeyValue(3, 91), 
> KeyValue(3, 105), KeyValue(3, 120), KeyValue(3, 136), KeyValue(3, 153), 
> KeyValue(3, 171), KeyValue(3, 190), KeyValue(3, 190), KeyValue(3, 210), 
> KeyValue(3, 231), KeyValue(3, 253), KeyValue(3, 276), KeyValue(3, 300), 
> KeyValue(3, 325), KeyValue(3, 351), KeyValue(3, 378), KeyValue(3, 406), 
> KeyValue(3, 435), KeyValue(1, 0), KeyValue(1, 1), KeyValue(1, 3), KeyValue(1, 
> 6), KeyValue(1, 10), KeyValue(1, 15), KeyValue(1, 21), KeyValue(1, 28), 
> KeyValue(1, 36), KeyValue(1, 45), KeyValue(1, 55), KeyValue(1, 66), 
> KeyValue(1, 78), KeyValue(1, 91), KeyValue(1, 105), KeyValue(1, 120), 
> KeyValue(1, 136), KeyValue(1, 153), KeyValue(1, 171), KeyValue(1, 190), 
> KeyValue(1, 120), KeyValue(1, 136), KeyValue(1, 153), KeyValue(1, 171), 
> KeyValue(1, 190), KeyValue(1, 210), KeyValue(1, 231), KeyValue(1, 253), 
> KeyValue(1, 276), KeyValue(1, 300), KeyValue(1, 325), KeyValue(1, 351), 
> KeyValue(1, 378), KeyValue(1, 406), KeyValue(1, 435), KeyValue(2, 0), 
> KeyValue(2, 1), KeyValue(2, 3), KeyValue(2, 6), KeyValue(2, 10), KeyValue(2, 
> 15), KeyValue(2, 21), KeyValue(2, 28), KeyValue(2, 36), KeyValue(2, 45), 
> KeyValue(2, 55), KeyValue(2, 66), KeyValue(2, 78), KeyValue(2, 91), 
> KeyValue(2, 105), KeyValue(2, 55), KeyValue(2, 66), KeyValue(2, 78), 
> KeyValue(2, 91), KeyValue(2, 105), KeyValue(2, 120), KeyValue(2, 136), 
> KeyValue(2, 153), KeyValue(2, 171), KeyValue(2, 190), KeyValue(2, 210), 
> KeyValue(2, 231), KeyValue(2, 253), KeyValue(2, 276), KeyValue(2, 300), 
> KeyValue(2, 325), KeyValue(2, 351), KeyValue(2, 378), KeyValue(2, 406), 
> KeyValue(0, 210), KeyValue(0, 231), KeyValue(0, 253), KeyValue(0, 276), 
> KeyValue(0, 300), KeyValue(0, 325), KeyValue(0, 351), KeyValue(0, 378), 
> KeyValue(0, 406), KeyValue(0, 435)] Expected: is a value equal to or greater 
> than <138> but: <134> was less than <138>{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16316) Make the restore behavior of GlobalKTables with custom processors configurable

2024-02-29 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16316:

Labels: needs-kip  (was: )

> Make the restore behavior of GlobalKTables with custom processors 
> configurable
> ---
>
> Key: KAFKA-16316
> URL: https://issues.apache.org/jira/browse/KAFKA-16316
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Walker Carlson
>Assignee: Walker Carlson
>Priority: Major
>  Labels: needs-kip
>
> Take the change implemented in 
> https://issues.apache.org/jira/browse/KAFKA-7663 and make it optional by 
> adding a couple of methods to the API



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-9062) Handle stalled writes to RocksDB

2024-02-26 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820819#comment-17820819
 ] 

Matthias J. Sax commented on KAFKA-9062:


Given that bulk loading was disabled, should we close this ticket?

> Handle stalled writes to RocksDB
> 
>
> Key: KAFKA-9062
> URL: https://issues.apache.org/jira/browse/KAFKA-9062
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Priority: Major
>  Labels: new-streams-runtime-should-fix
>
> RocksDB may stall writes at times when background compactions or flushes are 
> having trouble keeping up. This means we can effectively end up blocking 
> indefinitely during a StateStore#put call within Streams, and may get kicked 
> from the group if the throttling does not ease up within the max poll 
> interval.
> Example: when restoring large amounts of state from scratch, we use the 
> strategy recommended by RocksDB of turning off automatic compactions and 
> dumping everything into L0. We do batch somewhat, but do not sort these small 
> batches before loading into the db, so we end up with a large number of 
> unsorted L0 files.
> When restoration is complete and we toggle the db back to normal (not bulk 
> loading) settings, a background compaction is triggered to merge all these 
> into the next level. This background compaction can take a long time to merge 
> unsorted keys, especially when the amount of data is quite large.
> Any new writes while the number of L0 files exceeds the max will be stalled 
> until the compaction can finish, and processing after restoring from scratch 
> can block beyond the polling interval



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Join request

2024-02-24 Thread Matthias J. Sax

To subscribe, please follow instructions from the webpage

https://kafka.apache.org/contact


-Matthias

On 2/23/24 1:15 AM, kashi mori wrote:

Hi, please add my email to the mailing list



[jira] [Resolved] (KAFKA-12549) Allow state stores to opt-in transactional support

2024-02-22 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-12549.
-
Resolution: Duplicate

Closing this ticket in favor of KAFKA-14412.

> Allow state stores to opt-in transactional support
> --
>
> Key: KAFKA-12549
> URL: https://issues.apache.org/jira/browse/KAFKA-12549
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>
> Right now Kafka Streams' EOS implementation does not make any assumptions 
> about the state store's transactional support. Allowing the state stores to 
> optionally provide transactional support can have multiple benefits. E.g., if 
> we add some APIs into the {{StateStore}} interface, like {{beginTxn}}, 
> {{commitTxn}} and {{abortTxn}}. Streams library can determine if these are 
> supported via an additional {{boolean transactional()}} API, and if yes, then 
> these APIs can be used under both ALOS and EOS like the following (otherwise 
> then just fallback to the normal processing logic):
> Within thread processing loops:
> 1. store.beginTxn
> 2. store.put // during processing
> 3. streams commit // either through eos protocol or not
> 4. store.commitTxn
> 5. start the next txn by store.beginTxn
> If the state stores allow Streams to do something like above, we can have the 
> following benefits:
> * Reduce the duplicated records upon crashes for ALOS (note this is not EOS 
> still, but some middle-ground where uncommitted data within a state store 
> would not be retained if store.commitTxn failed).
> * No need to wipe the state store and re-bootstrap from scratch upon crashes 
> for EOS. E.g., if a crash-failure happened between streams commit completes 
> and store.commitTxn. We can instead just roll-forward the transaction by 
> replaying the changelog from the second recent streams committed offset 
> towards the most recent committed offset.
> * Remote stores that support txn then do not need to support wiping 
> (https://issues.apache.org/jira/browse/KAFKA-12475).
> * We can fix the known issues of emit-on-change 
> (https://cwiki.apache.org/confluence/display/KAFKA/KIP-557%3A+Add+emit+on+change+support+for+Kafka+Streams).
> * We can support "query committed data only" for interactive queries (see 
> below for reasons).
> As for the implementation of these APIs, there are several options:
> * The state store itself has natural transaction features (e.g. RocksDB).
> * Use an in-memory buffer for all puts within a transaction, and upon 
> `commitTxn` write the whole buffer as a batch to the underlying state store, 
> or just drop the whole buffer upon aborting. Then for interactive queries, 
> one can optionally only query the underlying store for committed data only.
> * Use a separate store as the transient persistent buffer. Upon `beginTxn` 
> create a new empty transient store, and upon `commitTxn` merge the store into 
> the underlying store. Same applies for interactive querying committed-only 
> data. This has a benefit compared with the one above that there's no memory 
> pressure even with long transactions, but incurs more complexity / 
> performance overhead with the separate persistent store.
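For clarity, the processing loop from the description above, expressed against the hypothetical beginTxn/commitTxn/abortTxn API proposed in this ticket; that API was never added to Kafka, and the ticket is closed as a duplicate of the KIP-892 work.

{code:java}
// Hypothetical interface sketched in the ticket description; not part of any Kafka release.
interface TransactionalStateStore {
    boolean transactional();
    void beginTxn();
    void put(byte[] key, byte[] value);
    void commitTxn();
    void abortTxn();
}

public class TxnLoopSketch {
    static void processingLoop(final TransactionalStateStore store, final Runnable streamsCommit) {
        if (!store.transactional()) {
            return; // fall back to the normal (non-transactional) processing logic
        }
        store.beginTxn();                            // 1. open a store transaction
        store.put("k".getBytes(), "v".getBytes());   // 2. puts during processing
        streamsCommit.run();                         // 3. Streams commit (EOS or not)
        store.commitTxn();                           // 4. make the writes visible/durable
        store.beginTxn();                            // 5. start the next transaction
    }
}
{code}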



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-844: Transactional State Stores

2024-02-22 Thread Matthias J. Sax
To close the loop on this thread: KIP-892 was accepted and is currently 
being implemented. Thus I'll go ahead and mark this KIP as discarded.


Thanks a lot, Alex, for spending so much time on this very important 
feature! Without your groundwork, we would not have KIP-892, and your 
contributions are noticed!


-Matthias


On 11/21/22 5:12 AM, Nick Telford wrote:

Hi Alex,

Thanks for getting back to me. I actually have most of a working
implementation already. I'm going to write it up as a new KIP, so that it
can be reviewed independently of KIP-844.

Hopefully, working together we can have it ready sooner.

I'll keep you posted on my progress.

Regards,
Nick

On Mon, 21 Nov 2022 at 11:25, Alexander Sorokoumov
 wrote:


Hey Nick,

Thank you for the prototype testing and benchmarking, and sorry for the
late reply!

I agree that it is worth revisiting the WriteBatchWithIndex approach. I
will implement a fork of the current prototype that uses that mechanism to
ensure transactionality and let you know when it is ready for
review/testing in this ML thread.

As for time estimates, I might not have enough time to finish the prototype
in December, so it will probably be ready for review in January.

Best,
Alex

On Fri, Nov 11, 2022 at 4:24 PM Nick Telford 
wrote:


Hi everyone,

Sorry to dredge this up again. I've had a chance to start doing some
testing with the WIP Pull Request, and it appears as though the secondary
store solution performs rather poorly.

In our testing, we had a non-transactional state store that would restore
(from scratch), at a rate of nearly 1,000,000 records/second. When we
switched it to a transactional store, it restored at a rate of less than
40,000 records/second.

I suspect the key issues here are having to copy the data out of the
temporary store and into the main store on-commit, and to a lesser

extent,

the extra memory copies during writes.

I think it's worth re-visiting the WriteBatchWithIndex solution, as it's
clear from the RocksDB post[1] on the subject that it's the recommended way
to achieve transactionality.

The only issue you identified with this solution was that uncommitted
writes are required to entirely fit in-memory, and RocksDB recommends they
don't exceed 3-4MiB. If we do some back-of-the-envelope calculations, I
think we'll find that this will be a non-issue for all but the most extreme
cases, and for those, I think I have a fairly simple solution.

Firstly, when EOS is enabled, the default commit.interval.ms is set to
100ms, which provides fairly short intervals that uncommitted writes need
to be buffered in-memory. If we assume a worst case of 1024 byte records
(and for most cases, they should be much smaller), then 4MiB would hold
~4096 records, which with 100ms commit intervals is a throughput of
approximately 40,960 records/second. This seems quite reasonable.

For use cases that wouldn't reasonably fit in-memory, my suggestion is that
we have a mechanism that tracks the number/size of uncommitted records in
stores, and prematurely commits the Task when this size exceeds a
configured threshold.

Thanks for your time, and let me know what you think!
--
Nick

1: https://rocksdb.org/blog/2015/02/27/write-batch-with-index.html
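
For reference, a rough sketch of the WriteBatchWithIndex idea using
RocksDB's Java API (illustrative only, not the KIP's implementation; error
handling, column families, and iteration are omitted):

import org.rocksdb.ReadOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.WriteBatchWithIndex;
import org.rocksdb.WriteOptions;

// Uncommitted writes live in the indexed batch (in memory); commit applies
// the whole batch atomically, abort simply clears it.
public class WriteBatchWithIndexSketch {
    private final RocksDB db;
    private final WriteBatchWithIndex batch = new WriteBatchWithIndex(true); // overwrite duplicate keys

    public WriteBatchWithIndexSketch(final RocksDB db) {
        this.db = db;
    }

    public void put(final byte[] key, final byte[] value) throws RocksDBException {
        batch.put(key, value);                 // buffered, not yet visible in the DB
    }

    public byte[] get(final byte[] key) throws RocksDBException {
        try (ReadOptions readOptions = new ReadOptions()) {
            // Reads see the uncommitted batch layered on top of the committed DB.
            return batch.getFromBatchAndDB(db, readOptions, key);
        }
    }

    public void commit() throws RocksDBException {
        try (WriteOptions writeOptions = new WriteOptions()) {
            db.write(writeOptions, batch);     // apply all buffered writes atomically
        }
        batch.clear();
    }

    public void abort() throws RocksDBException {
        batch.clear();                         // discard uncommitted writes
    }
}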

On Thu, 6 Oct 2022 at 19:31, Alexander Sorokoumov
 wrote:


Hey Nick,

It is going to be option c. Existing state is considered to be committed
and there will be an additional RocksDB for uncommitted writes.

I am out of office until October 24. I will update the KIP and make sure that
we have an upgrade test for that after coming back from vacation.

Best,
Alex

On Thu, Oct 6, 2022 at 5:06 PM Nick Telford 
wrote:


Hi everyone,

I realise this has already been voted on and accepted, but it occurred to
me today that the KIP doesn't define the migration/upgrade path for
existing non-transactional StateStores that *become* transactional, i.e. by
adding the transactional boolean to the StateStore constructor.

What would be the result, when such a change is made to a Topology, without
explicitly wiping the application state?
a) An error.
b) Local state is wiped.
c) Existing RocksDB database is used as committed writes and new RocksDB
database is created for uncommitted writes.
d) Something else?

Regards,

Nick

On Thu, 1 Sept 2022 at 12:16, Alexander Sorokoumov
 wrote:


Hey Guozhang,

Sounds good. I annotated all added StateStore methods (commit, recover,
transactional) with @Evolving.

Best,
Alex



On Wed, Aug 31, 2022 at 7:32 PM Guozhang Wang  wrote:



Hello Alex,

Thanks for the detailed replies, I think that makes sense, and in the long
run we would need some public indicators from StateStore to determine if
checkpoints can really be used to indicate clean snapshots.

As for the @Evolving label, I think we can still keep it but for a
different reason, since as we add more state management functionalities in
the near future we may need to revisit the public APIs again and hence
keeping it as @Evolving 

[jira] [Updated] (KAFKA-16302) Builds failing due to streams test execution failures

2024-02-22 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16302:

Component/s: streams
 unit tests

> Builds failing due to streams test execution failures
> -
>
> Key: KAFKA-16302
> URL: https://issues.apache.org/jira/browse/KAFKA-16302
> Project: Kafka
>  Issue Type: Task
>  Components: streams, unit tests
>Reporter: Justine Olshan
>Priority: Major
>
> I'm seeing this on master and many PR builds for all versions:
>  
> {code:java}
> [2024-02-22T14:37:07.076Z] * What went wrong:
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1426[2024-02-22T14:37:07.076Z]
>  Execution failed for task ':streams:test'.
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1427[2024-02-22T14:37:07.076Z]
>  > The following test methods could not be retried, which is unexpected. 
> Please file a bug report at 
> https://github.com/gradle/test-retry-gradle-plugin/issues
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1428[2024-02-22T14:37:07.076Z]
>  
> org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.SessionKeySchema@78d39a69]
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1429[2024-02-22T14:37:07.076Z]
>  
> org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.WindowKeySchema@3c818ac4]
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1430[2024-02-22T14:37:07.076Z]
>  
> org.apache.kafka.streams.state.internals.RocksDBTimestampedSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.WindowKeySchema@251f7d26]
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1431[2024-02-22T14:37:07.076Z]
>  
> org.apache.kafka.streams.state.internals.RocksDBTimestampedSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.SessionKeySchema@52c8295b]
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1432[2024-02-22T14:37:07.076Z]
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16302) Builds failing due to streams test execution failures

2024-02-22 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-16302:
---

Assignee: Justine Olshan

> Builds failing due to streams test execution failures
> -
>
> Key: KAFKA-16302
> URL: https://issues.apache.org/jira/browse/KAFKA-16302
> Project: Kafka
>  Issue Type: Task
>  Components: streams, unit tests
>Reporter: Justine Olshan
>Assignee: Justine Olshan
>Priority: Major
>
> I'm seeing this on master and many PR builds for all versions:
>  
> {code:java}
> [2024-02-22T14:37:07.076Z] * What went wrong:
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1426[2024-02-22T14:37:07.076Z]
>  Execution failed for task ':streams:test'.
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1427[2024-02-22T14:37:07.076Z]
>  > The following test methods could not be retried, which is unexpected. 
> Please file a bug report at 
> https://github.com/gradle/test-retry-gradle-plugin/issues
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1428[2024-02-22T14:37:07.076Z]
>  
> org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.SessionKeySchema@78d39a69]
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1429[2024-02-22T14:37:07.076Z]
>  
> org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.WindowKeySchema@3c818ac4]
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1430[2024-02-22T14:37:07.076Z]
>  
> org.apache.kafka.streams.state.internals.RocksDBTimestampedSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.WindowKeySchema@251f7d26]
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1431[2024-02-22T14:37:07.076Z]
>  
> org.apache.kafka.streams.state.internals.RocksDBTimestampedSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.SessionKeySchema@52c8295b]
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1432[2024-02-22T14:37:07.076Z]
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-1020: Deprecate `window.size.ms` in `StreamsConfig`

2024-02-21 Thread Matthias J. Sax

Thanks for the KIP. Sounds like a nice cleanup.


window.size.ms  is not a true KafkaStreams config, and results in an error when 
set from a KStreams application


What error?


Given that the config is used by `TimeWindowedDeserializer`, I am 
wondering if we should additionally add


public class TimeWindowedDeserializer {
    public static final String WINDOW_SIZE_MS_CONFIG = "window.size.ms";
}
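
For context, a hedged sketch of how such a constant would be used when
configuring the deserializer (today the literal string has to be passed;
the constant name is only the proposal above):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.TimeWindowedDeserializer;

public class WindowSizeConfigSketch {
    public static TimeWindowedDeserializer<String> windowedKeyDeserializer() {
        final TimeWindowedDeserializer<String> deserializer =
            new TimeWindowedDeserializer<>(Serdes.String().deserializer());
        final Map<String, Object> configs = new HashMap<>();
        // Today: a plain string literal; with the proposal this would be
        // TimeWindowedDeserializer.WINDOW_SIZE_MS_CONFIG instead.
        configs.put("window.size.ms", 30 * 60 * 1000L);
        deserializer.configure(configs, true); // true = used for (windowed) keys
        return deserializer;
    }
}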



-Matthias


On 2/15/24 6:32 AM, Lucia Cerchie wrote:

Hey everyone,

I'd like to discuss KIP-1020, which would move to deprecate
`window.size.ms` in `StreamsConfig` since `window.size.ms` is a client
config.

Thanks in advance!

Lucia Cerchie



Re: GlobalKTable with RocksDB - queries before state RUNNING?

2024-02-21 Thread Matthias J. Sax

Filed https://issues.apache.org/jira/browse/KAFKA-16295

Also, for global store support, we do have a ticket already: 
https://issues.apache.org/jira/browse/KAFKA-13523


It's actually a little bit more involved due to position tracking... I 
guess we might need a KIP to fix this.


And yes: if anybody has interest in picking it up, that would be great. We 
did push a couple of IQv2 improvements into the upcoming 3.7 release, and of 
course hope to make it the default eventually and to deprecate IQv1.


We should actually also start to document IQv2... 
https://issues.apache.org/jira/browse/KAFKA-16262



-Matthias

On 11/21/23 4:50 PM, Sophie Blee-Goldman wrote:

Just to make sure I understand the logs, you're saying the "new file
processed" lines represent store queries, and presumably the
com.osr.serKafkaStreamsService is your service that's issuing these queries?

You need to wait for the app to finish restoring state before querying it.
Based on this message -- "KafkaStreams has not been started, you can retry
after calling start()" -- I assume you're kicking off the querying service
right away and blocking queries until after KafkaStreams#start is called.
But you need to wait for it to actually finish starting up, not just for
start() to be called. The best way to do this is by setting a state
listener via KafkaStreams#setStateListener, and then using this to listen
in on the KafkaStreams.State and blocking the queries until the state has
changed to RUNNING.
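
For illustration, a minimal sketch of that pattern (the latch-based helper
below is made up, not code from this thread):

import java.util.Properties;
import java.util.concurrent.CountDownLatch;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.Topology;

public class WaitForRunning {
    // Start the application and block until the client reports RUNNING,
    // so that interactive queries are only served afterwards.
    public static KafkaStreams startAndWait(final Topology topology, final Properties props)
            throws InterruptedException {
        final KafkaStreams streams = new KafkaStreams(topology, props);
        final CountDownLatch running = new CountDownLatch(1);
        // Must be registered before start(); fires on every state transition.
        streams.setStateListener((newState, oldState) -> {
            if (newState == KafkaStreams.State.RUNNING) {
                running.countDown();
            }
        });
        streams.start();
        running.await();
        return streams;
    }
}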

In case you're curious about why this seems to work with in-memory stores
but not with rocksdb, it seems like in the in-memory case, the queries that
are attempted during restoration are blocked due to the store being closed
(according to "(Quarkus Main Thread) the state store, store-name, is not
open.")

So why is the store closed for most of the restoration in the in-memory
case only? This gets a bit into the weeds, but it has to do with the
sequence of events in starting up a state store. When the global thread
starts up, it'll first loop over all its state stores and call #init on
them. Two things have to happen inside #init: the store is opened, and the
store registers itself with the ProcessorContext. The #register involves
various things, including a call to fetch the end offsets of the topic for
global state stores. This is a blocking call, so the store might stay
inside the #register call for a relatively long while.

For RocksDB stores, we open the store first and then call #register, so by
the time the GlobalStreamThread is sitting around waiting on the end
offsets response, the store is open and your queries are getting through to
it. However the in-memory store actually registers itself *first*, before
marking itself as open, and so it remains closed for most of the time it
spends in restoration and blocks any query attempts during this time.

I suppose it would make sense to align the two store implementations to
have the same behavior, and the in-memory store is probably technically
more correct. But in the end you really should just wait for the
KafkaStreams.State to get to RUNNING before querying the state store, as
that's the only true guarantee.

Hope this helps!

-Sophie

On Tue, Nov 21, 2023 at 6:44 AM Christian Zuegner
 wrote:


Hi,

we have the following problem - a Kafka topic of ~20 megabytes is made
available as a GlobalKTable for queries. When using RocksDB, the GlobalKTable is
ready for queries instantly, even before the data has been read completely -
all get() requests return null. After a few seconds the data is queryable
correctly - but this is too late for our application. Once we switch to
IN_MEMORY we get the expected behavior: the store is only ready after all
data has been read from the topic.

How can we achieve the same behavior with the RocksDB setup?

Snipet to build KafkaStreams Topology

builder.globalTable(
    "topic-name",
    Consumed.with(Serdes.String(), Serdes.String()),
    Materialized.as(STORE_NAME).withStoreType(Materialized.StoreType.ROCKS_DB)
);

Query the Table

while (true) {
    try {
        return streams.store(
            StoreQueryParameters.fromNameAndType(
                FileCrawlerKafkaTopologyProducer.STORE_NAME,
                QueryableStoreTypes.keyValueStore()));
    } catch (InvalidStateStoreException e) {
        logger.warn(e.getMessage());
        try {
            Thread.sleep(3000);
        } catch (InterruptedException ignored) {
        }
    }
}

The store is queried with getStore().get(key); <- here we get the null
values.

This is the Log Output when RocksDB - first query before state RUNNING

...
2023-11-21 15:15:40,629 INFO  [com.osr.serKafkaStreamsService] (Quarkus
Main Thread) wait for kafka streams store to get ready: KafkaStreams has
not been started, you can retry after calling start()
2023-11-21 15:15:41,781 INFO  [org.apa.kaf.str.KafkaStreams]
(pool-10-thread-1) stream-client
[topic-name-7c35d436-f18c-4cb9-9d87-80855df5d1a2] State 

[jira] [Created] (KAFKA-16295) Align RocksDB and in-memory store init() sequence

2024-02-21 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16295:
---

 Summary: Align RocksDB and in-memory store init() sequence
 Key: KAFKA-16295
 URL: https://issues.apache.org/jira/browse/KAFKA-16295
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


Cf [https://lists.apache.org/thread/f4z1vmpb21xhyxl6966xtcb3958fyx5d] 
{quote}For RocksDB stores, we open the store first and then call #register, 
[...] However the in-memory store actually registers itself *first*, before 
marking itself as open,[..].

I suppose it would make sense to align the two store implementations to have 
the same behavior, and the in-memory store is probably technically more correct.
{quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-1019: Expose method to determine Metric Measurability

2024-02-20 Thread Matthias J. Sax

+1 (binding)

On 2/20/24 2:55 AM, Manikumar wrote:

+1 (binding).

Thanks for the KIP.

Manikumar

On Tue, Feb 20, 2024 at 2:31 PM Andrew Schofield <
andrew_schofield_j...@outlook.com> wrote:


Hi Apoorv,
Thanks for the KIP.

+1 (non-binding)

Thanks,
Andrew


On 19 Feb 2024, at 22:31, Apoorv Mittal  wrote:


Hi,
I’d like to start the voting for KIP-1019: Expose method to determine
Metric Measurability.



https://cwiki.apache.org/confluence/display/KAFKA/KIP-1019%3A+Expose+method+to+determine+Metric+Measurability


Regards,
Apoorv Mittal







[jira] [Updated] (KAFKA-16284) Performance regression in RocksDB

2024-02-20 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16284:

Affects Version/s: 3.8.0

> Performance regression in RocksDB
> -
>
> Key: KAFKA-16284
> URL: https://issues.apache.org/jira/browse/KAFKA-16284
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Affects Versions: 3.8.0
>Reporter: Lucas Brutschy
>Assignee: Lucas Brutschy
>Priority: Major
>
> In benchmarks, we are noticing a regression in the performance of 
> `RocksDBStore`.
> The regression happens between those two commits:
>  
> {code:java}
> trunk - 70c8b8d0af - regressed - 2024-01-06T14:00:20Z
> trunk - d5aa341a18 - not regressed - 2023-12-31T11:47:14Z
> {code}
> The regression can be reproduced by the following test:
>  
> {code:java}
> package org.apache.kafka.streams.state.internals;
> import org.apache.kafka.common.serialization.Serdes;
> import org.apache.kafka.common.utils.Bytes;
> import org.apache.kafka.streams.StreamsConfig;
> import org.apache.kafka.streams.processor.StateStoreContext;
> import org.apache.kafka.test.InternalMockProcessorContext;
> import org.apache.kafka.test.MockRocksDbConfigSetter;
> import org.apache.kafka.test.StreamsTestUtils;
> import org.apache.kafka.test.TestUtils;
> import org.junit.Before;
> import org.junit.Test;
> import java.io.File;
> import java.nio.ByteBuffer;
> import java.util.Properties;
> public class RocksDBStorePerfTest {
>     InternalMockProcessorContext context;
>     RocksDBStore rocksDBStore;
>     final static String DB_NAME = "db-name";
>     final static String METRICS_SCOPE = "metrics-scope";
>
>     RocksDBStore getRocksDBStore() {
>         return new RocksDBStore(DB_NAME, METRICS_SCOPE);
>     }
>
>     @Before
>     public void setUp() {
>         final Properties props = StreamsTestUtils.getStreamsConfig();
>         props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, MockRocksDbConfigSetter.class);
>         File dir = TestUtils.tempDirectory();
>         context = new InternalMockProcessorContext<>(
>             dir,
>             Serdes.String(),
>             Serdes.String(),
>             new StreamsConfig(props)
>         );
>     }
>
>     @Test
>     public void testPerf() {
>         long start = System.currentTimeMillis();
>         for (int i = 0; i < 10; i++) {
>             System.out.println("Iteration: " + i + " Time: " + (System.currentTimeMillis() - start));
>             RocksDBStore rocksDBStore = getRocksDBStore();
>             rocksDBStore.init((StateStoreContext) context, rocksDBStore);
>             for (int j = 0; j < 100; j++) {
>                 rocksDBStore.put(new Bytes(ByteBuffer.allocate(4).putInt(j).array()), "perf".getBytes());
>             }
>             rocksDBStore.close();
>         }
>         long end = System.currentTimeMillis();
>         System.out.println("Time: " + (end - start));
>     }
> }
>  {code}
>  
> I have isolated the regression to commit 
> [5bc3aa4|https://github.com/apache/kafka/commit/5bc3aa428067dff1f2b9075ff5d1351fb05d4b10].
>  On my machine, the test takes ~8 seconds before 
> [5bc3aa4|https://github.com/apache/kafka/commit/5bc3aa428067dff1f2b9075ff5d1351fb05d4b10]
>  and ~30 seconds after 
> [5bc3aa4|https://github.com/apache/kafka/commit/5bc3aa428067dff1f2b9075ff5d1351fb05d4b10].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16279) Avoid leaking abstractions of `StateStore`

2024-02-19 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16279:
---

 Summary: Avoid leaking abstractions of `StateStore`
 Key: KAFKA-16279
 URL: https://issues.apache.org/jira/browse/KAFKA-16279
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


The `StateStore` interface is user-facing and contains a few life-cycle 
management methods (like `init()` and `close()`) – those methods are exposed 
for users to develop custom state stores.

However, we also use `StateStore` as the base interface for store handles in the 
PAPI, and thus life-cycle management methods are leaking into the PAPI (maybe 
also others – we would need a dedicated discussion about which ones we consider 
useful for PAPI users and which not).

We should consider changing what we expose in the PAPI (atm, we only document 
via JavaDocs that e.g. `close()` should never be called; but that's of course not 
ideal, and it would be better if `close()` et al. were not exposed to PAPI 
users to begin with).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14049) Relax Non Null Requirement for KStreamGlobalKTable Left Join

2024-02-18 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-14049:

Fix Version/s: 3.7.0

> Relax Non Null Requirement for KStreamGlobalKTable Left Join
> 
>
> Key: KAFKA-14049
> URL: https://issues.apache.org/jira/browse/KAFKA-14049
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Saumya Gupta
>Assignee: Florin Akermann
>Priority: Major
> Fix For: 3.7.0
>
>
> Null Values in the Stream for a Left Join would indicate a Tombstone Message 
> that needs to be propagated if not actually joined with the GlobalKTable 
> message, hence these messages should not be ignored.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15143) MockFixedKeyProcessorContext is missing from test-utils

2024-02-16 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-15143:

Labels: needs-kip  (was: )

> MockFixedKeyProcessorContext is missing from test-utils
> ---
>
> Key: KAFKA-15143
> URL: https://issues.apache.org/jira/browse/KAFKA-15143
> Project: Kafka
>  Issue Type: Bug
>  Components: streams-test-utils
>Affects Versions: 3.5.0
>Reporter: Tomasz Kaszuba
>Assignee: Shashwat Pandey
>Priority: Major
>  Labels: needs-kip
>
> I am trying to test a ContextualFixedKeyProcessor but it is not possible to 
> call the init method from within a unit test since the MockProcessorContext 
> doesn't implement  
> {code:java}
> FixedKeyProcessorContext {code}
> but only
> {code:java}
> ProcessorContext
> {code}
> Shouldn't there also be a *MockFixedKeyProcessorContext* in the test utils?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15143) MockFixedKeyProcessorContext is missing from test-utils

2024-02-16 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-15143:
---

Assignee: Shashwat Pandey

> MockFixedKeyProcessorContext is missing from test-utils
> ---
>
> Key: KAFKA-15143
> URL: https://issues.apache.org/jira/browse/KAFKA-15143
> Project: Kafka
>  Issue Type: Bug
>  Components: streams-test-utils
>Affects Versions: 3.5.0
>Reporter: Tomasz Kaszuba
>Assignee: Shashwat Pandey
>Priority: Major
>
> I am trying to test a ContextualFixedKeyProcessor but it is not possible to 
> call the init method from within a unit test since the MockProcessorContext 
> doesn't implement  
> {code:java}
> FixedKeyProcessorContext {code}
> but only
> {code:java}
> ProcessorContext
> {code}
> Shouldn't there also be a *MockFixedKeyProcessorContext* in the test utils?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [Discuss] KIP-1019: Expose method to determine Metric Measurability

2024-02-16 Thread Matthias J. Sax
Thanks for the KIP. Seems there is not much we need to discuss about it. 
Feel free to start a VOTE.


-Matthias

On 2/15/24 7:24 AM, Manikumar wrote:

LGTM, Thanks for the KIP.

On Thu, Feb 15, 2024 at 8:50 PM Doğuşcan Namal 
wrote:


LGTM thanks for the KIP.

+1(non-binding)

On Wed, 14 Feb 2024 at 15:22, Andrew Schofield <
andrew_schofield_j...@outlook.com> wrote:


Hi Apoorv,
Thanks for the KIP. Looks like a useful change to tidy up the metrics

code.


Thanks,
Andrew


On 14 Feb 2024, at 14:55, Apoorv Mittal  wrote:


Hi,
I would like to start discussion of a small KIP which fills a gap in
determining Kafka Metric measurability.

KIP-1019: Expose method to determine Metric Measurability
<



https://cwiki.apache.org/confluence/display/KAFKA/KIP-1019%3A+Expose+method+to+determine+Metric+Measurability



Regards,
Apoorv Mittal









[jira] [Created] (KAFKA-16263) Add Kafka Streams docs about available listeners/callback

2024-02-15 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16263:
---

 Summary: Add Kafka Streams docs about available listeners/callback
 Key: KAFKA-16263
 URL: https://issues.apache.org/jira/browse/KAFKA-16263
 Project: Kafka
  Issue Type: Task
  Components: docs, streams
Reporter: Matthias J. Sax


Kafka Streams allows registering all kinds of listeners and callbacks (e.g., 
uncaught-exception-handler, restore-listeners, etc.), but those are not in the 
documentation.

A good place might be 
[https://kafka.apache.org/documentation/streams/developer-guide/running-app.html]
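
For a sense of what such a docs page could show, a hedged sketch of registering 
two of these callbacks (the handler choices below are illustrative, not from 
this ticket):
{code:java}
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse;

public class ListenerRegistrationSketch {
    // Both handlers must be registered before streams.start().
    public static void register(final KafkaStreams streams) {
        // Uncaught-exception handler: decide per error whether to replace the
        // stream thread or shut down the client/application.
        streams.setUncaughtExceptionHandler(exception ->
            StreamThreadExceptionResponse.REPLACE_THREAD);

        // State listener: observe lifecycle transitions such as REBALANCING -> RUNNING.
        streams.setStateListener((newState, oldState) ->
            System.out.println("State changed from " + oldState + " to " + newState));
    }
}
{code}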
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16262) Add IQv2 to Kafka Streams documentation

2024-02-15 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16262:
---

 Summary: Add IQv2 to Kafka Streams documentation
 Key: KAFKA-16262
 URL: https://issues.apache.org/jira/browse/KAFKA-16262
 Project: Kafka
  Issue Type: Task
  Components: docs, streams
Reporter: Matthias J. Sax


The new IQv2 API was added many releases ago. While it is still not feature 
complete, we should add it to the docs 
([https://kafka.apache.org/documentation/streams/developer-guide/interactive-queries.html])
 to make users aware of the new API so they can start to try it out, report 
issues, and provide feedback / feature requests.

We might still state that IQv2 is not yet feature complete, but we should change 
the docs in a way that positions it as the "new API", and include code examples.
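
As a rough illustration of the kind of example such docs could include (a 
hedged sketch; the store name and value type are made up):
{code:java}
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.query.KeyQuery;
import org.apache.kafka.streams.query.QueryResult;
import org.apache.kafka.streams.query.StateQueryRequest;
import org.apache.kafka.streams.query.StateQueryResult;

public class Iqv2KeyQuerySketch {
    // Look up a single key in a hypothetical "counts-store" key-value store.
    public static Long lookup(final KafkaStreams streams, final String key) {
        final KeyQuery<String, Long> query = KeyQuery.withKey(key);
        final StateQueryRequest<Long> request =
            StateQueryRequest.inStore("counts-store").withQuery(query);
        final StateQueryResult<Long> result = streams.query(request);
        final QueryResult<Long> partitionResult = result.getOnlyPartitionResult();
        return partitionResult == null ? null : partitionResult.getResult();
    }
}
{code}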



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-12317) Relax non-null key requirement for left/outer KStream joins

2024-02-15 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17817833#comment-17817833
 ] 

Matthias J. Sax commented on KAFKA-12317:
-

[~aki] – I was just looking into 
[https://kafka.apache.org/documentation/streams/developer-guide/dsl-api.html#joining]
 and it seems we need to update this a little bit. E.g., there is
{quote}Input records with a {{null}} key or a {{null}} value are ignored and do 
not trigger the join.
{quote}
for left/outer stream-stream join, which is not correct any longer.

The "table" that explains join semantics is also done with the assumption 
that all keys are the same and that the key is never null – in general, this 
table view is hard to read anyway and it might be good to replace it with 
something better.

Would you be interested in doing a follow-up PR to update the docs? – I assume 
that other sections (not just stream-stream join) need an update.

> Relax non-null key requirement for left/outer KStream joins
> ---
>
> Key: KAFKA-12317
> URL: https://issues.apache.org/jira/browse/KAFKA-12317
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Florin Akermann
>Priority: Major
>  Labels: kip
>
> Currently, for a stream-stream and stream-table/globalTable join, 
> KafkaStreams drops all stream records with a `null`-key (`null`-join-key 
> for stream-globalTable), because for a `null`-(join)key the join is 
> undefined: i.e., we don't have an attribute to do the table lookup (we 
> consider the stream record as malformed). Note that we define the semantics 
> of _left/outer_ join as: keep the stream record if no matching join record 
> was found.
> We could relax the definition of _left_ stream-table/globalTable and 
> _left/outer_ stream-stream join though, and not drop `null`-(join)key stream 
> records, and call the ValueJoiner with a `null` "other-side" value instead: 
> if the stream record key (or join-key) is `null`, we could treat it as a 
> "failed lookup" instead of treating the stream record as corrupted.
> If we make this change, users that want to keep the current behavior can add 
> a `filter()` before the join to drop `null`-(join)key records from the stream 
> explicitly.
> Note that this change also requires changing the behavior if we insert a 
> repartition topic before the join: currently, we drop `null`-key records 
> before writing into the repartition topic (as we know they would be dropped 
> later anyway). We need to relax this behavior for a left stream-table and 
> left/outer stream-stream join. Users need to be aware (i.e., we might need to 
> put this into the docs and JavaDocs) that records with a `null`-key would be 
> partitioned randomly.
> KIP-962: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-962%3A+Relax+non-null+key+requirement+in+Kafka+Streams]
>  
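
To make the `filter()` workaround mentioned in the description concrete, a 
small hedged sketch (topic names and types are illustrative):
{code:java}
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class NullKeyFilterSketch {
    // Keep the old behavior by explicitly dropping null-key records before the join.
    public static KStream<String, String> leftJoinNonNullKeys(final StreamsBuilder builder) {
        final KStream<String, String> stream = builder.stream("input-stream");
        final KTable<String, String> table = builder.table("input-table");
        return stream
            .filter((key, value) -> key != null)
            .leftJoin(table, (streamValue, tableValue) -> streamValue + "," + tableValue);
    }
}
{code}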



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14049) Relax Non Null Requirement for KStreamGlobalKTable Left Join

2024-02-15 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17817827#comment-17817827
 ] 

Matthias J. Sax commented on KAFKA-14049:
-

[~aki] does KIP-962 cover this ticket?

> Relax Non Null Requirement for KStreamGlobalKTable Left Join
> 
>
> Key: KAFKA-14049
> URL: https://issues.apache.org/jira/browse/KAFKA-14049
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Saumya Gupta
>Assignee: Florin Akermann
>Priority: Major
>
> Null Values in the Stream for a Left Join would indicate a Tombstone Message 
> that needs to be propagated if not actually joined with the GlobalKTable 
> message, hence these messages should not be ignored.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16260) Remove window.size.ms from StreamsConfig

2024-02-14 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16260:

Component/s: streams

> Remove window.size.ms from StreamsConfig
> 
>
> Key: KAFKA-16260
> URL: https://issues.apache.org/jira/browse/KAFKA-16260
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Lucia Cerchie
>Priority: Major
>
> {{window.size.ms}}  is not a true KafkaStreams config, and results in an 
> error when set from a KStreams application. It belongs on the client.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16260) Remove window.size.ms from StreamsConfig

2024-02-14 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16260:

Labels: needs-kip  (was: )

> Remove window.size.ms from StreamsConfig
> 
>
> Key: KAFKA-16260
> URL: https://issues.apache.org/jira/browse/KAFKA-16260
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Lucia Cerchie
>Priority: Major
>  Labels: needs-kip
>
> {{window.size.ms}}  is not a true KafkaStreams config, and results in an 
> error when set from a KStreams application. It belongs on the client.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16260) Remove window.size.ms from StreamsConfig

2024-02-14 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-16260:
---

Assignee: Lucia Cerchie

> Remove window.size.ms from StreamsConfig
> 
>
> Key: KAFKA-16260
> URL: https://issues.apache.org/jira/browse/KAFKA-16260
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Lucia Cerchie
>Assignee: Lucia Cerchie
>Priority: Major
>  Labels: needs-kip
>
> {{window.size.ms}}  is not a true KafkaStreams config, and results in an 
> error when set from a KStreams application. It belongs on the client.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: EOS date for Kafka 3.5.1

2024-02-12 Thread Matthias J. Sax

https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan#TimeBasedReleasePlan-WhatIsOurEOLPolicy?

On 2/11/24 8:08 PM, Sahil Sharma D wrote:

Hi team,

Can you please share the EOS date for Kafka Version 3.5.1?

Regards,
Sahil


