[jira] [Commented] (KAFKA-4587) Rethink Unification of Caching with Dedupping

2020-03-27 Thread Maatari (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-4587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17069241#comment-17069241
 ] 

Maatari commented on KAFKA-4587:


Hi all, I'm not sure I follow what the status of this ticket is. Is it still 
relevant?

> Rethink Unification of Caching with Dedupping
> -
>
> Key: KAFKA-4587
> URL: https://issues.apache.org/jira/browse/KAFKA-4587
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>
> This is discussed in PR https://github.com/apache/kafka/pull/1588
> In order to support user-customizable state store suppliers in the DSL we did 
> the following:
> 1) Introduce a {{TupleForwarder}} to forward tuples from cached stores that 
> wrap user-customized stores.
> 2) Narrow the scope to only dedup on forwarding if it is the default 
> CachingXXStore with wrapper RocksDB. 
> With this, the unification of dedupping and caching is now less useful, and 
> it complicates the inner forwarding implementation considerably. We need 
> to reconsider this feature with finer-grained control for turning caching 
> on / off per store, potentially with explicit triggers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-6443) KTable involved in multiple joins could result in duplicate results

2020-03-27 Thread Maatari (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17069222#comment-17069222
 ] 

Maatari edited comment on KAFKA-6443 at 3/28/20, 5:18 AM:
--

Hi all, is there a way to know the status of this ticket? We are heavy 
users of multiple KTable joins, including the scenario above, and have 
indeed noticed an unusual/substantial amount of intermediate results being 
generated downstream.


was (Author: maatdeamon):
Hi all, is there a way to know the status of this ticket? We are heavy 
users of multiple KTable joins and have indeed noticed an unusual/substantial 
amount of intermediate results being generated downstream.

> KTable involved in multiple joins could result in duplicate results
> ---
>
> Key: KAFKA-6443
> URL: https://issues.apache.org/jira/browse/KAFKA-6443
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>
> Consider the following multi table-table joins:
> {code}
> table1.join(table2).join(table2); // "join" could be replaced with 
> "leftJoin" or "outerJoin"
> {code}
> where {{table2}} is involved multiple times in this multi-way join. In this 
> case, when a new record from the source topic of {{table2}} is being 
> processed, it will be sent to two children down in the topology and hence may 
> result in duplicate join results depending on the join types.
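The fan-out described above can be modeled in a plain-Java toy sketch (no Kafka dependencies; all names are illustrative, not Kafka Streams internals): an update arriving on {{table2}}'s source topic is forwarded to every child processor subscribed to it, so a topology like {{table1.join(table2).join(table2)}} processes the same update twice and can emit two results downstream for a single input record.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DuplicateJoinSketch {
    public static void main(String[] args) {
        Map<String, String> table1 = new HashMap<>();
        Map<String, String> table2 = new HashMap<>();
        table1.put("k", "a");

        List<String> downstream = new ArrayList<>();

        // table2's source node has two children, one per join that references it.
        List<Runnable> childrenOfTable2Source = List.of(
            // child 1: the first join, table1 joined with table2
            () -> downstream.add(table1.get("k") + "-" + table2.get("k")),
            // child 2: the second join, (table1 join table2) joined with table2 again
            () -> downstream.add(table1.get("k") + "-" + table2.get("k") + "-" + table2.get("k"))
        );

        // A single new record arrives on table2's source topic...
        table2.put("k", "b");
        // ...and is forwarded to both children, producing two downstream updates.
        childrenOfTable2Source.forEach(Runnable::run);

        System.out.println(downstream); // [a-b, a-b-b]
    }
}
```

Depending on the join types, the first of the two emitted updates can be a stale or duplicate intermediate result, which matches the behavior reported in the comments.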





[jira] [Commented] (KAFKA-6443) KTable involved in multiple joins could result in duplicate results

2020-03-27 Thread Daniel Okouya (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17069222#comment-17069222
 ] 

Daniel Okouya commented on KAFKA-6443:
--

Hi all, is there a way to know the status of this ticket? We are heavy 
users of multiple KTable joins and have indeed noticed an unusual/substantial 
amount of intermediate results being generated downstream.

> KTable involved in multiple joins could result in duplicate results
> ---
>
> Key: KAFKA-6443
> URL: https://issues.apache.org/jira/browse/KAFKA-6443
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>
> Consider the following multi table-table joins:
> {code}
> table1.join(table2).join(table2); // "join" could be replaced with 
> "leftJoin" or "outerJoin"
> {code}
> where {{table2}} is involved multiple times in this multi-way join. In this 
> case, when a new record from the source topic of {{table2}} is being 
> processed, it will be sent to two children down in the topology and hence may 
> result in duplicate join results depending on the join types.





[jira] [Commented] (KAFKA-9770) Caching State Store does not Close Underlying State Store When Exception is Thrown During Flushing

2020-03-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17069171#comment-17069171
 ] 

ASF GitHub Bot commented on KAFKA-9770:
---

vvcephei commented on pull request #8368: KAFKA-9770: Close underlying state 
store also when flush throws
URL: https://github.com/apache/kafka/pull/8368
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Caching State Store does not Close Underlying State Store When Exception is 
> Thrown During Flushing
> --
>
> Key: KAFKA-9770
> URL: https://issues.apache.org/jira/browse/KAFKA-9770
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
>
> When a caching state store is closed it calls its {{flush()}} method. If 
> {{flush()}} throws an exception the underlying state store is not closed.  
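A minimal plain-Java sketch of the fix direction (these are illustrative classes, not Kafka's actual {{CachingKeyValueStore}} implementation): the caching wrapper's {{close()}} should still close the wrapped store even when {{flush()}} throws, which a {{try/finally}} guarantees.

```java
public class CloseOnFlushFailureSketch {
    interface StateStore { void flush(); void close(); }

    // Inner store whose flush always fails, to exercise the error path.
    static class FailingInnerStore implements StateStore {
        boolean closed = false;
        public void flush() { throw new RuntimeException("flush failed"); }
        public void close() { closed = true; }
    }

    static class CachingStore implements StateStore {
        private final StateStore wrapped;
        CachingStore(StateStore wrapped) { this.wrapped = wrapped; }
        public void flush() { wrapped.flush(); }
        public void close() {
            try {
                flush();            // may throw, as described in the ticket
            } finally {
                wrapped.close();    // runs regardless, so the inner store is not leaked
            }
        }
    }

    public static void main(String[] args) {
        FailingInnerStore inner = new FailingInnerStore();
        CachingStore caching = new CachingStore(inner);
        try {
            caching.close();
        } catch (RuntimeException e) {
            // the flush error still propagates to the caller
        }
        System.out.println("inner closed: " + inner.closed); // inner closed: true
    }
}
```

Without the {{finally}}, the exception from {{flush()}} would skip {{wrapped.close()}} entirely, which is the bug this ticket describes.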





[jira] [Commented] (KAFKA-9774) Create official Docker image for Kafka Connect

2020-03-27 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17069144#comment-17069144
 ] 

Jordan Moore commented on KAFKA-9774:
-

[~rhauch] & [~kkonstantine], let me know your thoughts here. 

Here's the current design
 # Use Jib to build the image 
 # Follow practices similar to the confluentinc/cp-docker image, in that the 
CONNECT_ variables are templated into the worker property file. I'd planned on 
using FreeMarker to do this, but later realized that's probably overkill.
 # Wrap ConnectDistributed with a small class that just initializes that file 
from the environment variables.
 # Maybe add some additional verification around expected variables.
 # Write up some documentation that would be hosted in DockerHub
 # Ping someone in Docker official-images repo to get this put up there.

Yes, right now the above repo is built with Maven, and is just a branch of some 
Kafka Streams stuff I did before. That is all fixable of course, no problem.

> Create official Docker image for Kafka Connect
> --
>
> Key: KAFKA-9774
> URL: https://issues.apache.org/jira/browse/KAFKA-9774
> Project: Kafka
>  Issue Type: Task
>  Components: build, KafkaConnect, packaging
>Affects Versions: 2.4.1
>Reporter: Jordan Moore
>Priority: Major
>  Labels: build, features
> Attachments: image-2020-03-27-05-04-46-792.png, 
> image-2020-03-27-05-05-59-024.png
>
>
> This is a ticket for creating an *official* apache/kafka-connect Docker 
> image. 
> Does this need a KIP?  -  I don't think so. This would be a new feature, not 
> any API change. 
> Why is this needed?
>  # Kafka Connect is stateless. I believe this is why a Kafka image is not 
> created?
>  # It scales much more easily with Docker and orchestrators. It operates much 
> like any other serverless / "microservice" web application 
>  # People struggle with deploying it because it is packaged _with Kafka_ , 
> which leads some to believe it needs to _*run* with Kafka_ on the same 
> machine. 
> I think there is a separate ticket for creating an official Docker image for 
> Kafka, but clearly none exists. I reached out to Confluent about this, but 
> have heard nothing yet.
> !image-2020-03-27-05-05-59-024.png|width=740,height=196!
>  
> Zookeeper already has one , btw  
> !image-2020-03-27-05-04-46-792.png|width=739,height=288!
> *References*: 
> [Docs for Official 
> Images|https://docs.docker.com/docker-hub/official_images/]





[jira] [Updated] (KAFKA-9778) Add validateConnector functionality to the EmbeddedConnectCluster

2020-03-27 Thread Daniel Osvath (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Osvath updated KAFKA-9778:
-
Description: 
A validate endpoint should be added to enable integration testing of 
validation functionality, including validation success and assertions on 
specific error messages.

This PR adds a method {{validateConnectorConfig}} to the 
{{EmbeddedConnectCluster}} that pings the {{/config/validate}} endpoint with 
the given configurations. [More about the endpoint 
here.|https://kafka.apache.org/documentation/#connect_rest]

With this addition, the validations for the connector can be tested in a 
similar way integration tests currently use the {{configureConnector}} method, 
for ex: {{connect.configureConnector(CONNECTOR_NAME, props);}}. The validation 
call would look like: {{ConfigInfos validateResponse = 
connect.validateConnectorConfig(CONNECTOR_CLASS_NAME, props);}}.

  was:
A validate endpoint should be added to enable integration testing of 
validation functionality, including validation success and assertions on 
specific error messages.

This PR adds a method {{validateConnectorConfig}} to the 
{{EmbeddedConnectCluster}} that pings the {{/config/validate}} endpoint with 
the given configurations. [More about the endpoint 
here.|https://docs.confluent.io/current/connect/references/restapi.html#put--connector-plugins-(string-name)-config-validate]

With this addition, the validations for the connector can be tested in a 
similar way integration tests currently use the {{configureConnector}} method, 
for ex: {{connect.configureConnector(CONNECTOR_NAME, props);}}. The validation 
call would look like: {{ConfigInfos validateResponse = 
connect.validateConnectorConfig(CONNECTOR_CLASS_NAME, props);}}.


> Add validateConnector functionality to the EmbeddedConnectCluster
> -
>
> Key: KAFKA-9778
> URL: https://issues.apache.org/jira/browse/KAFKA-9778
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Daniel Osvath
>Priority: Minor
>
> A validate endpoint should be added to enable integration testing of 
> validation functionality, including validation success and assertions on 
> specific error messages.
> This PR adds a method {{validateConnectorConfig}} to the 
> {{EmbeddedConnectCluster}} that pings the {{/config/validate}} endpoint with 
> the given configurations. [More about the endpoint 
> here.|https://kafka.apache.org/documentation/#connect_rest]
> With this addition, the validations for the connector can be tested in a 
> similar way integration tests currently use the {{configureConnector}} 
> method, for ex: {{connect.configureConnector(CONNECTOR_NAME, props);}}. The 
> validation call would look like: {{ConfigInfos validateResponse = 
> connect.validateConnectorConfig(CONNECTOR_CLASS_NAME, props);}}.





[jira] [Updated] (KAFKA-9779) Add version 2.5 to streams system tests

2020-03-27 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-9779:
---
Component/s: system tests
 streams

> Add version 2.5 to streams system tests
> ---
>
> Key: KAFKA-9779
> URL: https://issues.apache.org/jira/browse/KAFKA-9779
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, system tests
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>






[jira] [Commented] (KAFKA-9779) Add version 2.5 to streams system tests

2020-03-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17069112#comment-17069112
 ] 

ASF GitHub Bot commented on KAFKA-9779:
---

abbccdda commented on pull request #8378: KAFKA-9779: Add Stream System Test 2.5
URL: https://github.com/apache/kafka/pull/8378
 
 
   As the title says, we should add the upgrade test after the 2.5 release.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 



> Add version 2.5 to streams system tests
> ---
>
> Key: KAFKA-9779
> URL: https://issues.apache.org/jira/browse/KAFKA-9779
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>






[jira] [Created] (KAFKA-9780) Deprecate commit records without record metadata

2020-03-27 Thread Mario Molina (Jira)
Mario Molina created KAFKA-9780:
---

 Summary: Deprecate commit records without record metadata
 Key: KAFKA-9780
 URL: https://issues.apache.org/jira/browse/KAFKA-9780
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 2.4.1
Reporter: Mario Molina
Assignee: Mario Molina
 Fix For: 2.5.0, 2.6.0


Since KIP-382 (MirrorMaker 2.0), a new {{commitRecord}} method was added to the 
{{SourceTask}} class, called by the worker with an additional parameter carrying 
the record metadata. The old {{commitRecord}} method is called from the new one 
and is preserved just for backwards compatibility.

The idea is to deprecate the old method so that we can remove it in a future 
release.





[jira] [Created] (KAFKA-9779) Add version 2.5 to streams system tests

2020-03-27 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9779:
--

 Summary: Add version 2.5 to streams system tests
 Key: KAFKA-9779
 URL: https://issues.apache.org/jira/browse/KAFKA-9779
 Project: Kafka
  Issue Type: Improvement
Reporter: Boyang Chen
Assignee: Boyang Chen








[jira] [Created] (KAFKA-9778) Add validateConnector functionality to the EmbeddedConnectCluster

2020-03-27 Thread Daniel Osvath (Jira)
Daniel Osvath created KAFKA-9778:


 Summary: Add validateConnector functionality to the 
EmbeddedConnectCluster
 Key: KAFKA-9778
 URL: https://issues.apache.org/jira/browse/KAFKA-9778
 Project: Kafka
  Issue Type: Improvement
Reporter: Daniel Osvath


A validate endpoint should be added to enable integration testing of 
validation functionality, including validation success and assertions on 
specific error messages.

This PR adds a method {{validateConnectorConfig}} to the 
{{EmbeddedConnectCluster}} that pings the {{/config/validate}} endpoint with 
the given configurations. [More about the endpoint 
here.|https://docs.confluent.io/current/connect/references/restapi.html#put--connector-plugins-(string-name)-config-validate]

With this addition, the validations for the connector can be tested in a 
similar way integration tests currently use the {{configureConnector}} method, 
for ex: {{connect.configureConnector(CONNECTOR_NAME, props);}}. The validation 
call would look like: {{ConfigInfos validateResponse = 
connect.validateConnectorConfig(CONNECTOR_CLASS_NAME, props);}}.





[jira] [Commented] (KAFKA-9777) Purgatory locking bug can lead to hanging transaction

2020-03-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17069075#comment-17069075
 ] 

ASF GitHub Bot commented on KAFKA-9777:
---

hachikuji commented on pull request #8377: KAFKA-9777; Use asynchronous write 
to log after txn marker completion
URL: https://github.com/apache/kafka/pull/8377
 
 
   This patch addresses a locking issue with `DelayedTxnMarker` completion. 
Because of the reliance on the read lock in `TransactionStateManager`, we 
cannot guarantee that a call to `checkAndComplete` will offer an opportunity to 
complete the operation. This patch removes the reliance on this lock. Instead, 
when the operation is completed, we write the completion message to the log 
asynchronously.
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 



> Purgatory locking bug can lead to hanging transaction
> -
>
> Key: KAFKA-9777
> URL: https://issues.apache.org/jira/browse/KAFKA-9777
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.1.1, 2.0.1, 2.1.1, 2.2.2, 2.3.1, 2.4.1
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Critical
>
> Once a transaction reaches the `PrepareCommit` or `PrepareAbort` state, the 
> transaction coordinator must send markers to all partitions included in the 
> transaction. After all markers have been sent, then the transaction 
> transitions to the corresponding completed state. Until this transition 
> occurs, no additional progress can be made by the producer.
> The transaction coordinator uses a purgatory to track completion of the 
> markers that need to be sent. Once all markers have been written, then the 
> `DelayedTxnMarker` task becomes completable. We depend on its completion in 
> order to transition to the completed state.
> Related to KAFKA-8334, there is a bug in the locking protocol which is used 
> to check completion of the `DelayedTxnMarker` task. The purgatory attempts to 
> provide a "happens before" contract for task completion with 
> `checkAndComplete`. Basically if a task is completed before calling 
> `checkAndComplete`, then it should be given an opportunity to complete as 
> long as there is sufficient time remaining before expiration. 
> The bug in the locking protocol is that it expects that the operation lock is 
> exclusive to the operation. See here: 
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/DelayedOperation.scala#L114.
>  The logic assumes that if the lock cannot be acquired, then the other holder 
> of the lock must be attempting completion of the same delayed operation. If 
> that is not the case, then the "happens before" contract is broken and a task 
> may not get completed until expiration even if it has been satisfied.
> In the case of `DelayedTxnMarker`, the lock in use is the read side of a 
> read-write lock which is used for partition loading: 
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/coordinator/transaction/TransactionMarkerChannelManager.scala#L264.
>  In fact, if the lock cannot be acquired, it means that it is being held in 
> order to complete some loading operation, in which case it will definitely 
> not attempt completion of the delayed operation. If this happens to occur on 
> the last call to `checkAndComplete` after all markers have been written, then 
> the transition to the completing state will never occur.
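The broken "happens before" contract described above can be demonstrated with a small toy model (all names are illustrative, not Kafka's actual `DelayedOperation` code): a `checkAndComplete` built on `tryLock()` assumes that a failed acquisition means the current holder is completing the same operation. When the operation's lock is the read side of a shared read-write lock, and the write side is held for unrelated loading work, a satisfied operation is skipped and never completed.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class PurgatoryLockSketch {
    static volatile boolean satisfied = false;
    static volatile boolean completed = false;

    // Mirrors the optimistic contract: if tryLock() fails, assume the lock
    // holder will re-check completion of this operation. That assumption is
    // wrong when the holder is doing unrelated work under a shared lock.
    static void checkAndComplete(Lock operationLock) {
        if (operationLock.tryLock()) {
            try {
                if (satisfied) completed = true;
            } finally {
                operationLock.unlock();
            }
        }
        // else: silently skip, relying on the (unrelated) lock holder
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantReadWriteLock stateLock = new ReentrantReadWriteLock();
        CountDownLatch lockHeld = new CountDownLatch(1);
        CountDownLatch checked = new CountDownLatch(1);

        // "Partition loading" thread holds the write lock and never re-checks
        // the delayed operation.
        Thread loader = new Thread(() -> {
            stateLock.writeLock().lock();
            lockHeld.countDown();
            try { checked.await(); } catch (InterruptedException ignored) { }
            stateLock.writeLock().unlock();
        });
        loader.start();

        lockHeld.await();
        satisfied = true;                          // all markers written
        checkAndComplete(stateLock.readLock());    // tryLock fails: write lock held
        checked.countDown();
        loader.join();

        System.out.println("completed: " + completed); // completed: false
    }
}
```

The operation was satisfied before `checkAndComplete` was called, yet it is never completed: exactly the hanging-transaction scenario the ticket describes when this happens on the last `checkAndComplete` call.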





[jira] [Updated] (KAFKA-9776) Producer should automatically downgrade TxnOffsetCommitRequest

2020-03-27 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-9776:
---
Summary: Producer should automatically downgrade TxnOffsetCommitRequest  
(was: Producer should automatically downgrade CommitTxRequest)

> Producer should automatically downgrade TxnOffsetCommitRequest
> --
>
> Key: KAFKA-9776
> URL: https://issues.apache.org/jira/browse/KAFKA-9776
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 2.5.0
>Reporter: Matthias J. Sax
>Assignee: Boyang Chen
>Priority: Critical
>
> When using transactions with a 2.5 producer against 2.4 (or older) brokers, 
> it is not possible to call `producer.commitTransaction(..., 
> ConsumerGroupMetadata)` but only the old API `producer.commitTransaction(..., 
> String applicationId)` is supported.
> This implies that a developer needs to know the broker version when writing 
> an application or write additional code to call the one or the other API 
> depending on the broker version (the developer would need to write code to 
> figure out the broker version, too).
> We should change the producer to automatically downgrade to the older 
> CommitTxRequest if `commitTransaction(..., ConsumerGroupMetadata)` is used 
> against older brokers to avoid an `UnsupportedVersionException`.
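The downgrade decision can be sketched in plain Java (hypothetical names, not Kafka's actual client internals): when the broker's highest supported TxnOffsetCommit version is below 3, fall back to the old request shape carrying only the consumer group id, instead of throwing an `UnsupportedVersionException`.

```java
public class TxnOffsetCommitDowngradeSketch {
    // Builds a string stand-in for the request the client would send.
    static String buildRequest(int brokerMaxVersion, String groupId, String memberId) {
        if (brokerMaxVersion >= 3) {
            // New-style request: includes the full group metadata.
            return "TxnOffsetCommit{v3+, groupId=" + groupId + ", memberId=" + memberId + "}";
        }
        // Downgraded request: old shape, group id only; no exception is thrown,
        // so the application code does not need to know the broker version.
        return "TxnOffsetCommit{v<3, groupId=" + groupId + "}";
    }

    public static void main(String[] args) {
        System.out.println(buildRequest(3, "app", "m-1"));
        System.out.println(buildRequest(2, "app", "m-1"));
    }
}
```

With this behavior the new `ConsumerGroupMetadata`-based API works against both new and old brokers, which is the goal stated in the ticket.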





[jira] [Updated] (KAFKA-9776) Producer should automatically downgrade TxnOffsetCommitRequest

2020-03-27 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-9776:
---
Description: 
When using transactions with a 2.5 producer against 2.4 (or older) brokers, it 
is not possible to call `producer.commitTransaction(..., 
ConsumerGroupMetadata)` but only the old API `producer.commitTransaction(..., 
String applicationId)` is supported.

This implies that a developer needs to know the broker version when writing an 
application or write additional code to call the one or the other API depending 
on the broker version (the developer would need to write code to figure out the 
broker version, too).

We should change the producer to automatically downgrade to the older 
TxnOffsetCommitRequest if `commitTransaction(..., ConsumerGroupMetadata)` is 
used against older brokers to avoid an `UnsupportedVersionException`.

  was:
When using transactions with a 2.5 producer against 2.4 (or older) brokers, it 
is not possible to call `producer.commitTransaction(..., 
ConsumerGroupMetadata)` but only the old API `producer.commitTransaction(..., 
String applicationId)` is supported.

This implies that a developer needs to know the broker version when writing an 
application or write additional code to call the one or the other API depending 
on the broker version (the developer would need to write code to figure out the 
broker version, too).

We should change the producer to automatically downgrade to the older 
CommitTxRequest if `commitTransaction(..., ConsumerGroupMetadata)` is used 
against older brokers to avoid an `UnsupportedVersionException`.


> Producer should automatically downgrade TxnOffsetCommitRequest
> --
>
> Key: KAFKA-9776
> URL: https://issues.apache.org/jira/browse/KAFKA-9776
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 2.5.0
>Reporter: Matthias J. Sax
>Assignee: Boyang Chen
>Priority: Critical
>
> When using transactions with a 2.5 producer against 2.4 (or older) brokers, 
> it is not possible to call `producer.commitTransaction(..., 
> ConsumerGroupMetadata)` but only the old API `producer.commitTransaction(..., 
> String applicationId)` is supported.
> This implies that a developer needs to know the broker version when writing 
> an application or write additional code to call the one or the other API 
> depending on the broker version (the developer would need to write code to 
> figure out the broker version, too).
> We should change the producer to automatically downgrade to the older 
> TxnOffsetCommitRequest if `commitTransaction(..., ConsumerGroupMetadata)` is 
> used against older brokers to avoid an `UnsupportedVersionException`.





[jira] [Commented] (KAFKA-9724) Consumer wrongly ignores fetched records "since it no longer has valid position"

2020-03-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17069028#comment-17069028
 ] 

ASF GitHub Bot commented on KAFKA-9724:
---

mumrah commented on pull request #8376: KAFKA-9724 Newer clients not always 
sending fetch request to older brokers
URL: https://github.com/apache/kafka/pull/8376
 
 
   We had a similar case previously with KAFKA-8422 (#6806) where we would skip 
the validation step if the broker was on a version older than 2.3. 
   
   This PR makes a similar change on the `prepareFetchRequest` side. If the 
broker is older than 2.3, we will skip the transition to AWAITING_VALIDATION, 
and we will also clear that state if it had been set by a call to 
`seekUnvalidated`.
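The fix direction can be sketched as a small state transition (illustrative names, not Kafka's actual `SubscriptionState` code): when the broker does not support offset validation (pre-2.3), clear any pending AWAITING_VALIDATION state so the partition keeps a valid position and fetched records are not ignored.

```java
public class FetchValidationSketch {
    enum FetchState { FETCHING, AWAITING_VALIDATION }

    // Before building a fetch request, resolve the partition's state against
    // the broker's capabilities.
    static FetchState prepareFetchState(FetchState current, boolean brokerSupportsValidation) {
        if (!brokerSupportsValidation) {
            // Older broker: validation can never complete, so treat the
            // position as valid instead of waiting forever.
            return FetchState.FETCHING;
        }
        return current;
    }

    public static void main(String[] args) {
        // seekUnvalidated() left the partition awaiting validation...
        FetchState state = FetchState.AWAITING_VALIDATION;
        // ...but the broker is pre-2.3, so the state is cleared before fetching.
        state = prepareFetchState(state, false);
        System.out.println(state); // FETCHING
    }
}
```

In the buggy behavior from the log below, the partition stays in the awaiting-validation state, so the consumer reports "no longer has valid position" and discards the fetched records.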
 



> Consumer wrongly ignores fetched records "since it no longer has valid 
> position"
> 
>
> Key: KAFKA-9724
> URL: https://issues.apache.org/jira/browse/KAFKA-9724
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 2.4.0
>Reporter: Oleg Muravskiy
>Priority: Major
>
> After upgrading kafka-client to 2.4.0 (while brokers are still at 2.2.0) 
> consumers in a consumer group intermittently stop progressing on assigned 
> partitions, even when there are messages to consume. This is not a permanent 
> condition, as they progress from time to time, but become slower with time, 
> and catch up after restart.
> Here is a sample of 3 consecutive ignored fetches:
> {noformat}
> 2020-03-15 12:08:58,440 DEBUG [Thread-6] o.a.k.c.c.i.ConsumerCoordinator - 
> Committed offset 538065584 for partition mrt-rrc10-6
> 2020-03-15 12:08:58,541 DEBUG [Thread-6] o.a.k.c.consumer.internals.Fetcher - 
> Skipping validation of fetch offsets for partitions [mrt-rrc10-1, 
> mrt-rrc10-6, mrt-rrc22-7] since the broker does not support the required 
> protocol version (introduced in Kafka 2.3)
> 2020-03-15 12:08:58,549 DEBUG [Thread-6] org.apache.kafka.clients.Metadata - 
> Updating last seen epoch from null to 62 for partition mrt-rrc10-6
> 2020-03-15 12:08:58,557 DEBUG [Thread-6] o.a.k.c.c.i.ConsumerCoordinator - 
> Committed offset 538065584 for partition mrt-rrc10-6
> 2020-03-15 12:08:58,652 DEBUG [Thread-6] o.a.k.c.consumer.internals.Fetcher - 
> Skipping validation of fetch offsets for partitions [mrt-rrc10-1, 
> mrt-rrc10-6, mrt-rrc22-7] since the broker does not support the required 
> protocol version (introduced in Kafka 2.3)
> 2020-03-15 12:08:58,659 DEBUG [Thread-6] org.apache.kafka.clients.Metadata - 
> Updating last seen epoch from null to 62 for partition mrt-rrc10-6
> 2020-03-15 12:08:58,659 DEBUG [Thread-6] o.a.k.c.consumer.internals.Fetcher - 
> Fetch READ_UNCOMMITTED at offset 538065584 for partition mrt-rrc10-6 returned 
> fetch data (error=NONE, highWaterMark=538065631, lastStableOffset = 
> 538065631, logStartOffset = 485284547, preferredReadReplica = absent, 
> abortedTransactions = null, recordsSizeInBytes=16380)
> 2020-03-15 12:08:58,659 DEBUG [Thread-6] o.a.k.c.consumer.internals.Fetcher - 
> Ignoring fetched records for partition mrt-rrc10-6 since it no longer has 
> valid position
> 2020-03-15 12:08:58,665 DEBUG [Thread-6] o.a.k.c.c.i.ConsumerCoordinator - 
> Committed offset 538065584 for partition mrt-rrc10-6
> 2020-03-15 12:08:58,761 DEBUG [Thread-6] o.a.k.c.consumer.internals.Fetcher - 
> Skipping validation of fetch offsets for partitions [mrt-rrc10-1, 
> mrt-rrc10-6, mrt-rrc22-7] since the broker does not support the required 
> protocol version (introduced in Kafka 2.3)
> 2020-03-15 12:08:58,761 DEBUG [Thread-6] o.a.k.c.consumer.internals.Fetcher - 
> Added READ_UNCOMMITTED fetch request for partition mrt-rrc10-6 at position 
> FetchPosition{offset=538065584, offsetEpoch=Optional[62], 
> currentLeader=LeaderAndEpoch{leader=node03.kafka:9092 (id: 3 rack: null), 
> epoch=-1}} to node node03.kafka:9092 (id: 3 rack: null)
> 2020-03-15 12:08:58,761 DEBUG [Thread-6] o.a.k.c.consumer.internals.Fetcher - 
> Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), 
> implied=(mrt-rrc10-6, mrt-rrc22-7, mrt-rrc10-1)) to broker node03.kafka:9092 
> (id: 3 rack: null)
> 2020-03-15 12:08:58,770 DEBUG [Thread-6] org.apache.kafka.clients.Metadata - 
> Updating last seen epoch from null to 62 for partition mrt-rrc10-6
> 2020-03-15 12:08:58,770 DEBUG [Thread-6] o.a.k.c.consumer.internals.Fetcher - 
> Fetch READ_UNCOMMITTED at offset 538065584 for partition mrt-rrc10-6 returned 
> fetch 

[jira] [Commented] (KAFKA-9776) Producer should automatically downgrade CommitTxRequest

2020-03-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17069021#comment-17069021
 ] 

ASF GitHub Bot commented on KAFKA-9776:
---

abbccdda commented on pull request #8375: KAFKA-9776: Downgrade TxnCommit API 
v3 when broker doesn't support
URL: https://github.com/apache/kafka/pull/8375
 
 
   Revert the decision for the `sendOffsetsToTransaction(groupMetadata)` API to 
fail against old broker versions, for the sake of making applications easier 
to adapt between versions. This PR silently downgrades the TxnOffsetCommit API 
when the built request version is smaller than 3.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 



> Producer should automatically downgrade CommitTxRequest
> ---
>
> Key: KAFKA-9776
> URL: https://issues.apache.org/jira/browse/KAFKA-9776
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 2.5.0
>Reporter: Matthias J. Sax
>Assignee: Boyang Chen
>Priority: Critical
>
> When using transactions with a 2.5 producer against 2.4 (or older) brokers, 
> it is not possible to call `producer.commitTransaction(..., 
> ConsumerGroupMetadata)` but only the old API `producer.commitTransaction(..., 
> String applicationId)` is supported.
> This implies that a developer needs to know the broker version when writing 
> an application or write additional code to call the one or the other API 
> depending on the broker version (the developer would need to write code to 
> figure out the broker version, too).
> We should change the producer to automatically downgrade to the older 
> CommitTxRequest if `commitTransaction(..., ConsumerGroupMetadata)` is used 
> against older brokers to avoid an `UnsupportedVersionException`.





[jira] [Created] (KAFKA-9777) Purgatory locking bug can lead to hanging transaction

2020-03-27 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-9777:
--

 Summary: Purgatory locking bug can lead to hanging transaction
 Key: KAFKA-9777
 URL: https://issues.apache.org/jira/browse/KAFKA-9777
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.4.1, 2.3.1, 2.2.2, 2.1.1, 2.0.1, 1.1.1
Reporter: Jason Gustafson
Assignee: Jason Gustafson


Once a transaction reaches the `PrepareCommit` or `PrepareAbort` state, the 
transaction coordinator must send markers to all partitions included in the 
transaction. After all markers have been sent, then the transaction transitions 
to the corresponding completed state. Until this transition occurs, no 
additional progress can be made by the producer.

The transaction coordinator uses a purgatory to track completion of the markers 
that need to be sent. Once all markers have been written, then the 
`DelayedTxnMarker` task becomes completable. We depend on its completion in 
order to transition to the completed state.

Related to KAFKA-8334, there is a bug in the locking protocol which is used to 
check completion of the `DelayedTxnMarker` task. The purgatory attempts to 
provide a "happens before" contract for task completion with 
`checkAndComplete`. Basically if a task is completed before calling 
`checkAndComplete`, then it should be given an opportunity to complete as long 
as there is sufficient time remaining before expiration. 

The bug in the locking protocol is that it expects that the operation lock is 
exclusive to the operation. See here: 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/DelayedOperation.scala#L114.
 The logic assumes that if the lock cannot be acquired, then the other holder 
of the lock must be attempting completion of the same delayed operation. If 
that is not the case, then the "happens before" contract is broken and a task 
may not get completed until expiration even if it has been satisfied.

In the case of `DelayedTxnMarker`, the lock in use is the read side of a 
read-write lock which is used for partition loading: 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/coordinator/transaction/TransactionMarkerChannelManager.scala#L264.
 In fact, if the lock cannot be acquired, it means that it is being held in 
order to complete some loading operation, in which case it will definitely not 
attempt completion of the delayed operation. If this happens to occur on the 
last call to `checkAndComplete` after all markers have been written, then the 
transition to the completing state will never occur.
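The hazard described above can be sketched with plain JDK locks: a `tryLock` on the read side fails while a "loader" holds the write lock, so the completion check is silently dropped even though the task is satisfied. This is an illustrative reduction of the pattern, not Kafka's actual `DelayedOperation` code; all names are made up.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class PurgatorySketch {
    static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    static volatile boolean completed = false;

    // Mirrors the maybeTryComplete() pattern: only attempt completion when the
    // lock is free, assuming any other holder is completing the same operation.
    static boolean checkAndComplete() {
        if (lock.readLock().tryLock()) {
            try {
                completed = true;
                return true;
            } finally {
                lock.readLock().unlock();
            }
        }
        return false; // lock held elsewhere: the completion check is lost
    }

    // Returns `completed` after checkAndComplete() races a loading operation.
    static boolean demo() throws InterruptedException {
        completed = false;
        lock.writeLock().lock(); // "partition loading" holds the write lock
        Thread checker = new Thread(PurgatorySketch::checkAndComplete);
        checker.start();
        checker.join();          // checker's tryLock fails against the writer
        lock.writeLock().unlock();
        return completed;
    }

    public static void main(String[] args) throws InterruptedException {
        // The task was satisfied, but its completion check was skipped
        System.out.println("completed: " + demo()); // false
    }
}
```

Because the reader never retries, a `checkAndComplete` lost this way leaves the operation uncompleted until expiration, which matches the hanging-transaction symptom.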



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9776) Producer should automatically downgrade CommitTxRequest

2020-03-27 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-9776:
--

Assignee: Boyang Chen

> Producer should automatically downgrade CommitTxRequest
> ---
>
> Key: KAFKA-9776
> URL: https://issues.apache.org/jira/browse/KAFKA-9776
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 2.5.0
>Reporter: Matthias J. Sax
>Assignee: Boyang Chen
>Priority: Critical
>
> When using transactions with a 2.5 producer against 2.4 (or older) brokers, 
> it is not possible to call `producer.commitTransaction(..., 
> ConsumerGroupMetadata)` but only the old API `producer.commitTransaction(..., 
> String applicationId)` is supported.
> This implies that a developer needs to know the broker version when writing 
> an application or write additional code to call the one or the other API 
> depending on the broker version (the developer would need to write code to 
> figure out the broker version, too).
> We should change the producer to automatically downgrade to the older 
> CommitTxRequest if `commitTransaction(..., ConsumerGroupMetadata)` is used 
> against older brokers to avoid an `UnsupportedVersionException`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9776) Producer should automatically downgrade CommitTxRequest

2020-03-27 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-9776:
--

 Summary: Producer should automatically downgrade CommitTxRequest
 Key: KAFKA-9776
 URL: https://issues.apache.org/jira/browse/KAFKA-9776
 Project: Kafka
  Issue Type: Improvement
  Components: producer 
Affects Versions: 2.5.0
Reporter: Matthias J. Sax


When using transactions with a 2.5 producer against 2.4 (or older) brokers, it 
is not possible to call `producer.commitTransaction(..., 
ConsumerGroupMetadata)` but only the old API `producer.commitTransaction(..., 
String applicationId)` is supported.

This implies that a developer needs to know the broker version when writing an 
application or write additional code to call the one or the other API depending 
on the broker version (the developer would need to write code to figure out the 
broker version, too).

We should change the producer to automatically downgrade to the older 
CommitTxRequest if `commitTransaction(..., ConsumerGroupMetadata)` is used 
against older brokers to avoid an `UnsupportedVersionException`.
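The requested behavior amounts to a try-the-new-API-then-fall-back pattern inside the producer. A minimal sketch with stub types (not the real `KafkaProducer` API; all names here are illustrative) of what the automatic downgrade would do:

```java
public class CommitFallbackSketch {
    // Stub standing in for org.apache.kafka.common.errors.UnsupportedVersionException
    static class UnsupportedVersionException extends RuntimeException {}

    interface TxProducer {
        void commitWithGroupMetadata(String groupMetadata); // needs 2.5+ brokers
        void commitWithApplicationId(String applicationId); // old API, older brokers
    }

    // Prefer the new overload; downgrade to the old request on older brokers.
    static String commit(TxProducer p, String groupMetadata, String applicationId) {
        try {
            p.commitWithGroupMetadata(groupMetadata);
            return "new";
        } catch (UnsupportedVersionException e) {
            p.commitWithApplicationId(applicationId);
            return "downgraded";
        }
    }

    // Simulate a pre-2.5 broker that rejects the new request version.
    static String demoOldBroker() {
        TxProducer oldBroker = new TxProducer() {
            public void commitWithGroupMetadata(String g) {
                throw new UnsupportedVersionException();
            }
            public void commitWithApplicationId(String a) { /* accepted */ }
        };
        return commit(oldBroker, "my-group", "my-app");
    }

    public static void main(String[] args) {
        System.out.println(demoOldBroker()); // downgraded
    }
}
```

Doing this inside the producer, as the ticket proposes, spares the application developer from probing the broker version themselves.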



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9771) Inter-worker SSL is broken for keystores with multiple certificates

2020-03-27 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-9771.
---
Resolution: Fixed

The fix was merged in `trunk` and the `2.5` release branch in time for the 
release of `2.5.0`.

> Inter-worker SSL is broken for keystores with multiple certificates
> ---
>
> Key: KAFKA-9771
> URL: https://issues.apache.org/jira/browse/KAFKA-9771
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Blocker
>
> The recent bump in Jetty version causes inter-worker communication to fail in 
> Connect when SSL is enabled and the keystore for the worker contains multiple 
> certificates (which it might, in the case that SNI is enabled and the 
> worker's REST interface is bound to multiple domain names). This is caused by 
> [changes introduced in Jetty 
> 9.4.23|https://github.com/eclipse/jetty.project/pull/4085], which are later 
> [fixed in Jetty 9.4.25|https://github.com/eclipse/jetty.project/pull/4404].
> We recently tried and failed to [upgrade to Jetty 
> 9.4.25|https://github.com/apache/kafka/pull/8183], so upgrading the Jetty 
> version to fix this issue isn't a viable option. Additionally, the [earliest 
> clean version of Jetty|https://www.eclipse.org/jetty/security-reports.html] 
> (at the time of writing) with regards to CVEs is 9.4.24, so reverting to a 
> pre-9.4.23 version is also not a viable option.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9771) Inter-worker SSL is broken for keystores with multiple certificates

2020-03-27 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-9771:
--
Fix Version/s: (was: 2.5.0)

> Inter-worker SSL is broken for keystores with multiple certificates
> ---
>
> Key: KAFKA-9771
> URL: https://issues.apache.org/jira/browse/KAFKA-9771
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Blocker
>
> The recent bump in Jetty version causes inter-worker communication to fail in 
> Connect when SSL is enabled and the keystore for the worker contains multiple 
> certificates (which it might, in the case that SNI is enabled and the 
> worker's REST interface is bound to multiple domain names). This is caused by 
> [changes introduced in Jetty 
> 9.4.23|https://github.com/eclipse/jetty.project/pull/4085], which are later 
> [fixed in Jetty 9.4.25|https://github.com/eclipse/jetty.project/pull/4404].
> We recently tried and failed to [upgrade to Jetty 
> 9.4.25|https://github.com/apache/kafka/pull/8183], so upgrading the Jetty 
> version to fix this issue isn't a viable option. Additionally, the [earliest 
> clean version of Jetty|https://www.eclipse.org/jetty/security-reports.html] 
> (at the time of writing) with regards to CVEs is 9.4.24, so reverting to a 
> pre-9.4.23 version is also not a viable option.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9771) Inter-worker SSL is broken for keystores with multiple certificates

2020-03-27 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-9771:
--
Affects Version/s: (was: 2.5.0)

> Inter-worker SSL is broken for keystores with multiple certificates
> ---
>
> Key: KAFKA-9771
> URL: https://issues.apache.org/jira/browse/KAFKA-9771
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Blocker
> Fix For: 2.5.0
>
>
> The recent bump in Jetty version causes inter-worker communication to fail in 
> Connect when SSL is enabled and the keystore for the worker contains multiple 
> certificates (which it might, in the case that SNI is enabled and the 
> worker's REST interface is bound to multiple domain names). This is caused by 
> [changes introduced in Jetty 
> 9.4.23|https://github.com/eclipse/jetty.project/pull/4085], which are later 
> [fixed in Jetty 9.4.25|https://github.com/eclipse/jetty.project/pull/4404].
> We recently tried and failed to [upgrade to Jetty 
> 9.4.25|https://github.com/apache/kafka/pull/8183], so upgrading the Jetty 
> version to fix this issue isn't a viable option. Additionally, the [earliest 
> clean version of Jetty|https://www.eclipse.org/jetty/security-reports.html] 
> (at the time of writing) with regards to CVEs is 9.4.24, so reverting to a 
> pre-9.4.23 version is also not a viable option.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9771) Inter-worker SSL is broken for keystores with multiple certificates

2020-03-27 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-9771:
--
Fix Version/s: 2.5.0

> Inter-worker SSL is broken for keystores with multiple certificates
> ---
>
> Key: KAFKA-9771
> URL: https://issues.apache.org/jira/browse/KAFKA-9771
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.5.0
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Blocker
> Fix For: 2.5.0
>
>
> The recent bump in Jetty version causes inter-worker communication to fail in 
> Connect when SSL is enabled and the keystore for the worker contains multiple 
> certificates (which it might, in the case that SNI is enabled and the 
> worker's REST interface is bound to multiple domain names). This is caused by 
> [changes introduced in Jetty 
> 9.4.23|https://github.com/eclipse/jetty.project/pull/4085], which are later 
> [fixed in Jetty 9.4.25|https://github.com/eclipse/jetty.project/pull/4404].
> We recently tried and failed to [upgrade to Jetty 
> 9.4.25|https://github.com/apache/kafka/pull/8183], so upgrading the Jetty 
> version to fix this issue isn't a viable option. Additionally, the [earliest 
> clean version of Jetty|https://www.eclipse.org/jetty/security-reports.html] 
> (at the time of writing) with regards to CVEs is 9.4.24, so reverting to a 
> pre-9.4.23 version is also not a viable option.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9771) Inter-worker SSL is broken for keystores with multiple certificates

2020-03-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068925#comment-17068925
 ] 

ASF GitHub Bot commented on KAFKA-9771:
---

kkonstantine commented on pull request #8369: KAFKA-9771: Port patch for 
inter-worker Connect SSL from Jetty 9.4.25
URL: https://github.com/apache/kafka/pull/8369
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Inter-worker SSL is broken for keystores with multiple certificates
> ---
>
> Key: KAFKA-9771
> URL: https://issues.apache.org/jira/browse/KAFKA-9771
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.5.0
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Blocker
>
> The recent bump in Jetty version causes inter-worker communication to fail in 
> Connect when SSL is enabled and the keystore for the worker contains multiple 
> certificates (which it might, in the case that SNI is enabled and the 
> worker's REST interface is bound to multiple domain names). This is caused by 
> [changes introduced in Jetty 
> 9.4.23|https://github.com/eclipse/jetty.project/pull/4085], which are later 
> [fixed in Jetty 9.4.25|https://github.com/eclipse/jetty.project/pull/4404].
> We recently tried and failed to [upgrade to Jetty 
> 9.4.25|https://github.com/apache/kafka/pull/8183], so upgrading the Jetty 
> version to fix this issue isn't a viable option. Additionally, the [earliest 
> clean version of Jetty|https://www.eclipse.org/jetty/security-reports.html] 
> (at the time of writing) with regards to CVEs is 9.4.24, so reverting to a 
> pre-9.4.23 version is also not a viable option.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9770) Caching State Store does not Close Underlying State Store When Exception is Thrown During Flushing

2020-03-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068820#comment-17068820
 ] 

ASF GitHub Bot commented on KAFKA-9770:
---

cadonna commented on pull request #8368: KAFKA-9770: Close underlying state 
store also when flush throws
URL: https://github.com/apache/kafka/pull/8368
 
 
   When a caching state store is closed it calls its flush() method.
   If flush() throws an exception the underlying state store is not closed.
   
   This commit ensures that state stores underlying a caching state store
   are closed even when flush() throws.
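The fix described above amounts to wrapping the flush in try/finally so the inner store's close always runs. A self-contained sketch with illustrative types (not the actual Kafka Streams classes):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class CachingStoreSketch {
    interface StateStore { void flush(); void close(); }

    static class CachingStore implements StateStore {
        final StateStore inner;
        CachingStore(StateStore inner) { this.inner = inner; }
        public void flush() { inner.flush(); }
        public void close() {
            try {
                flush();       // may throw
            } finally {
                inner.close(); // must run even if flush() threw
            }
        }
    }

    // Returns true if the inner store was closed despite flush() throwing.
    static boolean demo() {
        AtomicBoolean innerClosed = new AtomicBoolean(false);
        StateStore failing = new StateStore() {
            public void flush() { throw new RuntimeException("flush failed"); }
            public void close() { innerClosed.set(true); }
        };
        try {
            new CachingStore(failing).close();
        } catch (RuntimeException expected) {
            // the flush failure still propagates to the caller
        }
        return innerClosed.get();
    }

    public static void main(String[] args) {
        System.out.println("inner closed: " + demo()); // true
    }
}
```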
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Caching State Store does not Close Underlying State Store When Exception is 
> Thrown During Flushing
> --
>
> Key: KAFKA-9770
> URL: https://issues.apache.org/jira/browse/KAFKA-9770
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
>
> When a caching state store is closed it calls its {{flush()}} method. If 
> {{flush()}} throws an exception the underlying state store is not closed.  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9770) Caching State Store does not Close Underlying State Store When Exception is Thrown During Flushing

2020-03-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068819#comment-17068819
 ] 

ASF GitHub Bot commented on KAFKA-9770:
---

vvcephei commented on pull request #8368: KAFKA-9770: Close underlying state 
store also when flush throws
URL: https://github.com/apache/kafka/pull/8368
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Caching State Store does not Close Underlying State Store When Exception is 
> Thrown During Flushing
> --
>
> Key: KAFKA-9770
> URL: https://issues.apache.org/jira/browse/KAFKA-9770
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
>
> When a caching state store is closed it calls its {{flush()}} method. If 
> {{flush()}} throws an exception the underlying state store is not closed.  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9770) Caching State Store does not Close Underlying State Store When Exception is Thrown During Flushing

2020-03-27 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna updated KAFKA-9770:
-
Affects Version/s: (was: 2.5.0)
   2.0.0

> Caching State Store does not Close Underlying State Store When Exception is 
> Thrown During Flushing
> --
>
> Key: KAFKA-9770
> URL: https://issues.apache.org/jira/browse/KAFKA-9770
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
>
> When a caching state store is closed it calls its {{flush()}} method. If 
> {{flush()}} throws an exception the underlying state store is not closed.  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9770) Caching State Store does not Close Underlying State Store When Exception is Thrown During Flushing

2020-03-27 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna updated KAFKA-9770:
-
Affects Version/s: (was: 2.6.0)
   2.5.0

> Caching State Store does not Close Underlying State Store When Exception is 
> Thrown During Flushing
> --
>
> Key: KAFKA-9770
> URL: https://issues.apache.org/jira/browse/KAFKA-9770
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.5.0
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
>
> When a caching state store is closed it calls its {{flush()}} method. If 
> {{flush()}} throws an exception the underlying state store is not closed.  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9775) IllegalFormatConversionException from kafka-consumer-perf-test.sh

2020-03-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068581#comment-17068581
 ] 

ASF GitHub Bot commented on KAFKA-9775:
---

tombentley commented on pull request #8373: KAFKA-9775: Fix 
IllegalFormatConversionException in ToolsUtils
URL: https://github.com/apache/kafka/pull/8373
 
 
   The runtime type of Metric.metricValue() needn't always be a Double,
   for example, if it's a gauge from IntGaugeSuite.
   Since it's impossible to format non-double values with three-digit precision,
   an IllegalFormatConversionException resulted.
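The mismatch is easy to reproduce with plain `String.format`; below is a small sketch of the failure and a type-checked fix (helper names are illustrative, not the actual `ToolsUtils` code):

```java
import java.util.Locale;

public class MetricFormatSketch {
    // Fails when the metric value is not a floating-point type
    static String formatUnsafe(Object metricValue) {
        return String.format(Locale.ROOT, "%.3f", metricValue);
    }

    // Only apply %.3f when the value is actually a Double
    static String formatSafe(Object metricValue) {
        if (metricValue instanceof Double) {
            return String.format(Locale.ROOT, "%.3f", (Double) metricValue);
        }
        return String.valueOf(metricValue);
    }

    public static void main(String[] args) {
        System.out.println(formatSafe(3.14159)); // 3.142
        System.out.println(formatSafe(42));      // 42
        try {
            formatUnsafe(42); // an Integer, e.g. a gauge from IntGaugeSuite
        } catch (java.util.IllegalFormatConversionException e) {
            System.out.println(e.getMessage()); // f != java.lang.Integer
        }
    }
}
```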
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> IllegalFormatConversionException from kafka-consumer-perf-test.sh
> -
>
> Key: KAFKA-9775
> URL: https://issues.apache.org/jira/browse/KAFKA-9775
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Major
>
> Exception in thread "main" java.util.IllegalFormatConversionException: f != 
> java.lang.Integer
>   at 
> java.base/java.util.Formatter$FormatSpecifier.failConversion(Formatter.java:4426)
>   at 
> java.base/java.util.Formatter$FormatSpecifier.printFloat(Formatter.java:2951)
>   at 
> java.base/java.util.Formatter$FormatSpecifier.print(Formatter.java:2898)
>   at java.base/java.util.Formatter.format(Formatter.java:2673)
>   at java.base/java.util.Formatter.format(Formatter.java:2609)
>   at java.base/java.lang.String.format(String.java:2897)
>   at scala.collection.immutable.StringLike.format(StringLike.scala:354)
>   at scala.collection.immutable.StringLike.format$(StringLike.scala:353)
>   at scala.collection.immutable.StringOps.format(StringOps.scala:33)
>   at kafka.utils.ToolsUtils$.$anonfun$printMetrics$3(ToolsUtils.scala:60)
>   at 
> kafka.utils.ToolsUtils$.$anonfun$printMetrics$3$adapted(ToolsUtils.scala:58)
>   at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
>   at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
>   at kafka.utils.ToolsUtils$.printMetrics(ToolsUtils.scala:58)
>   at kafka.tools.ConsumerPerformance$.main(ConsumerPerformance.scala:82)
>   at kafka.tools.ConsumerPerformance.main(ConsumerPerformance.scala)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9775) IllegalFormatConversionException from kafka-consumer-perf-test.sh

2020-03-27 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9775:
--

 Summary: IllegalFormatConversionException from 
kafka-consumer-perf-test.sh
 Key: KAFKA-9775
 URL: https://issues.apache.org/jira/browse/KAFKA-9775
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Tom Bentley
Assignee: Tom Bentley


Exception in thread "main" java.util.IllegalFormatConversionException: f != 
java.lang.Integer
at 
java.base/java.util.Formatter$FormatSpecifier.failConversion(Formatter.java:4426)
at 
java.base/java.util.Formatter$FormatSpecifier.printFloat(Formatter.java:2951)
at 
java.base/java.util.Formatter$FormatSpecifier.print(Formatter.java:2898)
at java.base/java.util.Formatter.format(Formatter.java:2673)
at java.base/java.util.Formatter.format(Formatter.java:2609)
at java.base/java.lang.String.format(String.java:2897)
at scala.collection.immutable.StringLike.format(StringLike.scala:354)
at scala.collection.immutable.StringLike.format$(StringLike.scala:353)
at scala.collection.immutable.StringOps.format(StringOps.scala:33)
at kafka.utils.ToolsUtils$.$anonfun$printMetrics$3(ToolsUtils.scala:60)
at 
kafka.utils.ToolsUtils$.$anonfun$printMetrics$3$adapted(ToolsUtils.scala:58)
at 
scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at 
scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.utils.ToolsUtils$.printMetrics(ToolsUtils.scala:58)
at kafka.tools.ConsumerPerformance$.main(ConsumerPerformance.scala:82)
at kafka.tools.ConsumerPerformance.main(ConsumerPerformance.scala)




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9773) Option combination "[bootstrap-server],[config]" can't be used with option "[alter]"

2020-03-27 Thread Luke Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068530#comment-17068530
 ] 

Luke Chen commented on KAFKA-9773:
--

Meanwhile, I'll work on this issue.

> Option combination "[bootstrap-server],[config]" can't be used with option 
> "[alter]"
> 
>
> Key: KAFKA-9773
> URL: https://issues.apache.org/jira/browse/KAFKA-9773
> Project: Kafka
>  Issue Type: Test
>Reporter: startjava
>Priority: Major
>
> ghy@ghy-VirtualBox:~/T/k/bin$ ./kafka-topics.sh --topic my3 
> --bootstrap-server localhost:9081,localhost:9082,localhost:9083 --alter 
> --config max.message.bytes=20480
> Option combination "[bootstrap-server],[config]" can't be used with option 
> "[alter]"
>  
> This error appears when using Kafka 2.4.1.
> How can this error be avoided?
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-1206) allow Kafka to start from a resource negotiator system

2020-03-27 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068526#comment-17068526
 ] 

Jordan Moore commented on KAFKA-1206:
-

How relevant is this in light of Mesos or Kubernetes?

> allow Kafka to start from a resource negotiator system
> --
>
> Key: KAFKA-1206
> URL: https://issues.apache.org/jira/browse/KAFKA-1206
> Project: Kafka
>  Issue Type: Bug
>  Components: packaging
>Reporter: Joe Stein
>Priority: Major
>  Labels: mesos
> Attachments: KAFKA-1206_2014-01-16_00:40:30.patch
>
>
> We need a generic implementation to hold the property information for 
> brokers, producers and consumers.  We want the resource negotiator to store 
> this information however it wants and give it respond with a 
> java.util.Properties.  This can get used then in the Kafka.scala as 
> serverConfigs for the KafkaServerStartable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10) Kafka deployment on EC2 should be WHIRR based, instead of current contrib/deploy code based solution

2020-03-27 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068525#comment-17068525
 ] 

Jordan Moore commented on KAFKA-10:
---

This should be closed. Whirr is in the Apache Attic. 

> Kafka deployment on EC2 should be WHIRR based, instead of current 
> contrib/deploy code based solution
> 
>
> Key: KAFKA-10
> URL: https://issues.apache.org/jira/browse/KAFKA-10
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Affects Versions: 0.6
>Priority: Major
>
> Apache Whirr is a a set of libraries for running cloud services 
> http://incubator.apache.org/whirr/ 
> It is desirable that Kafka's integration with EC2 be Whirr based, rather than 
> the code based solution we currently have in contrib/deploy. 
> The code in contrib/deploy will be deleted in 0.6 release



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-5306) Official init.d scripts

2020-03-27 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-5306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068521#comment-17068521
 ] 

Jordan Moore commented on KAFKA-5306:
-

I feel like systemd won the battle for the most part, going forward.

> Official init.d scripts
> ---
>
> Key: KAFKA-5306
> URL: https://issues.apache.org/jira/browse/KAFKA-5306
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Affects Versions: 0.10.2.1
> Environment: Ubuntu 14.04
>Reporter: Shahar
>Priority: Minor
>
> It would be great to have an officially supported init.d script for starting 
> and stopping Kafka as a service.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-9773) Option combination "[bootstrap-server],[config]" can't be used with option "[alter]"

2020-03-27 Thread Luke Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068518#comment-17068518
 ] 

Luke Chen edited comment on KAFKA-9773 at 3/27/20, 10:21 AM:
-

Please use kafka-configs.sh --alter instead.

Ex: 
{code:java}
kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name 
my-topic --alter --add-config max.message.bytes=128000
{code}
 


was (Author: showuon):
Please use kafka-configs.sh --alter instead.

Ex: kafka-configs.sh --zookeeper localhost:2181 --entity-type topics 
--entity-name my-topic --alter --add-config max.message.bytes=128000

 

> Option combination "[bootstrap-server],[config]" can't be used with option 
> "[alter]"
> 
>
> Key: KAFKA-9773
> URL: https://issues.apache.org/jira/browse/KAFKA-9773
> Project: Kafka
>  Issue Type: Test
>Reporter: startjava
>Priority: Major
>
> ghy@ghy-VirtualBox:~/T/k/bin$ ./kafka-topics.sh --topic my3 
> --bootstrap-server localhost:9081,localhost:9082,localhost:9083 --alter 
> --config max.message.bytes=20480
> Option combination "[bootstrap-server],[config]" can't be used with option 
> "[alter]"
>  
> This error appears when using Kafka 2.4.1.
> How can this error be avoided?
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9774) Create official Docker image for Kafka Connect

2020-03-27 Thread Jordan Moore (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068519#comment-17068519
 ] 

Jordan Moore commented on KAFKA-9774:
-

Happy to take this on. And already working on a POC here (ignore the README and 
the project name, I started working on only the pom and the main class in this 
branch) - 
[https://github.com/cricket007/kafka-streams-jib-example/tree/feature/connect-distributed]

> Create official Docker image for Kafka Connect
> --
>
> Key: KAFKA-9774
> URL: https://issues.apache.org/jira/browse/KAFKA-9774
> Project: Kafka
>  Issue Type: Task
>  Components: build, KafkaConnect, packaging
>Affects Versions: 2.4.1
>Reporter: Jordan Moore
>Priority: Major
>  Labels: build, features
> Attachments: image-2020-03-27-05-04-46-792.png, 
> image-2020-03-27-05-05-59-024.png
>
>
> This is a ticket for creating an *official* apache/kafka-connect Docker 
> image. 
> Does this need a KIP?  -  I don't think so. This would be a new feature, not 
> any API change. 
> Why is this needed?
>  # Kafka Connect is stateless. I believe this is why a Kafka image is not 
> created?
>  # It scales much more easily with Docker and orchestrators. It operates much 
> like any other serverless / "microservice" web application 
>  # People struggle with deploying it because it is packaged _with Kafka_ , 
> which leads some to believe it needs to _*run* with Kafka_ on the same 
> machine. 
> I think there is a separate ticket for creating an official Docker image for 
> Kafka, but clearly none exists. I reached out to Confluent about this, but 
> have heard nothing yet.
> !image-2020-03-27-05-05-59-024.png|width=740,height=196!
>  
> Zookeeper already has one , btw  
> !image-2020-03-27-05-04-46-792.png|width=739,height=288!
> *References*: 
> [Docs for Official Images|https://docs.docker.com/docker-hub/official_images/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9773) Option combination "[bootstrap-server],[config]" can't be used with option "[alter]"

2020-03-27 Thread Luke Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068518#comment-17068518
 ] 

Luke Chen commented on KAFKA-9773:
--

Please use kafka-configs.sh --alter instead.

Ex: kafka-configs.sh --zookeeper localhost:2181 --entity-type topics 
--entity-name my-topic --alter --add-config max.message.bytes=128000

 

> Option combination "[bootstrap-server],[config]" can't be used with option 
> "[alter]"
> 
>
> Key: KAFKA-9773
> URL: https://issues.apache.org/jira/browse/KAFKA-9773
> Project: Kafka
>  Issue Type: Test
>Reporter: startjava
>Priority: Major
>
> ghy@ghy-VirtualBox:~/T/k/bin$ ./kafka-topics.sh --topic my3 
> --bootstrap-server localhost:9081,localhost:9082,localhost:9083 --alter 
> --config max.message.bytes=20480
> Option combination "[bootstrap-server],[config]" can't be used with option 
> "[alter]"
>  
> Using Kafka 2.4.1 produces the error above.
> How can I avoid this error?
>  
>  





[jira] [Created] (KAFKA-9774) Create official Docker image for Kafka Connect

2020-03-27 Thread Jordan Moore (Jira)
Jordan Moore created KAFKA-9774:
---

 Summary: Create official Docker image for Kafka Connect
 Key: KAFKA-9774
 URL: https://issues.apache.org/jira/browse/KAFKA-9774
 Project: Kafka
  Issue Type: Task
  Components: build, KafkaConnect, packaging
Affects Versions: 2.4.1
Reporter: Jordan Moore
 Attachments: image-2020-03-27-05-04-46-792.png, 
image-2020-03-27-05-05-59-024.png

This is a ticket for creating an *official* apache/kafka-connect Docker image. 

Does this need a KIP?  -  I don't think so. This would be a new feature, not 
any API change. 

Why is this needed?
 # Kafka Connect is stateless. I believe this is why a Kafka image is not 
created?
 # It scales much more easily with Docker and orchestrators. It operates much 
like any other serverless / "microservice" web application 
 # People struggle with deploying it because it is packaged _with Kafka_ , 
which leads some to believe it needs to _*run* with Kafka_ on the same machine. 

I thought there was a separate ticket for creating an official Docker image for 
Kafka, but apparently none exists. I reached out to Confluent about this but 
have heard nothing yet.

!image-2020-03-27-05-05-59-024.png|width=740,height=196!

 

Zookeeper already has one , btw  
!image-2020-03-27-05-04-46-792.png|width=739,height=288!

*References*: 

[Docs for Official Images|https://docs.docker.com/docker-hub/official_images/]
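To make the statelessness argument concrete, here is a minimal sketch of what such an image could contain. The base image, mirror URL, and version pinning are assumptions on my part, not a proposed official Dockerfile:

```dockerfile
# Hypothetical sketch only; base image, version, and download URL are assumptions.
FROM openjdk:11-jre-slim
ARG KAFKA_VERSION=2.4.1
ARG SCALA_VERSION=2.12
RUN apt-get update && apt-get install -y --no-install-recommends curl \
 && curl -fSL "https://archive.apache.org/dist/kafka/${KAFKA_VERSION}/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz" \
    | tar -xz -C /opt \
 && ln -s /opt/kafka_${SCALA_VERSION}-${KAFKA_VERSION} /opt/kafka
WORKDIR /opt/kafka
# Connect in distributed mode keeps connector, offset, and status state in
# Kafka topics rather than on local disk, hence no volumes are declared.
CMD ["bin/connect-distributed.sh", "config/connect-distributed.properties"]
```

In practice, settings such as bootstrap.servers and the group/topic names in connect-distributed.properties would be injected via environment variables or a mounted config file.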





[jira] [Updated] (KAFKA-9773) Option combination "[bootstrap-server],[config]" can't be used with option "[alter]"

2020-03-27 Thread startjava (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

startjava updated KAFKA-9773:
-
Issue Type: Test  (was: Task)

> Option combination "[bootstrap-server],[config]" can't be used with option 
> "[alter]"
> 
>
> Key: KAFKA-9773
> URL: https://issues.apache.org/jira/browse/KAFKA-9773
> Project: Kafka
>  Issue Type: Test
>Reporter: startjava
>Priority: Major
>
> ghy@ghy-VirtualBox:~/T/k/bin$ ./kafka-topics.sh --topic my3 
> --bootstrap-server localhost:9081,localhost:9082,localhost:9083 --alter 
> --config max.message.bytes=20480
> Option combination "[bootstrap-server],[config]" can't be used with option 
> "[alter]"
>  
> Using Kafka 2.4.1 produces the error above.
> How can I avoid this error?
>  
>  





[jira] [Created] (KAFKA-9773) Option combination "[bootstrap-server],[config]" can't be used with option "[alter]"

2020-03-27 Thread startjava (Jira)
startjava created KAFKA-9773:


 Summary: Option combination "[bootstrap-server],[config]" can't be 
used with option "[alter]"
 Key: KAFKA-9773
 URL: https://issues.apache.org/jira/browse/KAFKA-9773
 Project: Kafka
  Issue Type: Task
Reporter: startjava


ghy@ghy-VirtualBox:~/T/k/bin$ ./kafka-topics.sh --topic my3 --bootstrap-server 
localhost:9081,localhost:9082,localhost:9083 --alter --config 
max.message.bytes=20480
Option combination "[bootstrap-server],[config]" can't be used with option 
"[alter]"

 

Using Kafka 2.4.1 produces the error above.

How can I avoid this error?

 

 





[jira] [Commented] (KAFKA-9673) Conditionally apply SMTs

2020-03-27 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068418#comment-17068418
 ] 

Tom Bentley commented on KAFKA-9673:


I opened 
[KIP-585|https://cwiki.apache.org/confluence/display/KAFKA/KIP-585%3A+Conditional+SMT]
 for discussion.

> Conditionally apply SMTs
> 
>
> Key: KAFKA-9673
> URL: https://issues.apache.org/jira/browse/KAFKA-9673
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Major
>
> KAFKA-7052 ended up using IAE with a message, rather than NPE, in the case of 
> an SMT being applied to a record lacking a given field. It's still not 
> possible to apply a SMT conditionally, which is what things like Debezium 
> really need in order to apply transformations only to non-schema change 
> events.
> [~rhauch] suggested a mechanism to conditionally apply any SMT but was 
> concerned about the possibility of a naming collision (assuming it was 
> configured by a simple config)
> I'd like to propose something which would solve this problem without the 
> possibility of such collisions. The idea is to have a higher-level condition, 
> which applies an arbitrary transformation (or transformation chain) according 
> to some predicate on the record. 
> More concretely, it might be configured like this:
> {noformat}
>   transforms.conditionalExtract.type: Conditional
>   transforms.conditionalExtract.transforms: extractInt
>   transforms.conditionalExtract.transforms.extractInt.type: 
> org.apache.kafka.connect.transforms.ExtractField$Key
>   transforms.conditionalExtract.transforms.extractInt.field: c1
>   transforms.conditionalExtract.condition: topic-matches:<pattern>
> {noformat}
> * The {{Conditional}} SMT is configured with its own list of transforms 
> ({{transforms.conditionalExtract.transforms}}) to apply. This would work just 
> like the top level {{transforms}} config, so subkeys can be used to configure 
> these transforms in the usual way.
> The {{condition}} config defines the predicate for when the transforms are 
> applied to a record, using a {{<type>:<value>}} syntax
> We could initially support three condition types:
> *{{topic-matches:<pattern>}}* The transformation would be applied if the 
> record's topic name matched the given regular expression pattern. For 
> example, the following would apply the transformation on records being sent 
> to any topic with a name beginning with "my-prefix-":
> {noformat}
>transforms.conditionalExtract.condition: topic-matches:my-prefix-.*
> {noformat}
>
> *{{has-header:<name>}}* The transformation would be applied if the 
> record had at least one header with the given name. For example, the 
> following will apply the transformation on records with at least one header 
> with the name "my-header":
> {noformat}
>transforms.conditionalExtract.condition: has-header:my-header
> {noformat}
>
> *{{not:<alias>}}* This would negate the result of another named 
> condition using the condition config prefix. For example, the following will 
> apply the transformation on records which lack any header with the name 
> my-header:
> {noformat}
>   transforms.conditionalExtract.condition: not:hasMyHeader
>   transforms.conditionalExtract.condition.hasMyHeader: 
> has-header:my-header
> {noformat}
> I foresee one implementation concern with this approach, which is that 
> currently {{Transformation}} has to return a fixed {{ConfigDef}}, and this 
> proposal would require something more flexible in order to allow the config 
> parameters to depend on the listed transform aliases (and similarly for named 
> predicate used for the {{not:}} predicate). I think this could be done by 
> adding a {{default}} method to {{Transformation}} for getting the ConfigDef 
> given the config, for example.
> Obviously this would require a KIP, but before I spend any more time on this 
> I'd be interested in your thoughts [~rhauch], [~rmoff], [~gunnar.morling].
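The condition dispatch described above is simple enough to prototype outside of the Connect API. The following is a plain-Java sketch under stated assumptions: the class, method, and alias-lookup map are invented for illustration, and the ConfigDef concern is ignored.

```java
import java.util.Map;
import java.util.Set;
import java.util.regex.Pattern;

/**
 * Hypothetical sketch of the "type:value" condition dispatch proposed above.
 * Names are invented for illustration; this is not the Connect API.
 */
public class ConditionSketch {

    /** Evaluates a condition string against a record's topic and header names. */
    public static boolean matches(String condition, String topic,
                                  Set<String> headerNames,
                                  Map<String, String> namedConditions) {
        int sep = condition.indexOf(':');
        if (sep < 0) {
            throw new IllegalArgumentException("Expected type:value, got: " + condition);
        }
        String type = condition.substring(0, sep);
        String value = condition.substring(sep + 1);
        switch (type) {
            case "topic-matches":
                // Applies when the topic name matches the given regex pattern.
                return Pattern.matches(value, topic);
            case "has-header":
                // Applies when at least one header with the given name exists.
                return headerNames.contains(value);
            case "not":
                // "not:alias" looks up another named condition and negates it.
                String named = namedConditions.get(value);
                if (named == null) {
                    throw new IllegalArgumentException("Unknown condition alias: " + value);
                }
                return !matches(named, topic, headerNames, namedConditions);
            default:
                throw new IllegalArgumentException("Unknown condition type: " + type);
        }
    }

    public static void main(String[] args) {
        Map<String, String> named = Map.of("hasMyHeader", "has-header:my-header");
        System.out.println(matches("topic-matches:my-prefix-.*", "my-prefix-events", Set.of(), named)); // true
        System.out.println(matches("not:hasMyHeader", "some-topic", Set.of("other-header"), named));    // true
    }
}
```

Because {{not:}} resolves its target through the same entry point, chained or nested negations come for free; the naming-collision question reduces to how the alias map is populated from the connector config.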


