[jira] [Commented] (KAFKA-13146) Consider client use cases for accessing controller endpoints

2021-07-29 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17389899#comment-17389899
 ] 

Tom Bentley commented on KAFKA-13146:
-

{quote}We have also considered whether the internal __cluster_metadata topic 
should be readable through the controller endpoints by consumers.{quote}

That's what I had expected when originally reading the various KIPs, but 
exposing the metadata log comes with some downsides. Would consumers be able to 
do {{read_committed}} reads?

> Consider client use cases for accessing controller endpoints
> 
>
> Key: KAFKA-13146
> URL: https://issues.apache.org/jira/browse/KAFKA-13146
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Priority: Major
>  Labels: kip-500
>
> In KAFKA-13143, we dropped the Metadata API from the controller. We did this 
> for two reasons. First, the implementation did not return any topic metadata. 
> This was confusing for users who mistakenly tried to use the controller 
> endpoint in order to describe or list topics since it would appear that no 
> topics existed in the cluster. The second reason is that the implementation 
> returned the controller endpoints. So even if we returned the topic metadata, 
> clients would be unable to access the topics for reading or writing through 
> the controller endpoint.
> So for 3.0, we are effectively saying that clients should only access the 
> broker endpoints. Long term, is that what we want? When running the 
> controllers as separate nodes, it may be useful to initialize the controllers 
> and cluster metadata before starting any of the brokers, for example. For 
> this to work, we need to put some thought into how the Metadata API should 
> work with controllers. For example, we can return a flag or some kind of 
> error code in the response to indicate that topic metadata is not available. 
> We have also considered whether the internal __cluster_metadata topic should 
> be readable through the controller endpoints by consumers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13276) Public DescribeConsumerGroupsResult constructor refers to KafkaFutureImpl

2021-09-06 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-13276:
---

 Summary: Public DescribeConsumerGroupsResult constructor refers to 
KafkaFutureImpl
 Key: KAFKA-13276
 URL: https://issues.apache.org/jira/browse/KAFKA-13276
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Tom Bentley


The new public DescribeConsumerGroupsResult constructor refers to the 
non-public API KafkaFutureImpl



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13285) Use consistent access modifier for Admin clients Result classes

2021-09-09 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-13285:
---

 Summary: Use consistent access modifier for Admin clients Result 
classes
 Key: KAFKA-13285
 URL: https://issues.apache.org/jira/browse/KAFKA-13285
 Project: Kafka
  Issue Type: Task
  Components: admin
Affects Versions: 3.0.0
Reporter: Tom Bentley


The following classes in the Admin client have public constructors, while the 
rest have package-private constructors:
AlterClientQuotasResult
AlterUserScramCredentialsResult
DeleteRecordsResult
DescribeClientQuotasResult
DescribeConsumerGroupsResult
ListOffsetsResult

There should be consistency across all the Result classes.
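
For illustration, here is a sketch of what one consistent end state might look 
like for one of these classes, assuming the approach of first deprecating the 
public constructor (cf. KIP-774) before restricting it; the class is real but 
the annotation and comments below are illustrative, not the actual source:

{code:java}
import java.util.Map;
import org.apache.kafka.clients.admin.DeletedRecords;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.TopicPartition;

public class DeleteRecordsResult {
    private final Map<TopicPartition, KafkaFuture<DeletedRecords>> futures;

    // Deprecated for public use; intended to be constructed only by
    // KafkaAdminClient, and eventually made package-private like the
    // other Result classes.
    @Deprecated
    public DeleteRecordsResult(Map<TopicPartition, KafkaFuture<DeletedRecords>> futures) {
        this.futures = futures;
    }

    public Map<TopicPartition, KafkaFuture<DeletedRecords>> lowWatermarks() {
        return futures;
    }
}
{code}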



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-13285) Use consistent access modifier for Admin clients Result classes

2021-09-12 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17413733#comment-17413733
 ] 

Tom Bentley commented on KAFKA-13285:
-

[~vijaykriishna] I opened 
[KIP-774|https://cwiki.apache.org/confluence/display/KAFKA/KIP-774%3A+Deprecate+public+access+to+Admin+client%27s+*Result+constructors]
 about this last week. That will need to be approved before any PR for this 
issue can be merged.

> Use consistent access modifier for Admin clients Result classes
> ---
>
> Key: KAFKA-13285
> URL: https://issues.apache.org/jira/browse/KAFKA-13285
> Project: Kafka
>  Issue Type: Task
>  Components: admin
>Affects Versions: 3.0.0
>Reporter: Tom Bentley
>Assignee: Vijay
>Priority: Minor
>
> The following classes in the Admin client have public constructors, while the 
> rest have package-private constructors:
> AlterClientQuotasResult
> AlterUserScramCredentialsResult
> DeleteRecordsResult
> DescribeClientQuotasResult
> DescribeConsumerGroupsResult
> ListOffsetsResult
> There should be consistency across all the Result classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-13329) Connect does not perform preflight validation for per-connector key and value converters

2021-09-27 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17421183#comment-17421183
 ] 

Tom Bentley commented on KAFKA-13329:
-

I guess we could add a {{default}} method for converters to optionally expose a 
{{ConfigDef}}, defaulted to returning null.
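
For example (a sketch only; the {{config()}} method shown mirrors 
{{HeaderConverter}} and is not existing API on {{Converter}}):

{code:java}
import java.util.Map;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaAndValue;

public interface Converter {
    void configure(Map<String, ?> configs, boolean isKey);
    byte[] fromConnectData(String topic, Schema schema, Object value);
    SchemaAndValue toConnectData(String topic, byte[] value);

    // New default method: converters that have a ConfigDef can override this to
    // enable preflight validation; returning null keeps existing converters working.
    default ConfigDef config() {
        return null;
    }
}
{code}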

> Connect does not perform preflight validation for per-connector key and value 
> converters
> 
>
> Key: KAFKA-13329
> URL: https://issues.apache.org/jira/browse/KAFKA-13329
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
>
> Users may specify a key and/or value converter class for their connector 
> directly in the configuration for that connector. If this occurs, no 
> preflight validation is performed to ensure that the specified converter is 
> valid.
> Unfortunately, the [Converter 
> interface|https://github.com/apache/kafka/blob/4eb386f6e060e12e1940c0d780987e3a7c438d74/connect/api/src/main/java/org/apache/kafka/connect/storage/Converter.java]
>  does not require converters to expose a {{ConfigDef}} (unlike the 
> [HeaderConverter 
> interface|https://github.com/apache/kafka/blob/4eb386f6e060e12e1940c0d780987e3a7c438d74/connect/api/src/main/java/org/apache/kafka/connect/storage/HeaderConverter.java#L48-L52],
>  which does have that requirement), so it's unlikely that the configuration 
> properties of the converter itself can be validated.
> However, we can and should still validate that the converter class exists, 
> can be instantiated (i.e., has a public, no-args constructor and is a 
> concrete, non-abstract class), and implements the {{Converter}} interface.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-13393) Update website documentation to include required arguments when creating a topic

2021-10-25 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17433617#comment-17433617
 ] 

Tom Bentley commented on KAFKA-13393:
-

I think this was a mistake in Kafka 3.0, see 
https://issues.apache.org/jira/browse/KAFKA-13396

> Update website documentation to include required arguments when creating a 
> topic
> 
>
> Key: KAFKA-13393
> URL: https://issues.apache.org/jira/browse/KAFKA-13393
> Project: Kafka
>  Issue Type: Bug
>  Components: docs, documentation, website
>Affects Versions: 3.0.0
>Reporter: Florian Lehmann
>Priority: Trivial
>
> In the quickstart documentation 
> ([quickstart_createtopic|https://kafka.apache.org/quickstart#quickstart_createtopic])
>  , there is a command specified to create a topic:
> {code:java}
> $ bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server 
> localhost:9092
> {code}
> However, it is no longer working due to missing arguments:
>  * partitions
>  * replication-factor
>  
> The previous command should be replaced with this one:
>  
> {code:java}
> $ bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server 
> localhost:9092 --partitions 1 --replication-factor 1
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-13398) The caller program will be shut down directly when the execution of Kafka script is abnormal

2021-10-26 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17434185#comment-17434185
 ] 

Tom Bentley commented on KAFKA-13398:
-

[~RivenSun] the Admin client is backwards compatible, assuming the broker 
version you're running supports the RPCs you need. 
[https://cwiki.apache.org/confluence/display/KAFKA/Kafka+APIs] shows which APIs 
are supported in which version. For example, AlterPartitionReassignments 
appeared in 2.4. 

 

If you really _do_ need to use the CLI tools, then one way to avoid the 
{{System.exit}} might be to run with a security manager whose {{checkExit}} 
throws, preventing the call to System.exit from terminating the JVM.
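
A minimal sketch of that approach (assuming a JVM where installing a 
{{SecurityManager}} is still permitted; the class names here are illustrative):

{code:java}
import java.security.Permission;

public class NoExitDemo {

    static class ExitBlockedException extends SecurityException {
        final int status;
        ExitBlockedException(int status) {
            super("Blocked System.exit(" + status + ")");
            this.status = status;
        }
    }

    public static void main(String[] args) {
        System.setSecurityManager(new SecurityManager() {
            @Override
            public void checkExit(int status) {
                // Veto the exit: the tool's Exit.exit(...) becomes a catchable exception.
                throw new ExitBlockedException(status);
            }

            @Override
            public void checkPermission(Permission perm) {
                // Allow everything else.
            }
        });

        try {
            System.exit(1); // stands in for e.g. ReassignPartitionsCommand.main(...)
        } catch (ExitBlockedException e) {
            System.out.println("Caller survives; tool requested exit code " + e.status);
        }
    }
}
{code}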

> The caller program will be shut down directly when the execution of Kafka 
> script is abnormal
> 
>
> Key: KAFKA-13398
> URL: https://issues.apache.org/jira/browse/KAFKA-13398
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Affects Versions: 3.0.0
>Reporter: RivenSun
>Priority: Major
>
> hello [~showuon] and [~guozhang]
> Kafka has some key functions that have not yet been integrated into 
> Java-AdminClient, so I have to use some Scala classes in the Kafka Server 
> `kafka.admin` package in my java program, such as 
> `ReassignPartitionsCommand` and `ConsumerGroupCommand` (reset group offsets), 
> calling their `*main(args: Array[String])*` methods in order to 
> achieve specific functions.
> *Problem*:
> 1. In different Kafka versions, these Scala classes may have different 
> requirements for input parameters, or they may have different treatments for 
> the results of command execution.
> 1) `ReassignPartitionsCommand` requires --bootstrap-server in the latest 
> versions, but requires --zookeeper in older versions.
>  Once the parameter verification fails, the *Exit.exit(1, Some(message))* 
> method will be called, which will cause my process to shut down directly.
> 2) In Kafka 3.0.0, there is this code at the end of the `*main(args: 
> Array[String])*` method of `ReassignPartitionsCommand`:
> {code:java}
> // If the command failed, exit with a non-zero exit code.
> if (failed) {
>     Exit.exit(1)
> }
> {code}
> This will also make my process shut down directly.
> So I hope the Kafka community can print out the reason and stack trace of the 
> corresponding exception when parameter verification fails or command execution 
> is abnormal, and then return from the command's `*main(args: Array[String])*` 
> method rather than calling `*Exit.exit(...)*`. Of course, when the script is 
> executed directly on a machine, there is no problem with exiting.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-6080) Transactional EoS for source connectors

2021-11-02 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17437419#comment-17437419
 ] 

Tom Bentley commented on KAFKA-6080:


[~sliebau], 
[KIP-618|https://cwiki.apache.org/confluence/display/KAFKA/KIP-618%3A+Exactly-Once+Support+for+Source+Connectors]
 has been accepted and [~ChrisEgerton] is working on its implementation in 
[https://github.com/apache/kafka/pull/10907], hopefully in Kafka 3.2.

> Transactional EoS for source connectors
> ---
>
> Key: KAFKA-6080
> URL: https://issues.apache.org/jira/browse/KAFKA-6080
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Antony Stubbs
>Assignee: Chris Egerton
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.2.0
>
>
> Exactly once (eos) message production for source connectors.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9374) Worker can be disabled by blocked connectors

2020-01-07 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17009522#comment-17009522
 ] 

Tom Bentley commented on KAFKA-9374:


"Abandoning" the thread isn't really a solution. You cannot be sure that thread 
will ever die. Abandon enough threads and it starts to look like a resource 
leak. Also, any connector which hangs in one of those methods is buggy and that 
bug will come to light (and potentially be fixed) sooner if the connector fails 
in a noticeable way which the end user is likely to notice. By papering over 
the cracks like this isn't the end user more likely to just try recreating the 
connector (which maybe works some of the time)? Thus potentially letting the 
bug live longer?

> Worker can be disabled by blocked connectors
> 
>
> Key: KAFKA-9374
> URL: https://issues.apache.org/jira/browse/KAFKA-9374
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0, 2.0.1, 2.1.0, 
> 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 2.4.0, 2.3.1
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
>
> If a connector hangs during any of its {{initialize}}, {{start}}, {{stop}}, 
> {{taskConfigs}}, {{taskClass}}, {{version}}, {{config}}, or {{validate}} 
> methods, the worker will be disabled for some types of requests thereafter, 
> including connector creation, connector reconfiguration, and connector 
> deletion.
>  This only occurs in distributed mode and is due to the threading model used 
> by the 
> [DistributedHerder|https://github.com/apache/kafka/blob/03f763df8a8d9482d8c099806336f00cf2521465/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java]
>  class.
>  
> One potential solution could be to treat connectors that fail to start, stop, 
> etc. in time similarly to tasks that fail to stop within the [task graceful 
> shutdown timeout 
> period|https://github.com/apache/kafka/blob/03f763df8a8d9482d8c099806336f00cf2521465/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java#L121-L126]
>  by handling all connector interactions on a separate thread, waiting for 
> them to complete within a timeout, and abandoning the thread (and 
> transitioning the connector to the {{FAILED}} state, if it has been created 
> at all) if that timeout expires.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9374) Worker can be disabled by blocked connectors

2020-01-08 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010527#comment-17010527
 ] 

Tom Bentley commented on KAFKA-9374:


[~ChrisEgerton] yeah, I agree there's no tenable alternative for recovering the 
thread, and protecting the worker is a worthy aim. I like the idea of using an 
error response (for those cases associated with a request) in addition to 
transitioning the connector to failed.

Would the timeout be configurable (not arguing that it should be, merely asking)?
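
For reference, a minimal sketch of the pattern being discussed (the executor 
and {{transitionToFailed}} are hypothetical stand-ins, not the actual herder 
code):

{code:java}
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.apache.kafka.connect.connector.Connector;

public class ConnectorStartWithTimeout {
    private final ExecutorService connectorExecutor = Executors.newCachedThreadPool();

    void startWithTimeout(Connector connector, Map<String, String> config, long timeoutMs) {
        Future<?> future = connectorExecutor.submit(() -> connector.start(config));
        try {
            future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true);           // interrupt, though the thread may never die
            transitionToFailed(connector); // hypothetical: FAILED state + error response
        } catch (InterruptedException | ExecutionException e) {
            transitionToFailed(connector);
        }
    }

    private void transitionToFailed(Connector connector) {
        // Placeholder: update the status store and answer the pending REST request.
    }
}
{code}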

> Worker can be disabled by blocked connectors
> 
>
> Key: KAFKA-9374
> URL: https://issues.apache.org/jira/browse/KAFKA-9374
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0, 2.0.1, 2.1.0, 
> 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 2.4.0, 2.3.1
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
>
> If a connector hangs during any of its {{initialize}}, {{start}}, {{stop}}, 
> {{taskConfigs}}, {{taskClass}}, {{version}}, {{config}}, or {{validate}} 
> methods, the worker will be disabled for some types of requests thereafter, 
> including connector creation, connector reconfiguration, and connector 
> deletion.
>  -This only occurs in distributed mode and is due to the threading model used 
> by the 
> [DistributedHerder|https://github.com/apache/kafka/blob/03f763df8a8d9482d8c099806336f00cf2521465/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java]
>  class.- This affects both distributed and standalone mode. Distributed 
> herders perform some connector work synchronously in their {{tick}} thread, 
> which also handles group membership and some REST requests. The majority of 
> the herder methods for the standalone herder are {{synchronized}}, including 
> those for creating, updating, and deleting connectors; as long as one of 
> those methods blocks, all subsequent calls to any of these methods will also 
> be blocked.
>  
> One potential solution could be to treat connectors that fail to start, stop, 
> etc. in time similarly to tasks that fail to stop within the [task graceful 
> shutdown timeout 
> period|https://github.com/apache/kafka/blob/03f763df8a8d9482d8c099806336f00cf2521465/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java#L121-L126]
>  by handling all connector interactions on a separate thread, waiting for 
> them to complete within a timeout, and abandoning the thread (and 
> transitioning the connector to the {{FAILED}} state, if it has been created 
> at all) if that timeout expires.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-8768) Replace DeleteRecords request/response with automated protocol

2020-01-13 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley reassigned KAFKA-8768:
--

Assignee: Tom Bentley  (was: Mickael Maison)

> Replace DeleteRecords request/response with automated protocol
> --
>
> Key: KAFKA-8768
> URL: https://issues.apache.org/jira/browse/KAFKA-8768
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Tom Bentley
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-7787) Add error specifications to KAFKA-7609

2020-01-14 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-7787:
---
Description: In our RPC JSON, it would be nice if we could specify what 
versions of a response could contain what errors.  See the discussion here: 
https://github.com/apache/kafka/pull/5893#discussion_r244841051  (was: In our 
RPC JSON, it would be nice if we could specify what versions of a response 
could contain what errors.  See the discussion here: 
https://github.com/apache/kafka/pull/5893)

> Add error specifications to KAFKA-7609
> --
>
> Key: KAFKA-7787
> URL: https://issues.apache.org/jira/browse/KAFKA-7787
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Colin McCabe
>Priority: Minor
>
> In our RPC JSON, it would be nice if we could specify what versions of a 
> response could contain what errors.  See the discussion here: 
> https://github.com/apache/kafka/pull/5893#discussion_r244841051



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9433) Replace AlterConfigs request/response with automated protocol

2020-01-15 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9433:
--

 Summary: Replace AlterConfigs request/response with automated 
protocol
 Key: KAFKA-9433
 URL: https://issues.apache.org/jira/browse/KAFKA-9433
 Project: Kafka
  Issue Type: Sub-task
Reporter: Tom Bentley
Assignee: Tom Bentley






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9432) Replace DescribeConfigs request/response with automated protocol

2020-01-15 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9432:
--

 Summary: Replace DescribeConfigs request/response with automated 
protocol
 Key: KAFKA-9432
 URL: https://issues.apache.org/jira/browse/KAFKA-9432
 Project: Kafka
  Issue Type: Sub-task
Reporter: Tom Bentley
Assignee: Tom Bentley






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9434) Replace AlterReplicaLogDirs request/response with automated protocol

2020-01-15 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9434:
--

 Summary: Replace AlterReplicaLogDirs request/response with 
automated protocol
 Key: KAFKA-9434
 URL: https://issues.apache.org/jira/browse/KAFKA-9434
 Project: Kafka
  Issue Type: Sub-task
Reporter: Tom Bentley
Assignee: Tom Bentley






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9435) Replace DescribeLogDirs request/response with automated protocol

2020-01-15 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9435:
--

 Summary: Replace DescribeLogDirs request/response with automated 
protocol
 Key: KAFKA-9435
 URL: https://issues.apache.org/jira/browse/KAFKA-9435
 Project: Kafka
  Issue Type: Sub-task
Reporter: Tom Bentley
Assignee: Tom Bentley






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-7787) Add error specifications to KAFKA-7609

2020-01-17 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley reassigned KAFKA-7787:
--

Assignee: Tom Bentley

> Add error specifications to KAFKA-7609
> --
>
> Key: KAFKA-7787
> URL: https://issues.apache.org/jira/browse/KAFKA-7787
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Colin McCabe
>Assignee: Tom Bentley
>Priority: Minor
>
> In our RPC JSON, it would be nice if we could specify what versions of a 
> response could contain what errors.  See the discussion here: 
> https://github.com/apache/kafka/pull/5893#discussion_r244841051



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7787) Add error specifications to KAFKA-7609

2020-01-17 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17017873#comment-17017873
 ] 

Tom Bentley commented on KAFKA-7787:


[~cmccabe] I was thinking about this and have a partially working 
implementation, but wanted your thoughts before I spend more time on it.

cc: [~hachikuji] who raised the original comment on the PR for the message 
generator.

The following approach works for all enum-like codes in the protocol, not just 
error codes.

h2. Declaring coded values

The generator will support a new kind of JSON object input {{"type": "codes"}}, 
which describes a set of distinct named integer (byte, short, etc.) values. For 
example:
{code:language=js}
{
  "type": "codes",
  "name": "ErrorCodes",
  "valueType": "int16"
  "codes": [
{ "name": "UNKNOWN_SERVER_ERROR", "value": -1,
  "about": "The server experienced an unexpected error when processing the 
request." },
{ "name": "NONE", "value": 0,
  "about": "No error." },
{ "name": "OFFSET_OUT_OF_RANGE", "value": 1,
  "about": "The requested offset is not within the range of offsets 
maintained by the server." },
  ...
  ]
}
{code}

 * The `valueType` is the type of the integer values.
 * The `codes` lists each of the allowed values. The `about` is optional.

This would generate a class of constants:

{code:language=java}
class ErrorCodes {
    public final static short UNKNOWN_SERVER_ERROR = -1;
    public final static short NONE = 0;
    ...

    public static boolean isValid(short v) {
        return NONE <= v && v <= MAX;
    }
}
{code}

* The {{isValid()}} method validates that its parameter is one of the allowed 
values.
* It's an error for two constants to have the same value.
* There need be no requirement for the values to be contiguous.


Continuing the example, this allows the existing `Errors` enum to be written as:

{code:language=java}
enum Errors {
NONE(ErrorCodes.NONE, ...);
...
}
{code}

h2. Using codes in field specs

The field spec will support a {{domain}} property which names the set of codes 
that values of the field may take. For example an {{ErrorCode}} field:

{code:language=js}
{
  "name": "ErrorCode",
  "type": "int16",
  "domain": {
    "name": "ErrorCodes",
    "values": [
      { "name": "NONE", "validVersions": "0+" },
      { "name": "FOO", "validVersions": "0+" },
      { "name": "BAR", "validVersions": "3+" },
      ...
    ]
  }
}
{code}

* The {{name}} is the name of a corresponding codes declaration.
* The {{values}} is optional. When it's missing, any of the values in the 
codes declaration are permitted. When it's present, only the given values 
are allowed. Values are given as an object with a {{name}} that identifies a 
value from the codes declaration and, optionally, a {{validVersions}} which 
restricts a given code to the given versions of the message.

The owning {{Data}} class (or inner classes of the {{Data}} class) will gain a 
method for validating the error codes. The implementation would depend on 
whether {{values}} and/or {{validVersions}} were given, but might look like this:

{code:language=java}
public static boolean isValidErrorCode(short v, short version) {
    switch (version) {
        case 0:
        case 1:
        case 2:
            return v == ErrorCodes.NONE || v == ErrorCodes.FOO;
        case 3:
            return v == ErrorCodes.NONE || v == ErrorCodes.FOO || v == ErrorCodes.BAR;
        default:
            return false;
    }
}
{code}

h2. Validation

We can call the validation methods and throw:

 * When serializing requests
 * When deserializing requests
 * When serializing responses, except for error code fields.

The reason for distinguishing error code fields arises from the difficulty of 
knowing for certain which exception types can be thrown in the code called from 
the handler in the broker. We don't want a mistake in the allowed error codes to 
result in a needless exception in the broker. So for these, instead of throwing, 
we could log the unexpected value.

We could use properties of the field spec to configure what code was generated 
for serialization and deserialization on a per-message basis.

Thoughts?



> Add error specifications to KAFKA-7609
> --
>
> Key: KAFKA-7787
> URL: https://issues.apache.org/jira/browse/KAFKA-7787
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Colin McCabe
>Assignee: Tom Bentley
>Priority: Minor
>
> In our RPC JSON, it would be nice if we could specify what versions of a 
> response could contain what errors.  See the discussion here: 
> https://github.com/apache/kafka/pull/5893#discussion_r244841051



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-2526) Console Producer / Consumer's serde config is not working

2020-01-17 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-2526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17018099#comment-17018099
 ] 

Tom Bentley commented on KAFKA-2526:


[~mgharat] are you working on this, or intending to come back to it?

> Console Producer / Consumer's serde config is not working
> -
>
> Key: KAFKA-2526
> URL: https://issues.apache.org/jira/browse/KAFKA-2526
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Mayuresh Gharat
>Priority: Major
>  Labels: newbie
>
> Although in the console producer one can specify the key and value serializers, 
> they are actually not used since 1) it always serializes the input string as 
> String.getBytes (hence always pre-assuming the string serializer) and 2) they are 
> actually only passed into the old producer. The same issues exist in the console 
> consumer.
> In addition, the configs in the console producer are messy: we have 1) some 
> config values exposed as cmd parameters, 2) some config values in 
> --producer-property, and 3) some in --property.
> It would be great to clean up the configs in both console producer and 
> consumer, putting them into a single --property parameter which could 
> possibly take a file for reading in property values as well, and only leave 
> --new-producer as the other command line parameter.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-6359) Work for KIP-236

2020-02-25 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-6359.

Resolution: Implemented

This was addressed by KIP-455 instead.

> Work for KIP-236
> 
>
> Key: KAFKA-6359
> URL: https://issues.apache.org/jira/browse/KAFKA-6359
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tom Bentley
>Assignee: GEORGE LI
>Priority: Minor
>
> This issue is for the work described in KIP-236.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-5554) Hilight config settings for particular common use cases

2020-02-25 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-5554.

Resolution: Abandoned

> Hilight config settings for particular common use cases
> ---
>
> Key: KAFKA-5554
> URL: https://issues.apache.org/jira/browse/KAFKA-5554
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>
> Judging by the sorts of questions seen on the mailing list, Stack Overflow, 
> etc., it seems common for users to assume that Kafka will default to settings 
> which won't lose messages. They start using Kafka and at some later time find 
> messages have been lost.
> While it's not our fault if users don't read the documentation, there's a lot 
> of configuration documentation to digest and it's easy for people to miss an 
> important setting.
> Therefore, I'd like to suggest that in addition to the current configuration 
> docs we add a short section highlighting those settings which pertain to 
> common use cases, such as:
> * configs to avoid lost messages
> * configs for low latency
> I'm sure some users will continue to not read the documentation, but when 
> they inevitably start asking questions it means people can respond with "have 
> you configured everything as described here?"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7787) Add error specifications to KAFKA-7609

2020-02-25 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17044630#comment-17044630
 ] 

Tom Bentley commented on KAFKA-7787:


[~cmccabe] any feedback on my comment?

> Add error specifications to KAFKA-7609
> --
>
> Key: KAFKA-7787
> URL: https://issues.apache.org/jira/browse/KAFKA-7787
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Colin McCabe
>Assignee: Tom Bentley
>Priority: Minor
>
> In our RPC JSON, it would be nice if we could specify what versions of a 
> response could contain what errors.  See the discussion here: 
> https://github.com/apache/kafka/pull/5893#discussion_r244841051



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9633) ConfigProvider.close() not called

2020-03-02 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9633:
--

 Summary: ConfigProvider.close() not called
 Key: KAFKA-9633
 URL: https://issues.apache.org/jira/browse/KAFKA-9633
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley


ConfigProvider extends Closeable, but in the following contexts the {{close()}} 
method is never called:

1. AbstractConfig
2. WorkerConfigTransformer
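
For reference, a sketch of the kind of fix intended; the method here is a 
hypothetical close hook on the owning class, while {{Utils.closeQuietly}} is 
the real helper in {{org.apache.kafka.common.utils.Utils}}:

{code:java}
import java.util.Map;
import org.apache.kafka.common.config.provider.ConfigProvider;
import org.apache.kafka.common.utils.Utils;

// Hypothetical: called when an AbstractConfig or WorkerConfigTransformer
// is finished with its providers.
static void closeConfigProviders(Map<String, ConfigProvider> providers) {
    for (Map.Entry<String, ConfigProvider> entry : providers.entrySet()) {
        Utils.closeQuietly(entry.getValue(), "config provider " + entry.getKey());
    }
}
{code}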



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9634) ConfigProvider does not document thread safety

2020-03-02 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9634:
--

 Summary: ConfigProvider does not document thread safety
 Key: KAFKA-9634
 URL: https://issues.apache.org/jira/browse/KAFKA-9634
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley


In Kafka Connect {{ConfigProvider}} can be used concurrently (e.g. via PUT to 
{{/{connectorType}/config/validate}}), but there is no mention of concurrent 
usage in the Javadocs for {{ConfigProvider}}. It's probably worth calling out 
that implementations need to be thread safe.
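
Something along these lines, say (the wording is illustrative):

{code:java}
/**
 * A provider of configuration data.
 * <p>
 * Implementations are required to be thread safe: Kafka Connect may invoke
 * {@code get(...)} concurrently from multiple threads, for example while
 * validating several connector configurations at the same time.
 */
public interface ConfigProvider extends Configurable, Closeable {
    // existing methods unchanged
}
{code}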



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9635) Should ConfigProvider.subscribe be decrecated?

2020-03-02 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9635:
--

 Summary: Should ConfigProvider.subscribe be decrecated?
 Key: KAFKA-9635
 URL: https://issues.apache.org/jira/browse/KAFKA-9635
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley


KIP-297 added the ConfigProvider interface for use with Kafka Connect.
It seems that at that time it was anticipated that config providers should 
have a change notification mechanism to facilitate dynamic reconfiguration. 
This was realised by having `subscribe()`, `unsubscribe()` and 
`unsubscribeAll()` methods in the ConfigProvider interface.

KIP-421 subsequently added the ability to use config providers with other 
configs (e.g. client, broker and Kafka Streams). KIP-421 didn't end up using 
the change notification feature, since it was incompatible with being able to 
update broker configs atomically.

As things currently stand the `subscribe()`, `unsubscribe()` and 
`unsubscribeAll()`  methods remain in the ConfigProvider interface but are not 
used anywhere in the Kafka code base. Is there an intention to make use of 
these methods, or should they be deprecated?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9635) Should ConfigProvider.subscribe be deprecated?

2020-03-02 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-9635:
---
Summary: Should ConfigProvider.subscribe be deprecated?  (was: Should 
ConfigProvider.subscribe be decrecated?)

> Should ConfigProvider.subscribe be deprecated?
> --
>
> Key: KAFKA-9635
> URL: https://issues.apache.org/jira/browse/KAFKA-9635
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>
> KIP-297 added the ConfigProvider interface for use with Kafka Connect.
> It seems that at that time it was anticipated that config providers should 
> have a change notification mechanism to facilitate dynamic reconfiguration. 
> This was realised by having `subscribe()`, `unsubscribe()` and 
> `unsubscribeAll()` methods in the ConfigProvider interface.
> KIP-421 subsequently added the ability to use config providers with other 
> configs (e.g. client, broker and Kafka Streams). KIP-421 didn't end up using 
> the change notification feature, since it was incompatible with being able to 
> update broker configs atomically.
> As things currently stand the `subscribe()`, `unsubscribe()` and 
> `unsubscribeAll()`  methods remain in the ConfigProvider interface but are 
> not used anywhere in the Kafka code base. Is there an intention to make use 
> of these methods, or should they be deprecated?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9633) ConfigProvider.close() not called

2020-03-02 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-9633:
---
Labels: patch-available  (was: )

> ConfigProvider.close() not called
> -
>
> Key: KAFKA-9633
> URL: https://issues.apache.org/jira/browse/KAFKA-9633
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: patch-available
>
> ConfigProvider extends Closeable, but in the following contexts the 
> {{close()}} method is never called:
> 1. AbstractConfig
> 2. WorkerConfigTransformer



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9633) ConfigProvider.close() not called

2020-03-02 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17049506#comment-17049506
 ] 

Tom Bentley commented on KAFKA-9633:


Patch available: https://github.com/apache/kafka/pull/8204

> ConfigProvider.close() not called
> -
>
> Key: KAFKA-9633
> URL: https://issues.apache.org/jira/browse/KAFKA-9633
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: patch-available
>
> ConfigProvider extends Closeable, but in the following contexts the 
> {{close()}} method is never called:
> 1. AbstractConfig
> 2. WorkerConfigTransformer



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9635) Should ConfigProvider.subscribe be deprecated?

2020-03-04 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17050997#comment-17050997
 ] 

Tom Bentley commented on KAFKA-9635:


[~kkonstantine], [~cmccabe] can you offer any insights into the future use of 
these methods?

> Should ConfigProvider.subscribe be deprecated?
> --
>
> Key: KAFKA-9635
> URL: https://issues.apache.org/jira/browse/KAFKA-9635
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>
> KIP-297 added the ConfigProvider interface for use with Kafka Connect.
> It seems that at that time it was anticipated that config providers should 
> have a change notification mechanism to facilitate dynamic reconfiguration. 
> This was realised by having {{subscribe()}}, {{unsubscribe()}} and 
> {{unsubscribeAll()}} methods in the ConfigProvider interface.
> KIP-421 subsequently added the ability to use config providers with other 
> configs (e.g. client, broker and Kafka Streams). KIP-421 didn't end up using 
> the change notification feature, since it was incompatible with being able to 
> update broker configs atomically.
> As things currently stand the {{subscribe()}}, {{unsubscribe()}} and 
> {{unsubscribeAll()}}  methods remain in the ConfigProvider interface but are 
> not used anywhere in the Kafka code base. Is there an intention to make use 
> of these methods, or should they be deprecated?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9635) Should ConfigProvider.subscribe be deprecated?

2020-03-04 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-9635:
---
Description: 
KIP-297 added the ConfigProvider interface for use with Kafka Connect.
It seems that at that time it was anticipated that config providers should 
have a change notification mechanism to facilitate dynamic reconfiguration. 
This was realised by having {{subscribe()}}, {{unsubscribe()}} and 
{{unsubscribeAll()}} methods in the ConfigProvider interface.

KIP-421 subsequently added the ability to use config providers with other 
configs (e.g. client, broker and Kafka Streams). KIP-421 didn't end up using 
the change notification feature, since it was incompatible with being able to 
update broker configs atomically.

As things currently stand the {{subscribe()}}, {{unsubscribe()}} and 
{{unsubscribeAll()}}  methods remain in the ConfigProvider interface but are 
not used anywhere in the Kafka code base. Is there an intention to make use of 
these methods, or should they be deprecated?

  was:
KIP-297 added the ConfigProvider interface for use with Kafka Connect.
It seems that at that time it was anticipated that config providers should 
have a change notification mechanism to facilitate dynamic reconfiguration. 
This was realised by having `subscribe()`, `unsubscribe()` and 
`unsubscribeAll()` methods in the ConfigProvider interface.

KIP-421 subsequently added the ability to use config providers with other 
configs (e.g. client, broker and Kafka Streams). KIP-421 didn't end up using 
the change notification feature, since it was incompatible with being able to 
update broker configs atomically.

As things currently stand the `subscribe()`, `unsubscribe()` and 
`unsubscribeAll()`  methods remain in the ConfigProvider interface but are not 
used anywhere in the Kafka code base. Is there an intention to make use of 
these methods, or should they be deprecated?


> Should ConfigProvider.subscribe be deprecated?
> --
>
> Key: KAFKA-9635
> URL: https://issues.apache.org/jira/browse/KAFKA-9635
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>
> KIP-297 added the ConfigProvider interface for use with Kafka Connect.
> It seems that at that time it was anticipated that config providers should 
> have a change notification mechanism to facilitate dynamic reconfiguration. 
> This was realised by having {{subscribe()}}, {{unsubscribe()}} and 
> {{unsubscribeAll()}} methods in the ConfigProvider interface.
> KIP-421 subsequently added the ability to use config providers with other 
> configs (e.g. client, broker and Kafka Streams). KIP-421 didn't end up using 
> the change notification feature, since it was incompatible with being able to 
> update broker configs atomically.
> As things currently stand the {{subscribe()}}, {{unsubscribe()}} and 
> {{unsubscribeAll()}}  methods remain in the ConfigProvider interface but are 
> not used anywhere in the Kafka code base. Is there an intention to make use 
> of these methods, or should they be deprecated?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9650) Include human readable quantities for default config docs

2020-03-04 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9650:
--

 Summary: Include human readable quantities for default config docs
 Key: KAFKA-9650
 URL: https://issues.apache.org/jira/browse/KAFKA-9650
 Project: Kafka
  Issue Type: Improvement
  Components: docs
Reporter: Tom Bentley
Assignee: Tom Bentley


The Kafka config docs include default values for quantities in milliseconds and 
bytes, for example {{log.segment.bytes}} has default: {{1073741824}}. Many 
readers won't know that that's 1GiB, so will have to work it out. It would make 
the docs more readable if we included the quantity in the appropriate unit in 
parentheses after the actual default value, like this:

default: 1073741824 (=1GiB)

Similarly for values in milliseconds. 
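
For illustration, a small helper of the kind the doc generation could use (a 
sketch only; this is not part of Kafka's doc generator):

{code:java}
// Renders an exact power-of-1024 byte quantity in a human-readable unit,
// e.g. 1073741824 -> "1GiB"; returns null when there is no exact match.
static String humanReadableBytes(long bytes) {
    if (bytes <= 0) {
        return null;
    }
    String[] units = {"KiB", "MiB", "GiB", "TiB"};
    long value = bytes;
    String result = null;
    for (String unit : units) {
        if (value % 1024 != 0) {
            break;
        }
        value /= 1024;
        result = value + unit;
    }
    return result;
}
{code}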



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9651) Divide by zero in DefaultPartitioner

2020-03-04 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9651:
--

 Summary: Divide by zero in DefaultPartitioner
 Key: KAFKA-9651
 URL: https://issues.apache.org/jira/browse/KAFKA-9651
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley


The following exception was observed in a Kafka Streams application running on 
Kafka 2.3:

java.lang.ArithmeticException: / by zero
at 
org.apache.kafka.clients.producer.internals.DefaultPartitioner.partition(DefaultPartitioner.java:69)
at 
org.apache.kafka.streams.processor.internals.DefaultStreamPartitioner.partition(DefaultStreamPartitioner.java:39)
at 
org.apache.kafka.streams.processor.internals.StreamsMetadataState.getStreamsMetadataForKey(StreamsMetadataState.java:255)
at 
org.apache.kafka.streams.processor.internals.StreamsMetadataState.getMetadataWithKey(StreamsMetadataState.java:155)
at 
org.apache.kafka.streams.KafkaStreams.metadataForKey(KafkaStreams.java:1019)

The cause is that the {{Cluster}} returns an empty list from 
{{partitionsForTopic(topic)}} and the size is then used as a divisor.

The same pattern of using the size of the partitions as divisor is used in 
other implementations of {{Partitioner}} and also {{StickyPartitionCache}}, so 
presumably they're also prone to this problem when {{Cluster}} lacks 
information about a topic. 
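
For reference, a minimal sketch of the failing pattern (simplified from 
{{DefaultPartitioner}}; the {{Cluster}} and {{Utils}} calls are the real 
client API):

{code:java}
import java.util.List;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.utils.Utils;

static int partitionForKey(byte[] keyBytes, String topic, Cluster cluster) {
    List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
    int numPartitions = partitions.size(); // 0 when the Cluster has no info for the topic
    // Throws ArithmeticException ("/ by zero") when numPartitions == 0:
    return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
}
{code}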



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9663) KafkaStreams.metadataForKey, queryMetadataForKey docs don't mention null

2020-03-05 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9663:
--

 Summary: KafkaStreams.metadataForKey, queryMetadataForKey docs 
don't mention null
 Key: KAFKA-9663
 URL: https://issues.apache.org/jira/browse/KAFKA-9663
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Tom Bentley
Assignee: Tom Bentley


The Javadoc for {{KafkaStreams.metadataForKey}} and 
{{KafkaStreams.queryMetadataForKey}} don't document the possible null return 
value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9673) Conditionally apply SMTs

2020-03-06 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9673:
--

 Summary: Conditionally apply SMTs
 Key: KAFKA-9673
 URL: https://issues.apache.org/jira/browse/KAFKA-9673
 Project: Kafka
  Issue Type: New Feature
  Components: KafkaConnect
Reporter: Tom Bentley
Assignee: Tom Bentley


KAFKA-7052 ended up using IAE with a message, rather than NPE, in the case of 
an SMT being applied to a record lacking a given field. It's still not possible 
to apply an SMT conditionally, which is what things like Debezium really need in 
order to apply transformations only to non-schema change events.

[~rhauch] suggested a mechanism to conditionally apply any SMT, but was 
concerned about the possibility of a naming collision (assuming it was 
configured by a simple config).

I'd like to propose something which would solve this problem without the 
possibility of such collisions. The idea is to have a higher-level condition, 
which applies an arbitrary transformation (or transformation chain) according 
to some predicate on the record. 

More concretely, it might be configured like this:

{noformat}
  transforms.conditionalExtract.type: Conditional
  transforms.conditionalExtract.transforms: extractInt
  transforms.conditionalExtract.transforms.extractInt.type: 
org.apache.kafka.connect.transforms.ExtractField$Key
  transforms.conditionalExtract.transforms.extractInt.field: c1
  transforms.conditionalExtract.condition: topic-matches:<pattern>
{noformat}

* The {{Conditional}} SMT is configured with its own list of transforms 
({{transforms.conditionalExtract.transforms}}) to apply. This would work just 
like the top level {{transforms}} config, so subkeys can be used to configure 
these transforms in the usual way.
* The {{condition}} config defines the predicate for when the transforms are 
applied to a record, using a {{<conditionType>:<value>}} syntax

We could initially support three condition types:

*{{topic-matches:<pattern>}}* The transformation would be applied if the 
record's topic name matched the given regular expression pattern. For example, 
the following would apply the transformation on records being sent to any topic 
with a name beginning with "my-prefix-":
{noformat}
   transforms.conditionalExtract.condition: topic-matches:my-prefix-.*
{noformat}
   
*{{has-header:<headerName>}}* The transformation would be applied if the 
record had at least one header with the given name. For example, the following 
will apply the transformation on records with at least one header with the name 
"my-header":
{noformat}
   transforms.conditionalExtract.condition: has-header:my-header
{noformat}
   
*{{not:<conditionName>}}* This would negate the result of another named 
condition using the condition config prefix. For example, the following will 
apply the transformation on records which lack any header with the name 
my-header:

{noformat}
  transforms.conditionalExtract.condition: not:hasMyHeader
  transforms.conditionalExtract.condition.hasMyHeader: has-header:my-header
{noformat}

I foresee one implementation concern with this approach, which is that 
currently {{Transformation}} has to return a fixed {{ConfigDef}}, and this 
proposal would require something more flexible in order to allow the config 
parameters to depend on the listed transform aliases (and similarly for the 
named predicate used for the {{not:}} condition). I think this could be done by 
adding a {{default}} method to {{Transformation}} for getting the ConfigDef 
given the config, for example, as sketched below.
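
A sketch of what that might look like (the {{config(Map)}} overload is an 
illustrative name, not existing API):

{code:java}
import java.io.Closeable;
import java.util.Map;
import org.apache.kafka.common.Configurable;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;

public interface Transformation<R extends ConnectRecord<R>> extends Configurable, Closeable {

    R apply(R record);

    ConfigDef config();

    // New: lets an SMT derive its ConfigDef from the supplied config, e.g. adding
    // config keys for each listed transform alias; defaults to the static ConfigDef.
    default ConfigDef config(Map<String, String> props) {
        return config();
    }
}
{code}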

Obviously this would require a KIP, but before I spend any more time on this 
I'd be interested in your thoughts [~rhauch], [~rmoff], [~gunnar.morling].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7787) Add error specifications to KAFKA-7609

2020-03-06 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17053686#comment-17053686
 ] 

Tom Bentley commented on KAFKA-7787:


[~cmccabe] thanks for getting back to me.

The UNKNOWN enum values are interesting as (at least for {{ConfigType}}) 
UNKNOWN represents a value which is never sent on the wire. So it didn't feel 
especially weird to not enumerate that in the protocol. But it would be simple 
enough to declare things like that in the {{codes}} declaration (and perhaps 
mark it as not being part of the wire protocol, so it couldn't be used in a 
{{domain}}). 

I already have the sharing of enums between messages sorted by using a new top 
level type, as you suggest.

For errors, I just used the constants in the existing Errors class, which is a 
bit of a cheat as it has to be manually maintained. Are you aiming to generate 
{{Errors}}?

I'll post a PR with the code I hacked together so you can see how it works at 
this point.

> Add error specifications to KAFKA-7609
> --
>
> Key: KAFKA-7787
> URL: https://issues.apache.org/jira/browse/KAFKA-7787
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Colin McCabe
>Assignee: Tom Bentley
>Priority: Minor
>
> In our RPC JSON, it would be nice if we could specify what versions of a 
> response could contain what errors.  See the discussion here: 
> https://github.com/apache/kafka/pull/5893#discussion_r244841051



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9691) Flaky test kafka.admin.TopicCommandWithAdminClientTest#testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress

2020-03-10 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9691:
--

 Summary: Flaky test 
kafka.admin.TopicCommandWithAdminClientTest#testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress
 Key: KAFKA-9691
 URL: https://issues.apache.org/jira/browse/KAFKA-9691
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Affects Versions: 2.5.0
Reporter: Tom Bentley


Stacktrace:

{noformat}
java.lang.NullPointerException
at 
kafka.admin.TopicCommandWithAdminClientTest.$anonfun$testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress$3(TopicCommandWithAdminClientTest.scala:673)
at 
kafka.admin.TopicCommandWithAdminClientTest.testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress(TopicCommandWithAdminClientTest.scala:671)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Met

[jira] [Updated] (KAFKA-9691) Flaky test kafka.admin.TopicCommandWithAdminClientTest#testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress

2020-03-10 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-9691:
---
Labels: flaky-test  (was: )

> Flaky test 
> kafka.admin.TopicCommandWithAdminClientTest#testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress
> 
>
> Key: KAFKA-9691
> URL: https://issues.apache.org/jira/browse/KAFKA-9691
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 2.5.0
>Reporter: Tom Bentley
>Priority: Major
>  Labels: flaky-test
>
> Stacktrace:
> {noformat}
> java.lang.NullPointerException
>   at kafka.admin.TopicCommandWithAdminClientTest.$anonfun$testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress$3(TopicCommandWithAdminClientTest.scala:673)
>   at kafka.admin.TopicCommandWithAdminClientTest.testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress(TopicCommandWithAdminClientTest.scala:671)
>   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>   at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
>   at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at org.gradle.api.internal.tasks.testing.worker.TestWorker

[jira] [Created] (KAFKA-9692) Flaky test - kafka.admin.ReassignPartitionsClusterTest#znodeReassignmentShouldOverrideApiTriggeredReassignment

2020-03-10 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9692:
--

 Summary: Flaky test - 
kafka.admin.ReassignPartitionsClusterTest#znodeReassignmentShouldOverrideApiTriggeredReassignment
 Key: KAFKA-9692
 URL: https://issues.apache.org/jira/browse/KAFKA-9692
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Affects Versions: 2.5.0
Reporter: Tom Bentley


{noformat}
java.lang.AssertionError: expected: but was:
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:120)
at org.junit.Assert.assertEquals(Assert.java:146)
at kafka.admin.ReassignPartitionsClusterTest.assertReplicas(ReassignPartitionsClusterTest.scala:1220)
at kafka.admin.ReassignPartitionsClusterTest.assertIsReassigning(ReassignPartitionsClusterTest.scala:1191)
at kafka.admin.ReassignPartitionsClusterTest.znodeReassignmentShouldOverrideApiTriggeredReassignment(ReassignPartitionsClusterTest.scala:897)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at jdk.internal.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
at jdk.internal.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.gradle.intern

[jira] [Commented] (KAFKA-9698) Wrong default max.message.bytes in document

2020-03-11 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17056751#comment-17056751
 ] 

Tom Bentley commented on KAFKA-9698:


KAFKA-4203 has been fixed for Kafka 2.5, which is not released yet, and the fix 
is therefore not yet reflected in the documentation at 
http://kafka.apache.org/documentation/. When 2.5 is released, the documentation 
will be updated.

> Wrong default max.message.bytes in document
> ---
>
> Key: KAFKA-9698
> URL: https://issues.apache.org/jira/browse/KAFKA-9698
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.0
>Reporter: jiamei xie
>Priority: Major
>
> The broker default for max.message.bytes has been changed to 1048588 in 
> https://issues.apache.org/jira/browse/KAFKA-4203. But the default value in 
> http://kafka.apache.org/documentation/ is still 1000012.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9737) Describing log dir reassignment times out if broker is offline

2020-03-20 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley reassigned KAFKA-9737:
--

Assignee: Tom Bentley

> Describing log dir reassignment times out if broker is offline
> --
>
> Key: KAFKA-9737
> URL: https://issues.apache.org/jira/browse/KAFKA-9737
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Tom Bentley
>Priority: Major
>
> If there is any broker offline when trying to describe a log dir 
> reassignment, then we get something like the following error:
> {code}
> Status of partition reassignment:
> Partitions reassignment failed due to org.apache.kafka.common.errors.TimeoutException: Call(callName=describeReplicaLogDirs, deadlineMs=1584663960068, tries=1, nextAllowedTryMs=1584663960173) timed out at 1584663960073 after 1 attempt(s)
> java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Call(callName=describeReplicaLogDirs, deadlineMs=1584663960068, tries=1, nextAllowedTryMs=1584663960173) timed out at 1584663960073 after 1 attempt(s)
>   at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>   at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>   at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>   at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>   at kafka.admin.ReassignPartitionsCommand$.checkIfReplicaReassignmentSucceeded(ReassignPartitionsCommand.scala:381)
>   at kafka.admin.ReassignPartitionsCommand$.verifyAssignment(ReassignPartitionsCommand.scala:98)
>   at kafka.admin.ReassignPartitionsCommand$.verifyAssignment(ReassignPartitionsCommand.scala:90)
>   at kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:61)
>   at kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)
> Caused by: org.apache.kafka.common.errors.TimeoutException: Call(callName=describeReplicaLogDirs, deadlineMs=1584663960068, tries=1, nextAllowedTryMs=1584663960173) timed out at 1584663960073 after 1 attempt(s)
> Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
> {code}
> It would be nice if the tool was smart enough to notice brokers that are 
> offline and report them as such while reporting the status of reassignments 
> for online brokers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9692) Flaky test - kafka.admin.ReassignPartitionsClusterTest#znodeReassignmentShouldOverrideApiTriggeredReassignment

2020-03-20 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-9692.

Resolution: Done

This test got deleted as part of KAFKA-8820, when much of the testing changed 
to unit tests.

> Flaky test - 
> kafka.admin.ReassignPartitionsClusterTest#znodeReassignmentShouldOverrideApiTriggeredReassignment
> --
>
> Key: KAFKA-9692
> URL: https://issues.apache.org/jira/browse/KAFKA-9692
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 2.5.0
>Reporter: Tom Bentley
>Priority: Major
>  Labels: flaky-test
>
> {noformat}
> java.lang.AssertionError: expected: but was: 101)>
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.failNotEquals(Assert.java:835)
>   at org.junit.Assert.assertEquals(Assert.java:120)
>   at org.junit.Assert.assertEquals(Assert.java:146)
>   at kafka.admin.ReassignPartitionsClusterTest.assertReplicas(ReassignPartitionsClusterTest.scala:1220)
>   at kafka.admin.ReassignPartitionsClusterTest.assertIsReassigning(ReassignPartitionsClusterTest.scala:1191)
>   at kafka.admin.ReassignPartitionsClusterTest.znodeReassignmentShouldOverrideApiTriggeredReassignment(ReassignPartitionsClusterTest.scala:897)
>   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at jdk.internal.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
>   at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>   at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
>   at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.jav

[jira] [Assigned] (KAFKA-9737) Describing log dir reassignment times out if broker is offline

2020-03-20 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley reassigned KAFKA-9737:
--

Assignee: (was: Tom Bentley)

> Describing log dir reassignment times out if broker is offline
> --
>
> Key: KAFKA-9737
> URL: https://issues.apache.org/jira/browse/KAFKA-9737
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Priority: Major
>
> If there is any broker offline when trying to describe a log dir 
> reassignment, then we get something like the following error:
> {code}
> Status of partition reassignment:
> Partitions reassignment failed due to org.apache.kafka.common.errors.TimeoutException: Call(callName=describeReplicaLogDirs, deadlineMs=1584663960068, tries=1, nextAllowedTryMs=1584663960173) timed out at 1584663960073 after 1 attempt(s)
> java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Call(callName=describeReplicaLogDirs, deadlineMs=1584663960068, tries=1, nextAllowedTryMs=1584663960173) timed out at 1584663960073 after 1 attempt(s)
>   at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>   at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>   at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>   at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>   at kafka.admin.ReassignPartitionsCommand$.checkIfReplicaReassignmentSucceeded(ReassignPartitionsCommand.scala:381)
>   at kafka.admin.ReassignPartitionsCommand$.verifyAssignment(ReassignPartitionsCommand.scala:98)
>   at kafka.admin.ReassignPartitionsCommand$.verifyAssignment(ReassignPartitionsCommand.scala:90)
>   at kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:61)
>   at kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)
> Caused by: org.apache.kafka.common.errors.TimeoutException: Call(callName=describeReplicaLogDirs, deadlineMs=1584663960068, tries=1, nextAllowedTryMs=1584663960173) timed out at 1584663960073 after 1 attempt(s)
> Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
> {code}
> It would be nice if the tool was smart enough to notice brokers that are 
> offline and report them as such while reporting the status of reassignments 
> for online brokers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-1368) Upgrade log4j

2020-03-20 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-1368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063231#comment-17063231
 ] 

Tom Bentley commented on KAFKA-1368:


[~mimaison], [~ecomar] are you working on this? If not, do you mind if I try to 
take it forward?

> Upgrade log4j
> -
>
> Key: KAFKA-1368
> URL: https://issues.apache.org/jira/browse/KAFKA-1368
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.0
>Reporter: Vladislav Pernin
>Assignee: Mickael Maison
>Priority: Major
>
> Upgrade log4j to at least 1.2.16 or 1.2.17.
> Usage of EnhancedPatternLayout will be possible.
> It allows setting delimiters around the full log, stacktrace included, making 
> log message collection easier with tools like Logstash.
> Example : <[%d{}]...[%t] %m%throwable>%n
> <[2014-04-08 11:07:20,360] ERROR [KafkaApi-1] Error when processing fetch 
> request for partition [X,6] offset 700 from consumer with correlation id 
> 0 (kafka.server.KafkaApis)
> kafka.common.OffsetOutOfRangeException: Request for offset 700 but we only 
> have log segments in the range 16021 to 16021.
> at kafka.log.Log.read(Log.scala:429)
> ...
> at java.lang.Thread.run(Thread.java:744)>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-1368) Upgrade log4j

2020-03-26 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-1368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17067811#comment-17067811
 ] 

Tom Bentley commented on KAFKA-1368:


I've looked at this and there are two basic problems which need to be addressed:

1. {{Log4jController}} has a hard dependency on log4j. This could be made to work 
with log4j2 while keeping its current API (essentially becoming a façade). 
2. On its own, 1. is not a drop-in replacement. Users would have to delete the 
log4j and slf4j-log4j12 jars from the lib directory, add the log4j2 ones, and 
also write their own {{log4j2.properties}}, {{tools-log4j2.properties}} and 
{{connect-log4j2.properties}} files. Including the log4j2 jars with the 
distribution would be a bit tricky because of slf4j's static binding approach: 
we'd need to ensure only one binding was on the classpath. Then we'd need a 
mechanism for the user to select log4j2, perhaps a {{--logging=log4j2}} option 
for {{kafka-server-start.sh}} and {{zookeeper-server-start.sh}} etc., or maybe 
inspecting {{KAFKA_LOG4J_OPTS}} to see if the config file was 
{{log4j.properties}} or {{log4j2.properties}}. I'm assuming we would stick with 
log4j by default until Kafka 3.0; anything else would risk regressions.

Any thoughts on this [~ijuma], [~ewencp], [~mimaison]? Would this need a KIP?
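
For illustration, a minimal sketch of the façade idea in 1., assuming the log4j2 
jars are on the classpath; the class name and method signatures here are 
hypothetical, mirroring the sort of thing {{Log4jController}} exposes today:

{code:java}
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.config.Configurator;

// Hypothetical façade: same style of API as the existing Log4jController,
// but backed by log4j2 instead of log4j 1.x.
public class Log4j2Facade {

    // Reads the effective level of the named logger via the log4j2 API.
    public String getLogLevel(String loggerName) {
        Level level = LogManager.getLogger(loggerName).getLevel();
        return level == null ? null : level.toString();
    }

    // Changes the level at runtime, as the existing MBean does via log4j 1.x.
    public void setLogLevel(String loggerName, String level) {
        Configurator.setLevel(loggerName, Level.toLevel(level));
    }
}
{code}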

 

> Upgrade log4j
> -
>
> Key: KAFKA-1368
> URL: https://issues.apache.org/jira/browse/KAFKA-1368
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.0
>Reporter: Vladislav Pernin
>Assignee: Tom Bentley
>Priority: Major
>
> Upgrade log4j to at least 1.2.16 or 1.2.17.
> Usage of EnhancedPatternLayout will be possible.
> It allows setting delimiters around the full log, stacktrace included, making 
> log message collection easier with tools like Logstash.
> Example : <[%d{}]...[%t] %m%throwable>%n
> <[2014-04-08 11:07:20,360] ERROR [KafkaApi-1] Error when processing fetch 
> request for partition [X,6] offset 700 from consumer with correlation id 
> 0 (kafka.server.KafkaApis)
> kafka.common.OffsetOutOfRangeException: Request for offset 700 but we only 
> have log segments in the range 16021 to 16021.
> at kafka.log.Log.read(Log.scala:429)
> ...
> at java.lang.Thread.run(Thread.java:744)>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9673) Conditionally apply SMTs

2020-03-27 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068418#comment-17068418
 ] 

Tom Bentley commented on KAFKA-9673:


I opened 
[KIP-585|https://cwiki.apache.org/confluence/display/KAFKA/KIP-585%3A+Conditional+SMT]
 for discussion.
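
The description below notes that {{Transformation}} currently has to return a 
fixed {{ConfigDef}}; as a hedged sketch of the more flexible shape it suggests, 
assuming a {{default}} method were added (the interface and method names here 
are hypothetical, not part of Connect):

{code:java}
import java.util.Map;
import org.apache.kafka.common.config.ConfigDef;

// Hypothetical: lets the ConfigDef depend on the supplied config (e.g. the
// listed transform aliases), falling back to the fixed ConfigDef by default.
public interface DynamicallyConfigurable {

    ConfigDef config();

    default ConfigDef config(Map<String, String> props) {
        return config();
    }
}
{code}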

> Conditionally apply SMTs
> 
>
> Key: KAFKA-9673
> URL: https://issues.apache.org/jira/browse/KAFKA-9673
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Major
>
> KAFKA-7052 ended up using IAE with a message, rather than NPE, in the case of 
> an SMT being applied to a record lacking a given field. It's still not 
> possible to apply an SMT conditionally, which is what things like Debezium 
> really need in order to apply transformations only to non-schema change 
> events.
> [~rhauch] suggested a mechanism to conditionally apply any SMT but was 
> concerned about the possibility of a naming collision (assuming it was 
> configured by a simple config)
> I'd like to propose something which would solve this problem without the 
> possibility of such collisions. The idea is to have a higher-level condition, 
> which applies an arbitrary transformation (or transformation chain) according 
> to some predicate on the record. 
> More concretely, it might be configured like this:
> {noformat}
>   transforms.conditionalExtract.type: Conditional
>   transforms.conditionalExtract.transforms: extractInt
>   transforms.conditionalExtract.transforms.extractInt.type: 
> org.apache.kafka.connect.transforms.ExtractField$Key
>   transforms.conditionalExtract.transforms.extractInt.field: c1
>   transforms.conditionalExtract.condition: topic-matches:<pattern>
> {noformat}
> * The {{Conditional}} SMT is configured with its own list of transforms 
> ({{transforms.conditionalExtract.transforms}}) to apply. This would work just 
> like the top level {{transforms}} config, so subkeys can be used to configure 
> these transforms in the usual way.
> * The {{condition}} config defines the predicate for when the transforms are 
> applied to a record using a {{<conditionType>:<conditionValue>}} syntax
> We could initially support three condition types:
> *{{topic-matches:<pattern>}}* The transformation would be applied if the 
> record's topic name matched the given regular expression pattern. For 
> example, the following would apply the transformation on records being sent 
> to any topic with a name beginning with "my-prefix-":
> {noformat}
>transforms.conditionalExtract.condition: topic-matches:my-prefix-.*
> {noformat}
>
> *{{has-header:<header name>}}* The transformation would be applied if the 
> record had at least one header with the given name. For example, the 
> following will apply the transformation on records with at least one header 
> with the name "my-header":
> {noformat}
>transforms.conditionalExtract.condition: has-header:my-header
> {noformat}
>
> *{{not:<condition name>}}* This would negate the result of another named 
> condition using the condition config prefix. For example, the following will 
> apply the transformation on records which lack any header with the name 
> my-header:
> {noformat}
>   transforms.conditionalExtract.condition: not:hasMyHeader
>   transforms.conditionalExtract.condition.hasMyHeader: 
> has-header:my-header
> {noformat}
> I foresee one implementation concern with this approach, which is that 
> currently {{Transformation}} has to return a fixed {{ConfigDef}}, and this 
> proposal would require something more flexible in order to allow the config 
> parameters to depend on the listed transform aliases (and similarly for the 
> named condition used for the {{not:}} predicate). I think this could be done by 
> adding a {{default}} method to {{Transformation}} for getting the ConfigDef 
> given the config, for example.
> Obviously this would require a KIP, but before I spend any more time on this 
> I'd be interested in your thoughts [~rhauch], [~rmoff], [~gunnar.morling].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9775) IllegalFormatConversionException from kafka-consumer-perf-test.sh

2020-03-27 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9775:
--

 Summary: IllegalFormatConversionException from 
kafka-consumer-perf-test.sh
 Key: KAFKA-9775
 URL: https://issues.apache.org/jira/browse/KAFKA-9775
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Tom Bentley
Assignee: Tom Bentley


Exception in thread "main" java.util.IllegalFormatConversionException: f != java.lang.Integer
at java.base/java.util.Formatter$FormatSpecifier.failConversion(Formatter.java:4426)
at java.base/java.util.Formatter$FormatSpecifier.printFloat(Formatter.java:2951)
at java.base/java.util.Formatter$FormatSpecifier.print(Formatter.java:2898)
at java.base/java.util.Formatter.format(Formatter.java:2673)
at java.base/java.util.Formatter.format(Formatter.java:2609)
at java.base/java.lang.String.format(String.java:2897)
at scala.collection.immutable.StringLike.format(StringLike.scala:354)
at scala.collection.immutable.StringLike.format$(StringLike.scala:353)
at scala.collection.immutable.StringOps.format(StringOps.scala:33)
at kafka.utils.ToolsUtils$.$anonfun$printMetrics$3(ToolsUtils.scala:60)
at kafka.utils.ToolsUtils$.$anonfun$printMetrics$3$adapted(ToolsUtils.scala:58)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.utils.ToolsUtils$.printMetrics(ToolsUtils.scala:58)
at kafka.tools.ConsumerPerformance$.main(ConsumerPerformance.scala:82)
at kafka.tools.ConsumerPerformance.main(ConsumerPerformance.scala)




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-1368) Upgrade log4j

2020-04-01 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-1368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17072533#comment-17072533
 ] 

Tom Bentley commented on KAFKA-1368:


Noticed this is duplicated by https://issues.apache.org/jira/browse/KAFKA-9366, 
so I'm going to close this one.

> Upgrade log4j
> -
>
> Key: KAFKA-1368
> URL: https://issues.apache.org/jira/browse/KAFKA-1368
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.0
>Reporter: Vladislav Pernin
>Assignee: Tom Bentley
>Priority: Major
>
> Upgrade log4j to at least 1.2.16 or 1.2.17.
> Usage of EnhancedPatternLayout will be possible.
> It allows setting delimiters around the full log, stacktrace included, making 
> log message collection easier with tools like Logstash.
> Example : <[%d{}]...[%t] %m%throwable>%n
> <[2014-04-08 11:07:20,360] ERROR [KafkaApi-1] Error when processing fetch 
> request for partition [X,6] offset 700 from consumer with correlation id 
> 0 (kafka.server.KafkaApis)
> kafka.common.OffsetOutOfRangeException: Request for offset 700 but we only 
> have log segments in the range 16021 to 16021.
> at kafka.log.Log.read(Log.scala:429)
> ...
> at java.lang.Thread.run(Thread.java:744)>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-1368) Upgrade log4j

2020-04-01 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-1368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-1368.

Resolution: Duplicate

> Upgrade log4j
> -
>
> Key: KAFKA-1368
> URL: https://issues.apache.org/jira/browse/KAFKA-1368
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.0
>Reporter: Vladislav Pernin
>Assignee: Tom Bentley
>Priority: Major
>
> Upgrade log4j to at least 1.2.16 or 1.2.17.
> Usage of EnhancedPatternLayout will be possible.
> It allows setting delimiters around the full log, stacktrace included, making 
> log message collection easier with tools like Logstash.
> Example : <[%d{}]...[%t] %m%throwable>%n
> <[2014-04-08 11:07:20,360] ERROR [KafkaApi-1] Error when processing fetch 
> request for partition [X,6] offset 700 from consumer with correlation id 
> 0 (kafka.server.KafkaApis)
> kafka.common.OffsetOutOfRangeException: Request for offset 700 but we only 
> have log segments in the range 16021 to 16021.
> at kafka.log.Log.read(Log.scala:429)
> ...
> at java.lang.Thread.run(Thread.java:744)>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7787) Add error specifications to KAFKA-7609

2020-04-03 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17074447#comment-17074447
 ] 

Tom Bentley commented on KAFKA-7787:


[~cmccabe] any more thoughts about this? (Did you notice I opened a PR, in case 
you wanted to take a look?)

{quote}In the case of Errors, we need to associate each enum with a 
corresponding exception. I guess we could have a helper class for this.
{quote}
 
Can you elaborate on what you're envisaging there? I played around with 
generating functions for mapping between codes and enum elements, via a 
"enumClass" key in the codes declaration:

{code}
{
  "type": "codes",
  "name": "ErrorCodes",
  "valueType": "int16",
  "enumClass": "org.apache.kafka.common.protocol.Errors",
  "codes": [
{ "name": "UNKNOWN_SERVER_ERROR", "value":  -1,
  "about": "The server experienced an unexpected error when processing the 
request." },
{ "name": "NONE", "value":  0,
  "about": "No error." },
{code}

Obviously for error codes we can use {{Errors}} and from that obtain the 
exception class, but that might also be useful for the other code enums.

The other thing I'm struggling with a bit is how we can correctly specify the 
valid versions for each error code at the use site, e.g. 

{code}
 {
  "name": "ErrorCode",
  "type": "int16",
  "domain": {
"name": "ErrorCodes",
"values": [
  { "name": "NONE", "validVersions": "0+" },
  { "name": "FOO", "validVersions": "0+" },
  { "name": "BAR", "validVersions": "3+" },
  ...
]
   }
 }
{code}

Determining correctly what error codes can propagate from a given RPC is 
difficult enough even for the _current version_ of an API. 

Do you have any ideas for a testing methodology?
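
As a concrete note on the code-to-exception association mentioned in the quote, 
the existing {{Errors}} enum already provides that mapping; a minimal sketch 
(the code value 3 is just an example):

{code:java}
import org.apache.kafka.common.protocol.Errors;

public class ErrorCodeMapping {
    public static void main(String[] args) {
        short code = 3; // UNKNOWN_TOPIC_OR_PARTITION
        Errors error = Errors.forCode(code);   // code -> enum element
        System.out.println(error.name());      // the enum element's name
        System.out.println(error.exception()); // the associated exception instance
    }
}
{code}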

> Add error specifications to KAFKA-7609
> --
>
> Key: KAFKA-7787
> URL: https://issues.apache.org/jira/browse/KAFKA-7787
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Colin McCabe
>Assignee: Tom Bentley
>Priority: Minor
>
> In our RPC JSON, it would be nice if we could specify what versions of a 
> response could contain what errors.  See the discussion here: 
> https://github.com/apache/kafka/pull/5893#discussion_r244841051



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-8955) Add an AbstractResponse#errorCounts method that takes a stream or iterable

2020-04-03 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley reassigned KAFKA-8955:
--

Assignee: Tom Bentley

> Add an AbstractResponse#errorCounts method that takes a stream or iterable
> --
>
> Key: KAFKA-8955
> URL: https://issues.apache.org/jira/browse/KAFKA-8955
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Assignee: Tom Bentley
>Priority: Minor
>
> We should have an AbstractResponse#errorCounts method that takes a stream or 
> iterable.  This would allow us to avoid copying data in many cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-7613) Enable javac rawtypes, serial and try xlint warnings

2020-04-08 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley reassigned KAFKA-7613:
--

Assignee: Tom Bentley

> Enable javac rawtypes, serial and try xlint warnings
> 
>
> Key: KAFKA-7613
> URL: https://issues.apache.org/jira/browse/KAFKA-7613
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Tom Bentley
>Priority: Major
>
> KAFKA-7612 enables all Xlint warnings apart from the following:
> {code:java}
> options.compilerArgs << "-Xlint:-rawtypes"
> options.compilerArgs << "-Xlint:-serial"
> options.compilerArgs << "-Xlint:-try"{code}
> We should fix the issues and enable the warnings.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7613) Enable javac rawtypes, serial and try xlint warnings

2020-04-08 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17077980#comment-17077980
 ] 

Tom Bentley commented on KAFKA-7613:


[~ijuma] I'll do this over a number of PRs to avoid the code review being very 
burdensome. I assume that where it's not possible to change public interfaces 
(e.g. because they inherit {{AutoCloseable.close()}}, which generates a warning 
about it possibly throwing {{InterruptedException}}), it's OK to suppress 
those warnings without a KIP?
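
For illustration, a minimal sketch of the {{AutoCloseable}} case (the interface 
and method here are made up): with {{-Xlint:try}} enabled, a try-with-resources 
on a resource whose {{close()}} can throw {{InterruptedException}} produces a 
warning, which is suppressed rather than changing the public signature:

{code:java}
public class XlintTryExample {

    // Stand-in for a public interface whose close() signature cannot change.
    interface Channel extends AutoCloseable {
        @Override
        void close() throws InterruptedException;
    }

    // Without the suppression, -Xlint:try warns that close() could throw
    // InterruptedException from the implicit finally block.
    @SuppressWarnings("try")
    static void use(Channel channel) throws InterruptedException {
        try (Channel c = channel) {
            // work with the channel
        }
    }
}
{code}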

> Enable javac rawtypes, serial and try xlint warnings
> 
>
> Key: KAFKA-7613
> URL: https://issues.apache.org/jira/browse/KAFKA-7613
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Tom Bentley
>Priority: Major
>
> KAFKA-7612 enables all Xlint warnings apart from the following:
> {code:java}
> options.compilerArgs << "-Xlint:-rawtypes"
> options.compilerArgs << "-Xlint:-serial"
> options.compilerArgs << "-Xlint:-try"{code}
> We should fix the issues and enable the warnings.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-6987) Reimplement KafkaFuture with CompletableFuture

2021-01-13 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17263998#comment-17263998
 ] 

Tom Bentley commented on KAFKA-6987:


[~andrasbeni] is this something you plan to work on again soon (I noticed that 
you closed the PR)? If not, would you mind if I took it on?

> Reimplement KafkaFuture with CompletableFuture
> --
>
> Key: KAFKA-6987
> URL: https://issues.apache.org/jira/browse/KAFKA-6987
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.0.0
>Reporter: Andras Beni
>Priority: Minor
>
> KafkaFuture documentation states:
> {{This will eventually become a thin shim on top of Java 8's 
> CompletableFuture.}}
> With Java 7 support dropped in 2.0, it is time to get rid of custom code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-6987) Reimplement KafkaFuture with CompletableFuture

2021-01-13 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley reassigned KAFKA-6987:
--

Assignee: Tom Bentley

> Reimplement KafkaFuture with CompletableFuture
> --
>
> Key: KAFKA-6987
> URL: https://issues.apache.org/jira/browse/KAFKA-6987
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.0.0
>Reporter: Andras Beni
>Assignee: Tom Bentley
>Priority: Minor
>
> KafkaFuture documentation states:
> {{This will eventually become a thin shim on top of Java 8's 
> CompletableFuture.}}
> With Java 7 support dropped in 2.0, it is time to get rid of custom code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-12263) Improve MockClient RequestMatcher interface

2021-02-02 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley reassigned KAFKA-12263:
---

Assignee: Tom Bentley

> Improve MockClient RequestMatcher interface
> ---
>
> Key: KAFKA-12263
> URL: https://issues.apache.org/jira/browse/KAFKA-12263
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Tom Bentley
>Priority: Major
>
> MockClient has a RequestMatcher interface which is used to verify that a 
> request received by the client matches an expected type:
> {code}
> @FunctionalInterface
> public interface RequestMatcher {
> boolean matches(AbstractRequest body);
> }
> {code}
> The interface is awkward in practice because there is nothing we can do 
> except fail the test if the request does not match. But in that case, the 
> MockClient does not have enough context to throw a useful error explaining 
> why the match failed. Instead we just print a generic message about the match 
> failure.
> A better approach would probably to turn this into more of a RequestAssertion:
> {code}
> @FunctionalInterface
> public interface RequestAssertion {
> void assertRequest(AbstractRequest body);
> }
> {code}
> Implementations could then be constructed from a sequence of JUnit 
> assertions. When there is a failure, we can trace it back to the specific 
> assertion that failed. Of course they can do that now with RequestMatcher as 
> well, but the expectation would be more explicit.
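
A hedged sketch of what a test might then write, assuming the 
{{RequestAssertion}} interface above were adopted (it does not exist in 
{{MockClient}} today):

{code:java}
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.apache.kafka.common.requests.AbstractRequest;
import org.apache.kafka.common.requests.MetadataRequest;

public class RequestAssertionExample {

    @FunctionalInterface
    interface RequestAssertion {
        void assertRequest(AbstractRequest body);
    }

    // A failing assertion pinpoints the exact expectation that broke,
    // instead of MockClient's generic "request does not match" message.
    static final RequestAssertion IS_METADATA_REQUEST = body ->
            assertEquals(MetadataRequest.class, body.getClass());
}
{code}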



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12263) Improve MockClient RequestMatcher interface

2021-02-02 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-12263:

 Flags: Patch
Labels: patch-available  (was: )

> Improve MockClient RequestMatcher interface
> ---
>
> Key: KAFKA-12263
> URL: https://issues.apache.org/jira/browse/KAFKA-12263
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Tom Bentley
>Priority: Major
>  Labels: patch-available
>
> MockClient has a RequestMatcher interface which is used to verify that a 
> request received by the client matches an expected type:
> {code}
> @FunctionalInterface
> public interface RequestMatcher {
> boolean matches(AbstractRequest body);
> }
> {code}
> The interface is awkward in practice because there is nothing we can do 
> except fail the test if the request does not match. But in that case, the 
> MockClient does not have enough context to throw a useful error explaining 
> why the match failed. Instead we just print a generic message about the match 
> failure.
> A better approach would probably to turn this into more of a RequestAssertion:
> {code}
> @FunctionalInterface
> public interface RequestAssertion {
> void assertRequest(AbstractRequest body);
> }
> {code}
> Implementations could then be constructed from a sequence of JUnit 
> assertions. When there is a failure, we can trace it back to the specific 
> assertion that failed. Of course they can do that now with RequestMatcher as 
> well, but the expectation would be more explicit.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12278) Keep api versions consistent with api scope

2021-02-02 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17277751#comment-17277751
 ] 

Tom Bentley commented on KAFKA-12278:
-

Isn't this a dupe of KAFKA-12232?

> Keep api versions consistent with api scope
> ---
>
> Key: KAFKA-12278
> URL: https://issues.apache.org/jira/browse/KAFKA-12278
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> With KIP-500, some APIs are only accessible by the broker and some are only 
> accessible by the controller. We need a better way to indicate the scope of 
> the API so that we can keep it consistent with the `ApiVersions` API. 
> Basically we have the following scopes:
> - zk broker (e.g. LeaderAndIsr)
> - kip-500 broker (e.g. DecommissionBroker)
> - kip-500 controller (e.g. Envelope)
> These categories are not mutually exclusive. For example, the `Fetch` API 
> must be exposed in all scopes. We could go even further by distinguishing an 
> inter-broker scope, but that is probably not needed for now.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10703) Document that default configs are not supported for TOPIC entities

2021-02-03 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-10703:

 Flags: Patch
Labels: patch-available  (was: )

> Document that default configs are not supported for TOPIC entities
> --
>
> Key: KAFKA-10703
> URL: https://issues.apache.org/jira/browse/KAFKA-10703
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Assignee: Tom Bentley
>Priority: Major
>  Labels: patch-available
>
> We should better document that default configs are not supported for TOPIC 
> entities.  Currently an attempt to set them gets confusing error messages.
> Using admin client's incrementalAlterConfigs with {type=TOPIC, name=""} gives 
> a cryptic error stack trace:
> Exception in thread "main" java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.InvalidRequestException: Invalid config value for resource ConfigResource(type=TOPIC, name=''): Path must not end with / character
>   at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>   at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>   at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>   at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>   at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:91)
>   at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
> Caused by: org.apache.kafka.common.errors.InvalidRequestException: Invalid config value for resource ConfigResource(type=TOPIC, name=''): Path must not end with / character
> Similarly, kafka-configs.sh is not very clear about this issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12286) Consider storage/cpu tradeoff in metadata record framing schema

2021-02-04 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278673#comment-17278673
 ] 

Tom Bentley commented on KAFKA-12286:
-

{quote}we will have options to deal with it if it does.{quote}

Just to note that one path to doing that is to decouple the serialized type 
from the Java type in a versioned way. Concretely 
[KIP-625|https://cwiki.apache.org/confluence/display/KAFKA/KIP-625%3A+Richer+encodings+for+integral-typed+protocol+fields#KIP625:Richerencodingsforintegraltypedprotocolfields-PublicInterfaces]
 describes migrating fields from 32 to 64 bits, for example.

> Consider storage/cpu tradeoff in metadata record framing schema
> ---
>
> Key: KAFKA-12286
> URL: https://issues.apache.org/jira/browse/KAFKA-12286
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Priority: Major
>
> KIP-631 calls for unsigned varints in order to represent the api key and 
> version. We are also adding a frame version, which will be unsigned varint. 
> The use of the varint means we can save a byte from each of these fields 
> compared to an int16. The downside is that the serialization is a 
> little more expensive. For varints, we typically require one call to compute 
> the size of the field and a separate call to actually write it. If this 
> becomes an issue, there are a couple options:
> 1. We can use the int16 and pay the extra 3 bytes.
> 2. We can let the generated classes compute the encoded api key and version 
> as a byte and fail if we ever exceed the range of varint that can fit in a 
> single byte. This would let us replace `writeUnsignedVarint` with `writeByte`.
> The second option seems reasonable to me. I think it's extremely unlikely 
> we'll ever need more than a byte for either the api key or the version. At 
> least it should be years before that happens and we will have options to deal 
> with it if it does.
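
A small sketch of the sizing/writing split described above, using the existing 
{{ByteUtils}} helpers (the api key value is arbitrary):

{code:java}
import java.nio.ByteBuffer;
import org.apache.kafka.common.utils.ByteUtils;

public class VarintFraming {
    public static void main(String[] args) {
        int apiKey = 17; // arbitrary example value
        // Varints need one call to size the field and another to write it...
        ByteBuffer buf = ByteBuffer.allocate(ByteUtils.sizeOfUnsignedVarint(apiKey));
        ByteUtils.writeUnsignedVarint(apiKey, buf);
        // ...whereas an int16 would simply cost a fixed two bytes.
        System.out.println("encoded in " + buf.position() + " byte(s)");
    }
}
{code}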



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12263) Improve MockClient RequestMatcher interface

2021-02-04 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278735#comment-17278735
 ] 

Tom Bentley commented on KAFKA-12263:
-

[~hachikuji] any chance you could review my PR?

> Improve MockClient RequestMatcher interface
> ---
>
> Key: KAFKA-12263
> URL: https://issues.apache.org/jira/browse/KAFKA-12263
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Tom Bentley
>Priority: Major
>  Labels: patch-available
>
> MockClient has a RequestMatcher interface which is used to verify that a 
> request received by the client matches an expected type:
> {code}
> @FunctionalInterface
> public interface RequestMatcher {
> boolean matches(AbstractRequest body);
> }
> {code}
> The interface is awkward in practice because there is nothing we can do 
> except fail the test if the request does not match. But in that case, the 
> MockClient does not have enough context to throw a useful error explaining 
> why the match failed. Instead we just print a generic message about the match 
> failure.
> A better approach would probably to turn this into more of a RequestAssertion:
> {code}
> @FunctionalInterface
> public interface RequestAssertion {
> void assertRequest(AbstractRequest body);
> }
> {code}
> Implementations could then be constructed from a sequence of JUnit 
> assertions. When there is a failure, we can trace it back to the specific 
> assertion that failed. Of course they can do that now with RequestMatcher as 
> well, but the expectation would be more explicit.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12322) Why https://downloads.apache.org/kafka/ it doesnot download older version only 2.7.0 and 2.6.1 ,is there any changes done ?

2021-02-10 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17282522#comment-17282522
 ] 

Tom Bentley commented on KAFKA-12322:
-

Older versions are available from https://archive.apache.org/dist/kafka/
Only the latest releases are guaranteed to be available from 
https://downloads.apache.org/. That's because downloads.a.o gets mirrored to 
3rd parties who might not want to host every old version.

> Why https://downloads.apache.org/kafka/  it doesnot download older version 
> only 2.7.0 and 2.6.1 ,is there any changes done ?
> 
>
> Key: KAFKA-12322
> URL: https://issues.apache.org/jira/browse/KAFKA-12322
> Project: Kafka
>  Issue Type: Bug
>Reporter: suman tripathi
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12308) ConfigDef.parseType deadlock

2021-02-24 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17289998#comment-17289998
 ] 

Tom Bentley commented on KAFKA-12308:
-

I think this is caused by the fact that the {{DelegatingClassLoader}} is not 
registered as parallel capable, but should be. It should be because, according 
to https://docs.oracle.com/javase/7/docs/technotes/guides/lang/cl-mt.html, to 
qualify for the acyclic delegation model "If the class is not found, the class 
loader asks its parent to locate the class. If the parent cannot find the 
class, the class loader attempts to locate the class itself." However, 
{{DelegatingClassLoader}} may actually ask the {{PluginClassLoader}} to load a 
class before it has tried {{super}}.

From the stack dump provided
{noformat}
"StartAndStopExecutor-connect-1-5":
at java.lang.ClassLoader.loadClass(ClassLoader.java:398)  // wait for DCL getClassLoadingLock
- waiting to lock <0x0006c222db00> (a org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.loadClass(DelegatingClassLoader.java:397)  // delegate to super
at java.lang.ClassLoader.loadClass(ClassLoader.java:405)  // super delegates to parent (DCL)
- locked <0x00077b9bf3c0> (a java.lang.Object)  // lock PCLY+name (super's getClassLoadingLock)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
- locked <0x00077b9bf3c0> (a java.lang.Object)  // lock PCLY+name (getClassLoadingLock)
- locked <0x0006c25b4e38> (a org.apache.kafka.connect.runtime.isolation.PluginClassLoader)  // lock PCLY (synchronized)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
{noformat}

and 

{noformat}
"StartAndStopExecutor-connect-1-6":
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:91)  // lock PCLX (synchronized)
- waiting to lock <0x0006c25b4e38> (a org.apache.kafka.connect.runtime.isolation.PluginClassLoader)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.loadClass(DelegatingClassLoader.java:394)  // delegated to PCL
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)  // ClassLoader.loadClass(String name) calling PCL.loadClass(String, boolean)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
{noformat}

It also says 

{noformat}
"StartAndStopExecutor-connect-1-5":
  waiting to lock monitor 0x0203a553b6f8 (object 0x0006c222db00, a org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader),
  which is held by "StartAndStopExecutor-connect-1-6"
{noformat}

The {{0x0006c222db00}} doesn't appear in the stacktrace; I think that's 
because it's [held by the JVM 
itself|https://github.com/openjdk/jdk/blob/06170b7cbf6129274747b4406562184802d4ff07/src/hotspot/share/classfile/systemDictionary.cpp#L695].
 

If the DelegatingClassLoader is registered as parallel capable, this won't happen

{noformat}
at java.lang.ClassLoader.loadClass(ClassLoader.java:398)  // wait for DCL getClassLoadingLock
- waiting to lock <0x0006c222db00> (a org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
{noformat}

because {{DCL.getClassLoadingLock}} will return an object specific to the class 
being loaded, rather than the DCL instance itself, which is locked by the JVM.

Does this seem plausible to you [~kkonstantine] [~ChrisEgerton]?
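
For reference, a minimal sketch of what registering as parallel capable looks 
like (illustrative only, not the actual Connect patch):

{code:java}
// Registering in a static initializer makes ClassLoader.getClassLoadingLock()
// return a per-class-name lock object rather than the loader instance itself.
public class ParallelCapableLoader extends ClassLoader {
    static {
        ClassLoader.registerAsParallelCapable();
    }

    public ParallelCapableLoader(ClassLoader parent) {
        super(parent);
    }
}
{code}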

> ConfigDef.parseType deadlock
> 
>
> Key: KAFKA-12308
> URL: https://issues.apache.org/jira/browse/KAFKA-12308
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 2.5.0
> Environment: kafka 2.5.0
> centos7
> java version "1.8.0_231"
>Reporter: cosmozhu
>Priority: Major
> Attachments: deadlock.log
>
>
> hi,
>  the problem was found when I restarted *ConnectDistributed*.
> I restarted ConnectDistributed on a single node for the test, without 
> deleting the connectors.
>  Sometimes the process stopped when creating connectors.
> I added some logging and found it had a deadlock in `ConfigDef.parseType`. My 
> connectors always have the same transforms. I guess that when connectors 
> start up (in startAndStopExecutor, which defaults to 8 threads) and load the 
> same class file, something goes wrong.
> I attached the jstack log file.

[jira] [Commented] (KAFKA-12308) ConfigDef.parseType deadlock

2021-02-25 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290793#comment-17290793
 ] 

Tom Bentley commented on KAFKA-12308:
-

[~kkonstantine] I'm not an expert in classloaders, but I'm still not sure that 
the DCL shouldn't be considered parallel. The class loader guide referred to 
above explicitly says that an acyclic CL should delegate to {{super}} _first_. I 
understand that delegating to the PCL first is intentional, but it doesn't fit 
the definition given, AFAICS. The fact that the CLs it delegates to are both 
parallel doesn't seem to be relevant. Also, the parent of the PCL is the DCL, 
which looks like a cycle to me (but, as I said, I'm no expert, so happy to be 
corrected). 

Assuming the {{synchronized}} were removed from PCL {{loadClass}}, then 
{noformat}
"StartAndStopExecutor-connect-1-6":
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:91)  // lock PCLX (synchronized)
{noformat}
wouldn't get blocked, but there would still be two threads contending for two 
locks when racing to load the same class. Those locks would be the 
{{getClassLoadingLock()}} on the PCL and the monitor of the DCL instance 
itself, so I think perhaps a deadlock would still be possible, just on 
different monitors. 

> ConfigDef.parseType deadlock
> 
>
> Key: KAFKA-12308
> URL: https://issues.apache.org/jira/browse/KAFKA-12308
> Project: Kafka
>  Issue Type: Bug
>  Components: config, KafkaConnect
>Affects Versions: 2.5.0
> Environment: kafka 2.5.0
> centos7
> java version "1.8.0_231"
>Reporter: cosmozhu
>Priority: Major
> Attachments: deadlock.log
>
>
> hi,
>  the problem was found when I restarted *ConnectDistributed*.
> I restarted ConnectDistributed on a single node for the test, without 
> deleting the connectors.
>  Sometimes the process stopped when creating connectors.
> I added some logging and found it had a deadlock in `ConfigDef.parseType`. My 
> connectors always have the same transforms. I guess that when connectors 
> start up (in startAndStopExecutor, which defaults to 8 threads) and load the 
> same class file, something goes wrong.
> I attached the jstack log file.
> thanks for any help.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8946) Single byte header issues WARN logging

2021-03-01 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17292852#comment-17292852
 ] 

Tom Bentley commented on KAFKA-8946:


I think this is a bug in org.apache.kafka.connect.data.Values#parse, which 
should be fixed. Even if the input is not something it can parse, it should 
throw a DataException rather than a StringIndexOutOfBoundsException.
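
For illustration, the kind of guard meant here (a sketch with made-up names, 
not the actual {{Values}} code):

{code:java}
import org.apache.kafka.connect.errors.DataException;

public class ParseGuard {
    // Validate before indexing so callers see a DataException rather than a
    // StringIndexOutOfBoundsException escaping from charAt(0).
    static char firstTokenChar(String value) {
        if (value == null || value.isEmpty()) {
            throw new DataException("Unable to parse an empty or null value");
        }
        return value.charAt(0);
    }
}
{code}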

> Single byte header issues WARN logging
> --
>
> Key: KAFKA-8946
> URL: https://issues.apache.org/jira/browse/KAFKA-8946
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Henning Treu
>Priority: Minor
>
> Setting a single byte header like
> {code:java}
> headers.add("MY_CUSTOM_HEADER", new byte[] { 1 });
> {code}
> will cause a WARN message with full stack trace:
> {code:java}
> [2019-08-29 06:27:40,599] WARN Failed to deserialize value for header 
> 'MY_CUSTOM_HEADER' on topic '', so using byte array 
> (org.apache.kafka.connect.storage.SimpleHeaderConverter)
> java.lang.StringIndexOutOfBoundsException: String index out of range: 0
> at java.lang.String.charAt(String.java:658)
> at org.apache.kafka.connect.data.Values.parse(Values.java:816)
> at org.apache.kafka.connect.data.Values.parseString(Values.java:373)
> at 
> org.apache.kafka.connect.storage.SimpleHeaderConverter.toConnectHeader(SimpleHeaderConverter.java:64)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertHeadersFor(WorkerSinkTask.java:501)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:469)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:301)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:205)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:173)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Since Kafka will continue with the headers as 
> {code:java}
> Schema.BYTES_SCHEMA
> {code}
>  the warning seems a little harsh.
> There are two options:
> # Handle non-String header values explicitly
> # Drop the stacktrace logging and move it to a separate DEBUG log



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12443) Add TranslateSurrogates SMT to Kafka Connect

2021-03-10 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17298651#comment-17298651
 ] 

Tom Bentley commented on KAFKA-12443:
-

Hi, Kafka has a process for adding new public APIs -- the [KIP 
process|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals].
 You'll need to write a KIP and start a discussion thread. 

> Add TranslateSurrogates SMT to Kafka Connect
> 
>
> Key: KAFKA-12443
> URL: https://issues.apache.org/jira/browse/KAFKA-12443
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.7.0
>Reporter: Siva Kunapuli
>Priority: Minor
>  Labels: connect-transformation, new-feature, newbie, 
> pull-request-available
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Kafka Connect does not have an out-of-the-box way to process UTF-16 surrogate 
> pairs. This SMT adds that capability.
> Pull request submitted - https://github.com/apache/kafka/pull/10287



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-8780) Set SCRAM passwords via the Admin interface

2019-08-09 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-8780:
--

 Summary: Set SCRAM passwords via the Admin interface
 Key: KAFKA-8780
 URL: https://issues.apache.org/jira/browse/KAFKA-8780
 Project: Kafka
  Issue Type: New Feature
  Components: admin
Reporter: Tom Bentley
Assignee: Tom Bentley


It should be possible to set users' SCRAM passwords via the Admin interface.
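
For illustration, a sketch of what such a call might look like; the names 
below follow the API shape that KIP-554 later standardised 
({{alterUserScramCredentials}} and friends) and should be read as assumptions, 
not as part of this ticket:

{code:java}
// Sketch only; exception handling omitted.
Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
try (Admin admin = Admin.create(props)) {
    admin.alterUserScramCredentials(List.of(
        new UserScramCredentialUpsertion("alice",
            new ScramCredentialInfo(ScramMechanism.SCRAM_SHA_256, 8192),
            "alice-secret"))
    ).all().get();
}
{code}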



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-8862) Misleading exception message for non-existent partition

2019-09-03 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-8862:
--

 Summary: Misleading exception message for non-existent partition
 Key: KAFKA-8862
 URL: https://issues.apache.org/jira/browse/KAFKA-8862
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 2.3.0
Reporter: Tom Bentley
Assignee: Tom Bentley


https://issues.apache.org/jira/browse/KAFKA-6833 changed the logic of the 
{{KafkaProducer.waitOnMetadata}} so that if a partition did not exist it would 
wait for it to exist.
It means that if called with an incorrect partition the method will eventually 
throw a {{TimeoutException}}, which covers both topic and partition 
non-existence cases.

However, the exception message was not changed for the case where 
{{metadata.awaitUpdate(version, remainingWaitMs)}} throws a 
{{TimeoutException}}.

This results in a confusing exception message. For example, if a producer tries 
to send to a non-existent partition of an existing topic, the message is 
"Topic %s not present in metadata after %d ms.", whereas a timeout via the 
other code path comes with the message
"Partition %d of topic %s with partition count %d is not present in metadata 
after %d ms."
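
For illustration, a sketch of the confusing case (topic and partition are made 
up; assumes the default {{max.block.ms}} of 60s):

{code:java}
// "my-topic" exists but has fewer than 43 partitions, so partition 42 never
// appears in metadata. send() eventually throws a TimeoutException whose
// message ("Topic my-topic not present in metadata after 60000 ms.") points
// at the topic rather than at the missing partition.
producer.send(new ProducerRecord<>("my-topic", 42, "key", "value"));
{code}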





--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (KAFKA-8862) Misleading exception message for non-existent partition

2019-09-03 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-8862:
---
 Flags: Patch
Labels: patch-available  (was: )

> Misleading exception message for non-existent partition
> ---
>
> Key: KAFKA-8862
> URL: https://issues.apache.org/jira/browse/KAFKA-8862
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 2.3.0
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Major
>  Labels: patch-available
>
> https://issues.apache.org/jira/browse/KAFKA-6833 changed the logic of the 
> {{KafkaProducer.waitOnMetadata}} so that if a partition did not exist it 
> would wait for it to exist.
> It means that if called with an incorrect partition the method will 
> eventually throw a {{TimeoutException}}, which covers both topic and 
> partition non-existence cases.
> However, the exception message was not changed for the case where 
> {{metadata.awaitUpdate(version, remainingWaitMs)}} throws a 
> {{TimeoutException}}.
> This results in a confusing exception message. For example, if a producer 
> tries to send to a non-existent partition of an existing topic, the message is 
> "Topic %s not present in metadata after %d ms.", whereas a timeout via the 
> other code path comes with the message
> "Partition %d of topic %s with partition count %d is not present in metadata 
> after %d ms."



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (KAFKA-8862) Misleading exception message for non-existent partition

2019-09-13 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929073#comment-16929073
 ] 

Tom Bentley commented on KAFKA-8862:


[~hachikuji] any chance you could review this? Thanks.

> Misleading exception message for non-existent partition
> ---
>
> Key: KAFKA-8862
> URL: https://issues.apache.org/jira/browse/KAFKA-8862
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 2.3.0
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Major
>  Labels: patch-available
>
> https://issues.apache.org/jira/browse/KAFKA-6833 changed the logic of the 
> {{KafkaProducer.waitOnMetadata}} so that if a partition did not exist it 
> would wait for it to exist.
> It means that if called with an incorrect partition the method will 
> eventually throw a {{TimeoutException}}, which covers both topic and 
> partition non-existence cases.
> However, the exception message was not changed for the case where 
> {{metadata.awaitUpdate(version, remainingWaitMs)}} throws a 
> {{TimeoutException}}.
> This results in a confusing exception message. For example, if a producer 
> tries to send to a non-existent partition of an existing topic, the message is 
> "Topic %s not present in metadata after %d ms.", whereas a timeout via the 
> other code path comes with the message
> "Partition %d of topic %s with partition count %d is not present in metadata 
> after %d ms."



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (KAFKA-8584) Allow "bytes" type to generated a ByteBuffer rather than byte arrays

2019-09-17 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931154#comment-16931154
 ] 

Tom Bentley commented on KAFKA-8584:


I for one would like to see this discussed in a KIP, even if it doesn't change 
the serialized form and therefore isn't really a public API in the strictest 
sense.

> Allow "bytes" type to generated a ByteBuffer rather than byte arrays
> 
>
> Key: KAFKA-8584
> URL: https://issues.apache.org/jira/browse/KAFKA-8584
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: newbie
>
> Right now in the RPC definition, the type {{bytes}} is translated into 
> {{byte[]}} in generated Java code. However, for some requests like 
> ProduceRequest#partitionData, the underlying type would be better as a 
> ByteBuffer rather than a byte array.
> One proposal is to add an additional boolean tag {{useByteBuffer}} for the 
> {{bytes}} type, which defaults to false; when set to {{true}}, the 
> corresponding field generates {{ByteBuffer}} instead of {{byte[]}}. 
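
For illustration, what the proposed tag might look like in a message 
definition (the field and its attributes are made up):

{code:java}
{ "name": "Records", "type": "bytes", "versions": "0+", "useByteBuffer": true,
  "about": "Record data, exposed as a ByteBuffer in the generated accessor." }
{code}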



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (KAFKA-12541) AdminClient.listOffsets should return the offset for the record with highest timestamp

2021-03-24 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17307794#comment-17307794
 ] 

Tom Bentley commented on KAFKA-12541:
-

Maybe you already knew this and are in the process of writing a KIP, but if 
not: since this change affects a public API, there's a process for discussing 
and approving such changes that's separate from the PR process. You'll need to 
open a "KIP" and start a discussion thread. For more info see 
[https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals].

> AdminClient.listOffsets should return the offset for the record with highest 
> timestamp
> --
>
> Key: KAFKA-12541
> URL: https://issues.apache.org/jira/browse/KAFKA-12541
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Reporter: Tom Scott
>Assignee: Tom Scott
>Priority: Minor
>
> In Kafka 2.7 the following method was added to AdminClient that provides this 
> information:
> {code:java}
> public ListOffsetsResult listOffsets(Map<TopicPartition, OffsetSpec> topicPartitionOffsets,
>                                      ListOffsetsOptions options)
> {code}
> [https://kafka.apache.org/27/javadoc/org/apache/kafka/clients/admin/KafkaAdminClient.html#listOffsets-java.util.Map-org.apache.kafka.clients.admin.ListOffsetsOptions-]
> where OffsetSpec can be:
>  * OffsetSpec.EarliestSpec
>  * OffsetSpec.LatestSpec
>  * OffsetSpec.TimestampSpec
>  
> This ticket introduces a new spec:
> {code:java}
> OffsetSpec.MaxTimestampSpec // this returns the offset and timestamp for the record with the highest timestamp.
> {code}
> This indicates to the AdminClient that we want to fetch the timestamp and 
> offset for the record with the largest timestamp produced to a partition.
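
For illustration, a sketch of how the new spec might be used from the Admin 
client; the factory-method name {{maxTimestamp()}} follows the existing 
{{earliest()}}/{{latest()}} convention and is an assumption here:

{code:java}
// Sketch only; admin is an existing Admin client, exception handling omitted.
TopicPartition tp = new TopicPartition("my-topic", 0);
ListOffsetsResult.ListOffsetsResultInfo info =
    admin.listOffsets(Map.of(tp, OffsetSpec.maxTimestamp())).partitionResult(tp).get();
System.out.printf("offset=%d timestamp=%d%n", info.offset(), info.timestamp());
{code}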



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10201) Update codebase to use more inclusive terms

2021-03-26 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17309285#comment-17309285
 ] 

Tom Bentley commented on KAFKA-10201:
-

[~xvrl] are you intending to come back to this work?

> Update codebase to use more inclusive terms
> ---
>
> Key: KAFKA-10201
> URL: https://issues.apache.org/jira/browse/KAFKA-10201
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Xavier Léauté
>Priority: Major
> Fix For: 3.0.0
>
>
> see the corresponding KIP 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-629:+Use+racially+neutral+terms+in+our+codebase



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12408) Document omitted ReplicaManager metrics

2021-04-12 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-12408.
-
Fix Version/s: 3.0.0
 Reviewer: Tom Bentley
   Resolution: Fixed

> Document omitted ReplicaManager metrics
> ---
>
> Key: KAFKA-12408
> URL: https://issues.apache.org/jira/browse/KAFKA-12408
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Minor
> Fix For: 3.0.0
>
>
> There are several problems in ReplicaManager metrics documentation:
>  * kafka.server:type=ReplicaManager,name=OfflineReplicaCount is omitted.
>  * kafka.server:type=ReplicaManager,name=FailedIsrUpdatesPerSec is omitted.
>  * kafka.server:type=ReplicaManager,name=[PartitionCount|LeaderCount]'s 
> descriptions are omitted: 'mostly even across brokers'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8863) Add InsertHeader and DropHeaders connect transforms KIP-145

2021-04-16 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17323847#comment-17323847
 ] 

Tom Bentley commented on KAFKA-8863:


These SMTs were added in https://github.com/apache/kafka/pull/9549.
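
For illustration, a sketch of configuring the new SMTs on a connector 
(property names per KIP-145; values made up):

{code:java}
transforms=addOrigin,dropInternal
transforms.addOrigin.type=org.apache.kafka.connect.transforms.InsertHeader
transforms.addOrigin.header=app.origin
transforms.addOrigin.value.literal=connect-cluster-1
transforms.dropInternal.type=org.apache.kafka.connect.transforms.DropHeaders
transforms.dropInternal.headers=internal.trace.id
{code}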

> Add InsertHeader and DropHeaders connect transforms KIP-145
> ---
>
> Key: KAFKA-8863
> URL: https://issues.apache.org/jira/browse/KAFKA-8863
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients, KafkaConnect
>Reporter: Albert Lozano
>Priority: Major
>
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-145+-+Expose+Record+Headers+in+Kafka+Connect]
> Continuing the work done in PR 
> [https://github.com/apache/kafka/pull/4319], implementing the transforms to 
> work with headers would be awesome.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8863) Add InsertHeader and DropHeaders connect transforms KIP-145

2021-04-16 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-8863.

Fix Version/s: 3.0.0
 Reviewer: Mickael Maison
 Assignee: Tom Bentley
   Resolution: Fixed

> Add InsertHeader and DropHeaders connect transforms KIP-145
> ---
>
> Key: KAFKA-8863
> URL: https://issues.apache.org/jira/browse/KAFKA-8863
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients, KafkaConnect
>Reporter: Albert Lozano
>Assignee: Tom Bentley
>Priority: Major
> Fix For: 3.0.0
>
>
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-145+-+Expose+Record+Headers+in+Kafka+Connect]
> Continuing the work done in PR 
> [https://github.com/apache/kafka/pull/4319], implementing the transforms to 
> work with headers would be awesome.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12685) ReplicaStateMachine attempts invalid transition

2021-04-19 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-12685:
---

 Summary: ReplicaStateMachine attempts invalid transition
 Key: KAFKA-12685
 URL: https://issues.apache.org/jira/browse/KAFKA-12685
 Project: Kafka
  Issue Type: Bug
  Components: replication
Affects Versions: 2.7.0
Reporter: Tom Bentley
 Attachments: invalid-transition.log

The ReplicaStateMachine tried to perform the invalid transition {{NewReplica}} 
-> {{NewReplica}} in a cluster that was being rolling-restarted while the 
problem partition was being reassigned (first removing and then re-adding the 
replica).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12776) Producer sends messages out-of-order in spite of enabling idempotence

2021-05-14 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17344376#comment-17344376
 ] 

Tom Bentley commented on KAFKA-12776:
-

[~neeraj.vaidya] how many partitions does the topic have? I don't see it stated 
so far. Kafka only guarantees ordering within a partition, so if records are 
written to multiple partitions you will only observe order being preserved for 
records sent to the same partition.
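
To illustrate (a sketch; topic and keys made up): with the default 
partitioner, records sharing a key land in the same partition, so only those 
keep their relative order.

{code:java}
// m1 and m2 share a key, so they go to the same partition and stay in order;
// m3 has a different key and may land elsewhere, interleaving freely.
producer.send(new ProducerRecord<>("topic-A", "order-42", "m1"));
producer.send(new ProducerRecord<>("topic-A", "order-42", "m2"));
producer.send(new ProducerRecord<>("topic-A", "order-43", "m3"));
{code}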

> Producer sends messages out-of-order in spite of enabling idempotence
> 
>
> Key: KAFKA-12776
> URL: https://issues.apache.org/jira/browse/KAFKA-12776
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 2.6.0, 2.7.0
> Environment: Linux RHEL 7.9 and Ubuntu 20.04
>Reporter: NEERAJ VAIDYA
>Priority: Major
> Attachments: mocker.zip
>
>
> I have an Apache Kafka 2.6 Producer which writes to topic-A (TA). 
> My application is basically a Spring boot web-application which accepts JSON 
> payloads via HTTP and then pushes each to a Kafka topic. I also use Spring 
> Cloud Stream Kafka in the application to create and use a Producer.
> For one of my failure handling test cases, I shut down the Kafka cluster while 
> my applications are running. (Note: no messages have been published to the 
> Kafka cluster before I stop the cluster.)
> When the producer application tries to write messages to TA, it cannot 
> because the cluster is down and hence (I assume) buffers the messages. Let's 
> say it receives 4 messages m1,m2,m3,m4 in increasing time order (i.e. m1 is 
> first and m4 is last).
> When I bring the Kafka cluster back online, the producer sends the buffered 
> messages to the topic, but they are not in order. I receive, for example, m2 
> then m3 then m1 and then m4.
> Why is that? Is it because the buffering in the producer is multi-threaded, 
> with each thread producing to the topic at the same time?
> My project code is attached herewith.
> I can confirm that I have enabled idempotence. I have also tried with 
> {{max.in.flight.requests=1}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-6987) Reimplement KafkaFuture with CompletableFuture

2021-05-14 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-6987:
---
Fix Version/s: 3.0.0

> Reimplement KafkaFuture with CompletableFuture
> --
>
> Key: KAFKA-6987
> URL: https://issues.apache.org/jira/browse/KAFKA-6987
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.0.0
>Reporter: Andras Beni
>Assignee: Tom Bentley
>Priority: Minor
> Fix For: 3.0.0
>
>
> KafkaFuture documentation states:
> {{This will eventually become a thin shim on top of Java 8's 
> CompletableFuture.}}
> With Java 7 support dropped in 2.0, it is time to get rid of custom code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12776) Producer sends messages out-of-order in spite of enabling idempotence

2021-05-14 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17344595#comment-17344595
 ] 

Tom Bentley commented on KAFKA-12776:
-

[~neeraj.vaidya] can you reproduce this without using Spring, just the 
KafkaProducer?

> Producer sends messages out-of-order in spite of enabling idempotence
> 
>
> Key: KAFKA-12776
> URL: https://issues.apache.org/jira/browse/KAFKA-12776
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 2.6.0, 2.7.0
> Environment: Linux RHEL 7.9 and Ubuntu 20.04
>Reporter: NEERAJ VAIDYA
>Priority: Major
> Attachments: mocker.zip
>
>
> I have an Apache Kafka 2.6 Producer which writes to topic-A (TA). 
> My application is basically a Spring boot web-application which accepts JSON 
> payloads via HTTP and then pushes each to a Kafka topic. I also use Spring 
> Cloud Stream Kafka in the application to create and use a Producer.
> For one of my failure handling test cases, I shut down the Kafka cluster while 
> my applications are running. (Note: no messages have been published to the 
> Kafka cluster before I stop the cluster.)
> When the producer application tries to write messages to TA, it cannot 
> because the cluster is down and hence (I assume) buffers the messages. Let's 
> say it receives 4 messages m1,m2,m3,m4 in increasing time order (i.e. m1 is 
> first and m4 is last).
> When I bring the Kafka cluster back online, the producer sends the buffered 
> messages to the topic, but they are not in order. I receive, for example, m2 
> then m3 then m1 and then m4.
> Why is that? Is it because the buffering in the producer is multi-threaded, 
> with each thread producing to the topic at the same time?
> My project code is attached herewith.
> I can confirm that I have enabled idempotence. I have also tried with 
> {{max.in.flight.requests=1}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12818) Memory leakage when kafka connect 2.7 uses directory config provider

2021-05-20 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17348517#comment-17348517
 ] 

Tom Bentley commented on KAFKA-12818:
-

As you can see from the code, DirectoryConfigProvider doesn't have any state, 
so this can't simply be objects being pinned in memory.

How is the memory usage in the graph being calculated (is it process RSS or 
some JVM metric)? Assuming the DirectoryConfigProvider was looking at a mounted 
directory, was the Secret or ConfigMap for the mount being changed?

> Memory leakage when kafka connect 2.7 uses directory config provider
> 
>
> Key: KAFKA-12818
> URL: https://issues.apache.org/jira/browse/KAFKA-12818
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.7.0
> Environment: Azure AKS / Kubernetes v1.20
>Reporter: Viktor Utkin
>Priority: Critical
> Attachments: Screenshot 2021-05-20 at 14.53.05.png
>
>
> Hi, we noticed a memory leakage problem when Kafka Connect 2.7 uses the 
> directory config provider. We got an OOM in a Kubernetes environment: K8s 
> kills the container when the limit is reached. At the same time we did not 
> get any OOM in Java, and a heap dump didn't show us anything interesting.
> JVM config:
> {code:java}
>  -XX:+HeapDumpOnOutOfMemoryError
>  -XX:HeapDumpPath=/tmp/
>  -XX:+UseContainerSupport
>  -XX:+OptimizeStringConcat
>  -XX:MaxRAMPercentage=75.0
>  -XX:InitialRAMPercentage=50.0
>  -XX:MaxMetaspaceSize=256M
>  -XX:MaxDirectMemorySize=256M
>  -XX:+UseStringDeduplication
>  -XX:+AlwaysActAsServerClassMachine{code}
>  
>  Kafka Connect config:
> {code:java}
> "config.providers": "directory"
>  "config.providers.directory.class": 
> "org.apache.kafka.common.config.provider.DirectoryConfigProvider"{code}
>  
>  Kubernetes pod resources limits:
> {code:java}
> resources:
>   requests:
> cpu: 1500m
> memory: 2Gi
>   limits:
> cpu: 3000m
> memory: 3Gi
> {code}
>  
> docker image used: confluentinc/cp-kafka-connect:6.1.1



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12818) Memory leakage when kafka connect 2.7 uses directory config provider

2021-05-20 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17348590#comment-17348590
 ] 

Tom Bentley commented on KAFKA-12818:
-

Since your limit > request, you're going to be at risk of the OOMKiller anyway, 
whether or not there's a memory leak. You were exceeding your request even with 
the FileConfigProvider.

I find it hard to believe the DirectoryConfigProvider itself is responsible for 
somehow using _that much_ (~200m) memory above what we see in the first curve. 
It's normally only used once, when the config file is parsed and the values 
substituted. It doesn't watch the files (even if they do change). So I think we 
need better memory usage diagnostics here than just a graph of RSS.
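
For example (a sketch using standard JVM tooling, nothing Kafka-specific):

{noformat}
# start the worker JVM with native allocation tracking (diagnostic overhead)
-XX:NativeMemoryTracking=summary
# then, while the growth reproduces, compare the JVM's own accounting with RSS
jcmd <pid> VM.native_memory summary
{noformat}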

> Memory leakage when kafka connect 2.7 uses directory config provider
> 
>
> Key: KAFKA-12818
> URL: https://issues.apache.org/jira/browse/KAFKA-12818
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.7.0
> Environment: Azure AKS / Kubernetes v1.20
>Reporter: Viktor Utkin
>Priority: Critical
> Attachments: Screenshot 2021-05-20 at 14.53.05.png
>
>
> Hi, we noticed a memory leakage problem when Kafka Connect 2.7 uses the 
> directory config provider. We got an OOM in a Kubernetes environment: K8s 
> kills the container when the limit is reached. At the same time we did not 
> get any OOM in Java, and a heap dump didn't show us anything interesting.
> JVM config:
> {code:java}
>  -XX:+HeapDumpOnOutOfMemoryError
>  -XX:HeapDumpPath=/tmp/
>  -XX:+UseContainerSupport
>  -XX:+OptimizeStringConcat
>  -XX:MaxRAMPercentage=75.0
>  -XX:InitialRAMPercentage=50.0
>  -XX:MaxMetaspaceSize=256M
>  -XX:MaxDirectMemorySize=256M
>  -XX:+UseStringDeduplication
>  -XX:+AlwaysActAsServerClassMachine{code}
>  
>  Kafka Connect config:
> {code:java}
> "config.providers": "directory"
>  "config.providers.directory.class": 
> "org.apache.kafka.common.config.provider.DirectoryConfigProvider"{code}
>  
>  Kubernetes pod resources limits:
> {code:java}
> resources:
>   requests:
> cpu: 1500m
> memory: 2Gi
>   limits:
> cpu: 3000m
> memory: 3Gi
> {code}
>  
> docker image used: confluentinc/cp-kafka-connect:6.1.1



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10846) FileStreamSourceTask buffer can grow without bound

2021-05-25 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-10846:

Fix Version/s: 2.7.2

> FileStreamSourceTask buffer can grow without bound
> --
>
> Key: KAFKA-10846
> URL: https://issues.apache.org/jira/browse/KAFKA-10846
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Major
> Fix For: 2.8.0, 2.7.2
>
>
> When reading a large file the buffer used by {{FileStreamSourceTask}} can 
> grow without bound. Even in the unit test 
> org.apache.kafka.connect.file.FileStreamSourceTaskTest#testBatchSize the 
> buffer grows from 1,024 to 524,288 bytes just from reading 10,000 copies of a 
> line of <100 chars.
> The problem is that the condition for growing the buffer is incorrect. The 
> buffer is doubled whenever some bytes were read and the used space in the 
> buffer == the buffer length.
> The requirement to increase the buffer size should be related to whether 
> {{extractLine()}} actually managed to read any lines. It's only when no 
> complete lines were read since the last call to {{read()}} that we need to 
> increase the buffer size (to cope with the large line).
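
A sketch of the corrected condition (variable names illustrative, not the 
actual patch):

{code:java}
// Grow only when the buffer is full AND extractLine() found no complete line
// since the last read(), i.e. a single line is larger than the current buffer.
if (offset == buffer.length && linesExtracted == 0) {
    buffer = Arrays.copyOf(buffer, buffer.length * 2);
}
{code}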



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12879) Compatibility break in Admin.listOffsets()

2021-06-02 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-12879:
---

 Summary: Compatibility break in Admin.listOffsets()
 Key: KAFKA-12879
 URL: https://issues.apache.org/jira/browse/KAFKA-12879
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 2.6.2, 2.7.1, 2.8.0
Reporter: Tom Bentley


KAFKA-12339 incompatibly changed the semantics of Admin.listOffsets(). 
Previously it would fail with {{UnknownTopicOrPartitionException}} when a topic 
didn't exist. Now it will (eventually) fail with {{TimeoutException}}. It seems 
this was more or less intentional, even though it would break code which was 
expecting and handling the {{UnknownTopicOrPartitionException}}. A workaround 
is to use {{retries=1}} and inspect the cause of the {{TimeoutException}}, but 
this isn't really suitable for cases where the same Admin client instance is 
being used for other calls where retries are desirable.
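
A sketch of that workaround, assuming the original error is surfaced as the 
cause of the {{TimeoutException}}:

{code:java}
// Sketch: admin was created with retries=1.
try {
    admin.listOffsets(Map.of(tp, OffsetSpec.latest())).all().get();
} catch (ExecutionException e) {
    Throwable cause = e.getCause(); // TimeoutException since KAFKA-12339
    if (cause instanceof TimeoutException
            && cause.getCause() instanceof UnknownTopicOrPartitionException) {
        // the topic genuinely does not exist
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
{code}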

Furthermore, as well as the intended effect on {{listOffsets()}}, it seems that 
the change could actually affect other methods of Admin.

More generally, the Admin client API is vague about which exceptions can 
propagate from which methods. This means that it's not possible to say, in 
cases like this, whether the calling code _should_ have been relying on the 
{{UnknownTopicOrPartitionException}} or not.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-6987) Reimplement KafkaFuture with CompletableFuture

2021-06-12 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-6987:
---
Labels:   (was: needs-kip)

> Reimplement KafkaFuture with CompletableFuture
> --
>
> Key: KAFKA-6987
> URL: https://issues.apache.org/jira/browse/KAFKA-6987
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.0.0
>Reporter: Andras Beni
>Assignee: Tom Bentley
>Priority: Minor
> Fix For: 3.0.0
>
>
> KafkaFuture documentation states:
> {{This will eventually become a thin shim on top of Java 8's 
> CompletableFuture.}}
> With Java 7 support dropped in 2.0, it is time to get rid of custom code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-6987) Reimplement KafkaFuture with CompletableFuture

2021-06-12 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17362253#comment-17362253
 ] 

Tom Bentley commented on KAFKA-6987:


[~kkonstantine] this is covered by KIP-707. If you're able to review 
https://github.com/apache/kafka/pull/9878 I'd be most grateful.
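
For anyone following along, the user-visible addition in KIP-707 is a bridge 
from {{KafkaFuture}} into the {{CompletionStage}} world; a minimal sketch 
(assuming the KIP-707 shape):

{code:java}
// Sketch only; admin is an existing Admin client.
KafkaFuture<Map<String, TopicDescription>> kf =
    admin.describeTopics(List.of("my-topic")).all();
kf.toCompletionStage().thenAccept(descriptions ->
    System.out.println("described: " + descriptions.keySet()));
{code}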

> Reimplement KafkaFuture with CompletableFuture
> --
>
> Key: KAFKA-6987
> URL: https://issues.apache.org/jira/browse/KAFKA-6987
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.0.0
>Reporter: Andras Beni
>Assignee: Tom Bentley
>Priority: Minor
> Fix For: 3.0.0
>
>
> KafkaFuture documentation states:
> {{This will eventually become a thin shim on top of Java 8's 
> CompletableFuture.}}
> With Java 7 support dropped in 2.0, it is time to get rid of custom code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-6987) Reimplement KafkaFuture with CompletableFuture

2021-06-12 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-6987:
---
Labels: kip  (was: )

> Reimplement KafkaFuture with CompletableFuture
> --
>
> Key: KAFKA-6987
> URL: https://issues.apache.org/jira/browse/KAFKA-6987
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.0.0
>Reporter: Andras Beni
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: kip
> Fix For: 3.0.0
>
>
> KafkaFuture documentation states:
> {{This will eventually become a thin shim on top of Java 8's 
> CompletableFuture.}}
> With Java 7 support dropped in 2.0, it is time to get rid of custom code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-6987) Reimplement KafkaFuture with CompletableFuture

2021-06-18 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-6987:
---
Labels: kip needs-review  (was: kip)

> Reimplement KafkaFuture with CompletableFuture
> --
>
> Key: KAFKA-6987
> URL: https://issues.apache.org/jira/browse/KAFKA-6987
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.0.0
>Reporter: Andras Beni
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: kip, needs-review
> Fix For: 3.0.0
>
>
> KafkaFuture documentation states:
> {{This will eventually become a thin shim on top of Java 8's 
> CompletableFuture.}}
> With Java 7 support dropped in 2.0, it is time to get rid of custom code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13049) Log recovery threads use default thread pool naming

2021-07-08 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-13049:
---

 Summary: Log recovery threads use default thread pool naming
 Key: KAFKA-13049
 URL: https://issues.apache.org/jira/browse/KAFKA-13049
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Tom Bentley
Assignee: Tom Bentley


The threads used for log recovery come from a pool created with 
{{Executors.newFixedThreadPool(int)}} and hence pick up the naming scheme of 
{{Executors.defaultThreadFactory()}}. It's not clear in a thread dump taken 
during log recovery which threads these are; they should have clearer names. 
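
A sketch of the fix direction (the thread-name prefix is illustrative):

{code:java}
// Name the recovery threads instead of accepting "pool-N-thread-M".
AtomicInteger threadNumber = new AtomicInteger(0);
ExecutorService pool = Executors.newFixedThreadPool(numThreads,
    runnable -> new Thread(runnable, "log-recovery-" + threadNumber.getAndIncrement()));
{code}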



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-13049) Log recovery threads use default thread pool naming

2021-07-14 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-13049.
-
Fix Version/s: 3.1.0
   Resolution: Fixed

> Log recovery threads use default thread pool naming
> ---
>
> Key: KAFKA-13049
> URL: https://issues.apache.org/jira/browse/KAFKA-13049
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
> Fix For: 3.1.0
>
>
> The threads used for log recovery come from a pool created with 
> {{Executors.newFixedThreadPool(int)}} and hence pick up the naming scheme of 
> {{Executors.defaultThreadFactory()}}. It's not clear in a thread dump taken 
> during log recovery which threads these are; they should have clearer 
> names. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-13017) Excessive logging on sink task deserialization errors

2021-07-16 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-13017.
-
Fix Version/s: 3.1.0
   Resolution: Fixed

> Excessive logging on sink task deserialization errors
> -
>
> Key: KAFKA-13017
> URL: https://issues.apache.org/jira/browse/KAFKA-13017
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 3.0.0, 2.7.0, 2.8.0
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 3.1.0
>
>
> Even with {{errors.log.enable}} set to {{false}}, deserialization failures 
> are still logged at {{ERROR}} level by the 
> {{org.apache.kafka.connect.runtime.WorkerSinkTask}} namespace. This becomes 
> problematic in pipelines with {{errors.tolerance}} set to {{all}}, and can 
> generate excessive logging of stack traces when deserialization errors are 
> encountered for most if not all of the records being consumed by a sink task.
> The logging added to the {{WorkerSinkTask}} class in KAFKA-9018 should be 
> removed and, if necessary, any valuable information from it not already 
> present in the log messages generated by Connect with {{errors.log.enable}} 
> and {{errors.log.include.messages}} set to {{true}} should be added in that 
> place instead.
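
For reference, the connector-level properties being discussed (values 
illustrative):

{code:java}
errors.tolerance=all
errors.log.enable=true
errors.log.include.messages=true
{code}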



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-15049) Flaky test DynamicBrokerReconfigurationTest#testAdvertisedListenerUpdate

2023-06-02 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-15049:
---

 Summary: Flaky test 
DynamicBrokerReconfigurationTest#testAdvertisedListenerUpdate
 Key: KAFKA-15049
 URL: https://issues.apache.org/jira/browse/KAFKA-15049
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley


While testing 3.4.1RC3 
DynamicBrokerReconfigurationTest.testAdvertisedListenerUpdate failed repeatedly 
on my machine always with the following stacktrace.

{{org.opentest4j.AssertionFailedError: Unexpected exception type thrown, 
expected:  but was: 

 at 
app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
 at app//org.junit.jupiter.api.AssertThrows.assertThrows(AssertThrows.java:67)
 at app//org.junit.jupiter.api.AssertThrows.assertThrows(AssertThrows.java:35)
 at app//org.junit.jupiter.api.Assertions.assertThrows(Assertions.java:3083)
 at 
app//kafka.server.DynamicBrokerReconfigurationTest.testAdvertisedListenerUpdate(DynamicBrokerReconfigurationTest.scala:1066)}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15050) Prompts in the quickstarts

2023-06-02 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-15050:
---

 Summary: Prompts in the quickstarts
 Key: KAFKA-15050
 URL: https://issues.apache.org/jira/browse/KAFKA-15050
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Tom Bentley


In the quickstarts [Steps 
1-5|https://kafka.apache.org/documentation/#quickstart] use {{$}} to indicate 
the command prompt. When we start to use Kafka Connect in [Step 
6|https://kafka.apache.org/documentation/#quickstart_kafkaconnect] we switch to 
{{{}>{}}}. The [Kafka Streams 
quickstart|https://kafka.apache.org/documentation/streams/quickstart] also uses 
{{{}>{}}}. I don't think there's a reason for this, but if there is one (root 
vs user account?) it should be explained.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15050) Prompts in the quickstarts

2023-06-10 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17731175#comment-17731175
 ] 

Tom Bentley commented on KAFKA-15050:
-

Sure, go for it!

> Prompts in the quickstarts
> --
>
> Key: KAFKA-15050
> URL: https://issues.apache.org/jira/browse/KAFKA-15050
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Tom Bentley
>Priority: Trivial
>  Labels: newbie
>
> In the quickstarts [Steps 
> 1-5|https://kafka.apache.org/documentation/#quickstart] use {{$}} to indicate 
> the command prompt. When we start to use Kafka Connect in [Step 
> 6|https://kafka.apache.org/documentation/#quickstart_kafkaconnect] we switch 
> to {{{}>{}}}. The [Kafka Streams 
> quickstart|https://kafka.apache.org/documentation/streams/quickstart] also 
> uses {{{}>{}}}. I don't think there's a reason for this, but if there is one 
> (root vs user account?) it should be explained.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-3881) Remove the replacing logic from "." to "_" in Fetcher

2023-06-15 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17732918#comment-17732918
 ] 

Tom Bentley commented on KAFKA-3881:


[~kirktrue] the difficulty is that metrics are a public API of Kafka. People 
will have built things on top of the existing metrics (e.g. dashboards, 
alerting, etc.) which will break if we go changing metric names. So either the 
change needs to happen in a major release, or it needs to be configurable and 
opt-in. In either case it will require a KIP.

> Remove the replacing logic from "." to "_" in Fetcher
> -
>
> Key: KAFKA-3881
> URL: https://issues.apache.org/jira/browse/KAFKA-3881
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, metrics
>Reporter: Guozhang Wang
>Assignee: Tom Bentley
>Priority: Major
>  Labels: newbie, patch-available
>
> The logic of replacing "." with "_" in metric names/tags was originally 
> introduced in the core package's metrics, since Graphite treats "." as a 
> hierarchy separator (see KAFKA-1902); for the client metrics, the 
> GraphiteReporter is supposed to take care of this itself rather than having 
> Kafka metrics special-case it. In addition, right now only the consumer 
> Fetcher has the replacement; the producer Sender does not.
> So we should consider removing this logic in the consumer Fetcher's metrics 
> package. NOTE that this is a public API backward incompatible change.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15050) Prompts in the quickstarts

2023-06-20 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley reassigned KAFKA-15050:
---

Assignee: Joobi S B

> Prompts in the quickstarts
> --
>
> Key: KAFKA-15050
> URL: https://issues.apache.org/jira/browse/KAFKA-15050
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Tom Bentley
>Assignee: Joobi S B
>Priority: Trivial
>  Labels: newbie
>
> In the quickstarts [Steps 
> 1-5|https://kafka.apache.org/documentation/#quickstart] use {{$}} to indicate 
> the command prompt. When we start to use Kafka Connect in [Step 
> 6|https://kafka.apache.org/documentation/#quickstart_kafkaconnect] we switch 
> to {{{}>{}}}. The [Kafka Streams 
> quickstart|https://kafka.apache.org/documentation/streams/quickstart] also 
> uses {{{}>{}}}. I don't think there's a reason for this, but if there is one 
> (root vs user account?) it should be explained.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15459) Convert coordinator retriable errors to a known producer response error.

2023-09-12 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17764437#comment-17764437
 ] 

Tom Bentley commented on KAFKA-15459:
-

Is this _really_ the best compromise? AFAICS the linked PR and issue don't 
contain enough information to know what was considered.

The loss of specific error codes seems like a big disadvantage to me. Taken to 
its logical conclusion it would seem we only need a single error code to 
represent all retriable errors.

> Convert coordinator retriable errors to a known producer response error.
> 
>
> Key: KAFKA-15459
> URL: https://issues.apache.org/jira/browse/KAFKA-15459
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.6.0
>Reporter: Justine Olshan
>Assignee: Justine Olshan
>Priority: Blocker
> Fix For: 3.6.0
>
>
> While reviewing [https://github.com/apache/kafka/pull/14370] I added some of 
> the documentation for the returned errors in the produce response as well.
> There were concerns about the new errors:
>  * {@link Errors#COORDINATOR_LOAD_IN_PROGRESS}
>  * {@link Errors#COORDINATOR_NOT_AVAILABLE}
>  * {@link Errors#INVALID_TXN_STATE}
>  * {@link Errors#INVALID_PRODUCER_ID_MAPPING}
>  * {@link Errors#CONCURRENT_TRANSACTIONS}
> The coordinator load, not available, and concurrent transactions errors 
> should be retriable.
> The invalid txn state and pid mapping errors should be abortable.
> This is how older java clients handle the errors, but it is unclear how other 
> clients handle them. It seems that rdkafka (for example) treats the abortable 
> errors as fatal instead. The coordinator errors are retriable but not the 
> concurrent transactions error.
> It seems acceptable for the abortable errors to be fatal on some clients 
> since the error is likely on a zombie producer or in a state that may be 
> harder to recover from. However, for the retriable errors, we can return 
> NOT_ENOUGH_REPLICAS which is a known retriable response. We can use the 
> produce api's response string to specify the real cause of the error for 
> debugging. 
> There were trade-offs between making the older clients work and for clarity 
> in errors. This seems to be the best compromise.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15459) Convert coordinator retriable errors to a known producer response error.

2023-09-12 Thread Tom Bentley (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17764442#comment-17764442
 ] 

Tom Bentley commented on KAFKA-15459:
-

[~jolshan] thanks for the explanation. Is there somewhere which elaborates on 
part 2?

If error responses indicated their retriability (rather than it being inferred 
from the error code, and thus embedded within client libraries) it would enable 
better compatibility in the future for things like this, but I suspect your 
part 2 plans are simpler than that.

> Convert coordinator retriable errors to a known producer response error.
> 
>
> Key: KAFKA-15459
> URL: https://issues.apache.org/jira/browse/KAFKA-15459
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.6.0
>Reporter: Justine Olshan
>Assignee: Justine Olshan
>Priority: Blocker
> Fix For: 3.6.0
>
>
> While reviewing [https://github.com/apache/kafka/pull/14370] I added some of 
> the documentation for the returned errors in the produce response as well.
> There were concerns about the new errors:
>  * {@link Errors#COORDINATOR_LOAD_IN_PROGRESS}
>  * {@link Errors#COORDINATOR_NOT_AVAILABLE}
>  * {@link Errors#INVALID_TXN_STATE}
>  * {@link Errors#INVALID_PRODUCER_ID_MAPPING}
>  * {@link Errors#CONCURRENT_TRANSACTIONS}
> The coordinator load, not available, and concurrent transactions errors 
> should be retriable.
> The invalid txn state and pid mapping errors should be abortable.
> This is how older java clients handle the errors, but it is unclear how other 
> clients handle them. It seems that rdkafka (for example) treats the abortable 
> errors as fatal instead. The coordinator errors are retriable but not the 
> concurrent transactions error.
> It seems acceptable for the abortable errors to be fatal on some clients 
> since the error is likely on a zombie producer or in a state that may be 
> harder to recover from. However, for the retriable errors, we can return 
> NOT_ENOUGH_REPLICAS which is a known retriable response. We can use the 
> produce api's response string to specify the real cause of the error for 
> debugging. 
> There were trade-offs between making the older clients work and for clarity 
> in errors. This seems to be the best compromise.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-5459) Support kafka-console-producer.sh messages as whole file

2017-06-16 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5459:
--

 Summary: Support kafka-console-producer.sh messages as whole file
 Key: KAFKA-5459
 URL: https://issues.apache.org/jira/browse/KAFKA-5459
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Affects Versions: 0.10.2.1
Reporter: Tom Bentley
Priority: Trivial


{{kafka-console-producer.sh}} treats each line read as a separate message. This 
can be controlled using the {{--line-reader}} option and the corresponding 
{{MessageReader}} trait. It would be useful to have built-in support for 
sending the whole input stream/file as the message. 
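
For illustration, how such a reader would plug in today 
({{WholeFileMessageReader}} is hypothetical; {{--line-reader}} takes a 
{{MessageReader}} implementation):

{noformat}
$ kafka-console-producer.sh --broker-list localhost:9092 --topic my-topic \
    --line-reader com.example.WholeFileMessageReader < payload.bin
{noformat}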





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4985) kafka-acls should resolve dns names and accept ip ranges

2017-06-19 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16053942#comment-16053942
 ] 

Tom Bentley commented on KAFKA-4985:


> The problem with resolving hostnames client-side is that it would cause a lot 
> of confusion when resolution happened differently client-side versus 
> server-side.

That argument could be applied to practically any use of DNS, so I'm not 
convinced it makes a good reason not to do this.

> kafka-acls should resolve dns names and accept ip ranges
> 
>
> Key: KAFKA-4985
> URL: https://issues.apache.org/jira/browse/KAFKA-4985
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Ryan P
>
> Per KAFKA-2869 it looks like a conscious decision was made to move away from 
> using hostnames for authorization purposes. 
> This is fine; however, IP addresses are terribly inconvenient compared to 
> hostnames with regard to configuring ACLs. 
> I'd like to propose the following two improvements to make managing these 
> ACLs easier for end-users. 
> 1. Allow for simple patterns to be matched 
> i.e --allow-host 10.17.81.11[1-9] 
> 2. Allow for hostnames to be used even if they are resolved on the client 
> side. Simple pattern matching on hostnames would be a welcome addition as well
> i.e. --allow-host host.name.com
> Accepting a comma delimited list of hostnames and ip addresses would also be 
> helpful.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-2967) Move Kafka documentation to ReStructuredText

2017-06-19 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054289#comment-16054289
 ] 

Tom Bentley commented on KAFKA-2967:


Any progress on this, [~ceposta], [~gwenshap]? I would like to help improve 
the documentation, and not having to edit the raw HTML would make that a nicer 
experience.

> Move Kafka documentation to ReStructuredText
> 
>
> Key: KAFKA-2967
> URL: https://issues.apache.org/jira/browse/KAFKA-2967
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>
> Storing documentation as HTML is kind of BS :)
> * Formatting is a pain, and making it look good is even worse
> * It's just HTML, we can't generate PDFs
> * Reading and editing is painful
> * Validating changes is hard because our formatting relies on all kinds of 
> Apache Server features.
> I suggest:
> * Move to RST
> * Generate HTML and PDF during build using Sphinx plugin for Gradle.
> Lots of Apache projects are doing this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

