[jira] [Created] (KAFKA-17328) Update Release.py script to point to dist.apache.org/dev

2024-08-13 Thread Josep Prat (Jira)
Josep Prat created KAFKA-17328:
--

 Summary: Update Release.py script to point to dist.apache.org/dev
 Key: KAFKA-17328
 URL: https://issues.apache.org/jira/browse/KAFKA-17328
 Project: Kafka
  Issue Type: Task
Reporter: Josep Prat


Infra has announced the decommissioning of the home.apache.org box. This box was 
used by the release manager to host the release candidates up for vote.

The release.py script currently pushes the artifacts to the RM's home directory on 
that box. We need to update this process to use the dist.apache.org/dev space instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-12830) Remove Deprecated constructor in TimeWindowedDeserializer and TimeWindowedSerde

2024-08-09 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat resolved KAFKA-12830.

Resolution: Fixed

> Remove Deprecated constructor in TimeWindowedDeserializer and 
> TimeWindowedSerde
> ---
>
> Key: KAFKA-12830
> URL: https://issues.apache.org/jira/browse/KAFKA-12830
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Assignee: PoAn Yang
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The single-argument constructors and the factory method of the following classes 
> were deprecated in version 2.8:
>  * 
> org.apache.kafka.streams.kstream.TimeWindowedDeserializer#TimeWindowedDeserializer(org.apache.kafka.common.serialization.Deserializer)
>  * 
> org.apache.kafka.streams.kstream.WindowedSerdes.TimeWindowedSerde#TimeWindowedSerde(org.apache.kafka.common.serialization.Serde)
>  * 
> org.apache.kafka.streams.kstream.WindowedSerdes#timeWindowedSerdeFrom(java.lang.Class)
>  
>  
> See KAFKA-10366 & KAFKA-9649 and KIP-659
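> 
> For reference, a minimal migration sketch (assuming a String inner type and a 
> 5-minute window; class names and values below are illustrative, not taken from this 
> ticket). The window size passed to the new variants only tells the deserializer how 
> to reconstruct the window end timestamp, so it has to match the topology that 
> produced the windowed keys:
> {code:java}
> import java.time.Duration;
> 
> import org.apache.kafka.common.serialization.Serde;
> import org.apache.kafka.common.serialization.Serdes;
> import org.apache.kafka.streams.kstream.TimeWindowedDeserializer;
> import org.apache.kafka.streams.kstream.Windowed;
> import org.apache.kafka.streams.kstream.WindowedSerdes;
> 
> public class WindowedSerdeMigration {
>     public static void main(String[] args) {
>         final long windowSizeMs = Duration.ofMinutes(5).toMillis();
> 
>         // Before (deprecated in 2.8, removed by this ticket):
>         //   new TimeWindowedDeserializer<>(Serdes.String().deserializer());
>         // After: pass the window size explicitly.
>         final TimeWindowedDeserializer<String> deserializer =
>                 new TimeWindowedDeserializer<>(Serdes.String().deserializer(), windowSizeMs);
> 
>         // Same change for the serde factory method.
>         final Serde<Windowed<String>> windowedSerde =
>                 WindowedSerdes.timeWindowedSerdeFrom(String.class, windowSizeMs);
> 
>         System.out.println(deserializer.getClass().getSimpleName() + " / "
>                 + windowedSerde.getClass().getSimpleName());
>     }
> }
> {code}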



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-12832) Remove Deprecated methods under RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapter

2024-08-09 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat resolved KAFKA-12832.

Resolution: Fixed

> Remove Deprecated methods under 
> RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapter
> --
>
> Key: KAFKA-12832
> URL: https://issues.apache.org/jira/browse/KAFKA-12832
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Assignee: 黃竣陽
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The following methods of this class were deprecated in version 3.0.0:
>  * 
> org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapter#maxBackgroundCompactions
>  * 
> org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapter#setBaseBackgroundCompactions
>  * 
> org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapter#setMaxBackgroundCompactions
>  * 
> org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapter#maxBackgroundFlushes
>  * 
> org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapter#setMaxBackgroundFlushes
>  
> See KAFKA-8897 and KIP-471
>  
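> The adapter itself is internal, so this mostly reaches users indirectly through a 
> custom {{rocksdb.config.setter}}. A minimal sketch (the class name and the value 4 
> are illustrative) of a config setter that relies on RocksDB's consolidated 
> {{setMaxBackgroundJobs}} instead of the removed per-compaction/per-flush knobs:
> {code:java}
> import java.util.Map;
> 
> import org.apache.kafka.streams.state.RocksDBConfigSetter;
> import org.rocksdb.Options;
> 
> public class BackgroundJobsConfigSetter implements RocksDBConfigSetter {
> 
>     @Override
>     public void setConfig(final String storeName, final Options options,
>                           final Map<String, Object> configs) {
>         // RocksDB merged max_background_compactions / max_background_flushes into a
>         // single shared pool of background jobs, which is why the per-type setters
>         // went away.
>         options.setMaxBackgroundJobs(4);
>     }
> 
>     @Override
>     public void close(final String storeName, final Options options) {
>         // Nothing allocated above, so nothing to release here.
>     }
> }
> {code}
> The setter would be registered via StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG 
> ({{rocksdb.config.setter}}).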



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17214) Add 3.8.0 Streams and Core to system tests

2024-07-29 Thread Josep Prat (Jira)
Josep Prat created KAFKA-17214:
--

 Summary: Add 3.8.0 Streams and Core to system tests
 Key: KAFKA-17214
 URL: https://issues.apache.org/jira/browse/KAFKA-17214
 Project: Kafka
  Issue Type: Bug
Reporter: Josep Prat


As per the release instructions, we should add version 3.8.0 to the system tests. 
Example PRs:
 * Broker and clients: [https://github.com/apache/kafka/pull/12210]
 * Streams: [https://github.com/apache/kafka/pull/12209]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17085) Streams Cooperative Rebalance Upgrade Test fails in System Tests

2024-07-05 Thread Josep Prat (Jira)
Josep Prat created KAFKA-17085:
--

 Summary: Streams Cooperative Rebalance Upgrade Test fails in 
System Tests
 Key: KAFKA-17085
 URL: https://issues.apache.org/jira/browse/KAFKA-17085
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Affects Versions: 3.8.0
Reporter: Josep Prat


StreamsCooperativeRebalanceUpgradeTest fails in the system tests when upgrading 
from 2.1.1, 2.2.2 and 2.3.1.


Tests that fail:

 
{noformat}
Module: kafkatest.tests.streams.streams_cooperative_rebalance_upgrade_test
Class:  StreamsCooperativeRebalanceUpgradeTest
Method: test_upgrade_to_cooperative_rebalance
Arguments:
{
  "upgrade_from_version": "2.1.1"
}
 
{noformat}
and

 
{noformat}
Module: kafkatest.tests.streams.streams_cooperative_rebalance_upgrade_test
Class:  StreamsCooperativeRebalanceUpgradeTest
Method: test_upgrade_to_cooperative_rebalance
Arguments:
{
  "upgrade_from_version": "2.2.2"
}
{noformat}
and

 

 
{noformat}
Module: kafkatest.tests.streams.streams_cooperative_rebalance_upgrade_test
Class:  StreamsCooperativeRebalanceUpgradeTest
Method: test_upgrade_to_cooperative_rebalance
Arguments:
{
  "upgrade_from_version": "2.3.1"
}
{noformat}
 

Failure for 2.1.1 is:
{noformat}
TimeoutError("Never saw 'first_bounce_phase-Processed [0-9]* records so far' 
message ubuntu@worker28")
Traceback (most recent call last):
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 184, in _do_run
data = self.run_test()
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 262, in run_test
return self.test_context.function(self.test)
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/mark/_mark.py",
 line 433, in wrapper
return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/streams/streams_cooperative_rebalance_upgrade_test.py",
 line 101, in test_upgrade_to_cooperative_rebalance
self.maybe_upgrade_rolling_bounce_and_verify(processors,
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/streams/streams_cooperative_rebalance_upgrade_test.py",
 line 182, in maybe_upgrade_rolling_bounce_and_verify
stdout_monitor.wait_until(verify_processing_msg,
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/cluster/remoteaccount.py",
 line 736, in wait_until
return wait_until(lambda: self.acct.ssh("tail -c +%d %s | grep '%s'" % 
(self.offset + 1, self.log, pattern),
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/utils/util.py",
 line 58, in wait_until
raise TimeoutError(err_msg() if callable(err_msg) else err_msg) from 
last_exception
ducktape.errors.TimeoutError: Never saw 'first_bounce_phase-Processed [0-9]* 
records so far' message ubuntu@worker28{noformat}
Failure for 2.2.2 is:
{noformat}
TimeoutError("Never saw 'first_bounce_phase-Processed [0-9]* records so far' 
message ubuntu@worker5")
Traceback (most recent call last):
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 184, in _do_run
data = self.run_test()
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 262, in run_test
return self.test_context.function(self.test)
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/mark/_mark.py",
 line 433, in wrapper
return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/streams/streams_cooperative_rebalance_upgrade_test.py",
 line 101, in test_upgrade_to_cooperative_rebalance
self.maybe_upgrade_rolling_bounce_and_verify(processors,
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/streams/streams_cooperative_rebalance_upgrade_test.py",
 line 182, in maybe_upgrade_rolling_bounce_and_verify
stdout_monitor.wait_until(verify_processing_msg,
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/cluster/remoteaccount.py",
 line 736, in wait_until
return wait_until(lambda: self.acct.ssh("tail -c +%d %s | grep '%s'" % 
(self.offset + 1, self.log, pattern),
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/utils/util.py",
 line 58, in wait_until
raise TimeoutError(err_msg() if callable(err_msg) else err_msg) from 
last_exception
ducktape.errors.TimeoutError: Never saw 'first_bounce_phase-Processed [0-9]* 
records so far' message ubuntu@worker5{noformat}
Failure for 2.3.1 is:
{noformat}
TimeoutError("Never saw 'first_bounce_phase-Processed [0-9]* records so far' 
message ubuntu@worker21")
Traceback (most recent call last

[jira] [Created] (KAFKA-17084) Network Degrade Test fails in System Tests

2024-07-05 Thread Josep Prat (Jira)
Josep Prat created KAFKA-17084:
--

 Summary: Network Degrade Test fails in System Tests
 Key: KAFKA-17084
 URL: https://issues.apache.org/jira/browse/KAFKA-17084
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Affects Versions: 3.8.0
Reporter: Josep Prat


Tests for NetworkDegradeTest fail consistently on the 3.8 branch.

 

Tests failing are:

 
{noformat}
Module: kafkatest.tests.core.network_degrade_test
Class:  NetworkDegradeTest
Method: test_latency
Arguments:
{
  "device_name": "eth0",
  "latency_ms": 50,
  "rate_limit_kbit": 1000,
  "task_name": "latency-100-rate-1000"
}
{noformat}
 

and 

 
{noformat}
Module: kafkatest.tests.core.network_degrade_test
Class:  NetworkDegradeTest
Method: test_latency
Arguments:
{
  "device_name": "eth0",
  "latency_ms": 50,
  "rate_limit_kbit": 0,
  "task_name": "latency-100"
}
{noformat}
 

Failure for the first one is:
{noformat}
RemoteCommandError({'ssh_config': {'host': 'worker30', 'hostname': 
'10.140.34.105', 'user': 'ubuntu', 'port': 22, 'password': None, 
'identityfile': '/home/semaphore/kafka-overlay/semaphore-muckrake.pem'}, 
'hostname': 'worker30', 'ssh_hostname': '10.140.34.105', 'user': 'ubuntu', 
'externally_routable_ip': '10.140.34.105', '_logger': , 'os': 'linux', '_ssh_client': , '_sftp_client': , '_custom_ssh_exception_checks': None}, 'ping -i 1 -c 20 
worker21', 1, b'')
Traceback (most recent call last):
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 184, in _do_run
data = self.run_test()
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 262, in run_test
return self.test_context.function(self.test)
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/mark/_mark.py",
 line 433, in wrapper
return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/core/network_degrade_test.py",
 line 66, in test_latency
for line in zk0.account.ssh_capture("ping -i 1 -c 20 %s" % 
zk1.account.hostname):
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/cluster/remoteaccount.py",
 line 680, in next
return next(self.iter_obj)
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/cluster/remoteaccount.py",
 line 347, in output_generator
raise RemoteCommandError(self, cmd, exit_status, stderr.read())
ducktape.cluster.remoteaccount.RemoteCommandError: ubuntu@worker30: Command 
'ping -i 1 -c 20 worker21' returned non-zero exit status 1.{noformat}
And for the second one is:
{noformat}
RemoteCommandError({'ssh_config': {'host': 'worker28', 'hostname': 
'10.140.41.79', 'user': 'ubuntu', 'port': 22, 'password': None, 'identityfile': 
'/home/semaphore/kafka-overlay/semaphore-muckrake.pem'}, 'hostname': 
'worker28', 'ssh_hostname': '10.140.41.79', 'user': 'ubuntu', 
'externally_routable_ip': '10.140.41.79', '_logger': , 'os': 'linux', '_ssh_client': , '_sftp_client': , '_custom_ssh_exception_checks': None}, 'ping -i 1 -c 20 
worker27', 1, b'')
Traceback (most recent call last):
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 184, in _do_run
data = self.run_test()
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 262, in run_test
return self.test_context.function(self.test)
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/mark/_mark.py",
 line 433, in wrapper
return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/core/network_degrade_test.py",
 line 66, in test_latency
for line in zk0.account.ssh_capture("ping -i 1 -c 20 %s" % 
zk1.account.hostname):
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/cluster/remoteaccount.py",
 line 680, in next
return next(self.iter_obj)
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/cluster/remoteaccount.py",
 line 347, in output_generator
raise RemoteCommandError(self, cmd, exit_status, stderr.read())
ducktape.cluster.remoteaccount.RemoteCommandError: ubuntu@worker28: Command 
'ping -i 1 -c 20 worker27' returned non-zero exit status 1.{noformat}
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17083) KRaft Upgrade Failures in SystemTests

2024-07-05 Thread Josep Prat (Jira)
Josep Prat created KAFKA-17083:
--

 Summary: KRaft Upgrade Failures in SystemTests
 Key: KAFKA-17083
 URL: https://issues.apache.org/jira/browse/KAFKA-17083
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Affects Versions: 3.8.0
Reporter: Josep Prat


Two TestKRaftUpgrade system tests are consistently failing on the 3.8 branch.
{noformat}
Module: kafkatest.tests.core.kraft_upgrade_test
Class:  TestKRaftUpgrade
Method: test_isolated_mode_upgrade
Arguments:
{
  "from_kafka_version": "dev",
  "metadata_quorum": "ISOLATED_KRAFT"
}
{noformat}
 

and 

 
{noformat}
Module: kafkatest.tests.core.kraft_upgrade_test
Class:  TestKRaftUpgrade
Method: test_combined_mode_upgrade
Arguments:
{
  "from_kafka_version": "dev",
  "metadata_quorum": "COMBINED_KRAFT"
}
{noformat}
 

Failure for Isolated is:
{noformat}
RemoteCommandError({'ssh_config': {'host': 'worker15', 'hostname': 
'10.140.39.207', 'user': 'ubuntu', 'port': 22, 'password': None, 
'identityfile': '/home/semaphore/kafka-overlay/semaphore-muckrake.pem'}, 
'hostname': 'worker15', 'ssh_hostname': '10.140.39.207', 'user': 'ubuntu', 
'externally_routable_ip': '10.140.39.207', '_logger': , 'os': 'linux', '_ssh_client': , '_sftp_client': , '_custom_ssh_exception_checks': None}, 
'/opt/kafka-dev/bin/kafka-features.sh --bootstrap-server 
worker15:9092,worker16:9092,worker17:9092 upgrade --metadata 3.7', 1, b'SLF4J: 
Class path contains multiple SLF4J bindings.\nSLF4J: Found binding in 
[jar:file:/vagrant/tools/build/dependant-libs-2.13.14/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J:
 Found binding in 
[jar:file:/vagrant/trogdor/build/dependant-libs-2.13.14/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J:
 See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.\nSLF4J: Actual binding is of type 
[org.slf4j.impl.Reload4jLoggerFactory]\n1 out of 1 operation(s) failed.\n')
Traceback (most recent call last):
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 184, in _do_run
data = self.run_test()
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 262, in run_test
return self.test_context.function(self.test)
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/mark/_mark.py",
 line 433, in wrapper
return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/core/kraft_upgrade_test.py",
 line 121, in test_isolated_mode_upgrade
self.run_upgrade(from_kafka_version)
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/core/kraft_upgrade_test.py",
 line 105, in run_upgrade
self.run_produce_consume_validate(core_test_action=lambda: 
self.perform_version_change(from_kafka_version))
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/produce_consume_validate.py",
 line 105, in run_produce_consume_validate
core_test_action(*args)
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/core/kraft_upgrade_test.py",
 line 105, in 
self.run_produce_consume_validate(core_test_action=lambda: 
self.perform_version_change(from_kafka_version))
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/core/kraft_upgrade_test.py",
 line 75, in perform_version_change
self.kafka.upgrade_metadata_version(LATEST_STABLE_METADATA_VERSION)
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/services/kafka/kafka.py", 
line 920, in upgrade_metadata_version
self.run_features_command("upgrade", new_version)
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/services/kafka/kafka.py", 
line 930, in run_features_command
self.nodes[0].account.ssh(cmd)
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/cluster/remoteaccount.py",
 line 35, in wrapper
return method(self, *args, **kwargs)
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/cluster/remoteaccount.py",
 line 293, in ssh
raise RemoteCommandError(self, cmd, exit_status, stderr.read())
ducktape.cluster.remoteaccount.RemoteCommandError: ubuntu@worker15: Command 
'/opt/kafka-dev/bin/kafka-features.sh --bootstrap-server 
worker15:9092,worker16:9092,worker17:9092 upgrade --metadata 3.7' returned 
non-zero exit status 1. Remote error message: b'SLF4J: Class path contains 
multiple SLF4J bindings.\nSLF4J: Found binding in 
[jar:file:/vagrant/tools/build/dependant-libs-2.13.14/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J:
 Found binding in 
[jar:file:/vagrant/trogdor/build/dependant-libs-2.13.14/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J:
 See http://www.slf4j.or

[jira] [Resolved] (KAFKA-15045) Move Streams task assignor to public configs

2024-06-19 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat resolved KAFKA-15045.

Resolution: Fixed

> Move Streams task assignor to public configs
> 
>
> Key: KAFKA-15045
> URL: https://issues.apache.org/jira/browse/KAFKA-15045
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Assignee: A. Sophie Blee-Goldman
>Priority: Major
>  Labels: kip
> Fix For: 3.8.0
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-924%3A+customizable+task+assignment+for+Streams



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16988) InsufficientResourcesError in ConnectDistributedTest system test

2024-06-18 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat resolved KAFKA-16988.

Resolution: Fixed

> InsufficientResourcesError in ConnectDistributedTest system test
> 
>
> Key: KAFKA-16988
> URL: https://issues.apache.org/jira/browse/KAFKA-16988
> Project: Kafka
>  Issue Type: Bug
>Reporter: Luke Chen
>Assignee: Luke Chen
>Priority: Major
> Fix For: 3.8.0, 3.7.1
>
>
> Saw InsufficientResourcesError when running the 
> `ConnectDistributedTest#test_exactly_once_source` system test.
>  
> {code:java}
>  name="test_exactly_once_source_clean=False_connect_protocol=compatible_metadata_quorum=ZK_use_new_coordinator=False"
>  classname="kafkatest.tests.connect.connect_distributed_test" 
> time="403.812"> requested: 1. linux nodes available: 0')" 
> type="exception">InsufficientResourcesError('linux nodes requested: 1. linux 
> nodes available: 0') Traceback (most recent call last): File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 186, in _do_run data = self.run_test() File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 246, in run_test return self.test_context.function(self.test) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 433, in 
> wrapper return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs) 
> File 
> "/opt/kafka-dev/tests/kafkatest/tests/connect/connect_distributed_test.py", 
> line 928, in test_exactly_once_source consumer_validator = 
> ConsoleConsumer(self.test_context, 1, self.kafka, self.source.topic, 
> consumer_timeout_ms=1000, print_key=True) File 
> "/opt/kafka-dev/tests/kafkatest/services/console_consumer.py", line 97, in 
> __init__ BackgroundThreadService.__init__(self, context, num_nodes) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/services/background_thread.py",
>  line 26, in __init__ super(BackgroundThreadService, self).__init__(context, 
> num_nodes, cluster_spec, *args, **kwargs) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 
> 107, in __init__ self.allocate_nodes() File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 
> 217, in allocate_nodes self.nodes = self.cluster.alloc(self.cluster_spec) 
> File "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/cluster.py", 
> line 54, in alloc allocated = self.do_alloc(cluster_spec) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/finite_subcluster.py",
>  line 37, in do_alloc good_nodes, bad_nodes = 
> self._available_nodes.remove_spec(cluster_spec) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/node_container.py", 
> line 131, in remove_spec raise InsufficientResourcesError(err) 
> ducktape.cluster.node_container.InsufficientResourcesError: linux nodes 
> requested: 1. linux nodes available: 0 
>  name="test_exactly_once_source_clean=False_connect_protocol=sessioned_metadata_quorum=ZK_use_new_coordinator=False"
>  classname="kafkatest.tests.connect.connect_distributed_test" 
> time="376.160"> requested: 1. linux nodes available: 0')" 
> type="exception">InsufficientResourcesError('linux nodes requested: 1. linux 
> nodes available: 0') Traceback (most recent call last): File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 186, in _do_run data = self.run_test() File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 246, in run_test return self.test_context.function(self.test) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 433, in 
> wrapper return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs) 
> File 
> "/opt/kafka-dev/tests/kafkatest/tests/connect/connect_distributed_test.py", 
> line 928, in test_exactly_once_source consumer_validator = 
> ConsoleConsumer(self.test_context, 1, self.kafka, self.source.topic, 
> consumer_timeout_ms=1000, print_key=True) File 
> "/opt/kafka-dev/tests/kafkatest/services/console_consumer.py", line 97, in 
> __init__ BackgroundThreadService.__init__(self, context, num_nodes) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/services/background_thread.py",
>  line 26, in __init__ super(BackgroundThreadService, self).__init__(context, 
> num_nodes, cluster_spec, *args, **kwargs) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 
> 107, in __init__ self.allocate_nodes() File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 
> 217, in allocate_nodes self.nodes = self.cluster.alloc(self.cluster_spec) 
> File "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/cluster.py", 
> line 54, in alloc allocated = self.do_alloc(

[jira] [Resolved] (KAFKA-16373) Docker Official Image for Apache Kafka

2024-06-17 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat resolved KAFKA-16373.

Resolution: Fixed

> Docker Official Image for Apache Kafka
> --
>
> Key: KAFKA-16373
> URL: https://issues.apache.org/jira/browse/KAFKA-16373
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 3.8.0
>Reporter: Krish Vora
>Assignee: Krish Vora
>Priority: Major
>  Labels: KIP-1028
> Fix For: 3.8.0
>
>
> KIP-1028: Docker Official Image for Apache Kafka: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-1028%3A+Docker+Official+Image+for+Apache+Kafka]
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15875) Snapshot class is package protected but returned in public methods

2023-11-22 Thread Josep Prat (Jira)
Josep Prat created KAFKA-15875:
--

 Summary: Snapshot class is package protected but returned in 
public methods
 Key: KAFKA-15875
 URL: https://issues.apache.org/jira/browse/KAFKA-15875
 Project: Kafka
  Issue Type: Task
Affects Versions: 3.6.0
Reporter: Josep Prat
Assignee: Josep Prat


The org.apache.kafka.timeline.Snapshot class is package-private, but it is part of 
the public API of org.apache.kafka.timeline.SnapshotRegistry. This can cause 
compilation errors if we ever try to assign the object returned by these methods 
to a variable.

org.apache.kafka.controller.OffsetControlManager calls SnapshotRegistry methods 
that return a Snapshot, and OffsetControlManager lives in a different package.

 

The SnapshotRegistry class does not seem to be part of the public API, so I don't 
think this needs a KIP.
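
A self-contained illustration of the visibility issue (the class names below are made 
up for the example and shown as two source files in one snippet; they are not the 
actual timeline classes): a public method may return a package-private type, and 
callers in other packages can still invoke it and treat the result as an Object, but 
they cannot name the type to declare a variable.

{code:java}
// File a/Registry.java (Handle is package-private, so it can share the file)
package a;

class Handle {                              // package-private, like timeline.Snapshot
    private final long epoch;
    Handle(long epoch) { this.epoch = epoch; }
    @Override public String toString() { return "Handle(epoch=" + epoch + ")"; }
}

public class Registry {                     // public, like timeline.SnapshotRegistry
    // Public method whose return type is not visible outside package "a".
    public Handle latest() { return new Handle(42L); }
}

// File b/Caller.java
package b;

import a.Registry;

public class Caller {
    public static void main(String[] args) {
        Registry registry = new Registry();
        Object latest = registry.latest();  // fine: no need to name the return type
        System.out.println(latest);
        // Does not compile: "a.Handle is not public in a; cannot be accessed from
        // outside package"
        // a.Handle handle = registry.latest();
    }
}
{code}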



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (KAFKA-14956) Flaky test org.apache.kafka.connect.integration.OffsetsApiIntegrationTest#testGetSinkConnectorOffsetsDifferentKafkaClusterTargeted

2023-09-29 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat reopened KAFKA-14956:


It happened again here: 
https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14458/3/testReport/junit/org.apache.kafka.connect.integration/OffsetsApiIntegrationTest/Build___JDK_11_and_Scala_2_13___testGetSinkConnectorOffsetsDifferentKafkaClusterTargeted/

> Flaky test 
> org.apache.kafka.connect.integration.OffsetsApiIntegrationTest#testGetSinkConnectorOffsetsDifferentKafkaClusterTargeted
> --
>
> Key: KAFKA-14956
> URL: https://issues.apache.org/jira/browse/KAFKA-14956
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Sagar Rao
>Assignee: Yash Mayya
>Priority: Major
>  Labels: flaky-test
> Fix For: 3.5.0
>
>
> ```
> h4. Error
> org.opentest4j.AssertionFailedError: Condition not met within timeout 15000. 
> Sink connector consumer group offsets should catch up to the topic end 
> offsets ==> expected:  but was: 
> h4. Stacktrace
> org.opentest4j.AssertionFailedError: Condition not met within timeout 15000. 
> Sink connector consumer group offsets should catch up to the topic end 
> offsets ==> expected:  but was: 
>  at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>  at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>  at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
>  at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
>  at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
>  at 
> app//org.apache.kafka.test.TestUtils.lambda$waitForCondition$4(TestUtils.java:337)
>  at 
> app//org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:385)
>  at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:334)
>  at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:318)
>  at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:291)
>  at 
> app//org.apache.kafka.connect.integration.OffsetsApiIntegrationTest.getAndVerifySinkConnectorOffsets(OffsetsApiIntegrationTest.java:150)
>  at 
> app//org.apache.kafka.connect.integration.OffsetsApiIntegrationTest.testGetSinkConnectorOffsetsDifferentKafkaClusterTargeted(OffsetsApiIntegrationTest.java:131)
>  at 
> java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>  at 
> java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>  at 
> java.base@17.0.7/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base@17.0.7/java.lang.reflect.Method.invoke(Method.java:568)
>  at 
> app//org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> app//org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> app//org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> app//org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> app//org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> app//org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at app//org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>  at 
> app//org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at app//org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>  at 
> app//org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> app//org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at app//org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>  at app//org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>  at app//org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>  at app//org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>  at app//org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>  at app//org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>  at app//org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:108)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:40)
>  at 
> org.gradle.api.i

[jira] [Created] (KAFKA-15524) Flaky test org.apache.kafka.connect.integration.OffsetsApiIntegrationTest.testResetSinkConnectorOffsetsZombieSinkTasks

2023-09-29 Thread Josep Prat (Jira)
Josep Prat created KAFKA-15524:
--

 Summary: Flaky test 
org.apache.kafka.connect.integration.OffsetsApiIntegrationTest.testResetSinkConnectorOffsetsZombieSinkTasks
 Key: KAFKA-15524
 URL: https://issues.apache.org/jira/browse/KAFKA-15524
 Project: Kafka
  Issue Type: Bug
  Components: connect
Affects Versions: 3.5.1, 3.6.0
Reporter: Josep Prat


Last seen: 
[https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14458/3/testReport/junit/org.apache.kafka.connect.integration/OffsetsApiIntegrationTest/Build___JDK_17_and_Scala_2_13___testResetSinkConnectorOffsetsZombieSinkTasks/]

 
h3. Error Message
{code:java}
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: The request timed out.{code}
h3. Stacktrace
{code:java}
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: The request timed out. at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:427)
 at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:401)
 at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:392)
 at 
org.apache.kafka.connect.integration.OffsetsApiIntegrationTest.testResetSinkConnectorOffsetsZombieSinkTasks(OffsetsApiIntegrationTest.java:763)
 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:568) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
 at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:112)
 at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
 at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:40)
 at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:60)
 at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:52)
 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:568) at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
 at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
 at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
 at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
 at jdk.proxy1/jdk.proxy1.$Proxy2.processTestClass(Unknown Source) at 
org.gradle.api.internal.tasks.testing.worker.TestWorker$2.run(TestWorker.java:176)
 at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.executeAndMaintainThreadName(TestWorker.java:129)
 at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.execute(TestWorker.java:100)
 at 
org.g

[jira] [Created] (KAFKA-15522) Flaky test org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationExactlyOnceTest.testOneWayReplicationWithFrequentOffsetSyncs

2023-09-29 Thread Josep Prat (Jira)
Josep Prat created KAFKA-15522:
--

 Summary: Flaky test 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationExactlyOnceTest.testOneWayReplicationWithFrequentOffsetSyncs
 Key: KAFKA-15522
 URL: https://issues.apache.org/jira/browse/KAFKA-15522
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.5.1, 3.6.0
Reporter: Josep Prat


h3. Last seen: 
https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14458/3/testReport/junit/org.apache.kafka.connect.mirror.integration/MirrorConnectorsIntegrationExactlyOnceTest/Build___JDK_17_and_Scala_2_13___testOneWayReplicationWithFrequentOffsetSyncs__/
h3. Error Message
{code:java}
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: The request timed out.{code}
h3. Stacktrace
{code:java}
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: The request timed out. at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:427)
 at 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.createTopics(MirrorConnectorsIntegrationBaseTest.java:1276)
 at 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.startClusters(MirrorConnectorsIntegrationBaseTest.java:235)
 at 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.startClusters(MirrorConnectorsIntegrationBaseTest.java:149)
 at 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationExactlyOnceTest.startClusters(MirrorConnectorsIntegrationExactlyOnceTest.java:51)
 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:568) at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
 at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
 at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
 at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:128)
 at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptBeforeEachMethod(TimeoutExtension.java:78)
 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
 at 
org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeMethodInExtensionContext(ClassBasedTestDescriptor.java:521)
 at 
org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$synthesizeBeforeEachMethodAdapter$23(ClassBasedTestDescriptor.java:506)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeEachMethods$3(TestMethodTestDescriptor.java:175)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:203)
 at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:203)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeEachMethods(TestMethodTestDescriptor.java:172)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:69)
 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$exe

[jira] [Created] (KAFKA-15523) Flaky test org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationSSLTest.testSyncTopicConfigs

2023-09-29 Thread Josep Prat (Jira)
Josep Prat created KAFKA-15523:
--

 Summary: Flaky test  
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationSSLTest.testSyncTopicConfigs
 Key: KAFKA-15523
 URL: https://issues.apache.org/jira/browse/KAFKA-15523
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Affects Versions: 3.5.1, 3.6.0
Reporter: Josep Prat


Last seen: 
[https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14458/3/testReport/junit/org.apache.kafka.connect.mirror.integration/MirrorConnectorsIntegrationSSLTest/Build___JDK_17_and_Scala_2_13___testSyncTopicConfigs__/]

 
h3. Error Message
{code:java}
org.opentest4j.AssertionFailedError: Condition not met within timeout 3. 
Topic: mm2-status.backup.internal didn't get created in the cluster ==> 
expected:  but was: {code}
h3. Stacktrace
{code:java}
org.opentest4j.AssertionFailedError: Condition not met within timeout 3. 
Topic: mm2-status.backup.internal didn't get created in the cluster ==> 
expected:  but was:  at 
app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
 at 
app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
 at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63) at 
app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36) at 
app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:210) at 
app//org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:331)
 at 
app//org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:379)
 at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:328) 
at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:312) at 
app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:302) at 
app//org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.waitForTopicCreated(MirrorConnectorsIntegrationBaseTest.java:1041)
 at 
app//org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.startClusters(MirrorConnectorsIntegrationBaseTest.java:224)
 at 
app//org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.startClusters(MirrorConnectorsIntegrationBaseTest.java:149)
 at 
app//org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationSSLTest.startClusters(MirrorConnectorsIntegrationSSLTest.java:63)
 at 
java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
 at 
java.base@17.0.7/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base@17.0.7/java.lang.reflect.Method.invoke(Method.java:568) at 
app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
 at 
app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
 at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
 at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
 at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:128)
 at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptBeforeEachMethod(TimeoutExtension.java:78)
 at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
 at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
 at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
 at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
 at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
 at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
 at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
 at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
 at 
app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeMethodInExtensionContext(ClassBasedTestDescriptor.java:521)
 at 
app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$synthesizeBeforeEachMethodAdapter$23(ClassBasedTestDescriptor.java:506)
 at 
app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeEachMe

[jira] [Reopened] (KAFKA-13531) Flaky test NamedTopologyIntegrationTest

2023-09-29 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat reopened KAFKA-13531:


> Flaky test NamedTopologyIntegrationTest
> ---
>
> Key: KAFKA-13531
> URL: https://issues.apache.org/jira/browse/KAFKA-13531
> Project: Kafka
>  Issue Type: Test
>  Components: streams, unit tests
>Reporter: Matthias J. Sax
>Assignee: Matthew de Detrich
>Priority: Critical
>  Labels: flaky-test
> Attachments: 
> org.apache.kafka.streams.integration.NamedTopologyIntegrationTest.shouldRemoveOneNamedTopologyWhileAnotherContinuesProcessing().test.stdout
>
>
> org.apache.kafka.streams.integration.NamedTopologyIntegrationTest.shouldRemoveNamedTopologyToRunningApplicationWithMultipleNodesAndResetsOffsets
> {quote}java.lang.AssertionError: Did not receive all 3 records from topic 
> output-stream-2 within 6 ms, currently accumulated data is [] Expected: 
> is a value equal to or greater than <3> but: <0> was less than <3> at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.lambda$waitUntilMinKeyValueRecordsReceived$1(IntegrationTestUtils.java:648)
>  at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:368)
>  at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:336)
>  at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:644)
>  at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:617)
>  at 
> org.apache.kafka.streams.integration.NamedTopologyIntegrationTest.shouldRemoveNamedTopologyToRunningApplicationWithMultipleNodesAndResetsOffsets(NamedTopologyIntegrationTest.java:439){quote}
> STDERR
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.GroupSubscribedToTopicException: Deleting 
> offsets of a topic is forbidden while the consumer group is actively 
> subscribed to it. at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 
> at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
>  at 
> org.apache.kafka.streams.processor.internals.namedtopology.KafkaStreamsNamedTopologyWrapper.lambda$removeNamedTopology$3(KafkaStreamsNamedTopologyWrapper.java:213)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.lambda$whenComplete$2(KafkaFutureImpl.java:107)
>  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962) 
> at 
> org.apache.kafka.common.internals.KafkaCompletableFuture.kafkaComplete(KafkaCompletableFuture.java:39)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.complete(KafkaFutureImpl.java:122)
>  at 
> org.apache.kafka.streams.processor.internals.TopologyMetadata.maybeNotifyTopologyVersionWaiters(TopologyMetadata.java:154)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.checkForTopologyUpdates(StreamThread.java:916)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:598)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:575)
>  Caused by: org.apache.kafka.common.errors.GroupSubscribedToTopicException: 
> Deleting offsets of a topic is forbidden while the consumer group is actively 
> subscribed to it. java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.GroupSubscribedToTopicException: Deleting 
> offsets of a topic is forbidden while the consumer group is actively 
> subscribed to it. at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 
> at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
>  at 
> org.apache.kafka.streams.processor.internals.namedtopology.KafkaStreamsNamedTopologyWrapper.lambda$removeNamedTopology$3(KafkaStreamsNamedTopologyWrapper.java:213)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.lambda$whenComplete$2(KafkaFutureImpl.java:107)
>  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableF

[jira] [Reopened] (KAFKA-13966) Flaky test `QuorumControllerTest.testUnregisterBroker`

2023-09-29 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat reopened KAFKA-13966:


> Flaky test `QuorumControllerTest.testUnregisterBroker`
> --
>
> Key: KAFKA-13966
> URL: https://issues.apache.org/jira/browse/KAFKA-13966
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: David Arthur
>Priority: Major
>
> We have seen the following assertion failure in 
> `QuorumControllerTest.testUnregisterBroker`:
> {code:java}
> org.opentest4j.AssertionFailedError: expected: <2> but was: <0>
>   at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55)
>   at 
> org.junit.jupiter.api.AssertionUtils.failNotEqual(AssertionUtils.java:62)
>   at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:166)
>   at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:161)
>   at org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:628)
>   at 
> org.apache.kafka.controller.QuorumControllerTest.testUnregisterBroker(QuorumControllerTest.java:494)
>  {code}
> I reproduced it by running the test in a loop. It looks like what happens is 
> that the BrokerRegistration request is able to get interleaved between the 
> leader change event and the write of the bootstrap metadata. Something like 
> this:
>  # handleLeaderChange() start
>  # appendWriteEvent(registerBroker)
>  # appendWriteEvent(bootstrapMetadata)
>  # handleLeaderChange() finish
>  # registerBroker() -> writes broker registration to log
>  # bootstrapMetadata() -> writes bootstrap metadata to log



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15105) Flaky test FetchFromFollowerIntegrationTest.testFetchFromLeaderWhilePreferredReadReplicaIsUnavailable

2023-06-19 Thread Josep Prat (Jira)
Josep Prat created KAFKA-15105:
--

 Summary: Flaky test 
FetchFromFollowerIntegrationTest.testFetchFromLeaderWhilePreferredReadReplicaIsUnavailable
 Key: KAFKA-15105
 URL: https://issues.apache.org/jira/browse/KAFKA-15105
 Project: Kafka
  Issue Type: Bug
Reporter: Josep Prat


Test  
integration.kafka.server.FetchFromFollowerIntegrationTest.testFetchFromLeaderWhilePreferredReadReplicaIsUnavailable()
 became flaky. An example can be found here: 
[https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-13865/2/testReport/junit/integration.kafka.server/FetchFromFollowerIntegrationTest/Build___JDK_11_and_Scala_2_13___testFetchFromLeaderWhilePreferredReadReplicaIsUnavailable__/]

The error might be caused by a Kafka cluster from a previous test not being cleaned 
up properly before this one ran.

 
h3. Error Message
{code:java}
org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' 
already exists.{code}
h3. Stacktrace
{code:java}
org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' 
already exists. {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15104) Flaky test MetadataQuorumCommandTest for method testDescribeQuorumReplicationSuccessful

2023-06-19 Thread Josep Prat (Jira)
Josep Prat created KAFKA-15104:
--

 Summary: Flaky test MetadataQuorumCommandTest for method 
testDescribeQuorumReplicationSuccessful
 Key: KAFKA-15104
 URL: https://issues.apache.org/jira/browse/KAFKA-15104
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 3.5.0
Reporter: Josep Prat


The MetadataQuorumCommandTest has become flaky on CI. I saw the following case 
failing: org.apache.kafka.tools.MetadataQuorumCommandTest.[1] Type=Raft-Combined, 
Name=testDescribeQuorumReplicationSuccessful, MetadataVersion=3.6-IV0, 
Security=PLAINTEXT

Link to the CI: 
https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-13865/2/testReport/junit/org.apache.kafka.tools/MetadataQuorumCommandTest/Build___JDK_8_and_Scala_2_121__Type_Raft_Combined__Name_testDescribeQuorumReplicationSuccessful__MetadataVersion_3_6_IV0__Security_PLAINTEXT/

 
h3. Error Message
{code:java}
java.util.concurrent.ExecutionException: java.lang.RuntimeException: Received a 
fatal error while waiting for the controller to acknowledge that we are caught 
up{code}
h3. Stacktrace
{code:java}
java.util.concurrent.ExecutionException: java.lang.RuntimeException: Received a 
fatal error while waiting for the controller to acknowledge that we are caught 
up at java.util.concurrent.FutureTask.report(FutureTask.java:122) at 
java.util.concurrent.FutureTask.get(FutureTask.java:192) at 
kafka.testkit.KafkaClusterTestKit.startup(KafkaClusterTestKit.java:419) at 
kafka.test.junit.RaftClusterInvocationContext.lambda$getAdditionalExtensions$5(RaftClusterInvocationContext.java:115)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeTestExecutionCallbacks$5(TestMethodTestDescriptor.java:191)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:202)
 at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:202)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeTestExecutionCallbacks(TestMethodTestDescriptor.java:190)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:136){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15103) Flaky test KRaftClusterTest.testCreateClusterAndPerformReassignment

2023-06-19 Thread Josep Prat (Jira)
Josep Prat created KAFKA-15103:
--

 Summary: Flaky test 
KRaftClusterTest.testCreateClusterAndPerformReassignment
 Key: KAFKA-15103
 URL: https://issues.apache.org/jira/browse/KAFKA-15103
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Josep Prat
 Fix For: 3.5.0


The test 
kafka.server.KRaftClusterTest.testCreateClusterAndPerformReassignment() is 
failing: 
[https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-13865/2/testReport/junit/kafka.server/KRaftClusterTest/Build___JDK_8_and_Scala_2_12___testCreateClusterAndPerformReassignment__/]
h3. Error Message
{code:java}
org.opentest4j.AssertionFailedError: Timed out waiting for replica assignments 
for topic foo. Wanted: List(List(2, 1, 0), List(0, 1, 2), List(2, 3), List(3, 
2, 0, 1)). Got: ArrayBuffer(ArrayBuffer(2, 1, 0), ArrayBuffer(0, 1, 2, 3), 
ArrayBuffer(2, 3), ArrayBuffer(3, 2, 0, 1)){code}
h3. Stacktrace
{code:java}
org.opentest4j.AssertionFailedError: Timed out waiting for replica assignments 
for topic foo. Wanted: List(List(2, 1, 0), List(0, 1, 2), List(2, 3), List(3, 
2, 0, 1)). Got: ArrayBuffer(ArrayBuffer(2, 1, 0), ArrayBuffer(0, 1, 2, 3), 
ArrayBuffer(2, 3), ArrayBuffer(3, 2, 0, 1)) at 
org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:38) at 
org.junit.jupiter.api.Assertions.fail(Assertions.java:135) at 
kafka.server.KRaftClusterTest.testCreateClusterAndPerformReassignment(KRaftClusterTest.scala:479)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
{code}
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15072) Flaky test MirrorConnectorsIntegrationExactlyOnceTest.testReplicationWithEmptyPartition

2023-06-07 Thread Josep Prat (Jira)
Josep Prat created KAFKA-15072:
--

 Summary: Flaky test 
MirrorConnectorsIntegrationExactlyOnceTest.testReplicationWithEmptyPartition
 Key: KAFKA-15072
 URL: https://issues.apache.org/jira/browse/KAFKA-15072
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Affects Versions: 3.5.0
Reporter: Josep Prat


Test 
MirrorConnectorsIntegrationExactlyOnceTest.testReplicationWithEmptyPartition 
became flaky again, but it's a different error this time.

Occurrence: 
[https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-13824/1/testReport/junit/org.apache.kafka.connect.mirror.integration/MirrorConnectorsIntegrationExactlyOnceTest/Build___JDK_17_and_Scala_2_13___testReplicationWithEmptyPartition__/]

 
h3. Error Message
{code:java}
java.lang.AssertionError: Connector MirrorHeartbeatConnector tasks did not 
start in time on cluster: backup-connect-cluster{code}
h3. Stacktrace
{code:java}
java.lang.AssertionError: Connector MirrorHeartbeatConnector tasks did not 
start in time on cluster: backup-connect-cluster at 
org.apache.kafka.connect.util.clusters.EmbeddedConnectClusterAssertions.assertConnectorAndAtLeastNumTasksAreRunning(EmbeddedConnectClusterAssertions.java:301)
 at 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.waitUntilMirrorMakerIsRunning(MirrorConnectorsIntegrationBaseTest.java:912)
 at 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.testReplicationWithEmptyPartition(MirrorConnectorsIntegrationBaseTest.java:415)
 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:568) at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
 at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
 at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
 at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
 at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:217)
 at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:213)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:138)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:68)
 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
 at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
 at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeT

[jira] [Created] (KAFKA-15071) Flaky test kafka.admin.LeaderElectionCommandTest.testPreferredReplicaElection for Type=ZK, MetadataVersion=3.5-IV2, Security=PLAINTEXT

2023-06-07 Thread Josep Prat (Jira)
Josep Prat created KAFKA-15071:
--

 Summary: Flaky test 
kafka.admin.LeaderElectionCommandTest.testPreferredReplicaElection for Type=ZK, 
MetadataVersion=3.5-IV2, Security=PLAINTEXT
 Key: KAFKA-15071
 URL: https://issues.apache.org/jira/browse/KAFKA-15071
 Project: Kafka
  Issue Type: Bug
Reporter: Josep Prat


Test kafka.admin.LeaderElectionCommandTest.testPreferredReplicaElection became 
flaky again, but it is failing for a different reason. In this case it might be 
caused by a missing cleanup.

The values of the parameters are Type=ZK, MetadataVersion=3.5-IV2, 
Security=PLAINTEXT

Related to https://issues.apache.org/jira/browse/KAFKA-13737

Occurred: 
https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-13824/1/testReport/junit/kafka.admin/LeaderElectionCommandTest/Build___JDK_8_and_Scala_2_123__Type_ZK__Name_testPreferredReplicaElection__MetadataVersion_3_5_IV2__Security_PLAINTEXT/
h3. Error Message
{code:java}
org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' 
already exists.{code}
h3. Stacktrace
{code:java}
org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' 
already exists.{code}
h3. Standard Output
{code:java}
Successfully completed leader election (UNCLEAN) for partitions unclean-topic-0 
[2023-06-07 14:42:33,845] ERROR [QuorumController id=3000] writeNoOpRecord: 
unable to start processing because of RejectedExecutionException. Reason: The 
event queue is shutting down (org.apache.kafka.controller.QuorumController:467) 
[2023-06-07 14:42:42,699] WARN [AdminClient clientId=adminclient-65] Connection 
to node -2 (localhost/127.0.0.1:35103) could not be established. Broker may not 
be available. (org.apache.kafka.clients.NetworkClient:814) Successfully 
completed leader election (UNCLEAN) for partitions unclean-topic-0 [2023-06-07 
14:42:44,416] ERROR [QuorumController id=0] writeNoOpRecord: unable to start 
processing because of RejectedExecutionException. Reason: The event queue is 
shutting down (org.apache.kafka.controller.QuorumController:467) [2023-06-07 
14:42:44,716] WARN maxCnxns is not configured, using default value 0. 
(org.apache.zookeeper.server.ServerCnxnFactory:309) [2023-06-07 14:42:44,765] 
WARN No meta.properties file under dir 
/tmp/kafka-2117748934951771120/meta.properties 
(kafka.server.BrokerMetadataCheckpoint:70) [2023-06-07 14:42:44,986] WARN No 
meta.properties file under dir /tmp/kafka-5133306871105583937/meta.properties 
(kafka.server.BrokerMetadataCheckpoint:70) [2023-06-07 14:42:45,214] WARN No 
meta.properties file under dir /tmp/kafka-8449809620400833553/meta.properties 
(kafka.server.BrokerMetadataCheckpoint:70) [2023-06-07 14:42:45,634] WARN 
[ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Received UNKNOWN_TOPIC_ID 
from the leader for partition __consumer_offsets-0. This error may be returned 
transiently when the partition is being created or deleted, but it is not 
expected to persist. (kafka.server.ReplicaFetcherThread:70) [2023-06-07 
14:42:45,634] WARN [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] 
Received UNKNOWN_TOPIC_ID from the leader for partition __consumer_offsets-4. 
This error may be returned transiently when the partition is being created or 
deleted, but it is not expected to persist. 
(kafka.server.ReplicaFetcherThread:70) [2023-06-07 14:42:45,872] WARN 
[ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Received UNKNOWN_TOPIC_ID 
from the leader for partition __consumer_offsets-1. This error may be returned 
transiently when the partition is being created or deleted, but it is not 
expected to persist. (kafka.server.ReplicaFetcherThread:70) [2023-06-07 
14:42:46,010] WARN [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error 
in response for fetch request (type=FetchRequest, replicaId=0, maxWait=500, 
minBytes=1, maxBytes=10485760, 
fetchData={__consumer_offsets-3=PartitionData(topicId=vAlEsYVbTFClcpnVRp3AOw, 
fetchOffset=0, logStartOffset=0, maxBytes=1048576, 
currentLeaderEpoch=Optional[0], lastFetchedEpoch=Optional.empty)}, 
isolationLevel=READ_UNCOMMITTED, removed=, replaced=, 
metadata=(sessionId=INVALID, epoch=INITIAL), rackId=) 
(kafka.server.ReplicaFetcherThread:72) java.io.IOException: Connection to 2 was 
disconnected before the response was read at 
org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:99)
 at 
kafka.server.BrokerBlockingSender.sendRequest(BrokerBlockingSender.scala:113) 
at kafka.server.RemoteLeaderEndPoint.fetch(RemoteLeaderEndPoint.scala:79) at 
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:316)
 at 
kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3(AbstractFetcherThread.scala:130)
 at 
kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3$adapted(AbstractFetcherThread.scala:129)
 at scala.Option.foreach(Option.scala:407) at 
kafka.server.AbstractFetc

[jira] [Created] (KAFKA-15070) Flaky test kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic for codec zstd

2023-06-07 Thread Josep Prat (Jira)
Josep Prat created KAFKA-15070:
--

 Summary: Flaky test 
kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic
 for codec zstd
 Key: KAFKA-15070
 URL: https://issues.apache.org/jira/browse/KAFKA-15070
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 3.5.0
Reporter: Josep Prat


Flaky test with the following traces and output:
h3. Error Message

org.opentest4j.AssertionFailedError: Timed out waiting for deletion of old 
segments
h3. Stacktrace

org.opentest4j.AssertionFailedError: Timed out waiting for deletion of old 
segments at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:38) 
at org.junit.jupiter.api.Assertions.fail(Assertions.java:135) at 
kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic(LogCleanerParameterizedIntegrationTest.scala:123)

...

 
h3. Standard Output

[2023-06-07 16:03:59,974] WARN [LocalLog partition=log-0, 
dir=/tmp/kafka-6339499869249617477] Record format version has been downgraded 
from V2 to V0. (kafka.log.LocalLog:70) [2023-06-07 16:04:01,691] WARN [LocalLog 
partition=log-0, dir=/tmp/kafka-6391328203703920459] Record format version has 
been downgraded from V2 to V0. (kafka.log.LocalLog:70) [2023-06-07 
16:04:02,661] WARN [LocalLog partition=log-0, 
dir=/tmp/kafka-7107559685120209313] Record format version has been downgraded 
from V2 to V0. (kafka.log.LocalLog:70) [2023-06-07 16:04:04,449] WARN [LocalLog 
partition=log-0, dir=/tmp/kafka-2334095685379242376] Record format version has 
been downgraded from V2 to V0. (kafka.log.LocalLog:70) [2023-06-07 
16:04:12,059] ERROR [LogLoader partition=log-0, 
dir=/tmp/kafka-4306370019245327987] Could not find offset index file 
corresponding to log file 
/tmp/kafka-4306370019245327987/log-0/0300.log, recovering 
segment and rebuilding index files... (kafka.log.LogLoader:74) [2023-06-07 
16:04:21,424] ERROR [LogLoader partition=log-0, 
dir=/tmp/kafka-8549848301585177643] Could not find offset index file 
corresponding to log file 
/tmp/kafka-8549848301585177643/log-0/0300.log, recovering 
segment and rebuilding index files... (kafka.log.LogLoader:74) [2023-06-07 
16:04:42,679] ERROR [LogLoader partition=log-0, 
dir=/tmp/kafka-8308685679443421785] Could not find offset index file 
corresponding to log file 
/tmp/kafka-8308685679443421785/log-0/0300.log, recovering 
segment and rebuilding index files... (kafka.log.LogLoader:74) [2023-06-07 
16:04:50,435] ERROR [LogLoader partition=log-0, 
dir=/tmp/kafka-2686097435338562303] Could not find offset index file 
corresponding to log file 
/tmp/kafka-2686097435338562303/log-0/0300.log, recovering 
segment and rebuilding index files... (kafka.log.LogLoader:74) [2023-06-07 
16:07:16,263] WARN [LocalLog partition=log-0, 
dir=/tmp/kafka-5435804108212698551] Record format version has been downgraded 
from V2 to V0. (kafka.log.LocalLog:70) [2023-06-07 16:07:35,193] WARN [LocalLog 
partition=log-0, dir=/tmp/kafka-4310277229895025994] Record format version has 
been downgraded from V2 to V0. (kafka.log.LocalLog:70) [2023-06-07 
16:07:55,323] WARN [LocalLog partition=log-0, 
dir=/tmp/kafka-3364951894697258113] Record format version has been downgraded 
from V2 to V0. (kafka.log.LocalLog:70) [2023-06-07 16:08:16,286] WARN [LocalLog 
partition=log-0, dir=/tmp/kafka-3161518940405121110] Record format version has 
been downgraded from V2 to V0. (kafka.log.LocalLog:70) [2023-06-07 
16:35:03,765] ERROR [LogLoader partition=log-0, 
dir=/tmp/kafka-2385863108707929062] Could not find offset index file 
corresponding to log file 
/tmp/kafka-2385863108707929062/log-0/0300.log, recovering 
segment and rebuilding index files... (kafka.log.LogLoader:74) [2023-06-07 
16:35:06,406] ERROR [LogLoader partition=log-0, 
dir=/tmp/kafka-5380450050465409057] Could not find offset index file 
corresponding to log file 
/tmp/kafka-5380450050465409057/log-0/0300.log, recovering 
segment and rebuilding index files... (kafka.log.LogLoader:74) [2023-06-07 
16:35:09,061] ERROR [LogLoader partition=log-0, 
dir=/tmp/kafka-7510941634638265317] Could not find offset index file 
corresponding to log file 
/tmp/kafka-7510941634638265317/log-0/0300.log, recovering 
segment and rebuilding index files... (kafka.log.LogLoader:74) [2023-06-07 
16:35:11,593] ERROR [LogLoader partition=log-0, 
dir=/tmp/kafka-7423113520781905391] Could not find offset index file 
corresponding to log file 
/tmp/kafka-7423113520781905391/log-0/0300.log, recovering 
segment and rebuilding index files... (kafka.log.LogLoader:74) [2023-06-07 
16:35:14,159] ERROR [LogLoader partition=log-0, 
dir=/tmp/kafka-2120426496175304835] Could not find offset index file 
corresponding to log file 
/tmp/kafka-212042649617530483

[jira] [Created] (KAFKA-13399) Towards Scala 3: Fix constructs or syntax features not present in Scala 3

2021-10-25 Thread Josep Prat (Jira)
Josep Prat created KAFKA-13399:
--

 Summary: Towards Scala 3: Fix constructs or syntax features not 
present in Scala 3
 Key: KAFKA-13399
 URL: https://issues.apache.org/jira/browse/KAFKA-13399
 Project: Kafka
  Issue Type: Sub-task
Reporter: Josep Prat


As discussed in 
[https://lists.apache.org/x/thread.html/r4ee305bef0e65e1352c358016eb4055b323b7f12df13c16b124aa5f1@%3Cdev.kafka.apache.org%3E]

This is the first step to ease the migration towards Scala 3.

Changes included here are:
 * Fixes where the compiler has become stricter
 * Fixes for uses of deprecated syntax
 * Explicit typing instead of relying on automatic widening of numeric types
 * Workaround for [https://github.com/lampepfl/dotty/issues/13549] as it won't 
be fixed



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-13128) Flaky Test StoreQueryIntegrationTest.shouldQueryStoresAfterAddingAndRemovingStreamThread

2021-09-08 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat reopened KAFKA-13128:


Sorry to reopen this issue; it just occurred in this PR: 
[https://github.com/apache/kafka/pull/11302]

It's a different error though:

{code:java}
java.lang.AssertionError: Unexpected exception thrown while getting the value 
from store.
Expected: is (a string containing "Cannot get state store source-table because 
the stream thread is PARTITIONS_ASSIGNED, not RUNNING" or a string containing 
"The state store, source-table, may have migrated to another instance" or a 
string containing "Cannot get state store source-table because the stream 
thread is STARTING, not RUNNING")
  but: was "Cannot get state store source-table because the stream thread is 
PARTITIONS_REVOKED, not RUNNING"{code}
[https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-11302/3/testReport/junit/org.apache.kafka.streams.integration/StoreQueryIntegrationTest/Build___JDK_8_and_Scala_2_12___shouldQueryStoresAfterAddingAndRemovingStreamThread/?cloudbees-analytics-link=scm-reporting%2Ftests%2Ffailed]

Let me know if I should have opened a new issue instead of reopening this one.


> Flaky Test 
> StoreQueryIntegrationTest.shouldQueryStoresAfterAddingAndRemovingStreamThread
> 
>
> Key: KAFKA-13128
> URL: https://issues.apache.org/jira/browse/KAFKA-13128
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.0.0, 2.8.1
>Reporter: A. Sophie Blee-Goldman
>Assignee: Walker Carlson
>Priority: Blocker
>  Labels: flaky-test
> Fix For: 3.1.0
>
>
> h3. Stacktrace
> java.lang.AssertionError: Expected: is not null but: was null 
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) 
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
>   at 
> org.apache.kafka.streams.integration.StoreQueryIntegrationTest.lambda$shouldQueryStoresAfterAddingAndRemovingStreamThread$19(StoreQueryIntegrationTest.java:461)
>   at 
> org.apache.kafka.streams.integration.StoreQueryIntegrationTest.until(StoreQueryIntegrationTest.java:506)
>   at 
> org.apache.kafka.streams.integration.StoreQueryIntegrationTest.shouldQueryStoresAfterAddingAndRemovingStreamThread(StoreQueryIntegrationTest.java:455)
>  
> https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-11085/5/testReport/org.apache.kafka.streams.integration/StoreQueryIntegrationTest/Build___JDK_16_and_Scala_2_13___shouldQueryStoresAfterAddingAndRemovingStreamThread_2/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-8529) Flakey test ConsumerBounceTest#testCloseDuringRebalance

2021-08-18 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat reopened KAFKA-8529:
---

> Flakey test ConsumerBounceTest#testCloseDuringRebalance
> ---
>
> Key: KAFKA-8529
> URL: https://issues.apache.org/jira/browse/KAFKA-8529
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Boyang Chen
>Priority: Major
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/5450/consoleFull]
>  
> *16:16:10* kafka.api.ConsumerBounceTest > testCloseDuringRebalance 
> STARTED*16:16:22* kafka.api.ConsumerBounceTest.testCloseDuringRebalance 
> failed, log available in 
> /home/jenkins/jenkins-slave/workspace/kafka-pr-jdk11-scala2.12/core/build/reports/testOutput/kafka.api.ConsumerBounceTest.testCloseDuringRebalance.test.stdout*16:16:22*
>  *16:16:22* kafka.api.ConsumerBounceTest > testCloseDuringRebalance 
> FAILED*16:16:22* java.lang.AssertionError: Rebalance did not complete in 
> time*16:16:22* at org.junit.Assert.fail(Assert.java:89)*16:16:22* 
> at org.junit.Assert.assertTrue(Assert.java:42)*16:16:22* at 
> kafka.api.ConsumerBounceTest.waitForRebalance$1(ConsumerBounceTest.scala:402)*16:16:22*
>  at 
> kafka.api.ConsumerBounceTest.checkCloseDuringRebalance(ConsumerBounceTest.scala:416)*16:16:22*
>  at 
> kafka.api.ConsumerBounceTest.testCloseDuringRebalance(ConsumerBounceTest.scala:379)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12954) Add Support for Scala 3 in 4.0.0

2021-06-16 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12954:
--

 Summary: Add Support for Scala 3 in 4.0.0
 Key: KAFKA-12954
 URL: https://issues.apache.org/jira/browse/KAFKA-12954
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Josep Prat
 Fix For: 4.0.0


This is a follow up task from 
https://issues.apache.org/jira/browse/KAFKA-12895, in which Scala 2.12 support 
will be dropped.

It would be good to, at the same time, add support for Scala 3.
Initially it would be enough to only make the code compile with Scala 3 so we 
can generate the proper Scala 3 artifacts; this might be achieved with the 
proper compiler flags and an occasional rewrite.
Follow-up tasks could be created to migrate to more idiomatic Scala 3 code 
if desired.

If I understand it correctly, this would need a KIP as we are modifying the 
public interfaces (new artifacts). If this is the case, let me know and I will 
write it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12950) Replace EasyMock and PowerMock with Mockito for KafkaStreamsTest

2021-06-15 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12950:
--

 Summary: Replace EasyMock and PowerMock with Mockito for 
KafkaStreamsTest
 Key: KAFKA-12950
 URL: https://issues.apache.org/jira/browse/KAFKA-12950
 Project: Kafka
  Issue Type: Sub-task
Reporter: Josep Prat
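
As a rough, non-authoritative sketch of the kind of change this migration 
usually involves; the MetadataClient interface below is a hypothetical stand-in, 
not one of the collaborators actually mocked in KafkaStreamsTest:
{code:java}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

public class MockitoMigrationSketch {

    // Hypothetical collaborator, standing in for the classes the test mocks.
    interface MetadataClient {
        String describeCluster();
    }

    public static void main(String[] args) {
        // EasyMock/PowerMock style, for contrast (record, replay, verify-all):
        //   MetadataClient client = EasyMock.createMock(MetadataClient.class);
        //   EasyMock.expect(client.describeCluster()).andReturn("cluster-1");
        //   EasyMock.replay(client);
        //   ... exercise the code under test ...
        //   EasyMock.verify(client);

        // Mockito style: stub and verify per interaction, no replay/verify-all phase.
        MetadataClient client = mock(MetadataClient.class);
        when(client.describeCluster()).thenReturn("cluster-1");

        System.out.println(client.describeCluster()); // prints "cluster-1"

        verify(client).describeCluster();
    }
}
{code}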






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12881) Consider Un-Deprecation of Consumer#committed methods

2021-06-02 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12881:
--

 Summary: Consider Un-Deprecation of Consumer#committed methods
 Key: KAFKA-12881
 URL: https://issues.apache.org/jira/browse/KAFKA-12881
 Project: Kafka
  Issue Type: Task
  Components: clients
Reporter: Josep Prat


During KAFKA-8880, following 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-520%3A+Add+overloaded+Consumer%23committed+for+batching+partitions,]
 methods 
_org.apache.kafka.clients.consumer.Consumer#committed(org.apache.kafka.common.TopicPartition)_
 and 
_org.apache.kafka.clients.consumer.Consumer#committed(org.apache.kafka.common.TopicPartition,
 java.time.Duration)_  were deprecated.

 

As both methods are still widely used, it might be worth either removing the 
deprecation of the mentioned methods, or providing deeper reasoning on why they 
should stay deprecated and eventually be removed.

If the latter is decided, then the original KIP should be updated to include 
said reasoning.
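
For illustration, a minimal sketch (not part of this ticket) of how a caller 
could move from the deprecated single-partition overloads to the batched 
overload added by KIP-520; the topic, group and broker address below are made up:
{code:java}
import java.time.Duration;
import java.util.Map;
import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommittedMigrationSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "example-group");           // hypothetical group
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("example-topic", 0); // hypothetical topic

            // Deprecated: one blocking call per partition.
            // OffsetAndMetadata single = consumer.committed(tp);

            // Batched replacement: one call for a whole set of partitions.
            Map<TopicPartition, OffsetAndMetadata> committed =
                consumer.committed(Set.of(tp), Duration.ofSeconds(5));
            OffsetAndMetadata offset = committed.get(tp); // null if nothing was committed
            System.out.println("Committed offset: " + offset);
        }
    }
}
{code}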



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12862) Update ScalaFMT version to latest

2021-05-28 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12862:
--

 Summary: Update ScalaFMT version to latest
 Key: KAFKA-12862
 URL: https://issues.apache.org/jira/browse/KAFKA-12862
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Josep Prat
Assignee: Josep Prat


When upgrading to the latest stable scalafmt version (2.7.5), lots of classes 
need to be reformatted because of the dangling parentheses setting.

I thought it was worth creating an issue, so there is also a place to discuss 
or document possible scalafmt config changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12850) Use JDK16 for builds instead of JDK15

2021-05-26 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12850:
--

 Summary: Use JDK16 for builds instead of JDK15
 Key: KAFKA-12850
 URL: https://issues.apache.org/jira/browse/KAFKA-12850
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Josep Prat


Given that JDK15 reached EOL in March 2021, it is probably worth migrating the 
Jenkins build pipelines to use JDK16 instead, unless there is a compelling 
reason to stay with JDK15.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12834) Remove Deprecated method MockProcessorContext#setTimestamp

2021-05-21 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12834:
--

 Summary: Remove Deprecated method MockProcessorContext#setTimestamp
 Key: KAFKA-12834
 URL: https://issues.apache.org/jira/browse/KAFKA-12834
 Project: Kafka
  Issue Type: Sub-task
  Components: streams-test-utils
Reporter: Josep Prat
 Fix For: 4.0.0


Method org.apache.kafka.streams.processor.MockProcessorContext#setTimestamp was 
deprecated in 3.0.0

See KAFKA-10062 and KIP-622.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12833) Remove Deprecated methods under TopologyTestDriver

2021-05-21 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12833:
--

 Summary: Remove Deprecated methods under TopologyTestDriver
 Key: KAFKA-12833
 URL: https://issues.apache.org/jira/browse/KAFKA-12833
 Project: Kafka
  Issue Type: Sub-task
  Components: streams-test-utils
Reporter: Josep Prat
 Fix For: 4.0.0


The following methods have been deprecated since at least version 2.8:
 * 
org.apache.kafka.streams.TopologyTestDriver.KeyValueStoreFacade#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * 
org.apache.kafka.streams.TopologyTestDriver.WindowStoreFacade#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)

 

*Disclaimer:* these methods might have been deprecated for a longer time, but 
they were definitely moved to this new "hierarchy position" in version 2.8.

 

The move from standalone class to inner class was done under KAFKA-12435.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12832) Remove Deprecated methods under RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapter

2021-05-21 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12832:
--

 Summary: Remove Deprecated methods under 
RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapter
 Key: KAFKA-12832
 URL: https://issues.apache.org/jira/browse/KAFKA-12832
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Josep Prat
 Fix For: 4.0.0


The following methods were deprecated in version 3.0.0:
 * 
org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapter#maxBackgroundCompactions

 * 
org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapter#setBaseBackgroundCompactions

 * 
org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapter#setMaxBackgroundCompactions

 * 
org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapter#maxBackgroundFlushes

 * 
org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapter#setMaxBackgroundFlushes

 

See KAFKA-8897 and KIP-471

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12831) Remove Deprecated method StateStore#init

2021-05-21 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12831:
--

 Summary: Remove Deprecated method StateStore#init
 Key: KAFKA-12831
 URL: https://issues.apache.org/jira/browse/KAFKA-12831
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Josep Prat
 Fix For: 4.0.0


The method 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore) was deprecated in version 2.7

 

See KAFKA-10562 and KIP-478

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12830) Remove Deprecated constructor in TimeWindowedDeserializer and TimeWindowedSerde

2021-05-21 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12830:
--

 Summary: Remove Deprecated constructor in TimeWindowedDeserializer 
and TimeWindowedSerde
 Key: KAFKA-12830
 URL: https://issues.apache.org/jira/browse/KAFKA-12830
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Josep Prat
 Fix For: 4.0.0


The single-argument constructors of the following classes were deprecated in 
version 2.8:
 * 
org.apache.kafka.streams.kstream.TimeWindowedDeserializer#TimeWindowedDeserializer(org.apache.kafka.common.serialization.Deserializer)
 * 
org.apache.kafka.streams.kstream.WindowedSerdes.TimeWindowedSerde#TimeWindowedSerde(org.apache.kafka.common.serialization.Serde)

 

See KAFKA-10366 & KAFKA-9649 and KIP-659
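
For illustration, a hedged sketch of the replacement path; the window size below 
is made up and would have to match whatever the producing side used:
{code:java}
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.TimeWindowedDeserializer;
import org.apache.kafka.streams.kstream.Windowed;
import org.apache.kafka.streams.kstream.WindowedSerdes;

public class TimeWindowedSerdeSketch {
    public static void main(String[] args) {
        long windowSizeMs = 60_000L; // hypothetical window size

        // Deprecated: the single-argument constructor falls back to a default window size.
        // TimeWindowedDeserializer<String> deserializer =
        //     new TimeWindowedDeserializer<>(Serdes.String().deserializer());

        // Replacement: pass the window size explicitly.
        TimeWindowedDeserializer<String> deserializer =
            new TimeWindowedDeserializer<>(Serdes.String().deserializer(), windowSizeMs);

        // Same idea for the serde factory method.
        Serde<Windowed<String>> serde =
            WindowedSerdes.timeWindowedSerdeFrom(String.class, windowSizeMs);

        System.out.println(deserializer.getClass().getSimpleName()
            + " / " + serde.getClass().getSimpleName());
    }
}
{code}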



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12829) Remove Deprecated methods under Topology

2021-05-21 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12829:
--

 Summary: Remove Deprecated methods under Topology
 Key: KAFKA-12829
 URL: https://issues.apache.org/jira/browse/KAFKA-12829
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Josep Prat
 Fix For: 4.0.0


The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...) 
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier) 

 

See KAFKA-10605 and KIP-478.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12828) Remove Deprecated methods under KeyQueryMetadata

2021-05-21 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12828:
--

 Summary: Remove Deprecated methods under KeyQueryMetadata
 Key: KAFKA-12828
 URL: https://issues.apache.org/jira/browse/KAFKA-12828
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Josep Prat
 Fix For: 4.0.0


The following methods under KeyQueryMetadata were deprecated in version 2.7
 * org.apache.kafka.streams.KeyQueryMetadata#getActiveHost

 * org.apache.kafka.streams.KeyQueryMetadata#getStandbyHosts

 * org.apache.kafka.streams.KeyQueryMetadata#getPartition

See KAFKA-10316 and KIP-648
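
As a hedged sketch of the rename (plain accessors replacing the get-prefixed 
ones); the describe helper below is hypothetical, and the KeyQueryMetadata 
instance would come from KafkaStreams#queryMetadataForKey:
{code:java}
import org.apache.kafka.streams.KeyQueryMetadata;
import org.apache.kafka.streams.state.HostInfo;

public class KeyQueryMetadataSketch {

    // Hypothetical helper; 'metadata' would come from KafkaStreams#queryMetadataForKey(...).
    static String describe(KeyQueryMetadata metadata) {
        // Deprecated: metadata.getActiveHost(), metadata.getStandbyHosts(), metadata.getPartition()
        HostInfo active = metadata.activeHost();       // replaces getActiveHost()
        int partition = metadata.partition();          // replaces getPartition()
        int standbys = metadata.standbyHosts().size(); // replaces getStandbyHosts()
        return active.host() + ":" + active.port()
            + " (partition " + partition + ", standby hosts: " + standbys + ")";
    }
}
{code}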



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12827) Remove Deprecated method KafkaStreams#setUncaughtExceptionHandler

2021-05-21 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12827:
--

 Summary: Remove Deprecated method 
KafkaStreams#setUncaughtExceptionHandler
 Key: KAFKA-12827
 URL: https://issues.apache.org/jira/browse/KAFKA-12827
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Josep Prat
 Fix For: 4.0.0


Method 
org.apache.kafka.streams.KafkaStreams#setUncaughtExceptionHandler(java.lang.Thread.UncaughtExceptionHandler)
 was deprecated in 2.8

 

See KAFKA-9331
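
For illustration only, a sketch of the non-deprecated handler that was 
introduced alongside this deprecation (KIP-671); the topology, config values 
and the REPLACE_THREAD policy below are made-up examples:
{code:java}
import java.util.Properties;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse;

public class ExceptionHandlerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("application.id", "example-app");       // hypothetical values
        props.put("bootstrap.servers", "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("example-input").to("example-output"); // trivial topology

        KafkaStreams streams = new KafkaStreams(builder.build(), props);

        // Deprecated: streams.setUncaughtExceptionHandler((Thread t, Throwable e) -> ...);

        // Replacement (KIP-671): decide per exception how the runtime should react.
        streams.setUncaughtExceptionHandler(exception ->
            StreamThreadExceptionResponse.REPLACE_THREAD);

        streams.start();
    }
}
{code}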

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12826) Remove Deprecated Class Serdes (Streams)

2021-05-21 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12826:
--

 Summary: Remove Deprecated Class Serdes (Streams)
 Key: KAFKA-12826
 URL: https://issues.apache.org/jira/browse/KAFKA-12826
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Josep Prat
 Fix For: 4.0.0


Class org.apache.kafka.streams.scala.Serdes was deprecated in version 2.7

See KAFKA-10020 and KIP-616



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12825) Remove Deprecated method StreamsBuilder#addGlobalStore

2021-05-21 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12825:
--

 Summary: Remove Deprecated method StreamsBuilder#addGlobalStore
 Key: KAFKA-12825
 URL: https://issues.apache.org/jira/browse/KAFKA-12825
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Josep Prat
 Fix For: 4.0.0


Method org.apache.kafka.streams.scala.StreamsBuilder#addGlobalStore was 
deprecated in 2.7

 

See KAFKA-10379 and KIP-478



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12824) Remove Deprecated method KStream#branch

2021-05-21 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12824:
--

 Summary: Remove Deprecated method KStream#branch
 Key: KAFKA-12824
 URL: https://issues.apache.org/jira/browse/KAFKA-12824
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Josep Prat
 Fix For: 4.0.0


The method org.apache.kafka.streams.scala.kstream.KStream#branch was deprecated 
in version 2.8

 

See KAFKA-5488 and KIP-418
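
Although this ticket targets the Scala wrapper, a hedged sketch of the Java DSL 
counterpart shows the KIP-418 replacement (split() plus Branched); the topic 
names and predicates below are made up:
{code:java}
import java.util.Map;

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Branched;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Named;

public class BranchMigrationSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, Long> input = builder.stream("example-numbers"); // hypothetical topic

        // Deprecated: KStream#branch(Predicate...) returning an array of streams.
        // KStream<String, Long>[] branches = input.branch((k, v) -> v % 2 == 0, (k, v) -> true);

        // Replacement (KIP-418): named branches via split() and Branched.
        Map<String, KStream<String, Long>> branches = input
            .split(Named.as("num-"))
            .branch((key, value) -> value % 2 == 0, Branched.as("even"))
            .defaultBranch(Branched.as("odd"));

        branches.get("num-even").to("even-topic");
        branches.get("num-odd").to("odd-topic");
    }
}
{code}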



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12823) Remove Deprecated method KStream

2021-05-21 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12823:
--

 Summary: Remove Deprecated method KStream
 Key: KAFKA-12823
 URL: https://issues.apache.org/jira/browse/KAFKA-12823
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Josep Prat
 Fix For: 4.0.0


The method org.apache.kafka.streams.scala.kstream.KStream#through was deprecated 
in version 2.6.
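
Again, the ticket targets the Scala wrapper, but as a hedged sketch, the Java 
DSL counterpart replaces through(topic) with repartition() from KIP-221; the 
names and partition count below are illustrative:
{code:java}
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Repartitioned;

public class ThroughMigrationSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("example-input"); // hypothetical topic

        // Deprecated: input.through("manual-repartition-topic") required a pre-created topic.

        // Replacement (KIP-221): Kafka Streams manages the repartition topic itself.
        KStream<String, String> repartitioned = input.repartition(
            Repartitioned.<String, String>as("by-key").withNumberOfPartitions(6));

        repartitioned.to("example-output");
    }
}
{code}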



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12822) Remove Deprecated APIs of Kafka Streams in 4.0

2021-05-21 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12822:
--

 Summary: Remove Deprecated APIs of Kafka Streams in 4.0
 Key: KAFKA-12822
 URL: https://issues.apache.org/jira/browse/KAFKA-12822
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Josep Prat
 Fix For: 4.0.0


This is an umbrella ticket that tries to collect all APIs under Kafka Streams 
that were deprecated after 2.5 (the current threshold for being removed in 
version 3.0.0).

Each subtask will be focusing on a specific API, so it's easy to discuss whether 
it should be removed by 4.0.0 or maybe even at a later point.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12814) Remove Deprecated method StreamsConfig#getConsumerConfig

2021-05-19 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12814:
--

 Summary: Remove Deprecated method StreamsConfig#getConsumerConfig
 Key: KAFKA-12814
 URL: https://issues.apache.org/jira/browse/KAFKA-12814
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Josep Prat
 Fix For: 3.0.0


Remove the method that was deprecated in KIP-276: 
"StreamsConfig#getConsumerConfig". It has been deprecated since 2.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12813) Remove Deprecated schedule method in ProcessorContext

2021-05-19 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12813:
--

 Summary: Remove Deprecated schedule method in ProcessorContext
 Key: KAFKA-12813
 URL: https://issues.apache.org/jira/browse/KAFKA-12813
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Josep Prat
 Fix For: 3.0.0


Method {{org.apache.kafka.streams.processor.ProcessorContext#schedule(long, 
org.apache.kafka.streams.processor.PunctuationType, 
org.apache.kafka.streams.processor.Punctuator)}} has been deprecated since version 2.1.

 

As far as I understand, it has been deprecated for long enough to be removed in 
version 3.0.

The method was deprecated during task: 
https://issues.apache.org/jira/browse/KAFKA-7277
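
For reference, a hedged sketch of the Duration-based overload that replaces the 
deprecated long-millisecond variant (KIP-358); the processor itself is a 
made-up example:
{code:java}
import java.time.Duration;

import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;

// Hypothetical processor showing the Duration-based schedule overload.
public class PunctuatingProcessor implements Processor<String, String> {

    private ProcessorContext context;

    @Override
    public void init(final ProcessorContext context) {
        this.context = context;

        // Deprecated: context.schedule(60_000L, PunctuationType.WALL_CLOCK_TIME, ts -> context.commit());

        // Replacement: Duration-based overload.
        context.schedule(Duration.ofMinutes(1), PunctuationType.WALL_CLOCK_TIME,
            timestamp -> context.commit());
    }

    @Override
    public void process(final String key, final String value) {
        context.forward(key, value);
    }

    @Override
    public void close() { }
}
{code}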

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12810) Remove deprecated TopologyDescription.Source#topics

2021-05-18 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12810:
--

 Summary: Remove deprecated TopologyDescription.Source#topics
 Key: KAFKA-12810
 URL: https://issues.apache.org/jira/browse/KAFKA-12810
 Project: Kafka
  Issue Type: Sub-task
Reporter: Josep Prat
 Fix For: 3.0.0


As identified on https://issues.apache.org/jira/browse/KAFKA-12419



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12809) Remove De

2021-05-18 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12809:
--

 Summary: Remove De
 Key: KAFKA-12809
 URL: https://issues.apache.org/jira/browse/KAFKA-12809
 Project: Kafka
  Issue Type: Task
Reporter: Josep Prat






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12808) Remove Deprecated methods under StreamsMetricsImpl

2021-05-18 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12808:
--

 Summary: Remove Deprecated methods under StreamsMetricsImpl
 Key: KAFKA-12808
 URL: https://issues.apache.org/jira/browse/KAFKA-12808
 Project: Kafka
  Issue Type: Task
  Components: streams
Reporter: Josep Prat
 Fix For: 3.0.0


There are 4 methods in StreamsMetricsImpl that have been deprecated since 2.5:
* 
org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl#recordLatency
* 
org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl#recordThroughput
* 
org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl#addLatencyAndThroughputSensor
* 
org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl#addThroughputSensor

As far as I understand, they have all been deprecated for long enough to be 
removed in version 3.0.

Those methods were deprecated during task: 
https://issues.apache.org/jira/browse/KAFKA-9230



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12796) Removal of deprecated classes under `streams-scala`

2021-05-17 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12796:
--

 Summary: Removal of deprecated classes under `streams-scala`
 Key: KAFKA-12796
 URL: https://issues.apache.org/jira/browse/KAFKA-12796
 Project: Kafka
  Issue Type: Task
Reporter: Josep Prat


There are 3 different classes that are deprecated under the streams-scala 
submodule:
 * 
streams/streams-scala/src/main/scala/org/apache/kafka/streams/scala/kstream/Suppressed.scala
 * 
streams/streams-scala/src/main/scala/org/apache/kafka/streams/scala/FunctionConversions.scala
 * 
streams/streams-scala/src/main/scala/org/apache/kafka/streams/scala/Serdes.scala

As far as I can tell, none of them is in use internally, so they could be 
removed for release 3.0.0.

 

Does this change require a KIP?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12769) Backport of KAFKA-8562

2021-05-10 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12769:
--

 Summary: Backport of KAFKA-8562
 Key: KAFKA-12769
 URL: https://issues.apache.org/jira/browse/KAFKA-12769
 Project: Kafka
  Issue Type: Task
  Components: network
Reporter: Josep Prat
Assignee: Josep Prat


KAFKA-8562 solved the issue of SASL performing a reverse DNS lookup to resolve 
the IP.

This bug fix should be backported so it's present in the 2.7.x and 2.8.x versions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)