[jira] [Created] (KAFKA-8099) java.nio.file.AccessDeniedException: .swap renamed to .log failed

2019-03-12 Thread wade wu (JIRA)
wade wu created KAFKA-8099:
--

 Summary: java.nio.file.AccessDeniedException: .swap renamed to 
.log failed
 Key: KAFKA-8099
 URL: https://issues.apache.org/jira/browse/KAFKA-8099
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 1.1.1
Reporter: wade wu


[2019-03-11 16:02:00,021] ERROR Error while loading log dir 
D:\data\Kafka\kafka-datalogs (kafka.log.LogManager)
java.nio.file.FileSystemException: 
D:\data\Kafka\kafka-datalogs\kattesttopic-23\03505795.log: The 
process cannot access the file because it is being used by another process.

at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:376)
 at 
sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
 at java.nio.file.Files.move(Files.java:1395)
 at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
 at org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:223)
 at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:415)
 at kafka.log.Log.replaceSegments(Log.scala:1697)
 at kafka.log.Log$$anonfun$completeSwapOperations$1.apply(Log.scala:391)
 at kafka.log.Log$$anonfun$completeSwapOperations$1.apply(Log.scala:380)
 at scala.collection.immutable.Set$Set1.foreach(Set.scala:94)
 at kafka.log.Log.completeSwapOperations(Log.scala:380)
 at kafka.log.Log.loadSegments(Log.scala:408)
 at kafka.log.Log.&lt;init&gt;(Log.scala:216)
 at kafka.log.Log$.apply(Log.scala:1788)
 at kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:260)
 at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$11$$anonfun$apply$15$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:340)
 at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
 Suppressed: java.nio.file.AccessDeniedException: 
D:\data\Kafka\kafka-datalogs\kattesttopic-23\03505795.log.swap -> 
D:\data\Kafka\kafka-datalogs\kattesttopic-23\03505795.log
 at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
 at 
sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
 at java.nio.file.Files.move(Files.java:1395)
 at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:694)
 ... 18 more
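For context, the failing call in the trace above, Utils.atomicMoveWithFallback, attempts an atomic rename first and falls back to a plain move, attaching the first failure as a suppressed exception (which matches the "Suppressed:" line in the trace). A minimal sketch of that shape, not the actual Kafka code, is:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

class AtomicMove {
    // Sketch of an "atomic move with fallback": try an atomic rename first,
    // and fall back to a plain replacing move if the atomic one fails.
    // On Windows, either step can fail with AccessDeniedException if another
    // process (or an unclosed memory-mapped file) still holds the file open.
    static void atomicMoveWithFallback(Path source, Path target) throws IOException {
        try {
            Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException atomicFailure) {
            try {
                Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
            } catch (IOException fallbackFailure) {
                // Preserve the original atomic-move failure, as in the trace above
                fallbackFailure.addSuppressed(atomicFailure);
                throw fallbackFailure;
            }
        }
    }
}
```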



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7965) Flaky Test ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup

2019-03-12 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16791096#comment-16791096
 ] 

Matthias J. Sax commented on KAFKA-7965:


One more: 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3162/testReport/junit/kafka.api/ConsumerBounceTest/testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup/]

> Flaky Test 
> ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup
> 
>
> Key: KAFKA-7965
> URL: https://issues.apache.org/jira/browse/KAFKA-7965
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Stanislav Kozlovski
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets for 
> all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/21/]
> {quote}java.lang.AssertionError: Received 0, expected at least 68 at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.api.ConsumerBounceTest.receiveAndCommit(ConsumerBounceTest.scala:557) 
> at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1(ConsumerBounceTest.scala:320)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1$adapted(ConsumerBounceTest.scala:319)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:319){quote}





[jira] [Commented] (KAFKA-7965) Flaky Test ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup

2019-03-12 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16791083#comment-16791083
 ] 

Matthias J. Sax commented on KAFKA-7965:


Failed again: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3460/tests]

> Flaky Test 
> ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup
> 
>
> Key: KAFKA-7965
> URL: https://issues.apache.org/jira/browse/KAFKA-7965
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Stanislav Kozlovski
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets for 
> all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/21/]
> {quote}java.lang.AssertionError: Received 0, expected at least 68 at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.api.ConsumerBounceTest.receiveAndCommit(ConsumerBounceTest.scala:557) 
> at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1(ConsumerBounceTest.scala:320)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1$adapted(ConsumerBounceTest.scala:319)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:319){quote}





[jira] [Updated] (KAFKA-7944) Add more natural Suppress test

2019-03-12 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-7944:
---
Fix Version/s: 2.3.0

> Add more natural Suppress test
> --
>
> Key: KAFKA-7944
> URL: https://issues.apache.org/jira/browse/KAFKA-7944
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.3.0
>
>
> This can be an integration or system test.
> The idea is to test with tighter time bounds (windows of, say, 30 seconds) and 
> to use real system time, without artificially adding any time for verification.
> Currently, all the tests rely on artificially advancing system time, which 
> should be reliable, but you never know. So, we want to add a test that works 
> exactly like production code.





[jira] [Resolved] (KAFKA-7944) Add more natural Suppress test

2019-03-12 Thread John Roesler (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-7944.
-
Resolution: Fixed

> Add more natural Suppress test
> --
>
> Key: KAFKA-7944
> URL: https://issues.apache.org/jira/browse/KAFKA-7944
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
>
> This can be an integration or system test.
> The idea is to test with tighter time bounds (windows of, say, 30 seconds) and 
> to use real system time, without artificially adding any time for verification.
> Currently, all the tests rely on artificially advancing system time, which 
> should be reliable, but you never know. So, we want to add a test that works 
> exactly like production code.





[jira] [Commented] (KAFKA-8094) Iterating over cache with get(key) is inefficient

2019-03-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16790849#comment-16790849
 ] 

ASF GitHub Bot commented on KAFKA-8094:
---

ableegoldman commented on pull request #6433: KAFKA-8094: Iterating over cache 
with get(key) is inefficient
URL: https://github.com/apache/kafka/pull/6433
 
 
   Use concurrent data structure for the underlying cache in NamedCache, and 
iterate over it with subMap instead of many calls to get()
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Iterating over cache with get(key) is inefficient 
> --
>
> Key: KAFKA-8094
> URL: https://issues.apache.org/jira/browse/KAFKA-8094
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Priority: Major
>  Labels: streams
>
> Currently, range queries in the caching layer are implemented by creating an 
> iterator over the subset of keys in the range, and calling get() on the 
> underlying TreeMap for each key. While this protects against 
> ConcurrentModificationException, we can improve performance by replacing the 
> TreeMap with a concurrent data structure such as ConcurrentSkipListMap and 
> then just iterating over a subMap.
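The subMap-based iteration proposed above can be sketched as follows. The class and method names are hypothetical; this only illustrates ConcurrentSkipListMap range views, not the actual NamedCache implementation:

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

class RangeScan {
    // Sketch of the proposed change: keep entries in a ConcurrentSkipListMap
    // so a range query can iterate a subMap view directly instead of issuing
    // one get(key) per key. Skip-list iterators are weakly consistent, so
    // concurrent inserts/evictions do not throw ConcurrentModificationException.
    static final ConcurrentSkipListMap<String, Integer> cache = new ConcurrentSkipListMap<>();

    static int sumRange(String from, String to) {
        // subMap view: 'from' inclusive, 'to' exclusive; no per-key lookups
        NavigableMap<String, Integer> range = cache.subMap(from, true, to, false);
        int sum = 0;
        for (Map.Entry<String, Integer> e : range.entrySet()) {
            sum += e.getValue();
        }
        return sum;
    }
}
```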





[jira] [Created] (KAFKA-8098) Flaky Test AdminClientIntegrationTest#testConsumerGroups

2019-03-12 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8098:
--

 Summary: Flaky Test AdminClientIntegrationTest#testConsumerGroups
 Key: KAFKA-8098
 URL: https://issues.apache.org/jira/browse/KAFKA-8098
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0


[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3459/tests]
{quote}java.lang.AssertionError: expected:<2> but was:<0>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:633)
at 
kafka.api.AdminClientIntegrationTest.testConsumerGroups(AdminClientIntegrationTest.scala:1194){quote}
STDOUT
{quote}[2019-03-12 10:52:33,482] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
fetcherId=0] Error for partition topic-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-12 10:52:33,485] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=0] Error for partition topic-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-12 10:52:35,880] WARN Unable to read additional data from client 
sessionid 0x104458575770003, likely client has closed socket 
(org.apache.zookeeper.server.NIOServerCnxn:376)
[2019-03-12 10:52:38,596] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
fetcherId=0] Error for partition elect-preferred-leaders-topic-1-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-12 10:52:38,797] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
fetcherId=0] Error for partition elect-preferred-leaders-topic-2-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-12 10:52:51,998] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
fetcherId=0] Error for partition topic-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-12 10:52:52,005] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
fetcherId=0] Error for partition topic-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-12 10:53:13,750] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=0] Error for partition mytopic2-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-12 10:53:13,754] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
fetcherId=0] Error for partition mytopic2-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-12 10:53:13,755] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
fetcherId=0] Error for partition mytopic2-1 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-12 10:53:13,760] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
fetcherId=0] Error for partition mytopic2-1 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-12 10:53:13,757] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
fetcherId=0] Error for partition mytopic2-2 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-12 10:53:13,769] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
fetcherId=0] Error for partition mytopic-1 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-12 10:53:13,778] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
fetcherId=0] Error for partition mytopic-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-12 10:53:15,062] WARN Unable to read additional data from client 
sessionid 0x10445861298, likely client has closed socket 

[jira] [Commented] (KAFKA-8094) Iterating over cache with get(key) is inefficient

2019-03-12 Thread Guozhang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16790767#comment-16790767
 ] 

Guozhang Wang commented on KAFKA-8094:
--

I've not carefully looked into the unit tests that would be affected by this 
change; if you can put up a quick PR to show the impact, we can discuss further.

> Iterating over cache with get(key) is inefficient 
> --
>
> Key: KAFKA-8094
> URL: https://issues.apache.org/jira/browse/KAFKA-8094
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Priority: Major
>  Labels: streams
>
> Currently, range queries in the caching layer are implemented by creating an 
> iterator over the subset of keys in the range, and calling get() on the 
> underlying TreeMap for each key. While this protects against 
> ConcurrentModificationException, we can improve performance by replacing the 
> TreeMap with a concurrent data structure such as ConcurrentSkipListMap and 
> then just iterating over a subMap.





[jira] [Commented] (KAFKA-8094) Iterating over cache with get(key) is inefficient

2019-03-12 Thread Guozhang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16790766#comment-16790766
 ] 

Guozhang Wang commented on KAFKA-8094:
--

[~ableegoldman] I'd agree with you:

1) The underlying store's iterator from the built-in RocksDB is also taken on a 
snapshot of the store, so if more updates are applied by the stream thread while 
the IQ caller thread is iterating the store, they will not be reflected either.
2) Hence, for the caching store, if some entries are added after the iterator's 
snapshot, that should be fine since, as 1) dictates, the underlying store does 
not reflect the latest image either; if some entries are evicted after the 
iterator's snapshot, the merge-iterator over the cache and the underlying store 
should cover that case as well (it favors the entry from the cache iterator for 
the same key).
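The favor-the-cache merge described in point 2) can be illustrated with a toy sketch, with plain sorted maps standing in for the two snapshot iterators; this is not the actual merge-iterator code:

```java
import java.util.SortedMap;
import java.util.TreeMap;

class MergedView {
    // Sketch of the merge described above: combine a cache snapshot and an
    // underlying-store snapshot, preferring the cache's entry when both
    // contain the same key, so each key is still seen exactly once even if
    // it was evicted from the cache and flushed to the store mid-iteration.
    static SortedMap<String, String> merged(SortedMap<String, String> cache,
                                            SortedMap<String, String> store) {
        TreeMap<String, String> out = new TreeMap<>(store);
        out.putAll(cache); // cache wins on key collisions
        return out;
    }
}
```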

> Iterating over cache with get(key) is inefficient 
> --
>
> Key: KAFKA-8094
> URL: https://issues.apache.org/jira/browse/KAFKA-8094
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Priority: Major
>  Labels: streams
>
> Currently, range queries in the caching layer are implemented by creating an 
> iterator over the subset of keys in the range, and calling get() on the 
> underlying TreeMap for each key. While this protects against 
> ConcurrentModificationException, we can improve performance by replacing the 
> TreeMap with a concurrent data structure such as ConcurrentSkipListMap and 
> then just iterating over a subMap.





[jira] [Commented] (KAFKA-8063) Flaky Test WorkerTest#testConverterOverrides

2019-03-12 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16790764#comment-16790764
 ] 

Matthias J. Sax commented on KAFKA-8063:


One more: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3458/tests]

> Flaky Test WorkerTest#testConverterOverrides
> 
>
> Key: KAFKA-8063
> URL: https://issues.apache.org/jira/browse/KAFKA-8063
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20068/testReport/junit/org.apache.kafka.connect.runtime/WorkerTest/testConverterOverrides/]
> {quote}java.lang.AssertionError: Expectation failure on verify: 
> WorkerSourceTask.run(): expected: 1, actual: 1 at 
> org.easymock.internal.MocksControl.verify(MocksControl.java:242){quote}
> STDOUT
> {quote}[2019-03-07 02:28:25,482] (Test worker) ERROR Failed to start 
> connector test-connector (org.apache.kafka.connect.runtime.Worker:234) 
> org.apache.kafka.connect.errors.ConnectException: Failed to find Connector at 
> org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:46)
>  at 
> org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:101)
>  at 
> org.easymock.internal.ClassProxyFactory$MockMethodInterceptor.intercept(ClassProxyFactory.java:97)
>  at 
> org.apache.kafka.connect.runtime.isolation.Plugins$$EnhancerByCGLIB$$205db954.newConnector()
>  at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:226) 
> at 
> org.apache.kafka.connect.runtime.WorkerTest.testStartConnectorFailure(WorkerTest.java:256)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68) at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:326)
>  at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89) at 
> org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:310)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:131)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.access$100(PowerMockJUnit47RunnerDelegateImpl.java:59)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner$TestExecutorStatement.evaluate(PowerMockJUnit47RunnerDelegateImpl.java:147)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.evaluateStatement(PowerMockJUnit47RunnerDelegateImpl.java:107)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:298)
>  at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87) at 
> org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50) at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:218)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:160)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:134)
>  at 
> org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34) at 
> org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44) at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:136)
>  at 
> org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:117)
>  at 
> 

[jira] [Commented] (KAFKA-7978) Flaky Test SaslSslAdminClientIntegrationTest#testConsumerGroups

2019-03-12 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16790763#comment-16790763
 ] 

Matthias J. Sax commented on KAFKA-7978:


Failed again: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3458/tests]

> Flaky Test SaslSslAdminClientIntegrationTest#testConsumerGroups
> ---
>
> Key: KAFKA-7978
> URL: https://issues.apache.org/jira/browse/KAFKA-7978
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets for 
> all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/25/]
> {quote}java.lang.AssertionError: expected:<2> but was:<0> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:645) at 
> org.junit.Assert.assertEquals(Assert.java:631) at 
> kafka.api.AdminClientIntegrationTest.testConsumerGroups(AdminClientIntegrationTest.scala:1157)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method){quote}





[jira] [Commented] (KAFKA-7944) Add more natural Suppress test

2019-03-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16790751#comment-16790751
 ] 

ASF GitHub Bot commented on KAFKA-7944:
---

guozhangwang commented on pull request #6382: KAFKA-7944: Improve Suppress test 
coverage
URL: https://github.com/apache/kafka/pull/6382
 
 
   
 



> Add more natural Suppress test
> --
>
> Key: KAFKA-7944
> URL: https://issues.apache.org/jira/browse/KAFKA-7944
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
>
> This can be an integration or system test.
> The idea is to test with tighter time bounds (windows of, say, 30 seconds) and 
> to use real system time, without artificially adding any time for verification.
> Currently, all the tests rely on artificially advancing system time, which 
> should be reliable, but you never know. So, we want to add a test that works 
> exactly like production code.





[jira] [Commented] (KAFKA-3522) Consider adding version information into rocksDB storage format

2019-03-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-3522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16790713#comment-16790713
 ] 

ASF GitHub Bot commented on KAFKA-3522:
---

bbejeck commented on pull request #6356: KAFKA-3522: add missing guards for 
TimestampedXxxStore
URL: https://github.com/apache/kafka/pull/6356
 
 
   
 



> Consider adding version information into rocksDB storage format
> ---
>
> Key: KAFKA-3522
> URL: https://issues.apache.org/jira/browse/KAFKA-3522
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>Priority: Major
>  Labels: architecture
>
> Kafka Streams does not introduce any modifications to the data format in the 
> underlying Kafka protocol, but it does use RocksDB for persistent state 
> storage, and currently its data format is fixed and hard-coded. We want to 
> consider the evolution path for the future when we change the data format, and 
> hence having some version info stored along with the storage file / directory 
> would be useful.
> This information could even live outside the storage file; for example, we 
> could just use a small "version indicator" file in the RocksDB directory for 
> this purpose. Thoughts? [~enothereska] [~jkreps]
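The "version indicator" file idea could look roughly like the following sketch; the file name, version numbering, and treatment of a missing file are all hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class StoreVersionFile {
    // Sketch of a sidecar "version indicator" file in the RocksDB state
    // directory: a tiny file recording the on-disk format version, checked
    // before the store is opened so format changes can be detected/migrated.
    static final String VERSION_FILE = "rocksdb.version"; // hypothetical name

    static void writeVersion(Path storeDir, int version) throws IOException {
        Files.writeString(storeDir.resolve(VERSION_FILE), Integer.toString(version));
    }

    static int readVersion(Path storeDir) throws IOException {
        Path f = storeDir.resolve(VERSION_FILE);
        // Missing file: data written before versioning was introduced
        if (!Files.exists(f)) {
            return 0;
        }
        return Integer.parseInt(Files.readString(f).trim());
    }
}
```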





[jira] [Comment Edited] (KAFKA-7157) Connect TimestampConverter SMT doesn't handle null values

2019-03-12 Thread Jeff Beagley (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16790069#comment-16790069
 ] 

Jeff Beagley edited comment on KAFKA-7157 at 3/12/19 2:59 PM:
--

I believe that, through writing my own SMT, I have found a workaround. My 
transform accepts all date fields and immediately sets the schema of each field 
to OPTIONAL_STRING_SCHEMA, and then I format the value appropriately if it is 
not null.

Code for my SMT (specific to converting a Unix timestamp to a smalldatetime in 
SQL) can be found here: [https://github.com/jeffbeagley/kafka-connect-transform]. 
Hope this helps!


was (Author: jeffbeagley):
I believe, through writing my own SMT, I have found a workaround (will post to 
github shortly). My transform accepts all date fields and immediately sets the 
schema of each field to OPTIONAL_STRING_SCHEMA, and then I format appropriately 
if the value is other than null. 

> Connect TimestampConverter SMT doesn't handle null values
> -
>
> Key: KAFKA-7157
> URL: https://issues.apache.org/jira/browse/KAFKA-7157
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Randall Hauch
>Assignee: Valeria Vasylieva
>Priority: Major
>
> TimestampConverter SMT is not able to handle null values (in any versions), 
> so it's always trying to apply the transformation to the value. Instead, it 
> needs to check for null and use the default value for the new schema's field.
> {noformat}
> [2018-07-03 02:31:52,490] ERROR Task MySourceConnector-2 threw an uncaught 
> and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask) 
> java.lang.NullPointerException 
> at 
> org.apache.kafka.connect.transforms.TimestampConverter$2.toRaw(TimestampConverter.java:137)
>  
> at 
> org.apache.kafka.connect.transforms.TimestampConverter.convertTimestamp(TimestampConverter.java:440)
>  
> at 
> org.apache.kafka.connect.transforms.TimestampConverter.applyValueWithSchema(TimestampConverter.java:368)
>  
> at 
> org.apache.kafka.connect.transforms.TimestampConverter.applyWithSchema(TimestampConverter.java:358)
>  
> at 
> org.apache.kafka.connect.transforms.TimestampConverter.apply(TimestampConverter.java:275)
>  
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:38)
>  
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:435)
>  
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:264) 
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:182)
>  
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:150)
>  
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146) 
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190) 
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  
> at java.lang.Thread.run(Thread.java:748) 
> [2018-07-03 02:31:52,491] ERROR Task is being killed and will not recover 
> until manually restarted (org.apache.kafka.connect.runtime.WorkerTask) 
> {noformat}
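The fix the reporter describes can be sketched as follows (a hypothetical helper, not the actual TimestampConverter code; the method name and parameters are illustrative): check the field value for null before converting, and fall back to the new schema field's default instead of dereferencing null.

```java
import java.util.Date;

class NullSafeTimestampConvert {
    // Hypothetical sketch of the suggested fix: if the record carries null for
    // the field (the case that triggered the NPE above), return the new
    // schema's default value instead of attempting the conversion.
    static Object convertOrDefault(Object value, Object schemaDefault) {
        if (value == null) {
            return schemaDefault;
        }
        // Stand-in for the real conversion, e.g. Date -> epoch milliseconds.
        if (value instanceof Date) {
            return ((Date) value).getTime();
        }
        return value;
    }
}
```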



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8097) Kafka broker crashes with java.nio.file.FileSystemException Exception

2019-03-12 Thread Kartik (JIRA)
Kartik created KAFKA-8097:
-

 Summary: Kafka broker crashes with 
java.nio.file.FileSystemException Exception
 Key: KAFKA-8097
 URL: https://issues.apache.org/jira/browse/KAFKA-8097
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.1.1
Reporter: Kartik
Assignee: Kartik


Kafka broker crashes with the exception below while deleting segments:

 

Exception thrown:

The process cannot access the file because it is being used by another process.

at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
 at 
sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
 at java.nio.file.Files.move(Files.java:1395)
 at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:809)
 at kafka.log.AbstractIndex.renameTo(AbstractIndex.scala:205)
 at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:489)
 at kafka.log.Log.asyncDeleteSegment(Log.scala:1907)
 at kafka.log.Log.deleteSegment(Log.scala:1892)
 at kafka.log.Log.$anonfun$deleteSegments$3(Log.scala:1438)
 at kafka.log.Log.$anonfun$deleteSegments$3$adapted(Log.scala:1438)
 at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
 at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
 at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1438)
 at scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:12)
 at kafka.log.Log.maybeHandleIOException(Log.scala:1996)
 at kafka.log.Log.deleteSegments(Log.scala:1429)
 at kafka.log.Log.deleteOldSegments(Log.scala:1424)
 at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1502)
 at kafka.log.Log.deleteOldSegments(Log.scala:1492)
 at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:898)
 at kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:895)
 at scala.collection.immutable.List.foreach(List.scala:388)
 at kafka.log.LogManager.cleanupLogs(LogManager.scala:895)
 at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:395)
 at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
 at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:63)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
 Suppressed: java.nio.file.FileSystemException: 
C:\Users\Documents\Kafka-runner\kafka\bin\windows\UsersDocumentsKafka-runnerkafkakafka-logs\test-9\.index
 -> 
C:\Users\Documents\Kafka-runner\kafka\bin\windows\UsersDocumentsKafka-runnerkafkakafka-logs\test-9\.index.deleted:
 The process cannot access the file because it is being used by another process.

at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
 at 
sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
 at java.nio.file.Files.move(Files.java:1395)
 at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:806)
 ... 30 more
[2019-03-12 12:14:12,830] INFO [ReplicaManager broker=0] Stopping serving 
replicas in dir
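For context, the rename near the top of the stack goes through Utils.atomicMoveWithFallback. A rough, simplified sketch of that pattern is below (assumption: the usual "atomic first, replacing move as fallback" shape; the real Kafka method does more, e.g. flushing the parent directory). On Windows, either attempt fails when another process, or a still-mapped index file, holds the file open, which is the error reported here.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

class AtomicMoveSketch {
    // Simplified sketch: try an atomic rename first; if the file system
    // refuses (e.g. crossing file systems), fall back to a replacing move.
    static void atomicMoveWithFallback(Path source, Path target) throws IOException {
        try {
            Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException outer) {
            Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```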

 





[jira] [Commented] (KAFKA-6188) Broker fails with FATAL Shutdown - log dirs have failed

2019-03-12 Thread prehistoricpenguin (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16790418#comment-16790418
 ] 

prehistoricpenguin commented on KAFKA-6188:
---

Hi, [~lindong], [~TeilaRei]

I have made a fix for this issue, and our manual testing passed, via this 
[PR|https://github.com/apache/kafka/pull/6403]. Could you please review 
my code? Thank you so much!

> Broker fails with FATAL Shutdown - log dirs have failed
> ---
>
> Key: KAFKA-6188
> URL: https://issues.apache.org/jira/browse/KAFKA-6188
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, log
>Affects Versions: 1.0.0, 1.0.1
> Environment: Windows 10
>Reporter: Valentina Baljak
>Assignee: Dong Lin
>Priority: Blocker
>  Labels: windows
> Attachments: Segments are opened before deletion, 
> kafka_2.10-0.10.2.1.zip, output.txt
>
>
> Just started with version 1.0.0 after 4-5 months of using 0.10.2.1. The 
> test environment is very simple, with only one producer and one consumer. 
> Initially everything started fine, and standalone tests worked as expected. 
> However, when running my code, the Kafka clients fail after approximately 10 
> minutes. Kafka won't start after that and fails with the same error. 
> Deleting the logs allows it to start again, but then the same problem recurs.
> Here is the error traceback:
> [2017-11-08 08:21:57,532] INFO Starting log cleanup with a period of 30 
> ms. (kafka.log.LogManager)
> [2017-11-08 08:21:57,548] INFO Starting log flusher with a default period of 
> 9223372036854775807 ms. (kafka.log.LogManager)
> [2017-11-08 08:21:57,798] INFO Awaiting socket connections on 0.0.0.0:9092. 
> (kafka.network.Acceptor)
> [2017-11-08 08:21:57,813] INFO [SocketServer brokerId=0] Started 1 acceptor 
> threads (kafka.network.SocketServer)
> [2017-11-08 08:21:57,829] INFO [ExpirationReaper-0-Produce]: Starting 
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> [2017-11-08 08:21:57,845] INFO [ExpirationReaper-0-DeleteRecords]: Starting 
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> [2017-11-08 08:21:57,845] INFO [ExpirationReaper-0-Fetch]: Starting 
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> [2017-11-08 08:21:57,845] INFO [LogDirFailureHandler]: Starting 
> (kafka.server.ReplicaManager$LogDirFailureHandler)
> [2017-11-08 08:21:57,860] INFO [ReplicaManager broker=0] Stopping serving 
> replicas in dir C:\Kafka\kafka_2.12-1.0.0\kafka-logs 
> (kafka.server.ReplicaManager)
> [2017-11-08 08:21:57,860] INFO [ReplicaManager broker=0] Partitions  are 
> offline due to failure on log directory C:\Kafka\kafka_2.12-1.0.0\kafka-logs 
> (kafka.server.ReplicaManager)
> [2017-11-08 08:21:57,860] INFO [ReplicaFetcherManager on broker 0] Removed 
> fetcher for partitions  (kafka.server.ReplicaFetcherManager)
> [2017-11-08 08:21:57,892] INFO [ReplicaManager broker=0] Broker 0 stopped 
> fetcher for partitions  because they are in the failed log dir 
> C:\Kafka\kafka_2.12-1.0.0\kafka-logs (kafka.server.ReplicaManager)
> [2017-11-08 08:21:57,892] INFO Stopping serving logs in dir 
> C:\Kafka\kafka_2.12-1.0.0\kafka-logs (kafka.log.LogManager)
> [2017-11-08 08:21:57,892] FATAL Shutdown broker because all log dirs in 
> C:\Kafka\kafka_2.12-1.0.0\kafka-logs have failed (kafka.log.LogManager)





[jira] [Created] (KAFKA-8095) update ReplicationVerification to support command-config

2019-03-12 Thread Yuexin Zhang (JIRA)
Yuexin Zhang created KAFKA-8095:
---

 Summary: update ReplicationVerification to support command-config
 Key: KAFKA-8095
 URL: https://issues.apache.org/jira/browse/KAFKA-8095
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Yuexin Zhang


Currently there is no way to provide a config file to switch to the SASL 
security protocol. We should enable the ReplicationVerification tool to pick 
up a command-config option, as other tools such as 
[ConsumerGroupCommand.scala|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L433-L436]
 do, so that the ReplicationVerification tool can support SASL clients:

 

[https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L734-L737]

 

[https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L433-L436]
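What is being asked for can be sketched roughly like this (illustrative names only; the real ConsumerGroupCommand's option parsing differs): load a properties file named by a --command-config option and overlay it on the tool's base client properties, so that SASL settings such as security.protocol reach the underlying consumer.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

class CommandConfigSketch {
    // Illustrative: merge extra client properties (e.g. SASL settings) from a
    // --command-config file over the tool's base properties. Entries loaded
    // from the file override any base settings with the same key.
    static Properties withCommandConfig(Properties base, String path) throws IOException {
        Properties merged = new Properties();
        merged.putAll(base);
        try (FileInputStream in = new FileInputStream(path)) {
            merged.load(in);
        }
        return merged;
    }
}
```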

 





[jira] [Resolved] (KAFKA-7976) Flaky Test DynamicBrokerReconfigurationTest#testUncleanLeaderElectionEnable

2019-03-12 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-7976.
---
Resolution: Fixed

> Flaky Test DynamicBrokerReconfigurationTest#testUncleanLeaderElectionEnable
> ---
>
> Key: KAFKA-7976
> URL: https://issues.apache.org/jira/browse/KAFKA-7976
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets for 
> all observed test failures.
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/28/]
> {quote}java.lang.AssertionError: Unclean leader not elected
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at 
> kafka.server.DynamicBrokerReconfigurationTest.testUncleanLeaderElectionEnable(DynamicBrokerReconfigurationTest.scala:488){quote}





[jira] [Created] (KAFKA-8096) Offsets out of range with no configured reset policy for partitions

2019-03-12 Thread kun'qin (JIRA)
kun'qin  created KAFKA-8096:
---

 Summary: Offsets out of range with no configured reset policy for 
partitions
 Key: KAFKA-8096
 URL: https://issues.apache.org/jira/browse/KAFKA-8096
 Project: Kafka
  Issue Type: Bug
Reporter: kun'qin 


While consuming from Kafka with Spark Streaming, the following error occurred: 
"Offsets out of range with no configured reset policy for partitions".
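This message typically means the consumer's stored offsets fall outside the range the broker still retains, and no reset policy is configured. A common mitigation (an assumption about this report, not a confirmed diagnosis) is to set a reset policy in the consumer properties:

```java
import java.util.Properties;

class ResetPolicyExample {
    // With no usable reset policy, an out-of-range offset is fatal;
    // "earliest" or "latest" lets the consumer reset instead of failing.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "example-group");           // placeholder
        props.setProperty("auto.offset.reset", "earliest");
        return props;
    }
}
```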





[jira] [Commented] (KAFKA-7976) Flaky Test DynamicBrokerReconfigurationTest#testUncleanLeaderElectionEnable

2019-03-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16790396#comment-16790396
 ] 

ASF GitHub Bot commented on KAFKA-7976:
---

rajinisivaram commented on pull request #6426: KAFKA-7976; Update config before 
notifying controller of unclean leader update
URL: https://github.com/apache/kafka/pull/6426
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Flaky Test DynamicBrokerReconfigurationTest#testUncleanLeaderElectionEnable
> ---
>
> Key: KAFKA-7976
> URL: https://issues.apache.org/jira/browse/KAFKA-7976
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets for 
> all observed test failures.
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/28/]
> {quote}java.lang.AssertionError: Unclean leader not elected
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at 
> kafka.server.DynamicBrokerReconfigurationTest.testUncleanLeaderElectionEnable(DynamicBrokerReconfigurationTest.scala:488){quote}





[jira] [Updated] (KAFKA-8095) update ReplicationVerification to support command-config

2019-03-12 Thread Yuexin Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuexin Zhang updated KAFKA-8095:

Description: 
Currently there is no way to provide a config file to switch to the SASL 
security protocol. We should enable the ReplicationVerification tool to pick 
up a command-config option, as other tools such as 
[ConsumerGroupCommand.scala|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala]
 do, so that the ReplicationVerification tool can support SASL clients:

 

[https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L734-L737]

 

[https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L433-L436]

 

  was:
Currently there is no way to provide a config file to switch to the SASL 
security protocol. We should enable the ReplicationVerification tool to pick 
up a command-config option, as other tools such as 
[ConsumerGroupCommand.scala|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L433-L436]
 do, so that the ReplicationVerification tool can support SASL clients:

 

[https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L734-L737]

 

[https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L433-L436]

 


> update ReplicationVerification to support command-config
> 
>
> Key: KAFKA-8095
> URL: https://issues.apache.org/jira/browse/KAFKA-8095
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Yuexin Zhang
>Priority: Minor
>
> Currently there is no way to provide a config file to switch to the SASL 
> security protocol. We should enable the ReplicationVerification tool to pick 
> up a command-config option, as other tools such as 
> [ConsumerGroupCommand.scala|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala]
>  do, so that the ReplicationVerification tool can support SASL clients:
>  
> [https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L734-L737]
>  
> [https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L433-L436]
>  





[jira] [Commented] (KAFKA-7801) TopicCommand should not be able to alter transaction topic partition count

2019-03-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16790262#comment-16790262
 ] 

ASF GitHub Bot commented on KAFKA-7801:
---

omkreddy commented on pull request #6109: KAFKA-7801: TopicCommand should not 
be able to alter transaction topic partition count
URL: https://github.com/apache/kafka/pull/6109
 
 
   
 



> TopicCommand should not be able to alter transaction topic partition count
> --
>
> Key: KAFKA-7801
> URL: https://issues.apache.org/jira/browse/KAFKA-7801
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.1.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Major
> Fix For: 2.3.0
>
>
> To stay aligned with the way it handles the offset topic, TopicCommand 
> should not be able to alter the transaction topic partition count.





[jira] [Resolved] (KAFKA-7801) TopicCommand should not be able to alter transaction topic partition count

2019-03-12 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7801.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

Issue resolved by pull request 6109
[https://github.com/apache/kafka/pull/6109]

> TopicCommand should not be able to alter transaction topic partition count
> --
>
> Key: KAFKA-7801
> URL: https://issues.apache.org/jira/browse/KAFKA-7801
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.1.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Major
> Fix For: 2.3.0
>
>
> To stay aligned with the way it handles the offset topic, TopicCommand 
> should not be able to alter the transaction topic partition count.





[jira] [Commented] (KAFKA-8020) Consider changing design of ThreadCache

2019-03-12 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16790263#comment-16790263
 ] 

Matthias J. Sax commented on KAFKA-8020:


I think I understand your intention better now. However, I am wondering 
whether you have any data that backs your claim? Rewriting the caches is 
tricky, and we should only do it if we get a reasonable performance 
improvement (otherwise, we risk introducing bugs with no real benefit).
{quote}elements which have been inserted usually have a certain lifetime
{quote}
What would this lifetime be? Do you refer to windowed and session store 
retention time? KeyValue stores don't have a retention time though.
{quote}In terms of a KIP, I don't think this requires one. There shouldn't 
be any changes to the public API, and we are offering a performance 
enhancement, so a configuration which chooses between different caching 
policies shouldn't be necessary.

About implementing this policy for all caches: I'm not too sure about that. I 
was only planning on implementing it for ThreadCache, since I'm somewhat 
familiar with this part of Kafka Streams.
{quote}
I see. If you target ThreadCache, I agree that a KIP is not required. We do 
need to evaluate whether there is a measurable performance improvement before 
we would merge this change, though.
 
 

> Consider changing design of ThreadCache 
> 
>
> Key: KAFKA-8020
> URL: https://issues.apache.org/jira/browse/KAFKA-8020
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Richard Yu
>Priority: Major
>
> In distributed systems, time-aware LRU caches offer an eviction policy 
> superior to traditional LRU, yielding more cache hits than misses. Under 
> this policy, an item that is stored beyond its useful lifespan is removed. 
> For example, in {{CachingWindowStore}} a window is usually of limited size; 
> after it expires it will no longer be queried, yet it can stay in the 
> ThreadCache for an unnecessarily long time if it is not evicted (i.e. when 
> few new entries are being inserted). For better use of memory, it would be 
> preferable to implement a time-aware LRU cache which takes the lifespan of 
> an entry into account and removes the entry once it has expired.
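The proposed policy can be illustrated with a toy sketch (this is not Kafka's ThreadCache; the clock is a plain field here so the example stays deterministic, where real code would consult System.currentTimeMillis()): a plain access-ordered LRU map whose lookups additionally treat entries older than a fixed lifespan as expired and evict them.

```java
import java.util.LinkedHashMap;
import java.util.Map;

class TimeAwareLruCache<K, V> {
    private static final class Timestamped<V> {
        final V value;
        final long insertedAtMs;
        Timestamped(V value, long insertedAtMs) {
            this.value = value;
            this.insertedAtMs = insertedAtMs;
        }
    }

    private final long lifespanMs;
    private final LinkedHashMap<K, Timestamped<V>> map;
    long nowMs = 0L; // injectable clock for determinism in this sketch

    TimeAwareLruCache(final int maxEntries, long lifespanMs) {
        this.lifespanMs = lifespanMs;
        // Access-ordered LinkedHashMap gives plain LRU eviction on overflow.
        this.map = new LinkedHashMap<K, Timestamped<V>>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Timestamped<V>> eldest) {
                return size() > maxEntries;
            }
        };
    }

    void put(K key, V value) {
        map.put(key, new Timestamped<>(value, nowMs));
    }

    // The time-aware part: a hit older than lifespanMs counts as expired,
    // so it is evicted eagerly and reported as a miss.
    V get(K key) {
        Timestamped<V> e = map.get(key);
        if (e == null) return null;
        if (nowMs - e.insertedAtMs > lifespanMs) {
            map.remove(key);
            return null;
        }
        return e.value;
    }

    int size() { return map.size(); }
}
```

Whether this wins over plain LRU for ThreadCache would still need the benchmark data discussed above.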


