[ https://issues.apache.org/jira/browse/KAFKA-14742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Greg Harris updated KAFKA-14742:
--------------------------------
    Labels: flaky-test  (was: )

> Flaky ExactlyOnceSourceIntegrationTest.testConnectorBoundary OOMs
> -----------------------------------------------------------------
>
>                 Key: KAFKA-14742
>                 URL: https://issues.apache.org/jira/browse/KAFKA-14742
>             Project: Kafka
>          Issue Type: Improvement
>            Reporter: Greg Harris
>            Assignee: Greg Harris
>            Priority: Minor
>              Labels: flaky-test
>
> The ExactlyOnceSourceIntegrationTest appears to occasionally throw the 
> following exception in my local test runs:
> {noformat}
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>       at java.base/java.util.HashMap.newNode(HashMap.java:1901)
>       at java.base/java.util.HashMap.putVal(HashMap.java:629)
>       at java.base/java.util.HashMap.put(HashMap.java:610)
>       at java.base/java.util.HashSet.add(HashSet.java:221)
>       at java.base/java.util.stream.Collectors$$Lambda$6/0x800000011.accept(Unknown Source)
>       at java.base/java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
>       at java.base/java.util.stream.LongPipeline$1$1.accept(LongPipeline.java:177)
>       at java.base/java.util.stream.Streams$RangeLongSpliterator.forEachRemaining(Streams.java:228)
>       at java.base/java.util.Spliterator$OfLong.forEachRemaining(Spliterator.java:775)
>       at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
>       at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
>       at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
>       at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>       at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
>       at org.apache.kafka.connect.integration.ExactlyOnceSourceIntegrationTest.lambda$assertSeqnos$9(ExactlyOnceSourceIntegrationTest.java:964)
>       at org.apache.kafka.connect.integration.ExactlyOnceSourceIntegrationTest$$Lambda$2500/0x00000008015a1908.accept(Unknown Source)
>       at java.base/java.util.HashMap.forEach(HashMap.java:1421)
>       at org.apache.kafka.connect.integration.ExactlyOnceSourceIntegrationTest.assertSeqnos(ExactlyOnceSourceIntegrationTest.java:961)
>       at org.apache.kafka.connect.integration.ExactlyOnceSourceIntegrationTest.assertExactlyOnceSeqnos(ExactlyOnceSourceIntegrationTest.java:939)
>       at org.apache.kafka.connect.integration.ExactlyOnceSourceIntegrationTest.testIntervalBoundary(ExactlyOnceSourceIntegrationTest.java:358)
>       at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>       at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.base/java.lang.reflect.Method.invoke(Method.java:568)
>       at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>       at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>       at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>       at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>       at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>       at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>       at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>       at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100){noformat}
> It appears that the data produced by the connectors under test is too 
> large to assert on given the memory overhead of the current assertions. 
> We should optimize the assertions' memory usage and/or reduce the number 
> of records being asserted on.
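>
> For illustration, one way to cut that overhead would be to check seqno 
> contiguity against primitive longs instead of collecting a boxed 
> expected-set (the stack trace points at a LongStream being collected into 
> a HashSet). A minimal sketch, assuming seqnos run contiguously from 1; 
> the helper name and signature here are hypothetical, not the actual test 
> code:
> {code:java}
> import java.util.Collection;
>
> public class SeqnoAssertions {
>     // Hypothetical sketch: verify the observed seqnos form the contiguous
>     // range [1, expectedCount] without materializing a boxed Set of
>     // expected values.
>     static void assertContiguousSeqnos(Collection<Long> observed, long expectedCount) {
>         // Primitive sort: ~8 bytes per seqno versus ~48+ bytes per boxed
>         // HashSet entry.
>         long[] seqnos = observed.stream().mapToLong(Long::longValue).sorted().toArray();
>         if (seqnos.length != expectedCount) {
>             throw new AssertionError("Expected " + expectedCount + " seqnos, saw " + seqnos.length);
>         }
>         for (int i = 0; i < seqnos.length; i++) {
>             if (seqnos[i] != i + 1) { // assumes seqnos start at 1 and step by 1
>                 throw new AssertionError("Gap or duplicate at index " + i + ": saw seqno " + seqnos[i]);
>             }
>         }
>     }
> }
> {code}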



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
