[jira] [Created] (FLINK-35499) EventTimeWindowCheckpointingITCase times out due to Checkpoint expired before completing

2024-05-31 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35499:
---

 Summary: EventTimeWindowCheckpointingITCase times out due to 
Checkpoint expired before completing
 Key: FLINK-35499
 URL: https://issues.apache.org/jira/browse/FLINK-35499
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.20.0
Reporter: Ryan Skraba


* 1.20 AdaptiveScheduler / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9311892945/job/25632037990#step:10:8702
* 1.20 Default (Java 8) / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9275522134/job/25520829730#step:10:8264

Looking into the logs, we see that the following error occurs:
{code:java}

Test testTumblingTimeWindow[statebackend type =ROCKSDB_INCREMENTAL, 
buffersPerChannel = 
2](org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase) is 
running.

<...>
20:24:23,562 [Checkpoint Timer] INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Triggering 
checkpoint 22 (type=CheckpointType{name='Checkpoint', 
sharingFilesStrategy=FORWARD_BACKWARD}) @ 1716927863562 for job 
15d0a663cb415b09b9a68ccc40640c6d.
20:24:23,609 [jobmanager-io-thread-2] INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Completed 
checkpoint 22 for job 15d0a663cb415b09b9a68ccc40640c6d (2349132 bytes, 
checkpointDuration=43 ms, finalizationTime=4 ms).
20:24:23,610 [Checkpoint Timer] INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Triggering 
checkpoint 23 (type=CheckpointType{name='Checkpoint', 
sharingFilesStrategy=FORWARD_BACKWARD}) @ 1716927863610 for job 
15d0a663cb415b09b9a68ccc40640c6d.
20:24:23,620 [jobmanager-io-thread-2] WARN  
org.apache.flink.runtime.jobmaster.JobMaster [] - Error while 
processing AcknowledgeCheckpoint message
java.lang.IllegalStateException: Attempt to reference unknown state: 
a9a90973-4ee5-384f-acef-58a7c7560920
at 
org.apache.flink.util.Preconditions.checkState(Preconditions.java:193) 
~[flink-core-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
at 
org.apache.flink.runtime.state.SharedStateRegistryImpl.registerReference(SharedStateRegistryImpl.java:97)
 ~[flink-runtime-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
at 
org.apache.flink.runtime.state.SharedStateRegistry.registerReference(SharedStateRegistry.java:53)
 ~[flink-runtime-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
at 
org.apache.flink.runtime.state.IncrementalRemoteKeyedStateHandle.registerSharedStates(IncrementalRemoteKeyedStateHandle.java:289)
 ~[flink-runtime-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
at 
org.apache.flink.runtime.checkpoint.OperatorSubtaskState.registerSharedState(OperatorSubtaskState.java:243)
 ~[flink-runtime-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
at 
org.apache.flink.runtime.checkpoint.OperatorSubtaskState.registerSharedStates(OperatorSubtaskState.java:226)
 ~[flink-runtime-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
at 
org.apache.flink.runtime.checkpoint.TaskStateSnapshot.registerSharedStates(TaskStateSnapshot.java:193)
 ~[flink-runtime-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
at 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator.receiveAcknowledgeMessage(CheckpointCoordinator.java:1245)
 ~[flink-runtime-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
at 
org.apache.flink.runtime.scheduler.ExecutionGraphHandler.lambda$acknowledgeCheckpoint$2(ExecutionGraphHandler.java:109)
 ~[flink-runtime-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
at 
org.apache.flink.runtime.scheduler.ExecutionGraphHandler.lambda$processCheckpointCoordinatorMessage$4(ExecutionGraphHandler.java:139)
 ~[flink-runtime-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
at 
org.apache.flink.util.MdcUtils.lambda$wrapRunnable$1(MdcUtils.java:64) 
~[flink-core-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_392]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_392]
at java.lang.Thread.run(Thread.java:750) [?:1.8.0_392]
20:24:23,663 [Source: Custom Source (1/1)#1] INFO  
org.apache.flink.runtime.taskmanager.Task[] - Source: 
Custom Source (1/1)#1 
(bc4de0d149fba0ca825771ff7eeae08d_bc764cd8ddf7a0cff126f51c16239658_0_1) 
switched from RUNNING to FINISHED.
20:24:23,663 [Source: Custom Source (1/1)#1] INFO  
org.apache.flink.runtime.taskmanager.Task[] - Freeing task 
resources for Source: Custom Source (1/1)#1 
(bc4de0d149fba0ca825771ff7eeae08d_bc764cd8ddf7a0cff126f51c16239658_0_1).
20:24:23,663 [flink-pekko.actor.default-dispatcher-8] INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutor   [] - 
Un-registering task and sending final execution state FINISHED to JobManager 
fo
{code}
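
The precondition that fails here lives in {{SharedStateRegistryImpl.registerReference}}: a checkpoint may only reference shared state by key if that key was previously registered. The following is a simplified, hypothetical sketch of that invariant (class and method names are illustrative, not Flink's actual implementation):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of a shared-state registry: referencing a key that no
// earlier checkpoint registered trips an IllegalStateException, mirroring
// the "Attempt to reference unknown state" failure in the log above.
class MiniSharedStateRegistry {
    private final Map<String, String> registered = new HashMap<>();

    /** Registers new shared state under the given key. */
    void register(String key, String stateHandle) {
        registered.put(key, stateHandle);
    }

    /** References previously registered state, or fails if the key is unknown. */
    String registerReference(String key) {
        String existing = registered.get(key);
        if (existing == null) {
            throw new IllegalStateException("Attempt to reference unknown state: " + key);
        }
        return existing;
    }
}
{code}

With incremental RocksDB checkpoints, checkpoint 23 acknowledges handles that point back at state from earlier checkpoints; if the earlier registration is missing or was already discarded, this is the kind of check that fires.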

[jira] [Created] (FLINK-35446) FileMergingSnapshotManagerBase throws a NullPointerException

2024-05-24 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35446:
---

 Summary: FileMergingSnapshotManagerBase throws a 
NullPointerException
 Key: FLINK-35446
 URL: https://issues.apache.org/jira/browse/FLINK-35446
 Project: Flink
  Issue Type: Bug
Reporter: Ryan Skraba


* 1.20 Java 11 / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9217608897/job/25360103124#step:10:8641

{{ResumeCheckpointManuallyITCase.testExternalizedIncrementalRocksDBCheckpointsWithLocalRecoveryZookeeper}}
 throws a NullPointerException when it tries to restore state handles: 

{code}
Error: 02:57:52 02:57:52.551 [ERROR] Tests run: 48, Failures: 0, Errors: 1, 
Skipped: 0, Time elapsed: 268.6 s <<< FAILURE! -- in 
org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase
Error: 02:57:52 02:57:52.551 [ERROR] 
org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase.testExternalizedIncrementalRocksDBCheckpointsWithLocalRecoveryZookeeper[RestoreMode
 = CLAIM] -- Time elapsed: 3.145 s <<< ERROR!
May 24 02:57:52 org.apache.flink.runtime.JobException: Recovery is suppressed 
by NoRestartBackoffTimeStrategy
May 24 02:57:52 at 
org.apache.flink.runtime.executiongraph.failover.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:219)
May 24 02:57:52 at 
org.apache.flink.runtime.executiongraph.failover.ExecutionFailureHandler.handleFailureAndReport(ExecutionFailureHandler.java:166)
May 24 02:57:52 at 
org.apache.flink.runtime.executiongraph.failover.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:121)
May 24 02:57:52 at 
org.apache.flink.runtime.scheduler.DefaultScheduler.recordTaskFailure(DefaultScheduler.java:279)
May 24 02:57:52 at 
org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:270)
May 24 02:57:52 at 
org.apache.flink.runtime.scheduler.DefaultScheduler.onTaskFailed(DefaultScheduler.java:263)
May 24 02:57:52 at 
org.apache.flink.runtime.scheduler.SchedulerBase.onTaskExecutionStateUpdate(SchedulerBase.java:788)
May 24 02:57:52 at 
org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:765)
May 24 02:57:52 at 
org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:83)
May 24 02:57:52 at 
org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:496)
May 24 02:57:52 at 
jdk.internal.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
May 24 02:57:52 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
May 24 02:57:52 at 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
May 24 02:57:52 at 
org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.lambda$handleRpcInvocation$1(PekkoRpcActor.java:318)
May 24 02:57:52 at 
org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
May 24 02:57:52 at 
org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcInvocation(PekkoRpcActor.java:316)
May 24 02:57:52 at 
org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcMessage(PekkoRpcActor.java:229)
May 24 02:57:52 at 
org.apache.flink.runtime.rpc.pekko.FencedPekkoRpcActor.handleRpcMessage(FencedPekkoRpcActor.java:88)
May 24 02:57:52 at 
org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleMessage(PekkoRpcActor.java:174)
May 24 02:57:52 at 
org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33)
May 24 02:57:52 at 
org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29)
May 24 02:57:52 at 
scala.PartialFunction.applyOrElse(PartialFunction.scala:127)
May 24 02:57:52 at 
scala.PartialFunction.applyOrElse$(PartialFunction.scala:126)
May 24 02:57:52 at 
org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29)
May 24 02:57:52 at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175)
May 24 02:57:52 at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
May 24 02:57:52 at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
May 24 02:57:52 at 
org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547)
May 24 02:57:52 at 
org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545)
May 24 02:57:52 at 
org.apache.pekko.actor.AbstractActor.aroundReceive(AbstractActor.scala:229)
May 24 02:57:52 at 
org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590)
May 24 02:57:52 at 
org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557)
May 24 02:57:52 at 
org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:28
{code}
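
The wrapping "Recovery is suppressed by NoRestartBackoffTimeStrategy" line just reflects that the ITCase runs without restarts, so the first task failure (here, the NullPointerException while restoring state handles) fails the whole job. A minimal sketch of such a strategy, with simplified names rather than Flink's actual interface:

{code:java}
// Simplified sketch: a restart strategy that never grants a restart, so any
// task failure escalates to a global job failure ("Recovery is suppressed").
interface BackoffStrategy {
    boolean canRestart();
    long getBackoffTimeMillis();
}

class NoRestartBackoff implements BackoffStrategy {
    @Override
    public boolean canRestart() {
        return false; // never restart: surface the root failure immediately
    }

    @Override
    public long getBackoffTimeMillis() {
        return 0L; // unused, since restarts are never granted
    }
}
{code}

This is why the root cause of the NPE has to be dug out of the suppressed/caused-by chain rather than from a later recovery attempt.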

[jira] [Created] (FLINK-35438) SourceCoordinatorTest.testErrorThrownFromSplitEnumerator fails on wrong error

2024-05-23 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35438:
---

 Summary: SourceCoordinatorTest.testErrorThrownFromSplitEnumerator 
fails on wrong error
 Key: FLINK-35438
 URL: https://issues.apache.org/jira/browse/FLINK-35438
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.18.2
Reporter: Ryan Skraba


* 1.18 Java 11 / Test (module: core) 
https://github.com/apache/flink/actions/runs/9201159842/job/25309197630#step:10:7375

We expect to see an artificial {{Error("Test Error")}} being reported in the 
test as the cause of the job failure, but the reported failure cause is null:

{code}
Error: 02:32:31 02:32:31.950 [ERROR] Tests run: 18, Failures: 1, Errors: 0, 
Skipped: 0, Time elapsed: 0.187 s <<< FAILURE! - in 
org.apache.flink.runtime.source.coordinator.SourceCoordinatorTest
Error: 02:32:31 02:32:31.950 [ERROR] 
org.apache.flink.runtime.source.coordinator.SourceCoordinatorTest.testErrorThrownFromSplitEnumerator
  Time elapsed: 0.01 s  <<< FAILURE!
May 23 02:32:31 org.opentest4j.AssertionFailedError: 
May 23 02:32:31 
May 23 02:32:31 expected: 
May 23 02:32:31   java.lang.Error: Test Error
May 23 02:32:31 at 
org.apache.flink.runtime.source.coordinator.SourceCoordinatorTest.testErrorThrownFromSplitEnumerator(SourceCoordinatorTest.java:296)
May 23 02:32:31 at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
May 23 02:32:31 at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
May 23 02:32:31 ...(57 remaining lines not displayed - this can be 
changed with Assertions.setMaxStackTraceElementsDisplayed)
May 23 02:32:31  but was: 
May 23 02:32:31   null
May 23 02:32:31 at 
java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
 Method)
May 23 02:32:31 at 
java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
May 23 02:32:31 at 
java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
May 23 02:32:31 at 
org.apache.flink.runtime.source.coordinator.SourceCoordinatorTest.testErrorThrownFromSplitEnumerator(SourceCoordinatorTest.java:322)
May 23 02:32:31 at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
May 23 02:32:31 at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
May 23 02:32:31 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
May 23 02:32:31 at 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
May 23 02:32:31 at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
May 23 02:32:31 at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
May 23 02:32:31 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
May 23 02:32:31 at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
May 23 02:32:31 at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
May 23 02:32:31 at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
May 23 02:32:31 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
May 23 02:32:31 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
May 23 02:32:31 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
May 23 02:32:31 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
May 23 02:32:31 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
May 23 02:32:31 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
May 23 02:32:31 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
May 23 02:32:31 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
{code}

This looks like a multithreading issue in the test's 
{{MockOperatorCoordinatorContext}}, perhaps where {{isJobFailure}} can return 
true before the failure reason has been populated. I couldn't reproduce it 
after running the test 1M times.
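
The suspected interleaving can be sketched as follows (hypothetical names, not the actual {{MockOperatorCoordinatorContext}} API): if the failure flag becomes visible before the failure cause, a reader that checks the flag and then reads the cause can observe a failed job with a null cause.

{code:java}
// Hypothetical sketch of the suspected race. The two publication steps are
// separate methods here to make the window observable; in the real test they
// would run back-to-back, which is why the race is so rare.
class RacyFailureContext {
    private volatile boolean jobFailed;
    private volatile Throwable failureCause;

    void markFailed() { jobFailed = true; }                      // step 1: flag
    void recordCause(Throwable cause) { failureCause = cause; }  // step 2: cause

    boolean isJobFailed() { return jobFailed; }
    Throwable getFailureCause() { return failureCause; }
}
{code}

Publishing the cause before (or atomically with) the flag would close the window.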

[jira] [Created] (FLINK-35418) EventTimeWindowCheckpointingITCase fails with an NPE

2024-05-22 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35418:
---

 Summary: EventTimeWindowCheckpointingITCase fails with an NPE
 Key: FLINK-35418
 URL: https://issues.apache.org/jira/browse/FLINK-35418
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.20.0
Reporter: Ryan Skraba


* 1.20 Default (Java 8) / Test (module: tests) 
[https://github.com/apache/flink/actions/runs/9185169193/job/25258948607#step:10:8106]

It looks like it's possible for {{PhysicalFile}} to throw a NullPointerException 
while a checkpoint is being aborted:

{code}
May 22 04:35:18 Starting 
org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testTumblingTimeWindow[statebackend
 type =ROCKSDB_INCREMENTAL_ZK, buffersPerChannel = 2].
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at 
org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
at 
org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
at 
java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
at 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at 
org.apache.flink.runtime.rpc.pekko.PekkoInvocationHandler.lambda$invokeRpc$1(PekkoInvocationHandler.java:268)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at 
org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1287)
at 
org.apache.flink.runtime.concurrent.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
at 
org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
at 
org.apache.flink.runtime.concurrent.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at 
org.apache.flink.runtime.concurrent.pekko.ScalaFutureUtils$1.onComplete(ScalaFutureUtils.java:47)
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:310)
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307)
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234)
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at 
org.apache.flink.runtime.concurrent.pekko.ScalaFutureUtils$DirectExecutionContext.execute(ScalaFutureUtils.java:65)
at 
scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
at 
scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
at 
scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
at 
scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
at org.apache.pekko.pattern.PromiseActorRef.$bang(AskSupport.scala:629)
at 
org.apache.pekko.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:34)
at 
org.apache.pekko.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:33)
at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:536)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at 
org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73)
at 
org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110)
at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at 
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
at 
org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110)
at 
org.apache.pekko.dispatch.TaskInvocation.run
{code}
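
If aborting the checkpoint releases the underlying output stream, a writer that dereferences it afterwards hits exactly this kind of NPE. A hypothetical sketch (illustrative names, not the actual {{PhysicalFile}} API):

{code:java}
// Hypothetical sketch of an abort/write race: abort() clears the buffer, and
// a writer that dereferences the field afterwards throws NullPointerException.
class AbortableFile {
    private volatile StringBuilder buffer = new StringBuilder();

    void abort() {
        buffer = null; // checkpoint aborted: buffer released
    }

    void write(String data) {
        buffer.append(data); // NPE if abort() already ran
    }

    void writeSafely(String data) {
        StringBuilder local = buffer; // read the field exactly once
        if (local != null) {
            local.append(data); // either writes or silently skips after abort
        }
    }
}
{code}

Reading the field once into a local (or synchronizing abort and write on the same lock) avoids the null dereference.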

[jira] [Created] (FLINK-35413) VertexFinishedStateCheckerTest causes exit 239

2024-05-21 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35413:
---

 Summary: VertexFinishedStateCheckerTest causes exit 239
 Key: FLINK-35413
 URL: https://issues.apache.org/jira/browse/FLINK-35413
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.20.0
Reporter: Ryan Skraba


* 1.20 test_cron_azure core 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59676=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=125e07e7-8de0-5c6c-a541-a567415af3ef=9429

{code}
May 21 01:31:42 01:31:42.160 [ERROR] 
org.apache.flink.runtime.checkpoint.VertexFinishedStateCheckerTest
May 21 01:31:42 01:31:42.160 [ERROR] 
org.apache.maven.surefire.booter.SurefireBooterForkException: 
ExecutionException The forked VM terminated without properly saying goodbye. VM 
crash or System.exit called?
May 21 01:31:42 01:31:42.160 [ERROR] Command was /bin/sh -c cd 
'/__w/1/s/flink-runtime' && '/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java' 
'-XX:+UseG1GC' '-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' 
'--add-opens=java.base/java.util=ALL-UNNAMED' 
'--add-opens=java.base/java.lang=ALL-UNNAMED' 
'--add-opens=java.base/java.net=ALL-UNNAMED' 
'--add-opens=java.base/java.io=ALL-UNNAMED' 
'--add-opens=java.base/java.util.concurrent=ALL-UNNAMED' '-Xmx768m' '-jar' 
'/__w/1/s/flink-runtime/target/surefire/surefirebooter-20240521011847857_99.jar'
 '/__w/1/s/flink-runtime/target/surefire' '2024-05-21T01-15-09_325-jvmRun1' 
'surefire-20240521011847857_97tmp' 'surefire_29-20240521011847857_98tmp'
May 21 01:31:42 01:31:42.160 [ERROR] Error occurred in starting fork, check 
output in log
May 21 01:31:42 01:31:42.160 [ERROR] Process Exit Code: 239
May 21 01:31:42 01:31:42.160 [ERROR] Crashed tests:
May 21 01:31:42 01:31:42.160 [ERROR] 
org.apache.flink.runtime.checkpoint.VertexFinishedStateCheckerTest
May 21 01:31:42 01:31:42.160 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:456)
May 21 01:31:42 01:31:42.160 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkOnceMultiple(ForkStarter.java:358)
May 21 01:31:42 01:31:42.160 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:296)
May 21 01:31:42 01:31:42.160 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:250)
May 21 01:31:42 01:31:42.160 [ERROR]at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1240)
May 21 01:31:42 01:31:42.160 [ERROR]at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1089)
May 21 01:31:42 01:31:42.160 [ERROR]at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:905)
May 21 01:31:42 01:31:42.160 [ERROR]at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
{code}

In the build artifact {{mvn-1.log}} the following FATAL error is found:

{code}
01:19:08,584 [ pool-9-thread-1] ERROR 
org.apache.flink.util.FatalExitExceptionHandler  [] - FATAL: Thread 
'pool-9-thread-1' produced an uncaught exception. Stopping the process...
java.util.concurrent.CompletionException: 
java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@5ead9062 
rejected from 
java.util.concurrent.ScheduledThreadPoolExecutor@4d0e55ac[Shutting down, pool 
size = 1, active threads = 1, queued tasks = 1, completed tasks = 194]
at 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
 ~[?:1.8.0_292]
at 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
 ~[?:1.8.0_292]
at 
java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:838) 
~[?:1.8.0_292]
at 
java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
 ~[?:1.8.0_292]
at 
java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:851)
 ~[?:1.8.0_292]
at 
java.util.concurrent.CompletableFuture.handleAsync(CompletableFuture.java:2178) 
~[?:1.8.0_292]
at 
org.apache.flink.runtime.resourcemanager.slotmanager.DefaultSlotStatusSyncer.allocateSlot(DefaultSlotStatusSyncer.java:138)
 ~[classes/:?]
at 
org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedSlotManager.allocateSlotsAccordingTo(FineGrainedSlotManager.java:722)
 ~[classes/:?]
at 
org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedSlotManager.checkResourceRequirements(FineGrainedSlotManager.java:645)
 ~[classes/:?]
at 
org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedSlotManager.lambda$null$12(FineGrainedSlotManager.java:603)
 ~[classes/:?]
at 
java.util.concurrent.Executors$RunnableAdap
{code}
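
The RejectedExecutionException itself is easy to reproduce in isolation: scheduling a task on a {{ScheduledThreadPoolExecutor}} that is already shutting down is rejected. A minimal demonstration:

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class RejectedAfterShutdownDemo {
    public static void main(String[] args) {
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        pool.shutdown(); // like the pool in the "[Shutting down, ...]" state above
        try {
            pool.schedule(() -> System.out.println("never runs"), 10, TimeUnit.MILLISECONDS);
        } catch (RejectedExecutionException e) {
            // the exception that got wrapped into the CompletionException above
            System.out.println("rejected: task scheduled after shutdown");
        }
    }
}
{code}

In the test run, this exception escaped to {{FatalExitExceptionHandler}}, which stops the JVM; surefire then reports the forked VM as crashed with exit code 239.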

[jira] [Created] (FLINK-35382) ChangelogCompatibilityITCase.testRestore fails with an NPE

2024-05-16 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35382:
---

 Summary: ChangelogCompatibilityITCase.testRestore fails with an NPE
 Key: FLINK-35382
 URL: https://issues.apache.org/jira/browse/FLINK-35382
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.20.0
Reporter: Ryan Skraba


* 1.20 Java 8 / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9110398985/job/25045798401#step:10:8192

It looks like there can be a [NullPointerException at this 
line|https://github.com/apache/flink/blob/9a5a99b1a30054268bbde36d565cbb1b81018890/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/filemerging/FileMergingSnapshotManagerBase.java#L666]
 causing a test failure:

{code}
Error: 10:36:23 10:36:23.312 [ERROR] Tests run: 9, Failures: 0, Errors: 1, 
Skipped: 0, Time elapsed: 19.31 s <<< FAILURE! -- in 
org.apache.flink.test.state.ChangelogCompatibilityITCase
Error: 10:36:23 10:36:23.313 [ERROR] 
org.apache.flink.test.state.ChangelogCompatibilityITCase.testRestore[startWithChangelog=false,
 restoreWithChangelog=true, restoreFrom=CHECKPOINT, allowStore=true, 
allowRestore=true] -- Time elapsed: 1.492 s <<< ERROR!
May 16 10:36:23 java.lang.RuntimeException: 
org.opentest4j.AssertionFailedError: Graph is in globally terminal state 
(FAILED)
May 16 10:36:23 at 
org.apache.flink.test.state.ChangelogCompatibilityITCase.tryRun(ChangelogCompatibilityITCase.java:204)
May 16 10:36:23 at 
org.apache.flink.test.state.ChangelogCompatibilityITCase.restoreAndValidate(ChangelogCompatibilityITCase.java:190)
May 16 10:36:23 at java.util.Optional.ifPresent(Optional.java:159)
May 16 10:36:23 at 
org.apache.flink.test.state.ChangelogCompatibilityITCase.testRestore(ChangelogCompatibilityITCase.java:118)
May 16 10:36:23 at java.lang.reflect.Method.invoke(Method.java:498)
May 16 10:36:23 Caused by: org.opentest4j.AssertionFailedError: Graph is in 
globally terminal state (FAILED)
May 16 10:36:23 at 
org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:42)
May 16 10:36:23 at 
org.junit.jupiter.api.Assertions.fail(Assertions.java:150)
May 16 10:36:23 at 
org.apache.flink.runtime.testutils.CommonTestUtils.lambda$waitForAllTaskRunning$3(CommonTestUtils.java:214)
May 16 10:36:23 at 
org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:151)
May 16 10:36:23 at 
org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
May 16 10:36:23 at 
org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning(CommonTestUtils.java:209)
May 16 10:36:23 at 
org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning(CommonTestUtils.java:182)
May 16 10:36:23 at 
org.apache.flink.test.state.ChangelogCompatibilityITCase.submit(ChangelogCompatibilityITCase.java:284)
May 16 10:36:23 at 
org.apache.flink.test.state.ChangelogCompatibilityITCase.tryRun(ChangelogCompatibilityITCase.java:197)
May 16 10:36:23 ... 4 more
May 16 10:36:23 Caused by: org.apache.flink.runtime.JobException: 
org.apache.flink.runtime.JobException: Recovery is suppressed by 
NoRestartBackoffTimeStrategy
May 16 10:36:23 at 
org.apache.flink.runtime.executiongraph.failover.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:219)
May 16 10:36:23 at 
org.apache.flink.runtime.executiongraph.failover.ExecutionFailureHandler.handleFailureAndReport(ExecutionFailureHandler.java:166)
May 16 10:36:23 at 
org.apache.flink.runtime.executiongraph.failover.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:121)
May 16 10:36:23 at 
org.apache.flink.runtime.scheduler.DefaultScheduler.recordTaskFailure(DefaultScheduler.java:279)
May 16 10:36:23 at 
org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:270)
May 16 10:36:23 at 
org.apache.flink.runtime.scheduler.DefaultScheduler.onTaskFailed(DefaultScheduler.java:263)
May 16 10:36:23 at 
org.apache.flink.runtime.scheduler.SchedulerBase.onTaskExecutionStateUpdate(SchedulerBase.java:788)
May 16 10:36:23 at 
org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:765)
May 16 10:36:23 at 
org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:83)
May 16 10:36:23 at 
org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:496)
May 16 10:36:23 at java.lang.reflect.Method.invoke(Method.java:498)
May 16 10:36:23 at 
org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.lambda$handleRpcInvocation$1(PekkoRpcActor.java:318)
May 16 10:36:23 at 
org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.
{code}

[jira] [Created] (FLINK-35381) LocalRecoveryITCase failure on deleting directory

2024-05-16 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35381:
---

 Summary: LocalRecoveryITCase failure on deleting directory
 Key: FLINK-35381
 URL: https://issues.apache.org/jira/browse/FLINK-35381
 Project: Flink
  Issue Type: Bug
Reporter: Ryan Skraba


* 1.20 Java 11 / Test (module: tests) 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=54856=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=11288

It looks like some resources in a subdirectory of a JUnit4 {{ClassRule}} temp 
directory prevent it from being cleaned up. A similar problem was fixed in a 
different test in FLINK-33641.
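
The failure mode is generic: deleting a directory fails while anything is still inside it, so leftover local-state files under the {{ClassRule}} directory break the whole cleanup. The usual fix is a depth-first delete, sketched here as a standalone helper (hypothetical name, not the FLINK-33641 patch itself):

{code:java}
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

class RecursiveDelete {

    // Deletes a directory tree depth-first (children before parents). A plain
    // Files.delete on a non-empty directory fails with
    // DirectoryNotEmptyException, which is what breaks the temp-dir cleanup.
    static void deleteRecursively(Path root) {
        if (Files.notExists(root)) {
            return;
        }
        try (Stream<Path> paths = Files.walk(root)) {
            paths.sorted(Comparator.reverseOrder()) // deepest paths first
                    .forEach(p -> {
                        try {
                            Files.delete(p);
                        } catch (IOException e) {
                            throw new UncheckedIOException(e);
                        }
                    });
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
{code}

JUnit's own temp-dir extension does the equivalent, but only for files it can actually delete, hence the suppressed exceptions listed below.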

{code}
SEVERE: Caught exception while closing extension context: 
org.junit.jupiter.engine.descriptor.MethodExtensionContext@2fc91366
java.io.IOException: Failed to delete temp directory 
/tmp/junit7935976901063386613. The following paths could not be deleted (see 
suppressed exceptions for details): 
tm_taskManager_0/localState/aid_1501e77149be2f931eab0a6c2e818f81/jid_fe61a39afa9873389353abb8bfbfba66/vtx_0a448493b4782967b150582570326227_sti_0,
 
tm_taskManager_0/localState/aid_1501e77149be2f931eab0a6c2e818f81/jid_fe61a39afa9873389353abb8bfbfba66/vtx_bc764cd8ddf7a0cff126f51c16239658_sti_0/chk_51
at 
org.junit.jupiter.engine.extension.TempDirectory$CloseablePath.createIOExceptionWithAttachedFailures(TempDirectory.java:431)
at 
org.junit.jupiter.engine.extension.TempDirectory$CloseablePath.close(TempDirectory.java:312)
at org.junit.jupiter.engine.descriptor.AbstractExtensionContext.lambda$static$0(AbstractExtensionContext.java:45)
at org.junit.platform.engine.support.store.NamespacedHierarchicalStore$EvaluatedValue.close(NamespacedHierarchicalStore.java:333)
at org.junit.platform.engine.support.store.NamespacedHierarchicalStore$EvaluatedValue.access$800(NamespacedHierarchicalStore.java:317)
at org.junit.platform.engine.support.store.NamespacedHierarchicalStore.lambda$close$3(NamespacedHierarchicalStore.java:98)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.store.NamespacedHierarchicalStore.lambda$close$4(NamespacedHierarchicalStore.java:98)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
at java.base/java.util.stream.SortedOps$RefSortingSink.end(SortedOps.java:395)
at java.base/java.util.stream.Sink$ChainedReference.end(Sink.java:258)
at java.base/java.util.stream.Sink$ChainedReference.end(Sink.java:258)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:485)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
at org.junit.platform.engine.support.store.NamespacedHierarchicalStore.close(NamespacedHierarchicalStore.java:98)
at org.junit.jupiter.engine.descriptor.AbstractExtensionContext.close(AbstractExtensionContext.java:87)
at org.junit.jupiter.engine.execution.JupiterEngineExecutionContext.close(JupiterEngineExecutionContext.java:53)
at org.junit.jupiter.engine.descriptor.JupiterTestDescriptor.cleanUp(JupiterTestDescriptor.java:224)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$cleanUp$1(TestMethodTestDescriptor.java:156)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.cleanUp(TestMethodTestDescriptor.java:156)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.cleanUp(TestMethodTestDescriptor.java:69)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$cleanUp$10(NodeTestTask.java:167)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.cleanUp(NodeTestTask.java:167)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:98)
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:202)

[jira] [Created] (FLINK-35380) ResumeCheckpointManuallyITCase hanging on tests

2024-05-16 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35380:
---

 Summary: ResumeCheckpointManuallyITCase hanging on tests 
 Key: FLINK-35380
 URL: https://issues.apache.org/jira/browse/FLINK-35380
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.20.0
Reporter: Ryan Skraba


* 1.20 Default (Java 8) / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9105407291/job/25031170942#step:10:11841
 

(This is a slightly different error, waiting in a different place than 
FLINK-28319)

{code}
May 16 03:23:58 
==
May 16 03:23:58 Process produced no output for 900 seconds.
May 16 03:23:58 
==

... snip until stack trace ...

May 16 03:23:58 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
May 16 03:23:58 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
May 16 03:23:58 at 
java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
May 16 03:23:58 at 
org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase.runJobAndGetExternalizedCheckpoint(ResumeCheckpointManuallyITCase.java:410)
May 16 03:23:58 at 
org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase.testExternalizedCheckpoints(ResumeCheckpointManuallyITCase.java:378)
May 16 03:23:58 at 
org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase.testExternalizedCheckpoints(ResumeCheckpointManuallyITCase.java:318)
May 16 03:23:58 at 
org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase.testExternalizedFullRocksDBCheckpointsWithLocalRecoveryStandalone(ResumeCheckpointManuallyITCase.java:133)
{code}
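The trace above hangs inside an unbounded {{CountDownLatch.await()}}. As a minimal sketch (not the Flink test itself), the class name and setup below are illustrative only; it shows why the timed overload of {{await}} fails fast instead of blocking forever when the count is never reached:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class TimedAwaitDemo {
    public static void main(String[] args) throws InterruptedException {
        // Latch with a count that is never decremented, mimicking a job
        // that never reaches the awaited checkpoint.
        CountDownLatch latch = new CountDownLatch(1);
        // The timed overload returns false after the timeout rather than
        // blocking the thread indefinitely like await() would.
        boolean completed = latch.await(200, TimeUnit.MILLISECONDS);
        System.out.println(completed);
    }
}
```

A watchdog then sees a clean test failure instead of a 900-second no-output timeout.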



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Community over Code EU 2024: The countdown has started!

2024-05-14 Thread Ryan Skraba
[Note: You're receiving this email because you are subscribed to one
or more project dev@ mailing lists at the Apache Software Foundation.]

We are very close to Community Over Code EU -- check out the amazing
program and the special discounts that we have for you.

Special discounts

You still have the opportunity to secure your ticket for Community
Over Code EU. Explore the various options available, including the
regular pass, the committer and groups pass, and now introducing the
one-day pass tailored for locals in Bratislava.

We also have a special discount for you to attend both Community Over
Code and Berlin Buzzwords from June 9th to 11th. Visit our website to
find out more about this opportunity and contact te...@sg.com.mx to
get the discount code.

Take advantage of the discounts and register now!
https://eu.communityovercode.org/tickets/

Check out the full program!

This year Community Over Code Europe will bring to you three days of
keynotes and sessions that cover topics of interest for ASF projects
and the greater open source ecosystem including data engineering,
performance engineering, search, Internet of Things (IoT) as well as
sessions with tips and lessons learned on building a healthy open
source community.

Check out the program: https://eu.communityovercode.org/program/

Keynote speaker highlights for Community Over Code Europe include:

* Dirk-Willem Van Gulik, VP of Public Policy at the Apache Software
Foundation, will discuss the Cyber Resiliency Act and its impact on
open source (All your code belongs to Policy Makers, Politicians, and
the Law).

* Dr. Sherae Daniel will share the results of her study on the impact
of self-promotion for open source software developers (To Toot or not
to Toot, that is the question).

* Asim Hussain, Executive Director of the Green Software Foundation
will present a framework they have developed for quantifying the
environmental impact of software (Doing for Sustainability what Open
Source did for Software).

* Ruth Ikegah will discuss the growth of the open source movement in
Africa (From Local Roots to Global Impact: Building an Inclusive Open
Source Community in Africa).

* A discussion panel on EU policies and regulations affecting
specialists working in Open Source Program Offices

Additional activities

* Poster sessions: We invite you to stop by our poster area and see if
the ideas presented ignite a conversation within your team.

* BOF time: Don't miss the opportunity to discuss in person with your
open source colleagues on your shared interests.

* Participants reception: At the end of the first day, we will have a
reception at the event venue. All participants are welcome to attend!

* Spontaneous talks: There is a dedicated room and social space for
having spontaneous talks and sessions. Get ready to share with your
peers.

* Lightning talks: At the end of the event we will have the awaited
lightning talks, where every participant is welcome to share and
enlighten us.

Please remember:  If you haven't applied for the visa, we will provide
the necessary letter for the process. In the unfortunate case of a
visa rejection, your ticket will be reimbursed.

See you in Bratislava,

Community Over Code EU Team


[jira] [Created] (FLINK-35342) MaterializedTableStatementITCase test can check for wrong status

2024-05-13 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35342:
---

 Summary: MaterializedTableStatementITCase test can check for wrong 
status
 Key: FLINK-35342
 URL: https://issues.apache.org/jira/browse/FLINK-35342
 Project: Flink
  Issue Type: Bug
Reporter: Ryan Skraba


* 1.20 AdaptiveScheduler / Test (module: table) 
https://github.com/apache/flink/actions/runs/9056197319/job/24879135605#step:10:12490
 
It looks like 
{{MaterializedTableStatementITCase.testAlterMaterializedTableSuspendAndResume}} 
can be flaky, where the expected status is not yet RUNNING:

{code}
Error: 03:24:03 03:24:03.902 [ERROR] Tests run: 6, Failures: 1, Errors: 0, 
Skipped: 0, Time elapsed: 26.78 s <<< FAILURE! -- in 
org.apache.flink.table.gateway.service.MaterializedTableStatementITCase
Error: 03:24:03 03:24:03.902 [ERROR] 
org.apache.flink.table.gateway.service.MaterializedTableStatementITCase.testAlterMaterializedTableSuspendAndResume(Path,
 RestClusterClient) -- Time elapsed: 3.850 s <<< FAILURE!
May 13 03:24:03 org.opentest4j.AssertionFailedError: 
May 13 03:24:03 
May 13 03:24:03 expected: "RUNNING"
May 13 03:24:03  but was: "CREATED"
May 13 03:24:03 at 
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
May 13 03:24:03 at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
May 13 03:24:03 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
May 13 03:24:03 at 
org.apache.flink.table.gateway.service.MaterializedTableStatementITCase.testAlterMaterializedTableSuspendAndResume(MaterializedTableStatementITCase.java:650)
May 13 03:24:03 at java.lang.reflect.Method.invoke(Method.java:498)
May 13 03:24:03 at 
java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
May 13 03:24:03 at 
java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
May 13 03:24:03 at 
java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
May 13 03:24:03 at 
java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
May 13 03:24:03 at 
java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
May 13 03:24:03 
May 13 03:24:04 03:24:04.270 [INFO] 
May 13 03:24:04 03:24:04.270 [INFO] Results:
May 13 03:24:04 03:24:04.270 [INFO] 
Error: 03:24:04 03:24:04.270 [ERROR] Failures: 
Error: 03:24:04 03:24:04.271 [ERROR]   
MaterializedTableStatementITCase.testAlterMaterializedTableSuspendAndResume:650 
May 13 03:24:04 expected: "RUNNING"
May 13 03:24:04  but was: "CREATED"
May 13 03:24:04 03:24:04.271 [INFO] 
Error: 03:24:04 03:24:04.271 [ERROR] Tests run: 82, Failures: 1, Errors: 0, 
Skipped: 0
{code}
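A common deflaking direction for this kind of assertion is to poll for the expected status with a deadline instead of asserting the first observed value. The sketch below is hypothetical (the {{fetchStatus}} supplier stands in for the real gateway call, and {{awaitStatus}} is an invented helper), but it shows the pattern:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

public class AwaitStatus {
    // Poll the supplier until it returns the expected status or the
    // deadline passes; return the last observed status either way.
    static String awaitStatus(Supplier<String> fetchStatus, String expected, Duration timeout)
            throws InterruptedException {
        Instant deadline = Instant.now().plus(timeout);
        String last = fetchStatus.get();
        while (!expected.equals(last) && Instant.now().isBefore(deadline)) {
            Thread.sleep(100); // back off briefly between polls
            last = fetchStatus.get();
        }
        return last;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated job that reports CREATED twice before reaching RUNNING.
        String[] states = {"CREATED", "CREATED", "RUNNING"};
        int[] i = {0};
        Supplier<String> fetch = () -> states[Math.min(i[0]++, states.length - 1)];
        System.out.println(awaitStatus(fetch, "RUNNING", Duration.ofSeconds(5)));
    }
}
```

Asserting on the polled result tolerates the brief CREATED window without masking a job that genuinely never starts.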





[jira] [Created] (FLINK-35339) Compilation timeout while building flink-dist

2024-05-13 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35339:
---

 Summary: Compilation timeout while building flink-dist
 Key: FLINK-35339
 URL: https://issues.apache.org/jira/browse/FLINK-35339
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.19.1
Reporter: Ryan Skraba


* 1.19 Java 17 / Test (module: python) 
https://github.com/apache/flink/actions/runs/9040330904/job/24844527283#step:10:14325

The CI pipeline fails with:

{code}
May 11 02:44:25 Process exited with EXIT CODE: 143.
May 11 02:44:25 Trying to KILL watchdog (49546).
May 11 02:44:25 
==
May 11 02:44:25 Compilation failure detected, skipping test execution.
May 11 02:44:25 
==
{code}

It looks like this is due to a failed network connection while building 
src/assemblies/bin.xml :

{code}
May 11 02:44:25java.lang.Thread.State: RUNNABLE
May 11 02:44:25 at sun.nio.ch.Net.connect0(java.base@17.0.7/Native 
Method)
May 11 02:44:25 at sun.nio.ch.Net.connect(java.base@17.0.7/Net.java:579)
May 11 02:44:25 at sun.nio.ch.Net.connect(java.base@17.0.7/Net.java:568)
May 11 02:44:25 at 
sun.nio.ch.NioSocketImpl.connect(java.base@17.0.7/NioSocketImpl.java:588)
May 11 02:44:25 at 
java.net.SocksSocketImpl.connect(java.base@17.0.7/SocksSocketImpl.java:327)
May 11 02:44:25 at 
java.net.Socket.connect(java.base@17.0.7/Socket.java:633)
May 11 02:44:25 at 
org.apache.maven.wagon.providers.http.httpclient.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:368)
May 11 02:44:25 at 
org.apache.maven.wagon.providers.http.httpclient.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
May 11 02:44:25 at 
org.apache.maven.wagon.providers.http.httpclient.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
May 11 02:44:25 at 
org.apache.maven.wagon.providers.http.httpclient.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
May 11 02:44:25 at 
org.apache.maven.wagon.providers.http.httpclient.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
May 11 02:44:25 at 
org.apache.maven.wagon.providers.http.httpclient.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
May 11 02:44:25 at 
org.apache.maven.wagon.providers.http.httpclient.impl.execchain.RetryExec.execute(RetryExec.java:89)
{code}





[jira] [Created] (FLINK-35335) StateCheckpointedITCase failed fatally with 127 exit code

2024-05-13 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35335:
---

 Summary: StateCheckpointedITCase failed fatally with 127 exit code
 Key: FLINK-35335
 URL: https://issues.apache.org/jira/browse/FLINK-35335
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.19.1
Reporter: Ryan Skraba


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59499=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=ea7cf968-e585-52cb-e0fc-f48de023a7ca=8379

{code}
May 13 01:50:22 01:50:22.272 [INFO] Tests run: 6, Failures: 0, Errors: 0, 
Skipped: 0, Time elapsed: 30.03 s -- in 
org.apache.flink.test.streaming.runtime.CacheITCase
May 13 01:50:23 01:50:23.142 [INFO] Tests run: 1, Failures: 0, Errors: 0, 
Skipped: 0, Time elapsed: 5.234 s -- in 
org.apache.flink.test.streaming.experimental.CollectITCase
May 13 01:50:23 01:50:23.611 [INFO] 
May 13 01:50:23 01:50:23.611 [INFO] Results:
May 13 01:50:23 01:50:23.611 [INFO] 
May 13 01:50:23 01:50:23.611 [WARNING] Tests run: 1960, Failures: 0, Errors: 0, 
Skipped: 25
May 13 01:50:23 01:50:23.611 [INFO] 
May 13 01:50:23 01:50:23.674 [INFO] 

May 13 01:50:23 01:50:23.674 [INFO] BUILD FAILURE
May 13 01:50:23 01:50:23.674 [INFO] 

May 13 01:50:23 01:50:23.676 [INFO] Total time:  41:24 min
May 13 01:50:23 01:50:23.677 [INFO] Finished at: 2024-05-13T01:50:23Z
May 13 01:50:23 01:50:23.677 [INFO] 

May 13 01:50:23 01:50:23.677 [WARNING] The requested profile "skip-webui-build" 
could not be activated because it does not exist.
May 13 01:50:23 01:50:23.678 [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:3.2.2:test (integration-tests) 
on project flink-tests: 
May 13 01:50:23 01:50:23.678 [ERROR] 
May 13 01:50:23 01:50:23.678 [ERROR] Please refer to 
/__w/2/s/flink-tests/target/surefire-reports for the individual test results.
May 13 01:50:23 01:50:23.678 [ERROR] Please refer to dump files (if any exist) 
[date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
May 13 01:50:23 01:50:23.678 [ERROR] ExecutionException The forked VM 
terminated without properly saying goodbye. VM crash or System.exit called?
May 13 01:50:23 01:50:23.678 [ERROR] Command was /bin/sh -c cd 
'/__w/2/s/flink-tests' && '/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java' 
'-XX:+UseG1GC' '-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' 
'--add-opens=java.base/java.util=ALL-UNNAMED' 
'--add-opens=java.base/java.io=ALL-UNNAMED' '-Xmx1536m' '-jar' 
'/__w/2/s/flink-tests/target/surefire/surefirebooter-20240513010926195_686.jar' 
'/__w/2/s/flink-tests/target/surefire' '2024-05-13T01-09-20_665-jvmRun1' 
'surefire-20240513010926195_684tmp' 'surefire_206-20240513010926195_685tmp'
May 13 01:50:23 01:50:23.679 [ERROR] Error occurred in starting fork, check 
output in log
May 13 01:50:23 01:50:23.679 [ERROR] Process Exit Code: 127
May 13 01:50:23 01:50:23.679 [ERROR] Crashed tests:
May 13 01:50:23 01:50:23.679 [ERROR] 
org.apache.flink.test.checkpointing.StateCheckpointedITCase
May 13 01:50:23 01:50:23.679 [ERROR] 
org.apache.maven.surefire.booter.SurefireBooterForkException: 
ExecutionException The forked VM terminated without properly saying goodbye. VM 
crash or System.exit called?
May 13 01:50:23 01:50:23.679 [ERROR] Command was /bin/sh -c cd 
'/__w/2/s/flink-tests' && '/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java' 
'-XX:+UseG1GC' '-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' 
'--add-opens=java.base/java.util=ALL-UNNAMED' 
'--add-opens=java.base/java.io=ALL-UNNAMED' '-Xmx1536m' '-jar' 
'/__w/2/s/flink-tests/target/surefire/surefirebooter-20240513010926195_686.jar' 
'/__w/2/s/flink-tests/target/surefire' '2024-05-13T01-09-20_665-jvmRun1' 
'surefire-20240513010926195_684tmp' 'surefire_206-20240513010926195_685tmp'
May 13 01:50:23 01:50:23.679 [ERROR] Error occurred in starting fork, check 
output in log
May 13 01:50:23 01:50:23.679 [ERROR] Process Exit Code: 127
May 13 01:50:23 01:50:23.679 [ERROR] Crashed tests:
May 13 01:50:23 01:50:23.679 [ERROR] 
org.apache.flink.test.checkpointing.StateCheckpointedITCase
May 13 01:50:23 01:50:23.679 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:456)
May 13 01:50:23 01:50:23.679 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:418)
May 13 01:50:23 01:50:23.679 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:297)
May 13 01:50:23 01:50:23.679 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:250)
{code}

In the maven logs, {{runCheckpointedProgram[FailoverStrategy: 
RestartPipelinedRegionFailoverStrategy]}} is s

[jira] [Created] (FLINK-35284) Streaming File Sink end-to-end test times out

2024-05-02 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35284:
---

 Summary: Streaming File Sink end-to-end test times out
 Key: FLINK-35284
 URL: https://issues.apache.org/jira/browse/FLINK-35284
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.20.0
Reporter: Ryan Skraba


1.20 e2e_2_cron_adaptive_scheduler 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59303=logs=fb37c667-81b7-5c22-dd91-846535e99a97=011e961e-597c-5c96-04fe-7941c8b83f23=3076

{code}
May 01 01:08:42 Test (pid: 127498) did not finish after 900 seconds.
May 01 01:08:42 Printing Flink logs and killing it:
{code}

This looks like a consequence of hundreds of {{RecipientUnreachableException}}s 
like: 

{code}
2024-05-01 00:55:00,496 WARN  
org.apache.flink.runtime.resourcemanager.slotmanager.DefaultSlotStatusSyncer [] 
- Slot allocation for allocation 2ec550d8331cd53c32fd899e1e9a0fa5 for job 
5654b195450b352be998673f1637fc43 failed.
org.apache.flink.runtime.rpc.exceptions.RecipientUnreachableException: Could 
not send message [RemoteRpcInvocation(TaskExecutorGateway.requestSlot(SlotID, 
JobID, AllocationID, ResourceProfile, String, ResourceManagerId, Time))] from 
sender [Actor[pekko://flink/temp/taskmanager_0$De]] to recipient 
[Actor[pekko.ssl.tcp://flink@localhost:40665/user/rpc/taskmanager_0#-299862847]],
 because the recipient is unreachable. This can either mean that the recipient 
has been terminated or that the remote RpcService is currently not reachable.
at 
org.apache.flink.runtime.rpc.pekko.DeadLettersActor.handleDeadLetter(DeadLettersActor.java:61)
 ~[flink-rpc-akkafe85d469-8ced-4732-922e-62c82b554871.jar:1.20-SNAPSHOT]
at 
org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) 
~[flink-rpc-akkafe85d469-8ced-4732-922e-62c82b554871.jar:1.20-SNAPSHOT]
at 
org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) 
~[flink-rpc-akkafe85d469-8ced-4732-922e-62c82b554871.jar:1.20-SNAPSHOT]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:127) 
~[flink-rpc-akkafe85d469-8ced-4732-922e-62c82b554871.jar:1.20-SNAPSHOT]
{code}







[jira] [Created] (FLINK-35276) SortCodeGeneratorTest.testMultiKeys fails on negative zero

2024-04-30 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35276:
---

 Summary: SortCodeGeneratorTest.testMultiKeys fails on negative zero
 Key: FLINK-35276
 URL: https://issues.apache.org/jira/browse/FLINK-35276
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.20.0, 1.19.1
Reporter: Ryan Skraba


1.19 AdaptiveScheduler / Test (module: table) 
[https://github.com/apache/flink/actions/runs/8864296211/job/24339523745#step:10:10757]

SortCodeGeneratorTest can fail if one of the generated random row values is 
-0.0f.
{code:java}
Apr 28 02:38:03 expect: +I(,SqlRawValue{?},0.0,false); actual: 
+I(,SqlRawValue{?},-0.0,false)
Apr 28 02:38:03 expect: +I(,SqlRawValue{?},-0.0,false); actual: 
+I(,SqlRawValue{?},0.0,false)
...

...
Apr 28 02:38:04 expect: +I(,null,4.9695407E17,false); actual: 
+I(,null,4.9695407E17,false)
Apr 28 02:38:04 expect: +I(,null,-3.84924672E18,false); actual: 
+I(,null,-3.84924672E18,false)
Apr 28 02:38:04 types: [[RAW('java.lang.Integer', ?), FLOAT, BOOLEAN]]
Apr 28 02:38:04 keys: [0, 1]] 
Apr 28 02:38:04 expected: 0.0f
Apr 28 02:38:04  but was: -0.0f
Apr 28 02:38:04 at 
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
Apr 28 02:38:04 at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
Apr 28 02:38:04 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
Apr 28 02:38:04 at 
org.apache.flink.table.planner.codegen.SortCodeGeneratorTest.testInner(SortCodeGeneratorTest.java:632)
Apr 28 02:38:04 at 
org.apache.flink.table.planner.codegen.SortCodeGeneratorTest.testMultiKeys(SortCodeGeneratorTest.java:143)
Apr 28 02:38:04 at java.lang.reflect.Method.invoke(Method.java:498)
{code}

In the test code, this is extremely unlikely to occur (one in 2²⁴?) but *has* 
happened at this line (when the {{rnd.nextFloat()}} is {{0.0f}} and 
{{rnd.nextLong()}} is negative):

[https://github.com/apache/flink/blob/e7ce0a2969633168b9395c683921aa49362ad7a4/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/codegen/SortCodeGeneratorTest.java#L255]

We can reproduce the failure by changing how likely {{0.0f}} is to be generated 
at that line.
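The root of the assertion failure is that {{0.0f}} and {{-0.0f}} compare equal with {{==}} but are distinguished by the bit-level total ordering that {{Float.compare}} uses. A minimal standalone demonstration (the class name is illustrative, not Flink code):

```java
public class NegativeZeroDemo {
    public static void main(String[] args) {
        float pos = 0.0f;
        float neg = -0.0f;
        // Numerically equal under IEEE 754 comparison...
        System.out.println(pos == neg);
        // ...but Float.compare orders -0.0f strictly before 0.0f,
        // so a sort key of -0.0f is not interchangeable with 0.0f.
        System.out.println(Float.compare(pos, neg));
        // One way -0.0f can arise from random generation: multiplying
        // a zero float by a negative value yields negative zero.
        long l = -123456789L;
        System.out.println(0.0f * l);
    }
}
```

This is why the generated and sorted rows can disagree only in the sign of zero while still failing the equality check.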





[jira] [Created] (FLINK-35207) Kubernetes session E2E test fails to fetch packages.

2024-04-22 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35207:
---

 Summary: Kubernetes session E2E test fails to fetch packages.
 Key: FLINK-35207
 URL: https://issues.apache.org/jira/browse/FLINK-35207
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.18.2
Reporter: Ryan Skraba


1.18 Default (Java 8) / E2E (group 1) 
https://github.com/apache/flink/commit/aacc735806acf1d63fa732706e079bc2ca1bb4fc/checks/24027142976/logs

Looks like some flakiness when fetching packages to install (and just to track 
if this happens again)

{code}
2024-04-19T14:28:15.9116531Z Apr 19 14:28:15 
==
2024-04-19T14:28:15.9117204Z Apr 19 14:28:15 Running 'Run kubernetes session 
test (custom fs plugin)'
2024-04-19T14:28:15.9118209Z Apr 19 14:28:15 
==
2024-04-19T14:28:15.9119866Z Apr 19 14:28:15 TEST_DATA_DIR: 
/home/runner/work/flink/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-15907928199
2024-04-19T14:28:16.1477984Z Apr 19 14:28:16 Flink dist directory: 
/home/runner/work/flink/flink/flink-dist/target/flink-1.18-SNAPSHOT-bin/flink-1.18-SNAPSHOT
2024-04-19T14:28:16.1546131Z Apr 19 14:28:16 Flink dist directory: 
/home/runner/work/flink/flink/flink-dist/target/flink-1.18-SNAPSHOT-bin/flink-1.18-SNAPSHOT
2024-04-19T14:28:16.1670878Z Apr 19 14:28:16 Docker version 24.0.9, build 
2936816
2024-04-19T14:28:16.5441575Z Apr 19 14:28:16 docker-compose version 1.29.2, 
build 5becea4c
2024-04-19T14:28:16.7073581Z Apr 19 14:28:16 Reading package lists...
2024-04-19T14:28:16.8529977Z Apr 19 14:28:16 Building dependency tree...
2024-04-19T14:28:16.8541118Z Apr 19 14:28:16 Reading state information...
2024-04-19T14:28:16.9872695Z Apr 19 14:28:16 conntrack is already the newest 
version (1:1.4.6-2build2).
2024-04-19T14:28:16.9873637Z Apr 19 14:28:16 0 upgraded, 0 newly installed, 0 
to remove and 20 not upgraded.
2024-04-19T14:28:17.5567699Z 2024-04-19 14:28:17 
URL:https://objects.githubusercontent.com/github-production-release-asset-2e65be/80172100/7186c302-3766-4ed5-920a-f85c9d6334ac?X-Amz-Algorithm=AWS4-HMAC-SHA256=AKIAVCODYLSA53PQK4ZA%2F20240419%2Fus-east-1%2Fs3%2Faws4_request=20240419T142817Z=300=fe759ee1ce1eb3ebeaee7d8e714aedcedbc9035c75bc656f3ecda57836820bdf=host_id=0_id=0_id=80172100=attachment%3B%20filename%3Dcrictl-v1.24.2-linux-amd64.tar.gz=application%2Foctet-stream
 [14553934/14553934] -> "crictl-v1.24.2-linux-amd64.tar.gz" [1]
2024-04-19T14:28:17.5668524Z Apr 19 14:28:17 crictl
2024-04-19T14:28:18.1236206Z 2024-04-19 14:28:18 
URL:https://objects.githubusercontent.com/github-production-release-asset-2e65be/318491505/e304ee45-ccad-4438-bc2c-039c8f6755d1?X-Amz-Algorithm=AWS4-HMAC-SHA256=AKIAVCODYLSA53PQK4ZA%2F20240419%2Fus-east-1%2Fs3%2Faws4_request=20240419T142817Z=300=43ca222d979f2595126d94c344045c628c246597bfc9339d6b4dbf223e8b6be3=host_id=0_id=0_id=318491505=attachment%3B%20filename%3Dcri-dockerd-0.2.3.amd64.tgz=application%2Foctet-stream
 [23042323/23042323] -> "cri-dockerd-0.2.3.amd64.tgz.2" [1]
2024-04-19T14:28:18.1292589Z Apr 19 14:28:18 cri-dockerd/cri-dockerd
2024-04-19T14:28:18.6786614Z 2024-04-19 14:28:18 
URL:https://raw.githubusercontent.com/Mirantis/cri-dockerd/v0.2.3/packaging/systemd/cri-docker.service
 [1337/1337] -> "cri-docker.service" [1]
2024-04-19T14:28:18.8479307Z 2024-04-19 14:28:18 
URL:https://raw.githubusercontent.com/Mirantis/cri-dockerd/v0.2.3/packaging/systemd/cri-docker.socket
 [204/204] -> "cri-docker.socket" [1]
2024-04-19T14:28:19.5285798Z Apr 19 14:28:19 fs.protected_regular = 0
2024-04-19T14:28:19.6167026Z Apr 19 14:28:19 minikube
2024-04-19T14:28:19.6167602Z Apr 19 14:28:19 type: Control Plane
2024-04-19T14:28:19.6168146Z Apr 19 14:28:19 host: Stopped
2024-04-19T14:28:19.6170872Z Apr 19 14:28:19 kubelet: Stopped
2024-04-19T14:28:19.6175184Z Apr 19 14:28:19 apiserver: Stopped
2024-04-19T14:28:19.6179746Z Apr 19 14:28:19 kubeconfig: Stopped
2024-04-19T14:28:19.6180518Z Apr 19 14:28:19 
2024-04-19T14:28:19.6211870Z Apr 19 14:28:19 Starting minikube ...
2024-04-19T14:28:19.6893918Z Apr 19 14:28:19 * minikube v1.28.0 on Ubuntu 22.04
2024-04-19T14:28:19.6934942Z Apr 19 14:28:19 * Using the none driver based on 
existing profile
2024-04-19T14:28:19.6951845Z Apr 19 14:28:19 * Starting control plane node 
minikube in cluster minikube
2024-04-19T14:28:19.7246076Z Apr 19 14:28:19 * Restarting existing none bare 
metal machine for "minikube" ...
2024-04-19T14:28:19.7365028Z Apr 19 14:28:19 * OS release is Ubuntu 22.04.4 LTS
2024-04-19T14:28:22.1596670Z Apr 19 14:28:22 * Preparing Kubernetes v1.25.3 on 
Docker 24.0.9 ...
2024-04-19T14:28:22.1618992Z Apr 19 14:28:22   - 
kubelet.image-gc-high-threshold=99
2024-04-19T14:28:22.1622618Z Apr 19 14:28:22   - 
kubelet.image-gc-low-threshold=98
202

[jira] [Created] (FLINK-35175) HadoopDataInputStream can't compile with Hadoop 3.2.3

2024-04-19 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35175:
---

 Summary: HadoopDataInputStream can't compile with Hadoop 3.2.3
 Key: FLINK-35175
 URL: https://issues.apache.org/jira/browse/FLINK-35175
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.20.0
Reporter: Ryan Skraba


Unfortunately, introduced in FLINK-35045: 
[PREADBYTEBUFFER|https://github.com/apache/flink/commit/a312a3bdd258e0ff7d6f94e979b32e2bc762b82f#diff-3ed57be01895ba0f792110e40f4283427c55528f11a5105b4bf34ebd4e6fef0dR182]
 was added in Hadoop releases 
[3.3.0|https://github.com/apache/hadoop/blob/rel/release-3.3.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StreamCapabilities.java#L72]
 and 
[2.10.0|https://github.com/apache/hadoop/blob/rel/release-2.10.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StreamCapabilities.java#L72].

It doesn't exist in flink.hadoop.version 
[3.2.3|https://github.com/apache/hadoop/blob/rel/release-3.2.3/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StreamCapabilities.java],
 which we are using in end-to-end tests.
{code:java}
00:23:55.093 [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile (default-compile) 
on project flink-hadoop-fs: Compilation failure: Compilation failure: 
00:23:55.093 [ERROR] 
/home/vsts/work/1/s/flink-filesystems/flink-hadoop-fs/src/main/java/org/apache/flink/runtime/fs/hdfs/HadoopDataInputStream.java:[151,63]
 cannot find symbol
00:23:55.094 [ERROR]   symbol:   variable READBYTEBUFFER
00:23:55.094 [ERROR]   location: interface 
org.apache.hadoop.fs.StreamCapabilities
00:23:55.094 [ERROR] 
/home/vsts/work/1/s/flink-filesystems/flink-hadoop-fs/src/main/java/org/apache/flink/runtime/fs/hdfs/HadoopDataInputStream.java:[182,63]
 cannot find symbol
00:23:55.094 [ERROR]   symbol:   variable PREADBYTEBUFFER
00:23:55.094 [ERROR]   location: interface 
org.apache.hadoop.fs.StreamCapabilities
00:23:55.094 [ERROR] 
/home/vsts/work/1/s/flink-filesystems/flink-hadoop-fs/src/main/java/org/apache/flink/runtime/fs/hdfs/HadoopDataInputStream.java:[183,43]
 incompatible types: long cannot be converted to 
org.apache.hadoop.io.ByteBufferPool
00:23:55.094 [ERROR] -> [Help 1] {code}
* 1.20 compile_cron_hadoop313 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59012=logs=87489130-75dc-54e4-1f45-80c30aa367a3=73da6d75-f30d-5d5a-acbe-487a9dcff678=3630
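One possible workaround direction (a sketch only, not the chosen fix) is to probe capabilities by their string names rather than the {{StreamCapabilities}} constants, since {{hasCapability(String)}} predates the constants and resolves at runtime on older Hadoop versions. The interface below is a local stand-in for {{org.apache.hadoop.fs.StreamCapabilities}} so the example is self-contained; the string value assumed here matches what {{PREADBYTEBUFFER}} carries in Hadoop 3.3+:

```java
public class CapabilityProbe {
    // Local stand-in mirroring org.apache.hadoop.fs.StreamCapabilities,
    // so this sketch compiles without a Hadoop dependency.
    interface HasCapability {
        boolean hasCapability(String capability);
    }

    // Assumed to match the value of StreamCapabilities.PREADBYTEBUFFER;
    // using the literal avoids referencing a constant absent in 3.2.3.
    static final String PREAD_BYTE_BUFFER = "in:preadbytebuffer";

    static boolean supportsPositionedByteBufferRead(HasCapability stream) {
        return stream.hasCapability(PREAD_BYTE_BUFFER);
    }

    public static void main(String[] args) {
        // A stream built against an older Hadoop reports no such capability.
        HasCapability oldHadoopStream = capability -> false;
        System.out.println(supportsPositionedByteBufferRead(oldHadoopStream));
    }
}
```

The feature then degrades gracefully at runtime instead of failing at compile time against Hadoop 3.2.3.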





[jira] [Created] (FLINK-35146) CompileAndExecuteRemotePlanITCase.testCompileAndExecutePlan

2024-04-17 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35146:
---

 Summary: 
CompileAndExecuteRemotePlanITCase.testCompileAndExecutePlan
 Key: FLINK-35146
 URL: https://issues.apache.org/jira/browse/FLINK-35146
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.19.1
Reporter: Ryan Skraba


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58960=logs=fb37c667-81b7-5c22-dd91-846535e99a97=011e961e-597c-5c96-04fe-7941c8b83f23=16690

{code}
Apr 17 06:27:47 06:27:47.363 [ERROR] Tests run: 2, Failures: 1, Errors: 0, 
Skipped: 1, Time elapsed: 64.51 s <<< FAILURE! -- in 
org.apache.flink.table.sql.CompileAndExecuteRemotePlanITCase
Apr 17 06:27:47 06:27:47.364 [ERROR] 
org.apache.flink.table.sql.CompileAndExecuteRemotePlanITCase.testCompileAndExecutePlan[executionMode]
 -- Time elapsed: 56.55 s <<< FAILURE!
Apr 17 06:27:47 org.opentest4j.AssertionFailedError: Did not get expected 
results before timeout, actual result: null. ==> expected: <true> but was: 
<false>
Apr 17 06:27:47 at 
org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
Apr 17 06:27:47 at 
org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
Apr 17 06:27:47 at 
org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
Apr 17 06:27:47 at 
org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
Apr 17 06:27:47 at 
org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:214)
Apr 17 06:27:47 at 
org.apache.flink.table.sql.SqlITCaseBase.checkResultFile(SqlITCaseBase.java:216)
Apr 17 06:27:47 at 
org.apache.flink.table.sql.SqlITCaseBase.runAndCheckSQL(SqlITCaseBase.java:149)
Apr 17 06:27:47 at 
org.apache.flink.table.sql.SqlITCaseBase.runAndCheckSQL(SqlITCaseBase.java:133)
Apr 17 06:27:47 at 
org.apache.flink.table.sql.CompileAndExecuteRemotePlanITCase.testCompileAndExecutePlan(CompileAndExecuteRemotePlanITCase.java:70)
Apr 17 06:27:47 at java.lang.reflect.Method.invoke(Method.java:498)
Apr 17 06:27:47 at 
org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:48)
Apr 17 06:27:47 at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Apr 17 06:27:47 

{code}






[jira] [Created] (FLINK-35095) ExecutionEnvironmentImplTest.testFromSource failure on GitHub CI

2024-04-12 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35095:
---

 Summary: ExecutionEnvironmentImplTest.testFromSource failure on 
GitHub CI
 Key: FLINK-35095
 URL: https://issues.apache.org/jira/browse/FLINK-35095
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.20.0
Reporter: Ryan Skraba


1.20 Java 17: Test (module: misc) 
https://github.com/apache/flink/actions/runs/8655935935/job/23735920630#step:10:3
{code}
Error: 02:29:05 02:29:05.708 [ERROR] Tests run: 5, Failures: 1, Errors: 0, 
Skipped: 0, Time elapsed: 0.360 s <<< FAILURE! -- in 
org.apache.flink.datastream.impl.ExecutionEnvironmentImplTest
Error: 02:29:05 02:29:05.708 [ERROR] 
org.apache.flink.datastream.impl.ExecutionEnvironmentImplTest.testFromSource -- 
Time elapsed: 0.131 s <<< FAILURE!
Apr 12 02:29:05 java.lang.AssertionError: 
Apr 12 02:29:05 
Apr 12 02:29:05 Expecting actual:
Apr 12 02:29:05   [47]
Apr 12 02:29:05 to contain exactly (and in same order):
Apr 12 02:29:05   [48]
Apr 12 02:29:05 but some elements were not found:
Apr 12 02:29:05   [48]
Apr 12 02:29:05 and others were not expected:
Apr 12 02:29:05   [47]
Apr 12 02:29:05 
Apr 12 02:29:05 at 
org.apache.flink.datastream.impl.ExecutionEnvironmentImplTest.testFromSource(ExecutionEnvironmentImplTest.java:97)
Apr 12 02:29:05 at 
java.base/java.lang.reflect.Method.invoke(Method.java:568)
Apr 12 02:29:05 at 
java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
Apr 12 02:29:05 at 
java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
Apr 12 02:29:05 at 
java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
Apr 12 02:29:05 at 
java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
Apr 12 02:29:05 at 
java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
Apr 12 02:29:05 at 
java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
Apr 12 02:29:05 
{code}





[jira] [Created] (FLINK-35074) SavepointITCase.testStopWithSavepointWithDrainGlobalFailoverIfSavepointAborted

2024-04-10 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35074:
---

 Summary: 
SavepointITCase.testStopWithSavepointWithDrainGlobalFailoverIfSavepointAborted
 Key: FLINK-35074
 URL: https://issues.apache.org/jira/browse/FLINK-35074
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.18.2
Reporter: Ryan Skraba


AdaptiveScheduler: Test (module: tests) 
https://github.com/apache/flink/actions/runs/8609297979/job/23593291616#step:10:7708

{code}
Error: 02:38:03 02:38:03.567 [ERROR] 
org.apache.flink.test.checkpointing.SavepointITCase.testStopWithSavepointWithDrainGlobalFailoverIfSavepointAborted
  Time elapsed: 0.62 s  <<< ERROR!
Apr 09 02:38:03 java.util.concurrent.ExecutionException: 
org.apache.flink.util.FlinkException: Stop with savepoint operation could not 
be completed.
Apr 09 02:38:03 at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
Apr 09 02:38:03 at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
Apr 09 02:38:03 at 
org.apache.flink.test.checkpointing.SavepointITCase.testStopWithSavepointWithDrainGlobalFailoverIfSavepointAborted(SavepointITCase.java:1072)
Apr 09 02:38:03 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Apr 09 02:38:03 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Apr 09 02:38:03 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Apr 09 02:38:03 at java.lang.reflect.Method.invoke(Method.java:498)
Apr 09 02:38:03 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
Apr 09 02:38:03 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Apr 09 02:38:03 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
Apr 09 02:38:03 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Apr 09 02:38:03 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Apr 09 02:38:03 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
Apr 09 02:38:03 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
Apr 09 02:38:03 at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Apr 09 02:38:03 at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
Apr 09 02:38:03 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Apr 09 02:38:03 at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
Apr 09 02:38:03 at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
Apr 09 02:38:03 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
Apr 09 02:38:03 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
Apr 09 02:38:03 at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Apr 09 02:38:03 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Apr 09 02:38:03 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Apr 09 02:38:03 at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
Apr 09 02:38:03 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
Apr 09 02:38:03 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
Apr 09 02:38:03 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
Apr 09 02:38:03 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Apr 09 02:38:03 at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
Apr 09 02:38:03 at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
Apr 09 02:38:03 at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
Apr 09 02:38:03 at 
org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
Apr 09 02:38:03 at 
org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
Apr 09 02:38:03 at 
org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
Apr 09 02:38:03 at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
Apr 09 02:38:03 at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
Apr 09 02:38:03 at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
Apr 09 02:38:03 at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:55)
Apr 09

[jira] [Created] (FLINK-35012) ChangelogNormalizeRestoreTest.testRestore failure

2024-04-04 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35012:
---

 Summary: ChangelogNormalizeRestoreTest.testRestore failure
 Key: FLINK-35012
 URL: https://issues.apache.org/jira/browse/FLINK-35012
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.20.0
Reporter: Ryan Skraba


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58716&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=11921

{code}
Apr 03 22:57:43 22:57:43.159 [ERROR] Failures: 
Apr 03 22:57:43 22:57:43.160 [ERROR]   
ChangelogNormalizeRestoreTest>RestoreTestBase.testRestore:337 
Apr 03 22:57:43 Expecting actual:
Apr 03 22:57:43   ["+I[two, 2, b]",
Apr 03 22:57:43 "+I[one, 1, a]",
Apr 03 22:57:43 "+I[three, 3, c]",
Apr 03 22:57:43 "-U[one, 1, a]",
Apr 03 22:57:43 "+U[one, 1, aa]",
Apr 03 22:57:43 "-U[three, 3, c]",
Apr 03 22:57:43 "+U[three, 3, cc]",
Apr 03 22:57:43 "-D[two, 2, b]",
Apr 03 22:57:43 "+I[four, 4, d]",
Apr 03 22:57:43 "+I[five, 5, e]",
Apr 03 22:57:43 "-U[four, 4, d]",
Apr 03 22:57:43 "+U[four, 4, dd]"]
Apr 03 22:57:43 to contain exactly in any order:
Apr 03 22:57:43   ["+I[one, 1, a]",
Apr 03 22:57:43 "+I[two, 2, b]",
Apr 03 22:57:43 "-U[one, 1, a]",
Apr 03 22:57:43 "+U[one, 1, aa]",
Apr 03 22:57:43 "+I[three, 3, c]",
Apr 03 22:57:43 "-D[two, 2, b]",
Apr 03 22:57:43 "-U[three, 3, c]",
Apr 03 22:57:43 "+U[three, 3, cc]",
Apr 03 22:57:43 "+I[four, 4, d]",
Apr 03 22:57:43 "+I[five, 5, e]",
Apr 03 22:57:43 "-U[four, 4, d]",
Apr 03 22:57:43 "+U[four, 4, dd]",
Apr 03 22:57:43 "+I[six, 6, f]",
Apr 03 22:57:43 "-D[six, 6, f]"]
Apr 03 22:57:43 but could not find the following elements:
Apr 03 22:57:43   ["+I[six, 6, f]", "-D[six, 6, f]"]
Apr 03 22:57:43 
{code}
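As background on the row-kind notation in the assertion above: {{+I}} inserts a row, {{-U}}/{{+U}} retract the old value and emit the new value for a key, and {{-D}} deletes. Materializing such a changelog is order-sensitive per key but not across keys, which is why the test checks containment "in any order". A rough sketch of the semantics (a simplification for illustration, not Flink's actual ChangelogNormalize implementation):

```python
def apply_changelog(changes):
    """Materialize a keyed table from changelog rows (kind, key, value)."""
    table = {}
    for kind, key, value in changes:
        if kind in ("+I", "+U"):    # insert / update-after: upsert the row
            table[key] = value
        elif kind in ("-U", "-D"):  # update-before / delete: retract the row
            table.pop(key, None)
    return table

rows = [("+I", "one", (1, "a")), ("-U", "one", (1, "a")),
        ("+U", "one", (1, "aa")),
        ("+I", "six", (6, "f")), ("-D", "six", (6, "f"))]
# materializes to {"one": (1, "aa")}: "six" was inserted and then deleted
```

Under this model, the missing {{+I[six, 6, f]}}/{{-D[six, 6, f]}} pair would leave the final table unchanged, but the test compares the raw changelog, so their absence still fails the assertion.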






Community over Code EU 2024: Start planning your trip!

2024-04-03 Thread Ryan Skraba
[Note: You're receiving this email because you are subscribed to one
or more project dev@ mailing lists at the Apache Software Foundation.]

Dear community,

We hope you are doing great. Are you ready for Community Over Code EU?
Check out the featured sessions, get your tickets with special
discounts, and start planning your trip.

Save your spot! Take a look at our lineup of sessions, panelists and
featured speakers and make your final choice:

* EU policies and regulations affecting open source specialists working in OSPOs

The panel will discuss how EU legislation affects the daily work of
open source operations. Panelists will cover some recent policy
updates, the challenges of staying compliant when managing open source
contribution and usage within organizations, and their personal
experiences in adapting to the changing European regulatory
environment.

* Doing for sustainability, what open source did for software

In this keynote, Asim Hussain will explain the history of the Impact
Framework, a coalition of hundreds of software practitioners offering
tangible solutions that foster meaningful change by measuring the
environmental impact of a piece of software.

Don’t forget that we have special discounts for groups, students and
Apache committers. Visit the website to discover more about these
rates.[1]

It's time to start planning your trip. Remember that we have prepared
a “How to get there” guide to help you find the best transportation
(train, bus, flight, or boat) to Bratislava from wherever you are
coming from. Take a look at the different options and please reach out
to us if you have any questions.

Rooms with a special rate are available at the Radisson Blu Carlton
Hotel, where the event will take place, and at the Park Inn Hotel,
which is only a 5-minute walk from the venue. [2] However, you are
free to choose any other accommodation around the city.

See you in Bratislava,
Community Over Code EU Team

[1]: https://eu.communityovercode.org/tickets/ "Register"
[2]: https://eu.communityovercode.org/venue/ "Where to stay"


[jira] [Created] (FLINK-35005) SqlClientITCase Failed to build JobManager image

2024-04-03 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35005:
---

 Summary: SqlClientITCase Failed to build JobManager image
 Key: FLINK-35005
 URL: https://issues.apache.org/jira/browse/FLINK-35005
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.20.0
Reporter: Ryan Skraba


jdk21 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58708&view=logs&j=dc1bf4ed-4646-531a-f094-e103042be549&t=fb3d654d-52f8-5b98-fe9d-b18dd2e2b790&l=15140

{code}
Apr 03 02:59:16 02:59:16.247 [INFO] 
---
Apr 03 02:59:16 02:59:16.248 [INFO]  T E S T S
Apr 03 02:59:16 02:59:16.248 [INFO] 
---
Apr 03 02:59:17 02:59:17.841 [INFO] Running SqlClientITCase
Apr 03 03:03:15 at 
java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312)
Apr 03 03:03:15 at 
java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843)
Apr 03 03:03:15 at 
java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808)
Apr 03 03:03:15 at 
java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188)
Apr 03 03:03:15 Caused by: 
org.apache.flink.connector.testframe.container.ImageBuildException: Failed to 
build image "flink-configured-jobmanager"
Apr 03 03:03:15 at 
org.apache.flink.connector.testframe.container.FlinkImageBuilder.build(FlinkImageBuilder.java:234)
Apr 03 03:03:15 at 
org.apache.flink.connector.testframe.container.FlinkTestcontainersConfigurator.configureJobManagerContainer(FlinkTestcontainersConfigurator.java:65)
Apr 03 03:03:15 ... 12 more
Apr 03 03:03:15 Caused by: java.lang.RuntimeException: 
com.github.dockerjava.api.exception.DockerClientException: Could not build 
image: Head 
"https://registry-1.docker.io/v2/library/eclipse-temurin/manifests/21-jre-jammy":
 received unexpected HTTP status: 500 Internal Server Error
Apr 03 03:03:15 at 
org.rnorth.ducttape.timeouts.Timeouts.callFuture(Timeouts.java:68)
Apr 03 03:03:15 at 
org.rnorth.ducttape.timeouts.Timeouts.getWithTimeout(Timeouts.java:43)
Apr 03 03:03:15 at 
org.testcontainers.utility.LazyFuture.get(LazyFuture.java:47)
Apr 03 03:03:15 at 
org.apache.flink.connector.testframe.container.FlinkImageBuilder.buildBaseImage(FlinkImageBuilder.java:255)
Apr 03 03:03:15 at 
org.apache.flink.connector.testframe.container.FlinkImageBuilder.build(FlinkImageBuilder.java:206)
Apr 03 03:03:15 ... 13 more
Apr 03 03:03:15 Caused by: 
com.github.dockerjava.api.exception.DockerClientException: Could not build 
image: Head 
"https://registry-1.docker.io/v2/library/eclipse-temurin/manifests/21-jre-jammy":
 received unexpected HTTP status: 500 Internal Server Error
Apr 03 03:03:15 at 
com.github.dockerjava.api.command.BuildImageResultCallback.getImageId(BuildImageResultCallback.java:78)
Apr 03 03:03:15 at 
com.github.dockerjava.api.command.BuildImageResultCallback.awaitImageId(BuildImageResultCallback.java:50)
Apr 03 03:03:15 at 
org.testcontainers.images.builder.ImageFromDockerfile.resolve(ImageFromDockerfile.java:159)
Apr 03 03:03:15 at 
org.testcontainers.images.builder.ImageFromDockerfile.resolve(ImageFromDockerfile.java:40)
Apr 03 03:03:15 at 
org.testcontainers.utility.LazyFuture.getResolvedValue(LazyFuture.java:19)
Apr 03 03:03:15 at 
org.testcontainers.utility.LazyFuture.get(LazyFuture.java:41)
Apr 03 03:03:15 at 
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
Apr 03 03:03:15 at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
Apr 03 03:03:15 at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
Apr 03 03:03:15 at java.base/java.lang.Thread.run(Thread.java:1583)
Apr 03 03:03:15 
{code}





[jira] [Created] (FLINK-35004) SqlGatewayE2ECase could not start container

2024-04-03 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35004:
---

 Summary: SqlGatewayE2ECase could not start container
 Key: FLINK-35004
 URL: https://issues.apache.org/jira/browse/FLINK-35004
 Project: Flink
  Issue Type: Bug
Reporter: Ryan Skraba


1.20, jdk17: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58708&view=logs&j=e8e46ef5-75cc-564f-c2bd-1797c35cbebe&t=60c49903-2505-5c25-7e46-de91b1737bea&l=15078

There is an error: "Process failed due to timeout" in 
{{SqlGatewayE2ECase.testSqlClientExecuteStatement}}.  In the maven logs, we can 
see:

{code:java}
02:57:26,979 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Image prestodb/hdp2.6-hive:10 pull took PT43.59218S
02:57:26,991 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Creating container for image: prestodb/hdp2.6-hive:10
02:57:27,032 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Container prestodb/hdp2.6-hive:10 is starting: 162069678c7d03252a42ed81ca43e1911ca7357c476a4a5de294ffe55bd83145
02:57:42,846 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Container prestodb/hdp2.6-hive:10 started in PT15.855339866S
02:57:53,447 [main] ERROR tc.prestodb/hdp2.6-hive:10 [] - Could not start container
java.lang.RuntimeException: java.net.SocketTimeoutException: timeout
    at org.apache.flink.table.gateway.containers.HiveContainer.containerIsStarted(HiveContainer.java:94) ~[test-classes/:?]
    at org.testcontainers.containers.GenericContainer.containerIsStarted(GenericContainer.java:723) ~[testcontainers-1.19.1.jar:1.19.1]
    at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:543) ~[testcontainers-1.19.1.jar:1.19.1]
    at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:354) ~[testcontainers-1.19.1.jar:1.19.1]
    at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81) ~[duct-tape-1.0.8.jar:?]
    at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:344) ~[testcontainers-1.19.1.jar:1.19.1]
    at org.apache.flink.table.gateway.containers.HiveContainer.doStart(HiveContainer.java:69) ~[test-classes/:?]
    at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:334) ~[testcontainers-1.19.1.jar:1.19.1]
    at org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1144) ~[testcontainers-1.19.1.jar:1.19.1]
    at org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:28) ~[testcontainers-1.19.1.jar:1.19.1]
    at org.junit.rules.RunRules.evaluate(RunRules.java:20) ~[junit-4.13.2.jar:4.13.2]
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) ~[junit-4.13.2.jar:4.13.2]
    at org.junit.runners.ParentRunner.run(ParentRunner.java:413) ~[junit-4.13.2.jar:4.13.2]
    at org.junit.runner.JUnitCore.run(JUnitCore.java:137) ~[junit-4.13.2.jar:4.13.2]
    at org.junit.runner.JUnitCore.run(JUnitCore.java:115) ~[junit-4.13.2.jar:4.13.2]
    at org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42) ~[junit-vintage-engine-5.10.1.jar:5.10.1]
    at org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80) ~[junit-vintage-engine-5.10.1.jar:5.10.1]
    at org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72) ~[junit-vintage-engine-5.10.1.jar:5.10.1]
    at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:198) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
    at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:169) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
    at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:93) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
    at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:58) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
    at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:141) [junit-platform-launcher-1.10.1.jar:1.10.1]
    at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:57) [junit-platform-launcher-1.10.1.jar:1.10.1]
    at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:103) [junit-platform-launcher-1.10.1.jar:1.10.1]
    at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:85) [junit-platform-launcher-1.10.1.

[jira] [Created] (FLINK-35002) GitHub action/upload-artifact@v4 can timeout

2024-04-03 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-35002:
---

 Summary: GitHub action/upload-artifact@v4 can timeout
 Key: FLINK-35002
 URL: https://issues.apache.org/jira/browse/FLINK-35002
 Project: Flink
  Issue Type: Bug
  Components: Build System
Reporter: Ryan Skraba


A timeout can occur when uploading a successfully built artifact:
 * [https://github.com/apache/flink/actions/runs/8516411871/job/23325392650]

{code:java}
2024-04-02T02:20:15.6355368Z With the provided path, there will be 1 file 
uploaded
2024-04-02T02:20:15.6360133Z Artifact name is valid!
2024-04-02T02:20:15.6362872Z Root directory input is valid!
2024-04-02T02:20:20.6975036Z Attempt 1 of 5 failed with error: Request timeout: 
/twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. Retrying 
request in 3000 ms...
2024-04-02T02:20:28.7084937Z Attempt 2 of 5 failed with error: Request timeout: 
/twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. Retrying 
request in 4785 ms...
2024-04-02T02:20:38.5015936Z Attempt 3 of 5 failed with error: Request timeout: 
/twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. Retrying 
request in 7375 ms...
2024-04-02T02:20:50.8901508Z Attempt 4 of 5 failed with error: Request timeout: 
/twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. Retrying 
request in 14988 ms...
2024-04-02T02:21:10.9028438Z ##[error]Failed to CreateArtifact: Failed to make 
request after 5 attempts: Request timeout: 
/twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact
2024-04-02T02:22:59.9893296Z Post job cleanup.
2024-04-02T02:22:59.9958844Z Post job cleanup. {code}
(This is unlikely to be something we can fix, but we can track it.)
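Incidentally, the retry delays in the log (3000, 4785, 7375, 14988 ms) follow the usual exponential-backoff-with-jitter pattern. A minimal sketch of that retry-scheduling technique (the base delay, growth factor, and jitter width here are illustrative assumptions, not the action's actual configuration):

```python
import random

def backoff_delays(attempts, base_ms=3000, factor=1.6, jitter=0.25):
    """Yield one delay (ms) per retry, growing geometrically, with jitter."""
    delay = float(base_ms)
    for _ in range(attempts):
        # Jitter spreads retries out so many clients don't retry in lockstep.
        yield delay * (1 + random.uniform(-jitter, jitter))
        delay *= factor

# With jitter disabled the schedule is deterministic and strictly increasing.
delays = list(backoff_delays(4, jitter=0.0))
```

The point of the geometric growth is to back off quickly when the远 service is struggling; the jitter matters mostly when many runners hit the same endpoint at once.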





[jira] [Created] (FLINK-34963) Compilation error in ProcessFunctionTestHarnesses

2024-03-29 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-34963:
---

 Summary: Compilation error in ProcessFunctionTestHarnesses
 Key: FLINK-34963
 URL: https://issues.apache.org/jira/browse/FLINK-34963
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.20.0
Reporter: Ryan Skraba


 

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58627&view=logs&j=64debf87-ecdb-5aef-788d-8720d341b5cb&t=f041a596-2626-58e5-69fa-facfbaf86c0f&l=6435]

At the compile step in *e2e_1_cron_jdk17 / Build Flink* step:
{code:java}
-
00:24:17.239 [ERROR] COMPILATION ERROR : 
00:24:17.239 [INFO] 
-
00:24:17.239 [ERROR] 
/home/vsts/work/1/s/flink-streaming-java/src/test/java/org/apache/flink/streaming/util/ProcessFunctionTestHarnesses.java:[55,54]
 incompatible types: cannot infer type arguments for 
org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness<>
reason: inference variable IN has incompatible equality constraints 
OUT,IN,IN
00:24:17.239 [INFO] 1 error

{code}
This is particularly curious because *e2e_2_cron_jdk17* with an identical build 
step succeeds.





[jira] [Created] (FLINK-34920) ZooKeeperLeaderRetrievalConnectionHandlingTest

2024-03-22 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-34920:
---

 Summary: ZooKeeperLeaderRetrievalConnectionHandlingTest 
 Key: FLINK-34920
 URL: https://issues.apache.org/jira/browse/FLINK-34920
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.19.1
Reporter: Ryan Skraba


[https://github.com/apache/flink/actions/runs/8384423618/job/22961979482#step:10:8939]
{code:java}
[ERROR] Process Exit Code: 2
[ERROR] Crashed tests:
[ERROR] 
org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalConnectionHandlingTest
[ERROR] at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:456)
 {code}
 





[jira] [Created] (FLINK-34919) WebMonitorEndpointTest.cleansUpExpiredExecutionGraphs fails starting REST server

2024-03-22 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-34919:
---

 Summary: WebMonitorEndpointTest.cleansUpExpiredExecutionGraphs 
fails starting REST server
 Key: FLINK-34919
 URL: https://issues.apache.org/jira/browse/FLINK-34919
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.20.0
Reporter: Ryan Skraba


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58482&view=logs&j=77a9d8e1-d610-59b3-fc2a-4766541e0e33&t=125e07e7-8de0-5c6c-a541-a567415af3ef&l=8641]
{code:java}
Mar 22 04:12:50 04:12:50.260 [INFO] Running 
org.apache.flink.runtime.webmonitor.WebMonitorEndpointTest
Mar 22 04:12:50 04:12:50.609 [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
Skipped: 0, Time elapsed: 0.318 s <<< FAILURE! -- in 
org.apache.flink.runtime.webmonitor.WebMonitorEndpointTest
Mar 22 04:12:50 04:12:50.609 [ERROR] 
org.apache.flink.runtime.webmonitor.WebMonitorEndpointTest.cleansUpExpiredExecutionGraphs
 -- Time elapsed: 0.303 s <<< ERROR!
Mar 22 04:12:50 java.net.BindException: Could not start rest endpoint on any 
port in port range 8081
Mar 22 04:12:50 at 
org.apache.flink.runtime.rest.RestServerEndpoint.start(RestServerEndpoint.java:286)
Mar 22 04:12:50 at 
org.apache.flink.runtime.webmonitor.WebMonitorEndpointTest.cleansUpExpiredExecutionGraphs(WebMonitorEndpointTest.java:69)
Mar 22 04:12:50 at java.lang.reflect.Method.invoke(Method.java:498)
Mar 22 04:12:50 at 
java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
Mar 22 04:12:50 at 
java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
Mar 22 04:12:50 at 
java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
Mar 22 04:12:50 at 
java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
Mar 22 04:12:50 at 
java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Mar 22 04:12:50  {code}
This was noted as a symptom of FLINK-22980, but it is not the same failure.
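The {{BindException}} suggests something else on the CI host was already holding port 8081. The usual way tests avoid this class of flakiness is to bind port 0 and let the OS pick a free ephemeral port. A sketch of that technique (illustration only, not how this Flink test is actually configured):

```python
import socket

def free_port():
    """Ask the OS for an unused ephemeral port by binding port 0."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))   # port 0 means "any free port"
        return s.getsockname()[1]  # the port the OS actually assigned

port = free_port()
```

Note there is still a small race between closing this probe socket and the server binding the port, so the more robust variant is to have the server itself bind port 0 and report the port it received.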





[jira] [Created] (FLINK-34911) ChangelogRecoveryRescaleITCase failed fatally with 127 exit code

2024-03-21 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-34911:
---

 Summary: ChangelogRecoveryRescaleITCase failed fatally with 127 
exit code
 Key: FLINK-34911
 URL: https://issues.apache.org/jira/browse/FLINK-34911
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.20.0
Reporter: Ryan Skraba


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58455&view=logs&j=a657ddbf-d986-5381-9649-342d9c92e7fb&t=dc085d4a-05c8-580e-06ab-21f5624dab16&l=9029]

 
{code:java}
 Mar 21 01:50:42 01:50:42.553 [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:3.2.2:test (integration-tests) 
on project flink-tests: 
Mar 21 01:50:42 01:50:42.553 [ERROR] 
Mar 21 01:50:42 01:50:42.553 [ERROR] Please refer to 
/__w/1/s/flink-tests/target/surefire-reports for the individual test results.
Mar 21 01:50:42 01:50:42.553 [ERROR] Please refer to dump files (if any exist) 
[date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
Mar 21 01:50:42 01:50:42.553 [ERROR] ExecutionException The forked VM 
terminated without properly saying goodbye. VM crash or System.exit called?
Mar 21 01:50:42 01:50:42.553 [ERROR] Command was /bin/sh -c cd 
'/__w/1/s/flink-tests' && '/usr/lib/jvm/jdk-21.0.1+12/bin/java' '-XX:+UseG1GC' 
'-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' 
'--add-opens=java.base/java.util=ALL-UNNAMED' 
'--add-opens=java.base/java.io=ALL-UNNAMED' '-Xmx1536m' '-jar' 
'/__w/1/s/flink-tests/target/surefire/surefirebooter-20240321010847189_810.jar' 
'/__w/1/s/flink-tests/target/surefire' '2024-03-21T01-08-44_720-jvmRun3' 
'surefire-20240321010847189_808tmp' 'surefire_207-20240321010847189_809tmp'
Mar 21 01:50:42 01:50:42.553 [ERROR] Error occurred in starting fork, check 
output in log
Mar 21 01:50:42 01:50:42.553 [ERROR] Process Exit Code: 127
Mar 21 01:50:42 01:50:42.553 [ERROR] Crashed tests:
Mar 21 01:50:42 01:50:42.553 [ERROR] 
org.apache.flink.test.checkpointing.ChangelogRecoveryRescaleITCase
Mar 21 01:50:42 01:50:42.553 [ERROR] 
org.apache.maven.surefire.booter.SurefireBooterForkException: 
ExecutionException The forked VM terminated without properly saying goodbye. VM 
crash or System.exit called?
Mar 21 01:50:42 01:50:42.553 [ERROR] Command was /bin/sh -c cd 
'/__w/1/s/flink-tests' && '/usr/lib/jvm/jdk-21.0.1+12/bin/java' '-XX:+UseG1GC' 
'-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' 
'--add-opens=java.base/java.util=ALL-UNNAMED' 
'--add-opens=java.base/java.io=ALL-UNNAMED' '-Xmx1536m' '-jar' 
'/__w/1/s/flink-tests/target/surefire/surefirebooter-20240321010847189_810.jar' 
'/__w/1/s/flink-tests/target/surefire' '2024-03-21T01-08-44_720-jvmRun3' 
'surefire-20240321010847189_808tmp' 'surefire_207-20240321010847189_809tmp'
Mar 21 01:50:42 01:50:42.553 [ERROR] Error occurred in starting fork, check 
output in log
Mar 21 01:50:42 01:50:42.553 [ERROR] Process Exit Code: 127
Mar 21 01:50:42 01:50:42.553 [ERROR] Crashed tests:
Mar 21 01:50:42 01:50:42.553 [ERROR] 
org.apache.flink.test.checkpointing.ChangelogRecoveryRescaleITCase
Mar 21 01:50:42 01:50:42.553 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:456)
Mar 21 01:50:42 01:50:42.553 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:418)
Mar 21 01:50:42 01:50:42.553 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:297)
Mar 21 01:50:42 01:50:42.553 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:250)
Mar 21 01:50:42 01:50:42.554 [ERROR]at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1240)
{code}
From the watchdog, only {{ChangelogRecoveryRescaleITCase}} didn't complete, specifically parameterized with an {{EmbeddedRocksDBStateBackend}} with incremental checkpointing enabled.

The base class ({{ChangelogRecoveryITCaseBase}}) starts a {{MiniClusterWithClientResource}}
{code:java}
~/Downloads/CI/logs-cron_jdk21-test_cron_jdk21_tests-1710982836$ cat watchdog| 
grep "Tests run\|Running org.apache.flink" | grep -o "org.apache.flink[^ ]*$" | 
sort | uniq -c | sort -n | head
      1 org.apache.flink.test.checkpointing.ChangelogRecoveryRescaleITCase
      2 org.apache.flink.api.connector.source.lib.NumberSequenceSourceITCase
      2 org.apache.flink.api.connector.source.lib.util.GatedRateLimiterTest
      2 
org.apache.flink.api.connector.source.lib.util.RateLimitedSourceReaderITCase
      2 org.apache.flink.api.datastream.DataStreamBatchExecutionITCase
      2 org.apache.flink.api.datastream.DataStreamCollectTestITCase{code}
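As an aside, exit code 127 conventionally means the shell could not find the command it was asked to run, which points at a broken JVM path or environment on the forked-VM command line rather than an ordinary test failure. A quick illustration of the convention (the command name is deliberately nonexistent):

```python
import subprocess

# POSIX shells report "command not found" with exit status 127.
result = subprocess.run(
    "definitely-not-a-real-command-xyz", shell=True, capture_output=True
)
# On a POSIX system, result.returncode is 127.
```

That makes the {{/usr/lib/jvm/jdk-21.0.1+12/bin/java}} path in the surefire command a good first thing to check on the agent.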
 





[jira] [Created] (FLINK-34891) RemoteStorageScannerTest causes exit 239

2024-03-20 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-34891:
---

 Summary: RemoteStorageScannerTest causes exit 239
 Key: FLINK-34891
 URL: https://issues.apache.org/jira/browse/FLINK-34891
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Network
Affects Versions: 1.20.0
Reporter: Ryan Skraba


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58432&view=logs&j=f0ac5c25-1168-55a5-07ff-0e88223afed9&t=50bf7a25-bdc4-5e56-5478-c7b4511dde53&l=9121]
{code:java}
 
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.apache.maven.lifecycle.internal.MojoExecutor.doExecute(MojoExecutor.java:351)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:215)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:171)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:163)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:56)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:294)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:192)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.apache.maven.DefaultMaven.execute(DefaultMaven.java:105)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.apache.maven.cli.MavenCli.execute(MavenCli.java:960)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.apache.maven.cli.MavenCli.doMain(MavenCli.java:293)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.apache.maven.cli.MavenCli.main(MavenCli.java:196)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:282)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:225)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:406)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:347)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.apache.maven.wrapper.BootstrapMainStarter.start(BootstrapMainStarter.java:52)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.apache.maven.wrapper.WrapperExecutor.execute(WrapperExecutor.java:161)
Mar 20 01:22:54 01:22:54.671 [ERROR]at 
org.apache.maven.wrapper.MavenWrapperMain.main(MavenWrapperMain.java:73)
Mar 20 01:22:54 01:22:54.671 [ERROR] Caused by: 
org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM 
terminated without properly saying goodbye. VM crash or System.exit called?
Mar 20 01:22:54 01:22:54.671 [ERROR] Command was /bin/sh -c cd 
'/__w/2/s/flink-runtime' && '/usr/lib/jvm/jdk-11.0.19+7/bin/java' 
'-XX:+UseG1GC' '-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' 
'--add-opens=java.base/java.util=ALL-UNNAMED' 
'--add-opens=java.base/java.lang=ALL-UNNAMED' 
'--add-opens=java.base/java.net=ALL-UNNAMED' 
'--add-opens=java.base/java.io=ALL-UNNAMED' 
'--add-opens=java.base/java.util.concurrent=ALL-UNNAMED' '-Xmx768m' '-jar' 
'/__w/2/s/flink-runtime/target/surefire/surefirebooter-20240320011505720_97.jar'
 '/__w/2/s/flink-runtime/target/surefire' '2024-03-20T01-12-28_109-jvmRun1' 
'

Wiki permissions

2024-03-18 Thread Ryan Skraba
Hello!

I updated some instructions that are useful for the
apache-flink.slack.com #builds channel -- you used to be able to get
instructions how to setup the Redirector extension from an old Slack
comment.  Since we only have a limited history with the free plan, I
copied some into the Canvas for the channel.

Would it be possible to give me edit permissions on the Flink wiki[1]?
 My committer ID is rskraba (tho' I am not a Flink committer).

Thanks, Ryan

[1]: https://cwiki.apache.org/confluence/display/FLINK/Flink+Release+Management


[jira] [Created] (FLINK-34720) Deploy Maven Snapshot failed on AZP

2024-03-18 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-34720:
---

 Summary: Deploy Maven Snapshot failed on AZP
 Key: FLINK-34720
 URL: https://issues.apache.org/jira/browse/FLINK-34720
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.20.0
Reporter: Ryan Skraba


 

There isn't any obvious reason why {{mvn: command not found}} should have 
occurred, but we saw it twice this weekend.
 * 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58352=logs=eca6b3a6-1600-56cc-916a-c549b3cde3ff=7b3c1df5-9194-5183-5ebd-5567f52d5f8f]
  
 * 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58359=logs=eca6b3a6-1600-56cc-916a-c549b3cde3ff=7b3c1df5-9194-5183-5ebd-5567f52d5f8f=36]

 
{code:java}
+ [[ tools != \t\o\o\l\s ]]
+ cd ..
+ echo 'Deploying to repository.apache.org'
+ COMMON_OPTIONS='-Prelease,docs-and-source -DskipTests 
-DretryFailedDeploymentCount=10 -Dmaven.repo.local=/__w/1/.m2/repository 
-Dmaven.wagon.http.pool=false -Dorg.slf4j.simpleLogger.showDateTime=true 
-Dorg.slf4j.simpleLogger.dateTimeFormat=HH:mm:ss.SSS 
-Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn
 --no-snapshot-updates -B   -Dgpg.skip -Drat.skip -Dcheckstyle.skip --settings 
/__w/1/s/tools/deploy-settings.xml'
+ mvn clean deploy -Prelease,docs-and-source -DskipTests 
-DretryFailedDeploymentCount=10 -Dmaven.repo.local=/__w/1/.m2/repository 
-Dmaven.wagon.http.pool=false -Dorg.slf4j.simpleLogger.showDateTime=true 
-Dorg.slf4j.simpleLogger.dateTimeFormat=HH:mm:ss.SSS 
-Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn
 --no-snapshot-updates -B -Dgpg.skip -Drat.skip -Dcheckstyle.skip --settings 
/__w/1/s/tools/deploy-settings.xml
Deploying to repository.apache.org
./releasing/deploy_staging_jars.sh: line 46: mvn: command not found

##[error]Bash exited with code '127'.
Finishing: Deploy maven snapshot
 {code}
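For reference, POSIX shells report an unresolvable command with exit status 127, which matches the {{Bash exited with code '127'}} above. A small illustrative sketch (not part of the Flink build; the bogus command name is invented) reproducing that status:

```java
/**
 * Illustrative sketch: POSIX shells exit with status 127 when a command
 * cannot be resolved on the PATH, which is what the CI step reported when
 * `mvn` was missing.
 */
public class ExitCode127Demo {
    public static void main(String[] args) throws Exception {
        // Deliberately bogus command name; /bin/sh cannot resolve it.
        Process p = new ProcessBuilder("/bin/sh", "-c", "definitely-not-a-real-command-xyz")
                .redirectErrorStream(true)
                .start();
        int exitCode = p.waitFor();
        System.out.println("exit code: " + exitCode); // 127 on POSIX systems
    }
}
```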



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34719) StreamRecordTest#testWithTimestamp fails on Azure

2024-03-18 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-34719:
---

 Summary: StreamRecordTest#testWithTimestamp fails on Azure
 Key: FLINK-34719
 URL: https://issues.apache.org/jira/browse/FLINK-34719
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.20.0
Reporter: Ryan Skraba


The ClassCastException *message* expected in StreamRecordTest#testWithTimestamp 
as well as in StreamRecordTest#testWithNoTimestamp fails to match on JDK 11, 17, 
and 21:
 * 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58352=logs=f0ac5c25-1168-55a5-07ff-0e88223afed9=50bf7a25-bdc4-5e56-5478-c7b4511dde53=10341]
 * 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58352=logs=675bf62c-8558-587e-2555-dcad13acefb5=5878eed3-cc1e-5b12-1ed0-9e7139ce0992=9828]
 * 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58352=logs=d06b80b4-9e88-5d40-12a2-18072cf60528=609ecd5a-3f6e-5d0c-2239-2096b155a4d0=9833]

{code:java}
Expecting throwable message:
Mar 16 01:35:07   "class 
org.apache.flink.streaming.runtime.streamrecord.StreamRecord cannot be cast to 
class org.apache.flink.streaming.api.watermark.Watermark 
(org.apache.flink.streaming.runtime.streamrecord.StreamRecord and 
org.apache.flink.streaming.api.watermark.Watermark are in unnamed module of 
loader 'app')"
Mar 16 01:35:07 to contain:
Mar 16 01:35:07   "cannot be cast to 
org.apache.flink.streaming.api.watermark.Watermark"
Mar 16 01:35:07 but did not.
Mar 16 01:35:07 
Mar 16 01:35:07 Throwable that failed the check:
Mar 16 01:35:07 
Mar 16 01:35:07 java.lang.ClassCastException: class 
org.apache.flink.streaming.runtime.streamrecord.StreamRecord cannot be cast to 
class org.apache.flink.streaming.api.watermark.Watermark 
(org.apache.flink.streaming.runtime.streamrecord.StreamRecord and 
org.apache.flink.streaming.api.watermark.Watermark are in unnamed module of 
loader 'app')
Mar 16 01:35:07 at 
org.apache.flink.streaming.runtime.streamrecord.StreamElement.asWatermark(StreamElement.java:92)
Mar 16 01:35:07 at 
org.assertj.core.api.ThrowableAssert.catchThrowable(ThrowableAssert.java:63)
Mar 16 01:35:07 at 
org.assertj.core.api.AssertionsForClassTypes.catchThrowable(AssertionsForClassTypes.java:892)
 {code}
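The mismatch comes from JDK 9+ appending module and classloader details to ClassCastException messages, so a substring check written against the JDK 8 wording no longer matches. An illustrative sketch (the demo class is invented, not the Flink test itself) of asserting on the class names rather than the JDK-specific sentence:

```java
/**
 * Illustrative sketch: on JDK 9+ a ClassCastException message gains a suffix
 * like "(... are in unnamed module of loader 'app')". Asserting only on the
 * class names keeps the check stable across JDK versions.
 */
public class CastMessageDemo {
    public static void main(String[] args) {
        Object o = "not an Integer";
        try {
            Integer i = (Integer) o; // guaranteed to fail at runtime
        } catch (ClassCastException e) {
            String msg = e.getMessage();
            // Version-tolerant check: class names only, not the full sentence.
            System.out.println(msg.contains("java.lang.String")
                    && msg.contains("java.lang.Integer"));
        }
    }
}
```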



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34718) PartitionedWindowed

2024-03-18 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-34718:
---

 Summary: PartitionedWindowed
 Key: FLINK-34718
 URL: https://issues.apache.org/jira/browse/FLINK-34718
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream
Affects Versions: 1.20.0
Reporter: Ryan Skraba


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58320=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=ea7cf968-e585-52cb-e0fc-f48de023a7ca=9646]

18 of the KeyedPartitionWindowedStreamITCase and 
NonKeyedPartitionWindowedStreamITCase unit tests introduced in FLINK-34543 are 
failing in the adaptive scheduler profile, with errors similar to:
{code:java}
Mar 15 01:54:12 Caused by: java.lang.IllegalStateException: The adaptive 
scheduler supports pipelined data exchanges (violated by MapPartition 
(org.apache.flink.streaming.runtime.tasks.OneInputStreamTask) -> 
ddb598ad156ed281023ba4eebbe487e3).
Mar 15 01:54:12 at 
org.apache.flink.util.Preconditions.checkState(Preconditions.java:215)
Mar 15 01:54:12 at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.assertPreconditions(AdaptiveScheduler.java:438)
Mar 15 01:54:12 at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.(AdaptiveScheduler.java:356)
Mar 15 01:54:12 at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerFactory.createInstance(AdaptiveSchedulerFactory.java:124)
Mar 15 01:54:12 at 
org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:121)
Mar 15 01:54:12 at 
org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:384)
Mar 15 01:54:12 at 
org.apache.flink.runtime.jobmaster.JobMaster.(JobMaster.java:361)
Mar 15 01:54:12 at 
org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:128)
Mar 15 01:54:12 at 
org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:100)
Mar 15 01:54:12 at 
org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112)
Mar 15 01:54:12 at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
Mar 15 01:54:12 ... 4 more
 {code}
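The failure above is raised by the eager state check in the scheduler constructor. A minimal sketch of that precondition pattern (plain Java that mirrors, but does not reuse, {{org.apache.flink.util.Preconditions}}; the flag and operator name are invented for illustration):

```java
/**
 * Illustrative sketch of the precondition pattern: the scheduler rejects the
 * job graph up front with an IllegalStateException instead of failing later
 * at runtime.
 */
public class PreconditionsSketch {
    static void checkState(boolean condition, String template, Object... args) {
        if (!condition) {
            throw new IllegalStateException(String.format(template, args));
        }
    }

    public static void main(String[] args) {
        // e.g. MapPartition introduces a blocking (non-pipelined) exchange
        boolean allExchangesPipelined = false;
        try {
            checkState(allExchangesPipelined,
                    "The adaptive scheduler supports pipelined data exchanges (violated by %s).",
                    "MapPartition");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```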



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34717) BroadcastStateITCase failed fatally with 127 exit code

2024-03-18 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-34717:
---

 Summary: BroadcastStateITCase failed fatally with 127 exit code
 Key: FLINK-34717
 URL: https://issues.apache.org/jira/browse/FLINK-34717
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.20.0
Reporter: Ryan Skraba


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58306=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba=9069]
{code:java}
 Mar 14 13:58:43 13:58:43.330 [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:3.2.2:test (integration-tests) 
on project flink-tests: 
Mar 14 13:58:43 13:58:43.330 [ERROR] 
Mar 14 13:58:43 13:58:43.330 [ERROR] Please refer to 
/__w/1/s/flink-tests/target/surefire-reports for the individual test results.
Mar 14 13:58:43 13:58:43.330 [ERROR] Please refer to dump files (if any exist) 
[date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
Mar 14 13:58:43 13:58:43.330 [ERROR] ExecutionException The forked VM 
terminated without properly saying goodbye. VM crash or System.exit called?
Mar 14 13:58:43 13:58:43.330 [ERROR] Command was /bin/sh -c cd 
'/__w/1/s/flink-tests' && '/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java' 
'-XX:+UseG1GC' '-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' 
'--add-opens=java.base/java.util=ALL-UNNAMED' 
'--add-opens=java.base/java.io=ALL-UNNAMED' '-Xmx1536m' '-jar' 
'/__w/1/s/flink-tests/target/surefire/surefirebooter-20240314132147062_959.jar' 
'/__w/1/s/flink-tests/target/surefire' '2024-03-14T13-21-44_122-jvmRun1' 
'surefire-20240314132147062_957tmp' 'surefire_254-20240314132147062_958tmp'
Mar 14 13:58:43 13:58:43.330 [ERROR] Error occurred in starting fork, check 
output in log
Mar 14 13:58:43 13:58:43.330 [ERROR] Process Exit Code: 127
Mar 14 13:58:43 13:58:43.330 [ERROR] 
org.apache.maven.surefire.booter.SurefireBooterForkException: 
ExecutionException The forked VM terminated without properly saying goodbye. VM 
crash or System.exit called?
Mar 14 13:58:43 13:58:43.330 [ERROR] Command was /bin/sh -c cd 
'/__w/1/s/flink-tests' && '/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java' 
'-XX:+UseG1GC' '-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' 
'--add-opens=java.base/java.util=ALL-UNNAMED' 
'--add-opens=java.base/java.io=ALL-UNNAMED' '-Xmx1536m' '-jar' 
'/__w/1/s/flink-tests/target/surefire/surefirebooter-20240314132147062_959.jar' 
'/__w/1/s/flink-tests/target/surefire' '2024-03-14T13-21-44_122-jvmRun1' 
'surefire-20240314132147062_957tmp' 'surefire_254-20240314132147062_958tmp'
Mar 14 13:58:43 13:58:43.330 [ERROR] Error occurred in starting fork, check 
output in log
Mar 14 13:58:43 13:58:43.330 [ERROR] Process Exit Code: 127
Mar 14 13:58:43 13:58:43.330 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:456)
Mar 14 13:58:43 13:58:43.330 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:418)
Mar 14 13:58:43 13:58:43.330 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:297)
Mar 14 13:58:43 13:58:43.331 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:250)
{code}
Looking at the watchdog, only the 
org.apache.flink.test.streaming.runtime.BroadcastStateITCase is started without 
finishing.  It has two test methods which are both successfully run, so the 
problem might be with the {{MiniClusterWithClientResource}} not shutting down.
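One way such a shutdown leak shows up: a non-daemon thread left behind by a test resource keeps the forked JVM from exiting, which surefire eventually reports as the fork terminating "without properly saying goodbye". An illustrative, self-contained sketch (the thread name is invented) for spotting such threads:

```java
/**
 * Illustrative sketch: a leaked non-daemon thread blocks JVM exit.
 * Enumerating live non-daemon threads after teardown is one way to find it.
 */
public class LeakedThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread leaked = new Thread(() -> {
            try { Thread.sleep(60_000); } catch (InterruptedException ignored) { }
        }, "leaked-cluster-thread");
        leaked.setDaemon(false);
        leaked.start();

        // Diagnostic: list non-daemon threads (other than this one) that
        // would keep the JVM alive.
        Thread.getAllStackTraces().keySet().stream()
                .filter(t -> !t.isDaemon() && t.isAlive() && t != Thread.currentThread())
                .forEach(t -> System.out.println("still running: " + t.getName()));

        leaked.interrupt(); // clean up so this demo itself exits promptly
        leaked.join();
    }
}
```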



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34673) SessionRelatedITCase#testTouchSession failure on GitHub Actions

2024-03-14 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-34673:
---

 Summary: SessionRelatedITCase#testTouchSession failure on GitHub 
Actions
 Key: FLINK-34673
 URL: https://issues.apache.org/jira/browse/FLINK-34673
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Gateway
Affects Versions: 1.19.0
Reporter: Ryan Skraba


[https://github.com/apache/flink/actions/runs/8258416388/job/22590907051#step:10:12155]
{code:java}
 Error: 03:08:21 03:08:21.304 [ERROR] 
org.apache.flink.table.gateway.rest.SessionRelatedITCase.testTouchSession -- 
Time elapsed: 0.015 s <<< FAILURE!
Mar 13 03:08:21 java.lang.AssertionError: 
Mar 13 03:08:21 
Mar 13 03:08:21 Expecting actual:
Mar 13 03:08:21   1710299301198L
Mar 13 03:08:21 to be greater than:
Mar 13 03:08:21   1710299301198L
Mar 13 03:08:21 
Mar 13 03:08:21     at 
org.apache.flink.table.gateway.rest.SessionRelatedITCase.testTouchSession(SessionRelatedITCase.java:175)
Mar 13 03:08:21     at 
java.base/java.lang.reflect.Method.invoke(Method.java:580)
Mar 13 03:08:21     at 
java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
Mar 13 03:08:21     at 
java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387)
Mar 13 03:08:21     at 
java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312)
Mar 13 03:08:21     at 
java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843)
Mar 13 03:08:21     at 
java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808)
Mar 13 03:08:21     at 
java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188)
{code}
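The equal actual/expected values above are a classic millisecond-resolution tie: two "touch" operations inside the same millisecond return identical {{System.currentTimeMillis()}} values, so a strictly-greater-than assertion fails intermittently. An illustrative sketch (not the Flink test code) of waiting out the tie:

```java
/**
 * Illustrative sketch: spin until the millisecond clock advances so that a
 * strictly-greater timestamp comparison cannot tie.
 */
public class ClockTieDemo {
    static long waitForClockTick(long previous) {
        long now = System.currentTimeMillis();
        while (now <= previous) {           // busy-wait until the clock moves
            now = System.currentTimeMillis();
        }
        return now;
    }

    public static void main(String[] args) {
        long first = System.currentTimeMillis();
        long second = waitForClockTick(first);
        System.out.println(second > first); // always true after waiting for a tick
    }
}
```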



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Ryan Skraba
Awesome news for the community -- congratulations Alex (and Happy New
Year everyone!)

Ryan

On Tue, Jan 2, 2024 at 2:55 PM Yun Tang  wrote:
>
> Congratulation to Alex and Happy New Year everyone!
>
> Best
> Yun Tang
> 
> From: Rui Fan <1996fan...@gmail.com>
> Sent: Tuesday, January 2, 2024 21:33
> To: dev@flink.apache.org 
> Cc: Alexander Fedulov 
> Subject: Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov
>
> Happy new year!
>
> Hmm, sorry for the typo in the last email.
> Congratulations Alex, well done!
>
> Best,
> Rui
>
> On Tue, 2 Jan 2024 at 20:23, Rui Fan <1996fan...@gmail.com> wrote:
>
> > Configurations Alexander!
> >
> > Best,
> > Rui
> >
> > On Tue, Jan 2, 2024 at 8:15 PM Maximilian Michels  wrote:
> >
> >> Happy New Year everyone,
> >>
> >> I'd like to start the year off by announcing Alexander Fedulov as a
> >> new Flink committer.
> >>
> >> Alex has been active in the Flink community since 2019. He has
> >> contributed more than 100 commits to Flink, its Kubernetes operator,
> >> and various connectors [1][2].
> >>
> >> Especially noteworthy are his contributions on deprecating and
> >> migrating the old Source API functions and test harnesses, the
> >> enhancement to flame graphs, the dynamic rescale time computation in
> >> Flink Autoscaling, as well as all the small enhancements Alex has
> >> contributed which make a huge difference.
> >>
> >> Beyond code contributions, Alex has been an active community member
> >> with his activity on the mailing lists [3][4], as well as various
> >> talks and blog posts about Apache Flink [5][6].
> >>
> >> Congratulations Alex! The Flink community is proud to have you.
> >>
> >> Best,
> >> The Flink PMC
> >>
> >> [1]
> >> https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
> >> [2]
> >> https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
> >> [3] https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
> >> [4] https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
> >> [5]
> >> https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
> >> [6]
> >> https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series
> >>
> >


Meet our keynote speakers and register to Community Over Code EU!

2023-12-22 Thread Ryan Skraba
[Note: You're receiving this email because you are subscribed to one or
more project dev@ mailing lists at the Apache Software Foundation.]


Merge with the ASF EUniverse! The registration for Community Over Code Europe
is finally open! Get your tickets now and save your spot!

We are happy to announce that we have confirmed the first featured speakers!
 - Asim Hussain, Executive Director at Green Software Foundation
 - Dirk-Willem Van Gulik, VP of Public Policy at The Apache Software Foundation
 - Ruth Ikega, Community Lead at CHAOSS Africa
Visit our website to learn more about this amazing lineup.

CFP is open: we are looking forward to hearing all you have to share with the
Apache Community. Please submit your talk proposal before January 12, 2024.

Interested in boosting your brand? Take a look at our prospectus and find out
the opportunities we have for you.

Be one step ahead and book your room at the hotel venue: we have a special
rate for you at the Radisson Blu Carlton, the hotel that will hold Community
Over Code EU. Learn more about the location and venue and book your
accommodation.

Should you have any questions, please do not hesitate to contact us. We wish
you Happy Holidays in the company of your loved ones! See you in Bratislava
next year!

Community Over Code EU Organizer Committee


Re: [DISCUSS] Release flink-connector-parent v1.01

2023-12-15 Thread Ryan Skraba
Hello!  I've been following this discussion (while looking and
building a lot of the connectors):

+1 (non-binding) to doing a 1.1.0 release adding the configurability
of surefire and jvm flags.

Thanks for driving this!

Ryan

On Fri, Dec 15, 2023 at 2:06 PM Etienne Chauchot  wrote:
>
> Hi PMC members,
>
> Version will be 1.1.0 and not 1.0.1 as one of the PMC members already
> created this version tag in jira and tickets are targeted to this version.
>
> Anyone for pushing my pub key to apache dist ?
>
> Thanks
>
> Etienne
>
> Le 14/12/2023 à 17:51, Etienne Chauchot a écrit :
> >
> > Hi all,
> >
> > It has been 2 weeks since the start of this release discussion. For
> > now only Sergey agreed to release. On a lazy consensus basis, let's
> > say that we leave until Monday for people to express concerns about
> > releasing connector-parent.
> >
> > In the meantime, I'm doing my environment setup and I miss the rights
> > to upload my GPG pub key to flink apache dist repo. Can one of the PMC
> > members push it ?
> >
> > Joint to this email is the updated KEYS file with my pub key added.
> >
> > Thanks
> >
> > Best
> >
> > Etienne
> >
> > Le 05/12/2023 à 16:30, Etienne Chauchot a écrit :
> >>
> >> Hi Péter,
> >>
> >> My answers are inline
> >>
> >>
> >> Best
> >>
> >> Etienne
> >>
> >>
> >> Le 05/12/2023 à 05:27, Péter Váry a écrit :
> >>> Hi Etienne,
> >>>
> >>> Which branch would you cut the release from?
> >> the parent_pom branch (consisting of a single maven pom file)
> >>> I find the flink-connector-parent branches confusing.
> >>>
> >>> If I merge a PR to the ci_utils branch, would it immediately change the CI
> >>> workflow of all of the connectors?
> >>
> >> The ci_utils branch is basically one ci.yml workflow. _testing.yml
> >> and maven test-project are both for testing the ci.yml workflow and
> >> display what it can do to connector authors.
> >>
> >> As the connectors workflows refer ci.yml as this:
> >> apache/flink-connector-shared-utils/.github/workflows/ci.yml@ci_utils,
> >> if we merge changes to ci.yml all the CIs in the connectors' repo
> >> will change.
> >>
> >>> If I merge something to the release_utils branch, would it immediately
> >>> change the release process of all of the connectors?
> >> I don't know how release-utils scripts are integrated with the
> >> connectors' code yet
> >>> I would like to add the possibility of creating Python packages for the
> >>> connectors [1]. This would consist of some common code, which should 
> >>> reside
> >>> in flink-connector-parent, like:
> >>> - scripts for running Python test - test infra. I expect that this would
> >>> evolve in time
> >>> - ci workflow - this would be more slow moving, but might change if the
> >>> infra is charging
> >>> - release scripts - this would be slow moving, but might change too.
> >>>
> >>> I think we should have a release for all of the above components, so the
> >>> connectors could move forward on their own pace.
> >>
> >>
> >> I think it is quite out of the scope of this release: here we are
> >> only talking about releasing a parent pom maven file for the connectors.
> >>
> >>> What do you think?
> >>>
> >>> Thanks,
> >>> Péter
> >>>
> >>> [1]https://issues.apache.org/jira/browse/FLINK-33528
> >>>
> >>> On Thu, Nov 30, 2023, 16:55 Etienne Chauchot  wrote:
> >>>
>  Thanks Sergey for your vote. Indeed I have listed only the PRs merged
>  since last release but there are these 2 open PRs that could be worth
>  reviewing/merging before release.
> 
>  https://github.com/apache/flink-connector-shared-utils/pull/25
> 
>  https://github.com/apache/flink-connector-shared-utils/pull/20
> 
>  Best
> 
>  Etienne
> 
> 
>  Le 30/11/2023 à 11:12, Sergey Nuyanzin a écrit :
> > thanks for volunteering Etienne
> >
> > +1 for releasing
> > however there is one more PR to enable custom jvm flags for connectors
> > in similar way it is done in Flink main repo for modules
> > It will simplify a bit support for java 17
> >
> > could we have this as well in the coming release?
> >
> >
> >
> > On Wed, Nov 29, 2023 at 11:40 AM Etienne Chauchot
> > wrote:
> >
> >> Hi all,
> >>
> >> I would like to discuss making a v1.0.1 release of
>  flink-connector-parent.
> >> Since last release, there were only 2 changes:
> >>
> >> -https://github.com/apache/flink-connector-shared-utils/pull/19
> >> (spotless addition)
> >>
> >> -https://github.com/apache/flink-connector-shared-utils/pull/26
> >> (surefire configuration)
> >>
> >> The new release would bring the ability to skip some tests in the
> >> connectors and among other things skip the archunit tests. It is
> >> important for connectors to skip archunit tests when tested against a
> >> version of Flink that changes the archunit rules leading to a change of
> >> the violation store. As there is only one 

Re: [DISCUSS] Confluent Avro support without Schema Registry access

2023-11-24 Thread Ryan Skraba
Hello Dale and Martijn, I've been looking into some schema registry
issues, and I thought I'd bring this back up.

I can *kind of* see the value in configuring the Flink job with
sufficient information that you can run and/or test without a schema
registry, but it really seems like the best way to mock having a
schema registry would be to spin up and run a schema registry
someplace where it *can* be observed and used.

I would lean towards putting effort into finding a way to run a
limited and maybe ephemeral schema registry alongside your job,
instead of adding the (potentially many) tweaks and configurations
directly in the table parameters.  Do you think this is an approach
that might be more satisfactory and useful?

All my best, Ryan


On Thu, Nov 2, 2023 at 2:00 PM Martijn Visser  wrote:
>
> Hi Dale,
>
> > Aren’t we already fairly dependent on the schema remaining consistent, 
> > because otherwise we’d need to update the table schema as well?
>
> No, because the schema can be updated with optional fields and
> depending on the compatibility mode, Flink will just consume or
> produce nulls in that case.
>
> > I’m not sure what you mean here, sorry. Are you thinking about issues if 
> > you needed to mix-and-match with both formatters at the same time? (Rather 
> > than just using the Avro formatter as I was describing)
>
> Flink doesn't distinguish a table being a source or a sink. If you
> change the Avro format to support reading Schema Registry encoded Avro
> format, you would also change it when writing it. However, in order to
> write the proper Schema Registry Avro format, you need to have the
> magic byte included.
>
> I think the entire point of the Schema Registry Avro messages is that
> there is a tight coupling towards a Schema Registry service; that's
> the point of the format. I think opening up for alternative processing
> is opening up a potential Pandora's box of issues that can be derived
> from that: (de)serialization errors, issues with schema evolution
> checks as a consumer or a producer etc. I don't see much value for the
> Flink project to go in that direction, which would be supporting edge
> cases anyway.
>
> Best regards,
>
> Martijn
>
> On Wed, Nov 1, 2023 at 10:36 PM Dale Lane  wrote:
> >
> > Thanks for the pointer to FLINK-33045 - I hadn’t spotted that. That sounds 
> > like it’d address one possible issue (where someone using Flink shouldn’t 
> > be, or perhaps doesn’t have access/permission to, register new schemas).
> >
> > I should be clear that I absolutely agree that using a schema registry is 
> > optimum. It should be the norm – it should be the default, preferred and 
> > recommended option.
> >
> > However, I think that there may still be times where the schema registry 
> > isn’t available.
> >
> > Maybe you’re using a mirrored copy of the topic on another kafka cluster 
> > and don’t have the original Kafka cluster’s schema registry available. 
> > Maybe networking restrictions means where you are running Flink doesn’t 
> > have connectivity to the schema registry. Maybe the developer using Flink 
> > doesn’t have permission for or access to the schema registry. Maybe the 
> > schema registry is currently unavailable. Maybe the developer using Flink 
> > is developing their Flink job offline, disconnected from the environment 
> > where the schema registry is running (ahead of in future deploying their 
> > finished Flink job where it will have access to the schema registry).
> >
> > It is in such circumstances that I think the approach the avro formatter 
> > offers is a useful fallback. Through the table schema, it lets you specify 
> > the shape of your data, allowing you to process it without requiring an 
> > external dependency.
> >
> > It seems to me that making it impossible to process Confluent Avro-encoded 
> > messages without access to an additional external component is too strict a 
> > restriction (as much as there are completely valid reasons for it to be a 
> > recommendation).
> >
> > And, with a very small modification to the avro formatter, it’s a 
> > restriction we could remove.
> >
> > Kind regards
> >
> > Dale
> >
> >
> >
> > From: Ryan Skraba 
> > Date: Monday, 30 October 2023 at 16:42
> > To: dev@flink.apache.org 
> > Subject: [EXTERNAL] Re: [DISCUSS] Confluent Avro support without Schema 
> > Registry access
> > Hello!  I took a look at FLINK-33045, which is somewhat related: In
> > that improvement, the author wants to control who registers schemas.
> > The Flink job would know the Avro schema to use, and would look up the
&
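For background on the "magic byte" mentioned in this thread: the documented Confluent wire format is one magic byte (0x0) plus a 4-byte big-endian schema id before the Avro payload. A self-contained sketch of stripping that framing (the sample bytes are invented):

```java
import java.nio.ByteBuffer;

/**
 * Illustrative sketch: parse the Confluent Schema Registry framing
 * (magic byte + 4-byte schema id) that precedes the Avro-encoded payload.
 * A registry-free reader would still need to strip this framing before
 * handing the bytes to a plain Avro decoder.
 */
public class ConfluentFramingSketch {
    public static void main(String[] args) {
        byte[] record = {0x0, 0, 0, 0, 42, /* avro payload */ 0x02, 0x06};

        ByteBuffer buf = ByteBuffer.wrap(record);
        byte magic = buf.get();
        if (magic != 0x0) {
            throw new IllegalArgumentException("Unknown magic byte: " + magic);
        }
        int schemaId = buf.getInt();            // ByteBuffer is big-endian by default
        byte[] avroPayload = new byte[buf.remaining()];
        buf.get(avroPayload);

        System.out.println("schemaId=" + schemaId
                + ", payloadBytes=" + avroPayload.length);
    }
}
```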

Re: Apicurio Avro format proposal

2023-11-23 Thread Ryan Skraba
Pardon me, I forgot to include that I'd seen this before as
FLINK-26654.  There's a linked JIRA with an open PR that kind of
*plugs in* 8-byte ids.  I haven't had the chance to check out Apicurio
yet, but I'm interested in schema registries in general.

All my best, Ryan

[1]: https://github.com/apache/flink/pull/21805
"[FLINK-30721][avro-confluent-registry] Enable 8byte schema id"

On Thu, Nov 23, 2023 at 10:48 AM Ryan Skraba  wrote:
>
> Hello David!
>
> In the FLIP, I'd be interested in knowing how the avro-apicurio and
> avro-confluent formats would differ!  Outside of configuration
> options, are there different features?  Would the two schema registry
> formats have a lot of common base that we could take advantage of?
>
> All my best, Ryan
>
> On Thu, Nov 23, 2023 at 10:14 AM David Radley  wrote:
> >
> > Hi Martijn,
> > Ok will do,
> >   Kind regards, David.
> >
> > From: Martijn Visser 
> > Date: Wednesday, 22 November 2023 at 21:47
> > To: dev@flink.apache.org 
> > Subject: [EXTERNAL] Re: Apicurio Avro format proposal
> > Hi David,
> >
> > Can you create a small FLIP for this?
> >
> > Best regards,
> >
> > Martijn
> >
> > On Wed, Nov 22, 2023 at 6:46 PM David Radley  
> > wrote:
> > >
> > > Hi,
> > > I would like to propose a new Apicurio Avro format.
> > > The Apicurio Avro Schema Registry (avro-apicurio) format would allow you 
> > > to read records that were serialized by the 
> > > io.apicurio.registry.serde.avro.AvroKafkaSerializer and to write records 
> > > that can in turn be read by the 
> > > io.apicurio.registry.serde.avro.AvroKafkaDeserialiser.
> > >
> > > With format options including:
> > >
> > >   *   Apicurio Registry URL
> > >   *   Artifact resolver strategy
> > >   *   ID location
> > >   *   ID encoding
> > >   *   Avro datum provider
> > >   *   Avro encoding
> > >
> > >
> > >
> > > For more details see 
> > > https://www.apicur.io/registry/docs/apicurio-registry/2.4.x/getting-started/assembly-configuring-kafka-client-serdes.html#registry-serdes-types-avro_registry
> > >
> > > I am happy to work on this,
> > >   Kind regards, David.
> > >
> > > Unless otherwise stated above:
> > >
> > > IBM United Kingdom Limited
> > > Registered in England and Wales with number 741598
> > > Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
> >
> > Unless otherwise stated above:
> >
> > IBM United Kingdom Limited
> > Registered in England and Wales with number 741598
> > Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU


Re: Apicurio Avro format proposal

2023-11-23 Thread Ryan Skraba
Hello David!

In the FLIP, I'd be interested in knowing how the avro-apicurio and
avro-confluent formats would differ!  Outside of configuration
options, are there different features?  Would the two schema registry
formats have a lot of common base that we could take advantage of?

All my best, Ryan

On Thu, Nov 23, 2023 at 10:14 AM David Radley  wrote:
>
> Hi Martijn,
> Ok will do,
>   Kind regards, David.
>
> From: Martijn Visser 
> Date: Wednesday, 22 November 2023 at 21:47
> To: dev@flink.apache.org 
> Subject: [EXTERNAL] Re: Apicurio Avro format proposal
> Hi David,
>
> Can you create a small FLIP for this?
>
> Best regards,
>
> Martijn
>
> On Wed, Nov 22, 2023 at 6:46 PM David Radley  wrote:
> >
> > Hi,
> > I would like to propose a new Apicurio Avro format.
> > The Apicurio Avro Schema Registry (avro-apicurio) format would allow you to 
> > read records that were serialized by the 
> > io.apicurio.registry.serde.avro.AvroKafkaSerializer and to write records 
> > that can in turn be read by the 
> > io.apicurio.registry.serde.avro.AvroKafkaDeserialiser.
> >
> > With format options including:
> >
> >   *   Apicurio Registry URL
> >   *   Artifact resolver strategy
> >   *   ID location
> >   *   ID encoding
> >   *   Avro datum provider
> >   *   Avro encoding
> >
> >
> >
> > For more details see 
> > https://www.apicur.io/registry/docs/apicurio-registry/2.4.x/getting-started/assembly-configuring-kafka-client-serdes.html#registry-serdes-types-avro_registry
> >
> > I am happy to work on this,
> >   Kind regards, David.
> >
> > Unless otherwise stated above:
> >
> > IBM United Kingdom Limited
> > Registered in England and Wales with number 741598
> > Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
>

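For a sense of how the proposal might surface in Flink SQL, a table definition could look roughly like the following. The format name comes from the proposal itself, but every option key shown below is an illustrative sketch of the configuration surface described above, not a settled API:

```sql
CREATE TABLE orders (
  order_id STRING,
  amount   DECIMAL(10, 2)
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'localhost:9092',
  -- Proposed format; the option names below are hypothetical, mirroring
  -- the registry URL, ID location, and encoding options listed above.
  'format' = 'avro-apicurio',
  'avro-apicurio.url' = 'http://localhost:8080/apis/registry/v2',
  'avro-apicurio.id-location' = 'HEADER',
  'avro-apicurio.encoding' = 'BINARY'
);
```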

[jira] [Created] (FLINK-33627) Bump snappy-java to 1.1.10.4 in flink-statefun

2023-11-23 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-33627:
---

 Summary: Bump snappy-java to 1.1.10.4 in flink-statefun
 Key: FLINK-33627
 URL: https://issues.apache.org/jira/browse/FLINK-33627
 Project: Flink
  Issue Type: Bug
  Components: Stateful Functions
Affects Versions: statefun-3.3.0
Reporter: Ryan Skraba


Xerial published a security alert for a Denial of Service attack that [exists 
on 
1.1.10.1|https://github.com/xerial/snappy-java/security/advisories/GHSA-55g7-9cwv-5qfv].

See FLINK-33149 for flink core and connectors.
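For the statefun build, the bump amounts to pinning the dependency to the patched version. A dependencyManagement entry along these lines would do it; the coordinates are the standard xerial ones, but exactly where the statefun pom manages this version is an assumption:

```xml
<dependencyManagement>
  <dependencies>
    <!-- Force the patched snappy-java; transitive versions before 1.1.10.4
         are affected by GHSA-55g7-9cwv-5qfv. -->
    <dependency>
      <groupId>org.xerial.snappy</groupId>
      <artifactId>snappy-java</artifactId>
      <version>1.1.10.4</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```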



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33520) FileNotFoundException when running GPUDriverTest

2023-11-10 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-33520:
---

 Summary: FileNotFoundException when running GPUDriverTest
 Key: FLINK-33520
 URL: https://issues.apache.org/jira/browse/FLINK-33520
 Project: Flink
  Issue Type: Technical Debt
  Components: Tests
Affects Versions: 1.18.0
Reporter: Ryan Skraba


I'd been running into a mysterious error running the 
{{flink-external-resources}} module tests:

{code}
java.io.FileNotFoundException: The gpu discovery script does not exist in path 
/opt/asf/flink/src/test/resources/testing-gpu-discovery.sh.
at 
org.apache.flink.externalresource.gpu.GPUDriver.<init>(GPUDriver.java:98)
at 
org.apache.flink.externalresource.gpu.GPUDriverTest.testGPUDriverWithInvalidAmount(GPUDriverTest.java:64)
at
{code}

From the command line and IntelliJ, when it seems to work, it _always_ works, 
and when it fails it _always_ fails. I finally took a moment to figure it out: 
if the {{FLINK_HOME}} environment variable is set (to a valid Flink 
distribution of any version), this test fails.

This is a very minor irritation, but it's pretty easy to fix.

The workaround is to launch the unit test in an environment where this 
environment variable is not set.
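A quick sketch of the workaround: `env -u` removes the variable from the child process's environment without touching the current shell. The distribution path and the Maven invocation below are illustrative:

```shell
# Any valid Flink distribution path in FLINK_HOME triggers the failure.
export FLINK_HOME=/opt/flink-1.18.0

# Workaround: clear the variable just for the test run.
env -u FLINK_HOME sh -c 'echo "FLINK_HOME is ${FLINK_HOME:-unset}"'

# In a Flink checkout this would be, for example:
#   env -u FLINK_HOME mvn -pl flink-external-resources test -Dtest=GPUDriverTest
```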





Re: [VOTE] Release flink-connector-opensearch v1.1.0, release candidate #1

2023-11-06 Thread Ryan Skraba
Hello! +1 (non-binding) Thanks for the release!

I've validated the source for the RC1:
* flink-connector-opensearch-1.1.0-src.tgz at r64995
* The sha512 checksum is OK.
* The source file is signed correctly.
* The signature 0F79F2AFB2351BC29678544591F9C1EC125FD8DB is found in the
KEYS file, and on https://keyserver.ubuntu.com/
* The source file is consistent with the GitHub tag v1.1.0-rc1, which
corresponds to commit 0f659cc65131c9ff7c8c35eb91f5189e80414ea1
- The files explicitly excluded by create_pristine_sources (such as
.gitignore and the submodule tools/releasing/shared) are not present.
* Has a LICENSE file and a NOTICE file
* Does not contain any compiled binaries.

* The sources can be compiled and unit tests pass with flink.version 1.17.1
and flink.version 1.18.0

* Nexus has three staged artifact ids for 1.1.0-1.17 and 1.1.0-1.18
- flink-connector-opensearch (.jar, -javadoc.jar, -sources.jar,
-tests.jar and .pom)
- flink-sql-connector-opensearch (.jar, -sources.jar and .pom)
- flink-connector-opensearch-parent (only .pom)

All my best, Ryan
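The checksum mechanics above, demonstrated on a local file; for the real check, substitute the downloaded flink-connector-opensearch-1.1.0-src.tgz and its .sha512/.asc companions:

```shell
# Stand-in for the release tarball.
echo "release contents" > artifact-src.tgz

# Generate and verify a checksum, exactly as `sha512sum -c` does for the RC.
sha512sum artifact-src.tgz > artifact-src.tgz.sha512
sha512sum -c artifact-src.tgz.sha512      # prints "artifact-src.tgz: OK"

# Signature check on a real RC (requires network; shown for reference):
#   wget -qO- https://dist.apache.org/repos/dist/release/flink/KEYS | gpg --import
#   gpg --verify flink-connector-opensearch-1.1.0-src.tgz.asc
```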

On Fri, Nov 3, 2023 at 10:29 AM Danny Cranmer  wrote:
>
> Hi everyone,
>
> Please review and vote on the release candidate #1 for the version 1.1.0 of
> flink-connector-opensearch, as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release to be deployed to dist.apache.org [2],
> which are signed with the key with fingerprint
> 0F79F2AFB2351BC29678544591F9C1EC125FD8DB [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag v1.1.0-rc1 [5],
> * website pull request listing the new release [6].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Thanks,
> Danny
>
> [1]
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353141
> [2]
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-opensearch-1.1.0-rc1/
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4] https://repository.apache.org/content/repositories/orgapacheflink-1666/
> [5] https://github.com/apache/flink-connector-opensearch/tree/v1.1.0-rc1
> [6] https://github.com/apache/flink-web/pull/694


Re: [VOTE] Release flink-connector-gcp-pubsub v3.0.2, release candidate #1

2023-11-06 Thread Ryan Skraba
Hello! +1 (non-binding)

One note: the parent pom still has 1.16.0 for the Maven property of
flink.version for both 1.17 and 1.18 releases.

I've validated the source for the RC1:
flink-connector-gcp-pubsub-3.0.2-src.tgz at r65060
* The sha512 checksum is OK.
* The source file is signed correctly.
* The signature 0F79F2AFB2351BC29678544591F9C1EC125FD8DB is found in the
KEYS file, and on https://keyserver.ubuntu.com/
* The source file is consistent with the GitHub tag v3.0.2-rc1, which
corresponds to commit 4c6be836e6c0f36ef5711f12d7b935254e7d248d
- The files explicitly excluded by create_pristine_sources (such as
.gitignore and the submodule tools/releasing/shared) are not present.
* Has a LICENSE file and a NOTICE file
* Does not contain any compiled binaries.

* The sources can be compiled and unit tests pass with flink.version 1.17.1
and flink.version 1.18.0

* Nexus has two staged artifact ids for 3.0.2-1.17 and 3.0.2-1.18
- flink-connector-gcp-pubsub (.jar, -javadoc.jar, -sources.jar and .pom)
- flink-connector-gcp-pubsub-parent (only .pom)

I did a simple smoke test on an emulated Pub/Sub with the 1.18 version.

All my best, Ryan Skraba


Call for Presentations now open: Community over Code EU 2024

2023-10-30 Thread Ryan Skraba
(Note: You are receiving this because you are subscribed to the dev@
list for one or more projects of the Apache Software Foundation.)

It's back *and* it's new!

We're excited to announce that the first edition of Community over
Code Europe (formerly known as ApacheCon EU) which will be held at the
Radisson Blu Carlton Hotel in Bratislava, Slovakia from June 03-05,
2024! This eagerly anticipated event will be our first live EU
conference since 2019.

The Call for Presentations (CFP) for Community Over Code EU 2024 is
now open at https://eu.communityovercode.org/blog/cfp-open/,
and will close 2024/01/12 23:59:59 GMT.

We welcome submissions on any topic related to the Apache Software
Foundation, Apache projects, or the communities around those projects.
We are specifically looking for presentations in the following
categories:

* API & Microservices
* Big Data Compute
* Big Data Storage
* Cassandra
* CloudStack
* Community
* Data Engineering
* Fintech
* Groovy
* Incubator
* IoT
* Performance Engineering
* Search
* Tomcat, Httpd and other servers

Additionally, we are thrilled to introduce a new feature this year: a
poster session. This addition will provide an excellent platform for
showcasing high-level projects and incubator initiatives in a visually
engaging manner. We believe this will foster lively discussions and
facilitate networking opportunities among participants.

All my best, and thanks so much for your participation,

Ryan Skraba (on behalf of the program committee)

[Countdown]: https://www.timeanddate.com/countdown/to?iso=20240112T2359=1440


Re: [DISCUSS] Confluent Avro support without Schema Registry access

2023-10-30 Thread Ryan Skraba
Hello!  I took a look at FLINK-33045, which is somewhat related: In
that improvement, the author wants to control who registers schemas.
The Flink job would know the Avro schema to use, and would look up the
ID to use in framing the Avro binary.  It uses but never changes the
schema registry.

Here it sounds like you want nearly the same thing with one more step:
if the Flink job is configured with the schema to use, it could also
be pre-configured with the ID that the schema registry knows.
Technically, it could be configured with a *set* of schemas mapped to
their IDs when the job starts, but I imagine this would be pretty
clunky.

I'm curious if you can share what customer use cases wouldn't want
access to the schema registry!  One of the reasons it exists is to
prevent systems from writing unreadable or corrupted data to a Kafka
topic (or other messaging system) -- which I think is what Martijn is
asking about.  It's unlikely to be a performance gain from hiding it.

Thanks for bringing this up for discussion!  Ryan

[FLINK-33045]: https://issues.apache.org/jira/browse/FLINK-33045
[Single Object Encoding]:
https://avro.apache.org/docs/1.11.1/specification/_print/#single-object-encoding-specification
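For reference, Confluent's variant frames each Avro record with a magic byte 0x00 and a 4-byte big-endian schema ID before the Avro binary payload. A minimal, self-contained sketch of reading that framing (the class and method names are invented for illustration, not from any Flink or Confluent API):

```java
import java.nio.ByteBuffer;

public class ConfluentFraming {
    /** Reads the 4-byte big-endian schema ID after the 0x00 magic byte. */
    public static int readSchemaId(byte[] message) {
        ByteBuffer buf = ByteBuffer.wrap(message);
        byte magic = buf.get();
        if (magic != 0) {
            throw new IllegalArgumentException("Not Confluent-framed Avro: magic byte " + magic);
        }
        return buf.getInt();
    }

    /** Returns the Avro binary payload that follows the 5-byte header. */
    public static byte[] avroPayload(byte[] message) {
        byte[] payload = new byte[message.length - 5];
        System.arraycopy(message, 5, payload, 0, payload.length);
        return payload;
    }

    public static void main(String[] args) {
        byte[] framed = {0, 0, 0, 0, 42, 1, 2, 3}; // schema ID 42, payload {1, 2, 3}
        System.out.println(readSchemaId(framed));       // 42
        System.out.println(avroPayload(framed).length); // 3
    }
}
```

A job that knows its schema-to-ID mapping up front only needs this header step before handing the payload to a regular Avro decoder.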

On Fri, Oct 27, 2023 at 3:13 PM Dale Lane  wrote:
>
> > if you strip the magic byte, and the schema has
> > evolved when you're consuming it from Flink,
> > you can end up with deserialization errors given
> > that a field might have been deleted/added/
> > changed etc.
>
> Aren’t we already fairly dependent on the schema remaining consistent, 
> because otherwise we’d need to update the table schema as well?
>
> > it wouldn't work when you actually want to
> > write avro-confluent, because that requires a
> > check when producing if you're still being compliant.
>
> I’m not sure what you mean here, sorry. Are you thinking about issues if you 
> needed to mix-and-match with both formatters at the same time? (Rather than 
> just using the Avro formatter as I was describing)
>
> Kind regards
>
> Dale
>
>
>
> From: Martijn Visser 
> Date: Friday, 27 October 2023 at 14:03
> To: dev@flink.apache.org 
> Subject: [EXTERNAL] Re: [DISCUSS] Confluent Avro support without Schema 
> Registry access
> Hi Dale,
>
> I'm struggling to understand in what cases you want to read data
> serialized in connection with Confluent Schema Registry, but can't get
> access to the Schema Registry service. It seems like a rather exotic
> situation and it beats the purposes of using a Schema Registry in the
> first place? I also doubt that it's actually really useful: if you
> strip the magic byte, and the schema has evolved when you're consuming
> it from Flink, you can end up with deserialization errors given that a
> field might have been deleted/added/changed etc. Also, it wouldn't
> work when you actually want to write avro-confluent, because that
> requires a check when producing if you're still being compliant.
>
> Best regards,
>
> Martijn
>
> On Fri, Oct 27, 2023 at 2:53 PM Dale Lane  wrote:
> >
> > TLDR:
> > We currently require a connection to a Confluent Schema Registry to be able 
> > to work with Confluent Avro data. With a small modification to the Avro 
> > formatter, I think we could also offer the ability to process this type of 
> > data without requiring access to the schema registry.
> >
> > What would people think of such an enhancement?
> >
> > -
> >
> > When working with Avro data, there are two formats available to us: avro 
> > and avro-confluent.
> >
> > avro
> > Data it supports: Avro records
> > Approach: You specify a table schema and it derives an appropriate Avro 
> > schema from this.
> >
> > avro-confluent
> > Data it supports: Confluent’s variant[1] of the Avro encoding
> > Approach: You provide connection details (URL, credentials, 
> > keystore/truststore, schema lookup strategy, etc.) for retrieving an 
> > appropriate schema from the Confluent Schema Registry.
> >
> > What this means is if you have Confluent Avro data[2] that you want to use 
> > in Flink, you currently have to use the avro-confluent format, and that 
> > means you need to provide Flink with access to your Schema Registry.
> >
> > I think there will be times where you may not want, or may not be able, to 
> > provide Flink with direct access to a Schema Registry. In such cases, it 
> > would be useful to support the same behaviour that the avro format does 
> > (i.e. allow you to explicitly specify a table schema)
> >
> > This could be achieved with a very minor modification to the avro formatter.
> >
> > For reading records, we could add an option to the formatter to highlight 
> > when records will be Confluent Avro. If that option is set, we just need 
> > the formatter to skip the first bytes with the schema ID/version (it can 
> > then use the remaining bytes with a regular Avro decoder as it does today – 
> > the existing implementation would be essentially unchanged).
> >
> > For writing records, something similar would 

[VOTE] Add JSON encoding to Avro serialization

2023-10-25 Thread Ryan Skraba
Hello!

I'm reviewing a new feature of another contributor (Dale Lane) on
FLINK-33058 that adds JSON-encoding in addition to the binary Avro
serialization format.  He addressed my original objections that JSON
encoding isn't _generally_ a best practice for Avro messages.

The discussion is pretty well-captured in the JIRA and PR, but I
wanted to give it a bit of visiblity and see if there were any strong
opinions on the subject! Given the minor nature of this feature, I
don't think it requires a FLIP.

*TL;DR*:  JSON-encoded Avro might not be ideal for production, but it
has a place for small systems and especially setting up and testing
before making the switch to binary-encoding.

All my best, Ryan

[Jira]: https://issues.apache.org/jira/browse/FLINK-33058
[PR]: https://github.com/apache/flink/pull/23395


Re: FW: RE: Close orphaned/stale PRs

2023-10-04 Thread Ryan Skraba
Hey, this has been an interesting discussion -- this is something that
has been on my mind as an open source contributor and committer (I'm
not a Flink committer).

A large number of open PRs doesn't _necessarily_ mean a project is
unhealthy or has technical debt. If it's fun and easy to get your
contribution accepted and committed, even for a small fix, you're more
likely to raise another PR, and another.  I wouldn't be surprised if
there's a natural equilibrium where adding capacity to smoothly review
and manage more PRs causes more PRs to be submitted.  Everyone wins!

I don't think there's a measure for the "average PR lifetime", or
"time to first comment", but those would be more interesting things to
know and those are the worrisome ones.

As a contributor, I'm pretty willing to wait as long as necessary (and
rebase and fix merge conflicts) if there's good communication in
place. I'm pretty patient, especially if I knew that the PR would be
looked at and merged for a specific fix version (for example).  I'd
expect simple and obvious fixes with limited scope to take less time
than a more complex, far-reaching change.  I'd probably appreciate
that the boring-cyborg welcomes me on my first PR, but I'd be pretty
irritated if any PR were closed without any human interaction.

As a reviewer or committer, it's just overwhelming to see the big
GitHub list, and sometimes it feels random just "picking one near the
top" to look at.  In projects where I have the committer role, I
sometimes feel more badly about work I'm *not* doing than the work I'm
getting done! This isn't sustainable either.  A lot of people on the
project are volunteering after hours, and grooming, reviewing and
commenting PRs shouldn't be a thankless, unending job to feel bad
about.

As a contributor, one "magic" solution that I'd love to see is a
better UI that could show (for example) tentative "review dates", like
the number at a butcher shop, and proposed reviewers.

If I was committing to reviewing a PR every day, it would be great if
I could know which ones were the best "next" candidates to review: the
one waiting longest, or a new, critical fix in my domain.  As it
stands, there's next to no chance that the PRs in the middle of the
list are going to get any attention, but closing them stands to lose
valuable work or (worse) turns off a potential contributor forever.

Taking a look at some open PRs that I authored or interacted with: I
found one that should have been closed, one that was waiting for MY
attention for a merge-squash-rebase (oops), another where I made some
requested changes and it's back in review limbo.  Unfortunately, I
don't think any of these would have been brought to my attention by a
nag-bot. I don't think I'm alone; automated emails simply get far less
attention than messages written by a person.

OK, one more thing to think about: some underrepresented groups in
tech can find it difficult to demand attention, through constant
pinging and commenting and reminding...  So finding ways to make sure
that non-squeaky wheels also get some love is a really fair goal.

There's some pretty good ideas in this conversation, and I'm really
glad to hear it being brought up!  I'd love to hear any other
brainstorming for ideas, and get the virtuous circle that David
mentioned!

All my best, Ryan







On Wed, Oct 4, 2023 at 12:03 PM David Radley  wrote:
>
> Hi,
> To add I agree with Martijn’s insights; I think we are saying similar things. 
> To progress agreed upon work, and not blanket close all stale prs,
>   Kind regards, David.
>
> From: David Radley 
> Date: Wednesday, 4 October 2023 at 10:59
> To: dev@flink.apache.org 
> Subject: [EXTERNAL] RE: Close orphaned/stale PRs
> Hi ,
> I agree Venkata this issue is bigger than closing out stale prs.
>
> We can see that issues are being raised at a rate well above the rate at 
> which they are resolved. 
> https://issues.apache.org/jira/secure/ConfigureReport.jspa?projectOrFilterId=project-12315522=daily=90=true=major=12315522=com.atlassian.jira.jira-core-reports-plugin%3Acreatedvsresolved-report_token=A5KQ-2QAV-T4JA-FDED_19ff17decb93662bafa09e4b3ffb3a385c202015_lin=Next
> Gaining over 500 issues to the backlog every 3 months.
>
> We have over 1000 open prs. This is a lot of technical debt. I came across a 
> 6 month old pr recently that had not been merged. A second Jira issue was 
> raised  for the same problem and a second pr fixed the issue (identically). 
> The first pr was still on the backlog until we noticed it.
>
> I am looking to contribute to the community to be able to identify issues I 
> can work on and then be reasonably certain they will be reviewed and merged 
> so I can build on contributions. I have worked as a maintainer and committer 
> in other communities and managed to spend some of the week addressing 
> incoming work; I am happy to do this in some capacity with the support of 
> committer(s) for Flink.  It seems to me it is a virtuous circle to enable 

[jira] [Created] (FLINK-33177) Memory leak in MockStreamingRuntimeContext

2023-10-02 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-33177:
---

 Summary: Memory leak in MockStreamingRuntimeContext
 Key: FLINK-33177
 URL: https://issues.apache.org/jira/browse/FLINK-33177
 Project: Flink
  Issue Type: Bug
  Components: Tests
Reporter: Ryan Skraba


(I noticed this when fixing FLINK-33018)

The three-argument constructor for 
[MockStreamingRuntimeContext|https://github.com/apache/flink/blob/ab26175a82a836da9edfaea6325038541e492a3e/flink-streaming-java/src/test/java/org/apache/flink/streaming/util/MockStreamingRuntimeContext.java#L42]
 has a memory leak due to a MockEnvironment being created and never closed.

You can reproduce this by running any test that uses this constructor in 
IntelliJ with a mode set to "Repeat until fail". After about 16K runs:
{code:java}
#
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x7f5f814c1000, 
16384, 0) failed; error='Not enough space' (errno=12)
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 16384 bytes for committing 
reserved memory.
# An error report file with more information is saved as:
# 
/home/ryan.skraba/working/apache/flink-connector-gcp-pubsub/flink-connector-gcp-pubsub/hs_err_pid214687.log
[154.974s][warning][os,thread] Failed to start the native thread for 
java.lang.Thread "IOManager reader thread #1"
Exception in thread "Thread-48747" java.lang.OutOfMemoryError: unable to create 
native thread: possibly out of memory or process/resource limits reached
    at java.base/java.lang.Thread.start0(Native Method)
    at java.base/java.lang.Thread.start(Thread.java:802)
    at 
org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync.<init>(IOManagerAsync.java:97)
    at 
org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync.<init>(IOManagerAsync.java:66)
    at 
org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync.<init>(IOManagerAsync.java:57)
    at 
org.apache.flink.runtime.operators.testutils.MockEnvironmentBuilder.build(MockEnvironmentBuilder.java:173)
    at 
org.apache.flink.streaming.util.MockStreamingRuntimeContext.<init>(MockStreamingRuntimeContext.java:52)
    at 
org.apache.flink.streaming.connectors.gcp.pubsub.PubSubConsumingTest.lambda$createSourceThread$0(PubSubConsumingTest.java:186)
    at java.base/java.lang.Thread.run(Thread.java:833)
[154.977s][warning][os,thread] Attempt to deallocate stack guard pages failed 
(0x7f5f816c1000-0x7f5f816c5000).
[thread 214689 also had an error]
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x7f69994e, 
65536, 0) failed; error='Not enough space' (errno=12)
Disconnected from the target VM, address: '127.0.0.1:40395', transport: 
'socket' {code}
or
{code:java}
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x7f01232ab000, 
16384, 0) failed; error='Not enough space' (errno=12)
[thread 330183 also had an error]
[21.295s][warning][os,thread] Failed to start thread "Unknown thread" - 
pthread_create failed (EAGAIN) for attributes: stacksize: 1024k, guardsize: 0k, 
detached.
[21.295s][warning][os,thread] Failed to start the native thread for 
java.lang.Thread "IOManager reader thread #1"
#
# If you would like to submit a bug report, please visit:
#   
https://bugzilla.redhat.com/enter_bug.cgi?product=Fedora=java-17-openjdk-portable=37
# {code}
This obviously isn't a big deal, since the tests that use this mock are only 
intended to be run once.  These errors can be fixed by using the four argument 
version of the constructor and explicitly closing the MockEnvironment.
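The leak pattern, reduced to a minimal stand-in (these are not the real Flink classes; the names are invented to illustrate why a constructor that creates a closeable resource internally leaks it, while caller-owned resources can be closed):

```java
// Stand-in for MockEnvironment: counts how many instances are still open.
class FakeEnvironment implements AutoCloseable {
    static int open = 0;
    FakeEnvironment() { open++; }
    @Override public void close() { open--; }
}

public class LeakDemo {
    public static void main(String[] args) throws Exception {
        // Leaky pattern: the environment is created internally and never closed,
        // like the three-argument constructor described above.
        new FakeEnvironment();

        // Fixed pattern: the caller owns the environment and closes it, like
        // the four-argument constructor plus an explicit close.
        try (FakeEnvironment env = new FakeEnvironment()) {
            // ... use env in the test ...
        }
        System.out.println(FakeEnvironment.open); // 1: only the leaked one remains
    }
}
```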





Re: [ANNOUNCE] Release 1.18.0, release candidate #0

2023-09-25 Thread Ryan Skraba
Hello!  There's a security fix that probably should be applied to 1.18
in the next RC1 : https://github.com/apache/flink/pull/23461 (bump to
snappy-java).  Do you think this would be possible to include?

All my best, Ryan

[1]: https://issues.apache.org/jira/browse/FLINK-33149 "Bump
snappy-java to 1.1.10.4"



On Mon, Sep 25, 2023 at 3:54 PM Jing Ge  wrote:
>
> Thanks Zakelly for the update! Appreciate it!
>
> @Piotr Nowojski  If you do not have any other
> concerns, I will move forward to create 1.18 rc1 and start voting. WDYT?
>
> Best regards,
> Jing
>
> On Mon, Sep 25, 2023 at 2:20 AM Zakelly Lan  wrote:
>
> > Hi Jing and everyone,
> >
> > I have conducted three rounds of benchmarking with Java11, comparing
> > release 1.18 (commit: deb07e99560[1]) with commit 6d62f9918ea[2]. The
> > results are attached[3]. Most of the tests show no obvious regression.
> > However, I did observe significant change in several tests. Upon
> > reviewing the historical results from the previous pipeline, I also
> > discovered a substantial variance in those tests, as shown in the
> > timeline pictures included in the sheet[3]. I believe this variance
> > has existed for a long time and requires further investigation, and
> > fully measuring the variance requires more rounds (15 or more). I
> > think for now it is not a blocker for release 1.18. WDYT?
> >
> >
> > Best,
> > Zakelly
> >
> > [1]
> > https://github.com/apache/flink/commit/deb07e99560b45033a629afc3f90666ad0a32feb
> > [2]
> > https://github.com/apache/flink/commit/6d62f9918ea2cbb8a10c705a25a4ff6deab60711
> > [3]
> > https://docs.google.com/spreadsheets/d/1V0-duzNTgu7H6R7kioF-TAPhlqWl7Co6Q9ikTBuaULo/edit?usp=sharing
> >
> > On Sun, Sep 24, 2023 at 11:29 AM ConradJam  wrote:
> > >
> > > +1 for testing with Java 17
> > >
> > > Jing Ge  于2023年9月24日周日 09:40写道:
> > >
> > > > +1 for testing with Java 17 too. Thanks Zakelly for your effort!
> > > >
> > > > Best regards,
> > > > Jing
> > > >
> > > > On Fri, Sep 22, 2023 at 1:01 PM Zakelly Lan 
> > wrote:
> > > >
> > > > > Hi Jing,
> > > > >
> > > > > I agree we could wait for the result with Java 11. And it should be
> > > > > available next Monday.
> > > > > Additionally, I could also build a pipeline with Java 17 later since
> > > > > it is supported in 1.18[1].
> > > > >
> > > > >
> > > > > Best regards,
> > > > > Zakelly
> > > > >
> > > > > [1]
> > > > >
> > > >
> > https://github.com/apache/flink/commit/9c1318ca7fa5b2e7b11827068ad1288483aaa464#diff-8310c97396d60e96766a936ca8680f1e2971ef486cfc2bc55ec9ca5a5333c47fR53
> > > > >
> > > > > On Fri, Sep 22, 2023 at 5:57 PM Jing Ge 
> > > > > wrote:
> > > > > >
> > > > > > Hi Zakelly,
> > > > > >
> > > > > > Thanks for your effort and the update! Since Java 8 has been
> > > > > deprecated[1],
> > > > > > let's wait for the result with Java 11. It should be available
> > after
> > > > the
> > > > > > weekend and there should be no big surprise. WDYT?
> > > > > >
> > > > > > Best regards,
> > > > > > Jing
> > > > > >
> > > > > > [1]
> > > > > >
> > > > >
> > > >
> > https://nightlies.apache.org/flink/flink-docs-master/release-notes/flink-1.15/#jdk-upgrade
> > > > > >
> > > > > > On Fri, Sep 22, 2023 at 11:26 AM Zakelly Lan <
> > zakelly@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > > > Hi everyone,
> > > > > > >
> > > > > > > I want to provide an update on the benchmark results that I have
> > been
> > > > > > > working on. After spending some time preparing the environment
> > and
> > > > > > > adjusting the benchmark script, I finally got a comparison
> > between
> > > > > > > release 1.18 (commit: 2aeb99804ba[1]) and the commit before the
> > old
> > > > > > > codespeed server went down (commit: 6d62f9918ea[2]) on openjdk8.
> > The
> > > > > > > report is attached[3]. Note that the test has only run once on
> > jdk8,
> > > > > > > so the impact of single-test fluctuations is not ruled out.
> > > > > > > Additionally, I have noticed some significant fluctuations in
> > > > specific
> > > > > > > tests when reviewing previous benchmark scores, which I have also
> > > > > > > noted in the report. Taking all of these factors into
> > consideration,
> > > > I
> > > > > > > think there is no obvious regression in release 1.18 *for now*.
> > More
> > > > > > > tests including the one on openjdk11 are on the way. Hope it
> > does not
> > > > > > > delay the release procedure.
> > > > > > >
> > > > > > > Please let me know if you have any concerns.
> > > > > > >
> > > > > > >
> > > > > > > Best,
> > > > > > > Zakelly
> > > > > > >
> > > > > > > [1]
> > > > > > >
> > > > >
> > > >
> > https://github.com/apache/flink/commit/2aeb99804ba56c008df0a1730f3246d3fea856b9
> > > > > > > [2]
> > > > > > >
> > > > >
> > > >
> > https://github.com/apache/flink/commit/6d62f9918ea2cbb8a10c705a25a4ff6deab60711
> > > > > > > [3]
> > > > > > >
> > > > >
> > > >
> > https://docs.google.com/spreadsheets/d/1-3Y974jYq_WrQNzLN-y_6lOU-NGXaIDTQBYTZd04tJ0/edit?usp=sharing
> > > > > > >
> > > > > > 

[jira] [Created] (FLINK-33149) Bump snappy-java to 1.1.10.4

2023-09-25 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-33149:
---

 Summary: Bump snappy-java to 1.1.10.4
 Key: FLINK-33149
 URL: https://issues.apache.org/jira/browse/FLINK-33149
 Project: Flink
  Issue Type: Bug
  Components: API / Core, Connectors / AWS, Connectors / HBase, 
Connectors / Kafka, Stateful Functions
Affects Versions: 1.18.0, 1.16.3, 1.17.2
Reporter: Ryan Skraba


Xerial published a security alert for a Denial of Service attack that [exists 
on 
1.1.10.1|https://github.com/xerial/snappy-java/security/advisories/GHSA-55g7-9cwv-5qfv].

This is included in flink-dist, but also in flink-statefun, and several 
connectors.





Azure pipelines CI view permissions

2023-08-31 Thread Ryan Skraba
Hey everyone,

Has something changed in the configuration for the CI page?

I can see the pipelines, but we can't see the list of recent / individual runs:

https://dev.azure.com/apache-flink/apache-flink/_build

That being said, we can still see the results of SUCCESS or FAILURE
when a PR runs, so I'm wondering if this is an intentional change!

All my best, Ryan


Re: [DISCUSS] FLIP-358: flink-avro enhancement and cleanup

2023-08-31 Thread Ryan Skraba
Hey -- I have a certain knowledge of Avro, and I'd be willing to help
out with some of these enhancements, writing tests and reviewing.  I
have a *lot* of Avro schemas available for validation!

The FLIP looks pretty good and covers the possible cases pretty
rigorously. I wasn't aware of some of the gaps you've pointed out
here!

How useful do you think the new ENUM_STRING DataType would be outside
of the Avro use case?  It seems like a good enough addition that would
solve the problem here.

A small note: I assume the AvroSchemaUtils is meant to be annotated
@PublicEvolving as well.

All my best, Ryan


On Tue, Aug 29, 2023 at 4:35 AM Becket Qin  wrote:
>
> Hi folks,
>
> I would like to start the discussion about FLIP-358[1] which proposes to
> clean up and enhance the Avro support in Flink. More specifically, it
> proposes to:
>
> 1. Make it clear what are the public APIs in flink-avro components.
> 2. Fix a few buggy cases in flink-avro
> 3. Add more supported Avro use cases out of the box.
>
> Feedbacks are welcome!
>
> Thanks
>
> Jiangjie (Becket) Qin
>
> [1]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-358%3A+flink-avro+enhancement+and+cleanup


Re: [ANNOUNCE] New Apache Flink PMC Member - Matthias Pohl

2023-08-07 Thread Ryan Skraba
Congratulations Matthias -- very well-deserved, the community is lucky to
have you <3

All my best, Ryan

On Mon, Aug 7, 2023 at 3:04 PM Lincoln Lee  wrote:

> Congratulations!
>
> Best,
> Lincoln Lee
>
>
> Feifan Wang  于2023年8月7日周一 20:13写道:
>
> > Congrats Matthias!
> >
> >
> >
> > ——
> > Name: Feifan Wang
> > Email: zoltar9...@163.com
> >
> >
> >  Replied Message 
> > | From | Matthias Pohl |
> > | Date | 08/7/2023 16:16 |
> > | To |  |
> > | Subject | Re: [ANNOUNCE] New Apache Flink PMC Member - Matthias Pohl |
> > Thanks everyone. :)
> >
> > On Mon, Aug 7, 2023 at 3:18 AM Andriy Redko  wrote:
> >
> > Congrats Matthias, well deserved!!
> >
> > DC> Congrats Matthias!
> >
> > DC> Very well deserved, thankyou for your continuous, consistent
> > contributions.
> > DC> Welcome.
> >
> > DC> Thanks,
> > DC> Danny
> >
> > DC> On Fri, Aug 4, 2023 at 9:30 AM Feng Jin 
> wrote:
> >
> > Congratulations, Matthias!
> >
> > Best regards
> >
> > Feng
> >
> > On Fri, Aug 4, 2023 at 4:29 PM weijie guo 
> > wrote:
> >
> > Congratulations, Matthias!
> >
> > Best regards,
> >
> > Weijie
> >
> >
> > Wencong Liu  于2023年8月4日周五 15:50写道:
> >
> > Congratulations, Matthias!
> >
> > Best,
> > Wencong Liu
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > At 2023-08-04 11:18:00, "Xintong Song" 
> > wrote:
> > Hi everyone,
> >
> > On behalf of the PMC, I'm very happy to announce that Matthias Pohl
> > has
> > joined the Flink PMC!
> >
> > Matthias has been consistently contributing to the project since
> > Sep
> > 2020,
> > and became a committer in Dec 2021. He mainly works in Flink's
> > distributed
> > coordination and high availability areas. He has worked on many
> > FLIPs
> > including FLIP195/270/285. He helped a lot with the release
> > management,
> > being one of the Flink 1.17 release managers and also very active
> > in
> > Flink
> > 1.18 / 2.0 efforts. He also contributed a lot to improving the
> > build
> > stability.
> >
> > Please join me in congratulating Matthias!
> >
> > Best,
> >
> > Xintong (on behalf of the Apache Flink PMC)
> >
> >
> >
> >
> >
> >
>


[jira] [Created] (FLINK-32371) Bump snappy-java to 1.1.10.1

2023-06-16 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-32371:
---

 Summary: Bump snappy-java to 1.1.10.1
 Key: FLINK-32371
 URL: https://issues.apache.org/jira/browse/FLINK-32371
 Project: Flink
  Issue Type: Improvement
  Components: Build System
Reporter: Ryan Skraba


There is a CVE in all versions of snappy prior to 1.1.10.1 
https://nvd.nist.gov/vuln/detail/CVE-2023-34455
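Not part of the original issue, but as a minimal sketch: CVE-2023-34455 affects every snappy-java release older than 1.1.10.1, so a build can be screened by version-comparing whatever `mvn dependency:tree` reports. The function name and the use of GNU `sort -V` are assumptions for illustration.

```shell
#!/bin/sh
# Hypothetical helper: succeed (exit 0) when the given snappy-java version is
# affected by CVE-2023-34455, i.e. strictly older than the fixed 1.1.10.1.
# Relies on GNU `sort -V` for version-aware ordering of dotted versions.
snappy_is_vulnerable() {
  installed="$1"
  fixed="1.1.10.1"
  # Vulnerable when the installed version differs from the fix and sorts
  # strictly before it.
  [ "$installed" != "$fixed" ] &&
    [ "$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)" = "$installed" ]
}

# The version string could come from e.g. `mvn dependency:tree | grep snappy-java`.
snappy_is_vulnerable "1.1.8.4" && echo "1.1.8.4 is affected by CVE-2023-34455"
```

Anything at or above 1.1.10.1 (including later 1.1.10.x patches) is treated as safe.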





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release flink-connector-cassandra 3.1.0, release candidate #2

2023-05-10 Thread Ryan Skraba
+1 (non-binding)

I've validated this RC: flink-connector-cassandra-3.1.0-rc2 at r61661
- The SHA512 hash is OK.
- The source file is signed correctly.
- The signature 0F79F2AFB2351BC29678544591F9C1EC125FD8DB is found in the
KEYS file.
- The source file is consistent with the Github tag v3.1.0-rc2, which
corresponds to commit
https://github.com/apache/flink-connector-cassandra/tree/83945fe41cb6e7c188dfbf656b04955142600bb2
  - The files explicitly excluded by create_pristine_sources (such as
.gitignore and the submodule tools/releasing/shared) are not present.
- Has a LICENSE file and NOTICE files.
- Does not contain any compiled binaries.
- Has 2 artifacts staged at
https://repository.apache.org/content/repositories/orgapacheflink-1631/
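The hash and signature checks above can be sketched roughly as follows. This is a minimal sketch, not the official release tooling: file names are illustrative, and the `.sha512`/`.asc` sidecar files are assumed to have been downloaded next to the artifact (the `.sha512` in coreutils "digest  filename" format, and the signer's public key already imported from the project KEYS file).

```shell
#!/bin/sh
# Verify the SHA512 checksum of a downloaded release artifact.
verify_sha512() {
  # sha512sum -c reads "<digest>  <filename>" lines and recomputes the digest.
  sha512sum -c "$1.sha512"
}

# Verify the detached GPG signature of a downloaded release artifact.
verify_signature() {
  # Requires the release manager's public key first: gpg --import KEYS
  gpg --verify "$1.asc" "$1"
}

# Example (file name illustrative):
# verify_sha512 flink-connector-cassandra-3.1.0-src.tgz
# verify_signature flink-connector-cassandra-3.1.0-src.tgz
```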

The bug that blocked RC1 (FLINK-31927) is no longer present.

Thanks!  Ryan

On Wed, May 10, 2023 at 11:51 AM Martijn Visser 
wrote:

> +1 (binding)
>
> - Validated hashes
> - Verified signature
> - Verified that no binaries exist in the source archive
> - Build the source with Maven
> - Verified licenses
> - Verified web PRs
>
> On Tue, May 9, 2023 at 11:43 PM Khanh Vu  wrote:
>
> > +1 (non-binding)
> >
> > - Verified sha512 checksum matches file archive.
> > - Verified file archive is signed and signature is authenticated.
> > - Verified no binaries exist in the source archive.
> > - Verified source archive is consistent with Github source code with tag
> > v3.1.0-rc2, at commit 83945fe41cb6e7c188dfbf656b04955142600bb2.
> > - Source built successfully with maven and integration tests passed.
> >
> > Best regards,
> > Khanh Vu
> >
> > On Tue, May 9, 2023 at 9:50 AM Etienne Chauchot 
> > wrote:
> >
> > > Hi everyone,
> > >
> > > +1 (non-binding)
> > >
> > > I checked:
> > >
> > > - release notes
> > >
> > > - tag
> > >
> > > - tested the prod artifact with
> > https://github.com/echauchot/flink-samples
> > >
> > > Best
> > >
> > > Etienne
> > >
> > > Le 05/05/2023 à 11:39, Danny Cranmer a écrit :
> > > > Hi everyone,
> > > > Please review and vote on the release candidate #2 for the version
> > 3.1.0,
> > > > as follows:
> > > > [ ] +1, Approve the release
> > > > [ ] -1, Do not approve the release (please provide specific comments)
> > > >
> > > > The complete staging area is available for your review, which
> includes:
> > > > * JIRA release notes [1],
> > > > * the official Apache source release to be deployed to
> dist.apache.org
> > > [2],
> > > > which are signed with the key with fingerprint
> > > > 0F79F2AFB2351BC29678544591F9C1EC125FD8DB [3],
> > > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > > * source code tag v3.1.0-rc2 [5],
> > > > * website pull request listing the new release [6].
> > > >
> > > > The vote will be open for at least 72 hours. It is adopted by
> majority
> > > > approval, with at least 3 PMC affirmative votes.
> > > >
> > > > Thanks,
> > > > Danny
> > > >
> > > > [1]
> > > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353030
> > > > [2]
> > > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-cassandra-3.1.0-rc2
> > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > > [4]
> > > https://repository.apache.org/content/repositories/orgapacheflink-1631
> > > > [5]
> > https://github.com/apache/flink-connector-cassandra/tree/v3.1.0-rc2
> > > > [6] https://github.com/apache/flink-web/pull/642
> > > >
> > >
> >
>


Re: [VOTE] Release flink-connector-opensearch, release candidate #1

2023-04-18 Thread Ryan Skraba
Hello!  +1 (non-binding)

I've validated the source for the RC1:
flink-connector-opensearch-1.0.1-src.tgz
* The sha512 checksum is OK.
* The source file is signed correctly.
* The signature A5F3BCE4CBE993573EC5966A65321B8382B219AF is found in the
KEYS file, and on https://keys.openpgp.org
* The source file is consistent with the Github tag v1.0.1-rc1, which
corresponds to commit c52dbf4fc9c473592479a6c4fc6b2b5227699737
   - The files explicitly excluded by create_pristine_sources (such as
.gitignore and the submodule tools/releasing/shared) are not present.
* Has a LICENSE file and a NOTICE file.  The sql-connector has a
NOTICE file for bundled artifacts.
* Does not contain any compiled binaries.

* The sources can be compiled and tests pass with flink.version 1.17.0 and
flink.version 1.16.1

* Nexus has three staged artifact ids for 1.0.1-1.16 and 1.0.1-1.17
 - flink-connector-opensearch-parent (only .pom)
 - flink-connector-opensearch (.jar, -sources.jar, -javadoc.jar, -tests.jar
and .pom)
 - flink-sql-connector-opensearch (.jar, -sources.jar and .pom)
* All 18 files have been signed with the same key as above, and have
correct sha1 and md5 checksums.
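The staged-artifact check described above can be sketched as a small loop. This is a sketch under assumptions, not the official tooling: it assumes the staging repository has been mirrored to a local directory and that, as Nexus publishes them, each file's `.sha1` and `.md5` sidecar contains just the bare hex digest.

```shell
#!/bin/sh
# Check every staged .jar and .pom in a directory against its .sha1 and .md5
# sidecar files; print a line per mismatch and return non-zero on any failure.
check_staged() {
  dir="$1"
  rc=0
  for f in "$dir"/*.jar "$dir"/*.pom; do
    [ -e "$f" ] || continue  # skip unmatched globs
    [ "$(sha1sum "$f" | awk '{print $1}')" = "$(cat "$f.sha1")" ] \
      || { echo "BAD sha1: $f"; rc=1; }
    [ "$(md5sum "$f" | awk '{print $1}')" = "$(cat "$f.md5")" ] \
      || { echo "BAD md5: $f"; rc=1; }
  done
  return $rc
}
```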

I didn't run any additional smoke tests other than the integration test
cases.

A couple minor points, but nothing that would block this release.

- like the other connectors I've checked, flink.version in the parent pom
is set to 1.16.0 even for 1.17 artifacts, which might be confusing.
- the NOTICE files have the wrong year.
- unlike other connectors, flink-connector-opensearch publishes the
-tests.jar classifier to nexus.  Is this desired?
- The sql-connector PackagingITCase test fails when using
`-Prelease,docs-and-source`, but otherwise works as intended.

All my best and thanks for the release.

Ryan

On Thu, Apr 13, 2023 at 3:39 PM Andrey Redko  wrote:

> +1 (non-binding), thanks Martijn!
>
> Best Regards,
> Andriy Redko
>
> On Thu, Apr 13, 2023, 8:54 AM Martijn Visser 
> wrote:
>
> > Hi everyone,
> > Please review and vote on the release candidate #1 for the version 1.0.1,
> > as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release to be deployed to dist.apache.org
> > [2],
> > which are signed with the key with fingerprint
> > A5F3BCE4CBE993573EC5966A65321B8382B219AF [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag v1.0.1-rc1 [5],
> > * website pull request listing the new release [6].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > Thanks,
> > Release Manager
> >
> > [1]
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352686
> > [2]
> >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-opensearch-1.0.1-rc1
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> > https://repository.apache.org/content/repositories/orgapacheflink-1612/
> > [5] https://github.com/apache/flink-connector-
> > /releases/tag/v1.0.1-rc1
> > [6] https://github.com/apache/flink-web/pull/636
> >
>


Re: [VOTE] Release flink-connector-rabbitmq v3.0.1, release candidate #1

2023-04-17 Thread Ryan Skraba
Hello!  +1 (non-binding)

I've validated the source for the RC1:
flink-connector-rabbitmq-3.0.1-src.tgz
* The sha512 checksum is OK.
* The source file is signed correctly.
* The signature A5F3BCE4CBE993573EC5966A65321B8382B219AF is found in the
KEYS file, and on https://keys.openpgp.org
* The source file is consistent with the Github tag v3.0.1-rc1, which
corresponds to commit 9827e71662c8f155cda5efe5ebbac804fd0fd8e2
   - The files explicitly excluded by create_pristine_sources (such as
.gitignore and the submodule tools/releasing/shared) are not present.
* Has a LICENSE file and a NOTICE file.  The sql-connector has a
NOTICE file for bundled artifacts.
* Does not contain any compiled binaries.

* The sources can be compiled and tests pass with flink.version 1.17.0 and
flink.version 1.16.1

* Nexus has three staged artifact ids for 3.0.1-1.16 and 3.0.1-1.17
 - flink-connector-rabbitmq-parent (only .pom)
 - flink-connector-rabbitmq (.jar, -sources.jar, -javadoc.jar and .pom)
 - flink-sql-connector-rabbitmq (.jar, -sources.jar and .pom)
* All 16 files have been signed with the same key as above, and have
correct sha1 and md5 checksums.

I didn't run any additional smoke tests other than the integration test
cases.

A couple minor points, but nothing that would block this release.

- like flink-connector-gcp-pubsub-parent, the
flink-connector-rabbitmq-parent:3.0.1-1.17 pom artifact has the
flink.version set to 1.16.0, which might be confusing.
- the NOTICE file for sql-connector has the wrong year.

All my best and thanks for the release.

Ryan


On Thu, Apr 13, 2023 at 4:45 PM Martijn Visser 
wrote:

> Hi everyone,
> Please review and vote on the release candidate #1 for the version 3.0.1,
> as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
> This version is compatible with Flink 1.16.x and Flink 1.17.x
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release to be deployed to dist.apache.org
> [2],
> which are signed with the key with fingerprint
> A5F3BCE4CBE993573EC5966A65321B8382B219AF [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag v3.0.1-rc1 [5],
> * website pull request listing the new release [6].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Thanks,
> Release Manager
>
> [1]
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352699
> [2]
>
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-rabbitmq-3.0.1-rc1
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1615/
> [5]
> https://github.com/apache/flink-connector-rabbitmq/releases/tag/v3.0.1-rc1
> [6] https://github.com/apache/flink-web/pull/639
>


Re: [VOTE] Release flink-connector-gcp-pubsub v3.0.1, release candidate #1

2023-04-14 Thread Ryan Skraba
Hello!  +1 (non-binding)

I've validated the source for the RC1:
flink-connector-gcp-pubsub-3.0.1-src.tgz
* The sha512 checksum is OK.
* The source file is signed correctly.
* The signature A5F3BCE4CBE993573EC5966A65321B8382B219AF is found in the
KEYS file, and on https://keys.openpgp.org
* The source file is consistent with the Github tag v3.0.1-rc1, which
corresponds to commit 73e56edb2aa4513f6a73dc071545fb2508fd2d44
   - The files explicitly excluded by create_pristine_sources (such as
.gitignore and the submodule tools/releasing/shared) are not present.
* Has a LICENSE file and a NOTICE file.
* Does not contain any compiled binaries.

* The sources can be compiled and unit tests pass with flink.version 1.17.0
and flink.version 1.16.1

* Nexus has three staged artifact ids for 3.0.1-1.16 and 3.0.1-1.17
 - flink-connector-gcp-pubsub (.jar, -javadoc.jar, -sources.jar and .pom)
 - flink-connector-gcp-pubsub-e2e-tests (.jar, -sources.jar and .pom)
 - flink-connector-gcp-pubsub-parent (only .pom)
* All 16 files have been signed with the same key as above, and have
correct sha1 and md5 checksums.

Simple smoke testing on an emulated Pub/Sub service works for both flink
versions.

One really minor point: it looks like the
org.apache.flink:flink-connector-gcp-pubsub-parent:3.0.1-1.17:pom has the
flink.version set to 1.16.0.  This is a bit confusing, but all the flink
transitive dependencies are in the provided scope, so there's no
consequence.  I guess we could argue that it is the "source" compatibility
level for both versions!

All my best and thanks for the release.

Ryan






On Thu, Apr 13, 2023 at 4:07 PM Martijn Visser 
wrote:

> Hi everyone,
> Please review and vote on the release candidate #1 for the version 3.0.1,
> as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
> This version is compatible with Flink 1.16.x and Flink 1.17.x.
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release to be deployed to dist.apache.org
> [2],
> which are signed with the key with fingerprint
> A5F3BCE4CBE993573EC5966A65321B8382B219AF [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag v3.0.1-rc1 [5],
> * website pull request listing the new release [6].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Thanks,
> Release Manager
>
> [1]
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352770
> [2]
>
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-gcp-pubsub-3.0.1-rc1
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1613/
> [5]
>
> https://github.com/apache/flink-connector-gcp-pubsub/releases/tag/v3.0.1-rc1
> [6] https://github.com/apache/flink-web/pull/637
>


[jira] [Created] (FLINK-31674) [JUnit5 Migration] Module: flink-table-planner (BatchAbstractTestBase)

2023-03-30 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-31674:
---

 Summary: [JUnit5 Migration] Module: flink-table-planner 
(BatchAbstractTestBase)
 Key: FLINK-31674
 URL: https://issues.apache.org/jira/browse/FLINK-31674
 Project: Flink
  Issue Type: Sub-task
Reporter: Ryan Skraba


This is one sub-subtask related to the flink-table-planner migration 
(FLINK-29541).

While most of the JUnit migrations tasks are done by modules, a number of 
abstract test classes in flink-table-planner have large hierarchies that cross 
module boundaries.  This task is to migrate all of the tests that depend on 
{{BatchAbstractTestBase}} to JUnit5.





Re: [DISCUSS] FLIP-299 Pub/Sub Lite Connector

2023-03-02 Thread Ryan Skraba
Hello Daniel!  Quite a while ago, I started porting the Pub/Sub connector
(from an existing PR) to the new source API in the new
flink-connector-gcp-pubsub repository [PR2].  As Martijn mentioned, there
hasn't been a lot of attention on this connector; any community involvement
would be appreciated!

Instead of considering this a new connector, is there an opportunity here
to offer the two variants (Pub/Sub and Pub/Sub Lite) as different artifacts
in that same repo?  Is there much common logic that can be shared between
the two?  I'm not as familiar as I should be with Lite, but I do recall
that they share many concepts and _some_ dependencies.

All my best, Ryan


On Wed, Mar 1, 2023 at 11:21 PM Daniel Collins 
wrote:

> Hello all,
>
> I'd like to start an official discuss thread for adding a Pub/Sub Lite
> Connector to Flink. We've had requests from our users to add flink support,
> and are willing to maintain and support this connector long term from the
> product team.
>
> The proposal is https://cwiki.apache.org/confluence/x/P51bDg, what would
> be
> people's thoughts on adding this connector?
>
> -Daniel
>


Re: "Introduction to Apache Flink" presentation

2023-01-11 Thread Ryan Skraba
Hello Joe -- sorry for the late reply, a colleague just showed me this.

I don't know if you're aware, but there's an incubating project called
Apache Training (Incubating) [1] that collects introductory and training
materials for various Apache projects.  All ASF committers have write
permissions on the incubator-training repository.

For your first point, if you're looking for some common, shared reveal.js
(and asciidoctor)-based tools, this might be a good home for a Flink
presentation.  There's still some room for improvement in the shared code,
of course, but that's the point of sharing the code :D  I did my ApacheCon
Avro presentation with it (but STILL haven't gotten around to contributing
it).

There's not a lot of momentum in the Training project at the moment, but
IMO it's fundamentally a pretty good idea for the community.  If you go
that route, I'd be willing to help out with review and conversion to
reveal.js and asciidoctor; a lot of the conversion work is just fiddling
around with layout until it looks OK.

All that being said, the presentation looks really good and I'd probably be
motivated to see it!  (is there a video yet?)

All my best, Ryan

[1]: https://training.apache.org/ "Training website"
[2]:
https://training.apache.org/presentations/incubator/navigating-asf-incubation/index.html
"Example deployed presentation"
[3]: https://github.com/apache/incubator-training "GitHub repository"

On Thu, Nov 17, 2022 at 11:57 AM Johannes Moser  wrote:

> Hi all,
>
> I was talking at a meetup yesterday about Apache Flink.
> There is this Google Presentation that was created earlier and never has
> been updated. As I had to come up with a presentation anyhow I used this
> and updated it. See the result in [1].
>
> First, I'd like to get feedback on the updated presentation. So please if
> you got some time, have a look at it and comment. Everyone should be able
> to do that.
>
> Having a default presentation that people can use to talk about Apache
> Flink is a good thing, so I'd like to keep that. I want to test the
> sentiment of the community with a couple of connected ideas.
>
> 1) Moving the presentation to a more sustainable technology that can also
> be incorporated better into the Apache Flink workflow.
> I'd suggest using something like reveal.js and adding the presentation to
> flink-web, so everyone has an easier time contributing. I guess the
> presentation would also benefit from a review process and an update could
> be included in release processes.
>
> 2) Adding a list of people that might be open to giving that presentation,
> so that Meetup organizers can reach out to them. Other communities have
> such "community champion" setups. The Apache Flink community would benefit
> from that as well.
>
> Happy to hear your feedback.
>
> Thanks,
> Joe
>
> [1]
>
> https://eu01.z.antigena.com/l/FScYr7uhmgjism6OHA9nu7LhyL6Qnf6vwbVF9hyy6sxoGIX461XJZFXXYApTrdOSPMSaWQXc2cSpICl4dkelyd3n0FcDTlT4~ICYKcio~3oA3xaZByzwUqiDXr-po8ciAUIA12eHuVtoXH7ir9RJCzVwuj45r8Eql9UT9ENdseAP6mzz6vccT-L0V5YgdR98gCssQg5FhqpWPVD9Kx~twT7W5ZuyvHR9jvz91BNEXG-9ITh5
>


Re: [ANNOUNCE] New Apache Flink Committer - Lincoln Lee

2023-01-10 Thread Ryan Skraba
Congratulations Lincoln!

All my best, Ryan

On Tue, Jan 10, 2023 at 9:37 AM yh z  wrote:

> Congratulations, Lincoln!
>
> Best regards,
> Yunhong Zheng
>
> Biao Liu wrote on Tue, Jan 10, 2023, 15:02:
>
> > Congratulations, Lincoln!
> >
> > Thanks,
> > Biao /'bɪ.aʊ/
> >
> >
> >
> > On Tue, 10 Jan 2023 at 14:59, Hang Ruan  wrote:
> >
> > > Congratulations, Lincoln!
> > >
> > > Best,
> > > Hang
> > >
> > > Biao Geng wrote on Tue, Jan 10, 2023, 14:57:
> > >
> > > > Congrats, Lincoln!
> > > > Best,
> > > > Biao Geng
> > > >
> > > > Get Outlook for iOS
> > > >
> > > > From: Wencong Liu
> > > > Sent: Tuesday, January 10, 2023 2:39:47 PM
> > > > To: dev@flink.apache.org
> > > > Subject: Re:Re: [ANNOUNCE] New Apache Flink Committer - Lincoln Lee
> > > >
> > > > Congratulations, Lincoln!
> > > >
> > > > Best regards,
> > > > Wencong
> > > >
> > > > On 2023-01-10 13:25:09, "Yanfei Lei" wrote:
> > > > >Congratulations, well deserved!
> > > > >
> > > > >Best,
> > > > >Yanfei
> > > > >
> > > > >Yuan Mei wrote on Tue, Jan 10, 2023, 13:16:
> > > > >
> > > > >> Congratulations, Lincoln!
> > > > >>
> > > > >> Best,
> > > > >> Yuan
> > > > >>
> > > > >> On Tue, Jan 10, 2023 at 12:23 PM Lijie Wang <
> > wangdachui9...@gmail.com
> > > >
> > > > >> wrote:
> > > > >>
> > > > >> > Congratulations, Lincoln!
> > > > >> >
> > > > >> > Best,
> > > > >> > Lijie
> > > > >> >
> > > > >> > Jingsong Li wrote on Tue, Jan 10, 2023, 12:07:
> > > > >> >
> > > > >> > > Congratulations, Lincoln!
> > > > >> > >
> > > > >> > > Best,
> > > > >> > > Jingsong
> > > > >> > >
> > > > >> > > On Tue, Jan 10, 2023 at 11:56 AM Leonard Xu <
> xbjt...@gmail.com>
> > > > wrote:
> > > > >> > > >
> > > > >> > > > Congratulations, Lincoln!
> > > > >> > > >
> > > > >> > > > Impressive work in streaming semantics, well deserved!
> > > > >> > > >
> > > > >> > > >
> > > > >> > > > Best,
> > > > >> > > > Leonard
> > > > >> > > >
> > > > >> > > >
> > > > >> > > > > On Jan 10, 2023, at 11:52 AM, Jark Wu 
> > > wrote:
> > > > >> > > > >
> > > > >> > > > > Hi everyone,
> > > > >> > > > >
> > > > >> > > > > On behalf of the PMC, I'm very happy to announce Lincoln
> Lee
> > > as
> > > > a
> > > > >> new
> > > > >> > > Flink
> > > > >> > > > > committer.
> > > > >> > > > >
> > > > >> > > > > Lincoln Lee has been a long-term Flink contributor since
> > 2017.
> > > > He
> > > > >> > > mainly
> > > > >> > > > > works on Flink
> > > > >> > > > > SQL parts and drives several important FLIPs, e.g.,
> FLIP-232
> > > > (Retry
> > > > >> > > Async
> > > > >> > > > > I/O), FLIP-234 (
> > > > >> > > > > Retryable Lookup Join), FLIP-260 (TableFunction Finish).
> > > > Besides,
> > > > >> He
> > > > >> > > also
> > > > >> > > > > contributed
> > > > >> > > > > much to Streaming Semantics, including the non-determinism
> > > > problem
> > > > >> > and
> > > > >> > > the
> > > > >> > > > > message
> > > > >> > > > > ordering problem.
> > > > >> > > > >
> > > > >> > > > > Please join me in congratulating Lincoln for becoming a
> > Flink
> > > > >> > > committer!
> > > > >> > > > >
> > > > >> > > > > Cheers,
> > > > >> > > > > Jark Wu
> > > > >> > > >
> > > > >> > >
> > > > >> >
> > > > >>
> > > >
> > >
> >
>


Re: [DISCUSS] Retroactively externalize some connectors for 1.16

2022-12-06 Thread Ryan Skraba
Hello -- this makes sense to me: removing connectors from 1.17 (but not the
1.16 branch) will still give users a long time to migrate.

+1 (non-binding)

Ryan

On Fri, Dec 2, 2022 at 11:42 AM Dong Lin  wrote:

> Sounds good!
>
> +1
>
> On Fri, Dec 2, 2022 at 5:58 PM Chesnay Schepler 
> wrote:
>
> > Dec 9th is just a suggestion; the idea being to have a date that covers
> > connectors that are being released right now, while enforcing some
> > migration window.
> >
> > We will not reserve time for such a verification. Release testing is
> > meant to achieve that.
> > Since 1.16.x is unaffected by the removal from the master branch there
> > is no risk to existing deployments, while 1.17 is still quite a bit away.
> >
> > On 02/12/2022 02:11, Dong Lin wrote:
> > > Hello Chesnay,
> > >
> > > The overall plan sounds good! Just to double check, is Dec 9th the
> > proposed
> > > cutoff date for the release of those externalized connectors?
> > >
> > > Also, will we reserve time for users to verify that the drop-in
> > replacement
> > > from Flink 1.16 to those externalized connectors can work as expected
> > > before removing their code from the master branch?
> > >
> > > Thanks,
> > > Dong
> > >
> > >
> > > On Thu, Dec 1, 2022 at 11:01 PM Chesnay Schepler 
> > wrote:
> > >
> > >> Hello,
> > >>
> > >> let me clarify the title first.
> > >>
> > >> In the original proposal for the connector externalization we said
> that
> > >> an externalized connector has to exist in parallel with the version
> > >> shipped in the main Flink release for 1 cycle.
> > >>
> > >> For example, 1.16.0 shipped with the elasticsearch connector, but at
> the
> > >> same time there's the externalized variant as a drop-in replacement,
> and
> > >> the 1.17.0 release will not include a ES connector.
> > >>
> > >> The rational was to give users some window to update their projects.
> > >>
> > >>
> > >> We are now about to externalize a few more connectors (cassandra,
> > >> pulsar, jdbc), targeting 1.16 within the next week.
> > >> The 1.16.0 release has now been about a month ago; so it hasn't been a
> > >> lot of time since then.
> > >> I'm now wondering if we could/should treat these connectors as
> > >> externalized for 1.16, meaning that we would remove them from the
> master
> > >> branch now, not ship them in 1.17 and move all further development
> into
> > >> the connector repos.
> > >>
> > >> The main benefit is that we won't have to bother with syncing changes
> > >> across repos all the time.
> > >>
> > >> We would of course need some sort-of cutoff date for this (December
> > >> 9th?), to ensure there's still some reasonably large gap left for
> users
> > >> to migrate.
> > >>
> > >> Let me know what you think.
> > >>
> > >> Regards,
> > >> Chesnay
> > >>
> > >>
> >
> >
>


Re: [VOTE] Release flink-connector-cassandra, release candidate #1

2022-11-29 Thread Ryan Skraba
Hello!  +1 (non-binding)

1.  Verified signature and SHA512 checksum in svn
2.  Verified that the tag exists on the source git repository and is
(mostly) identical to the source tar release
  - Some .-prefixed files like .idea/  and .gitignore in the repo under the
tag that are not in the tar (but .editorconfig is).
3. Verified that the original source in
apache/flink/flink-connectors/flink-connector-cassandra branch is (mostly)
identical to the externalized repository
  - A minor change to an ITCase constant to avoid a dependency, no non-test
changes.
4. Built the source with Maven
5. Verified staged artifacts and their SHA1 checksum (using
--strict-checksums)
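Step 2 above (checking that the git tag matches the source tar release, modulo the files that `create_pristine_sources` strips) can be sketched as a recursive diff. The exclude list is an assumption for illustration; extend it to match the actual release tooling.

```shell
#!/bin/sh
# Compare an extracted source-release directory against a checkout of the git
# tag, ignoring files the release tooling deliberately strips from the tar.
compare_source_to_tag() {
  diff -r \
    --exclude='.git' --exclude='.gitignore' --exclude='.idea' \
    "$1" "$2"
}

# Example usage (paths illustrative):
# compare_source_to_tag ./flink-connector-cassandra-3.0.0 ./tag-checkout
```

A zero exit status means the trees are identical apart from the excluded paths; any difference is printed and the exit status is non-zero.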

All my best and thanks for this work!  Ryan


On Tue, Nov 29, 2022 at 3:44 PM Chesnay Schepler  wrote:

> The usage of the artifact shortcode was somewhat intentional; the
> connector_artifact shortcode does not support scala suffixes, which the
> cassandra connector unfortunately needs.
>
> On 29/11/2022 15:16, Danny Cranmer wrote:
> > +1 (binding)
> >
> > - Verified signature/hashes
> > - Build the source with Maven (tests pass)
> > - Verified NOTICE files
> > - Verified that no binaries exist in the source archive
> > - Verified artifacts in repository.apache.org are as expected
> > - Verified tag exists on github
> > - Approved web PR
> >
> > I noticed the incorrect short code has been used in the docs [1], however
> > this can be addressed as a follow-up.
> >
> > Thanks,
> >
> > [1]
> >
> https://github.com/apache/flink-connector-cassandra/blob/main/docs/content/docs/connectors/datastream/cassandra.md
> >
> > On Tue, Nov 29, 2022 at 1:48 PM Martijn Visser  >
> > wrote:
> >
> >> +1 (binding)
> >>
> >> - Validated hashes
> >> - Verified signature
> >> - Verified that no binaries exist in the source archive
> >> - Build the source with Maven
> >> - Verified licenses
> >> - Verified web PR
> >>
> >> On Tue, Nov 29, 2022 at 11:50 AM Chesnay Schepler 
> >> wrote:
> >>
> >>>   > the link to a tag in the initial message is wrong
> >>>
> >>> Whoops, small bug in the message generator.
> >>>
> >>>   > Did we ever consider having the `flink-version` always as part of
> the
> >>> connector version?
> >>>
> >>> Not specifically, no.
> >>> I think it makes sense though that if you take the "3.0.0" source that
> >>> you get "3.0.0" artifacts (and making such a suffix work for SNAPSHOT
> >>> artifacts is a bit tricky because it's suddenly an infix).
> >>> In general the current setup is less complex.
> >>> I can see upsides though, like not having to modify the poms when
> >>> staging the jars.
> >>> But medium-term we want to get rid of these suffixes anyway.
> >>>
> >>> On 29/11/2022 10:36, Dawid Wysakowicz wrote:
>  +1 (binding)
> 
>  - Downloaded artifacts
>  - Checked hash and signature
>  - No binaries in source archive found
>  - Verified NOTICE files
>  - Built from source code
>  - Verified that no SNAPSHOT versions exist, all versions point to
> 3.0.0
> >>> in
>  POM files
>  - Tag is OK
>  - Reviewed the Web PR
> 
>  Few notes:
> 
>  - the link to a tag in the initial message is wrong. It should've been
> 
> >>
> https://github.com/apache/flink-connector-cassandra/releases/tag/v3.0.0-rc1
>  - I was a bit surprised that executing `mvn -DskipTests package`
>  produces artifacts without the Flink version suffix. Did we ever
>  consider having the `flink-version` always as part of the connector
>  version?
> 
>  Best,
> 
>  Dawid
> 
>  On 25/11/2022 10:31, Chesnay Schepler wrote:
> > Hi everyone,
> > Please review and vote on the release candidate #1 for the version
> > 3.0.0, as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> >
> > The complete staging area is available for your review, which
> >> includes:
> > * JIRA release notes [1],
> > * the official Apache source release to be deployed to
> > dist.apache.org [2], which are signed with the key with fingerprint
> > C2EED7B111D464BA [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag v3.0.0-1 [5],
> > * website pull request listing the new release [6].
> >
> > The vote will be open for at least 72 hours. It is adopted by
> > majority approval, with at least 3 PMC affirmative votes.
> >
> > This is the first externalized released of the Cassandra connector
> > and functionally identical to 1.16.0.
> >
> >
> > Thanks,
> > Chesnay
> >
> > [1] https://issues.apache.org/jira/projects/FLINK/versions/12352593
> > [2]
> >
> >>
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-cassandra-3.0.0-rc1
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> >
> >> https://repository.apache.org/content/repositories/orgapacheflink-1549/

Re: ASF Slack

2022-11-10 Thread Ryan Skraba
I have to admit, I also (weakly) prefer a single workspace where possible.
It's easy to miss (or just not look at) the apache-flink workspace because
I didn't think of it.

The spam and spam-account issues are probably inevitable for any Slack
workspace as it grows, and we can reasonably expect to deal with them
in apache-flink as well if it becomes significantly popular.  Have we seen
any spammy issues so far at 1362 members (as opposed to the-asf's 11433
members)?

All my best, Ryan

On Thu, Nov 10, 2022 at 4:51 PM Maximilian Michels  wrote:

> >On the other hand, could you explain a bit more about what are the
> problems / drawbacks that you see in the current Flink Slack?
> >- I assume having to join too many workspaces counts one
>
> I like the idea of having a single workspace for all ASF projects,
> similarly
> to how we share JIRA, mail servers, or other infrastructure. Sharing
> resources usually means there are some constraints but it also has the
> upside of solving problems once for all projects. Arguably that's less true
> for a cloud product like Slack but some customizations can still be applied
> to Slack workspaces to streamline the experience. It looks like the ASF
> hasn't come up with a good workflow for projects, e.g. channel moderation.
>
> It might not be worth migrating back at this point but we can continue the
> evaluation at a later time.
>
> -Max
>
>
> On Thu, Nov 10, 2022 at 3:31 PM Chesnay Schepler 
> wrote:
>
> > https://issues.apache.org/jira/browse/INFRA-22573
> >
> > On 10/11/2022 11:17, Martijn Visser wrote:
> > > It was discussed at the latest Apachecon conference by Infra during one
> > of
> > > the lightning talks. If I recall correctly, it was primarily turned to
> > > invite-only due to spam. But definitely good to validate that.
> > >
> > > On Thu, Nov 10, 2022 at 11:09 AM Maximilian Michels 
> > wrote:
> > >
> > >> The registration problem should be solvable. Maybe it is due to the
> > Slack
> > >> pricing model that the ASF Slack is invite-only. I'll ping the
> community
> > >> mailing list.
> > >>
> > >> Have these issues at any point been discussed with the ASF? I feel
> like
> > >> this is one of the examples where a community spins off to do its own
> > thing
> > >> instead of working together with the foundation.
> > >>
> > >> -Max
> > >>
> > >> On Wed, Nov 9, 2022 at 10:46 AM Konstantin Knauf 
> > >> wrote:
> > >>
> > >>> Hi everyone,
> > >>>
> > >>> I agree with Xintong in the sense that I don't see what has changed
> > since
> > >>> the original decision on this topic. In my opinion, there is a high
> > cost
> > >> in
> > >>> moving to ASF now, namely I fear we will lose many of the >1200
> > members
> > >>> and the momentum that I see in the workspace. To me there would need
> to
> > >> be
> > >>> a strong reason for reverting this decision now.
> > >>>
> > >>> Cheers,
> > >>>
> > >>> Konstantin
> > >>>
> > >>> Am Di., 8. Nov. 2022 um 10:35 Uhr schrieb Xintong Song <
> > >>> tonysong...@gmail.com>:
> > >>>
> >  Hi Max,
> > 
> >  Thanks for bringing this up. I'm open to a re-evaluation of the
> Slack
> >  usages.
> > 
> >  In the initial discussion for creating the Slack workspace [1],
> > >>> leveraging
> >  the ASF Slack was indeed brought up as an alternative by many
> folks. I
> >  think we have chosen a dedicated Flink Slack over the ASF Slack
> mainly
> > >>> for
> >  two reasons.
> >  - ASF Slack is limited to people with an @apache.org email address
> >  - With a dedicated Flink Slack, we have the full authority to manage
> > >> and
> >  customize it. E.g., archiving / removing improper channels,
> reporting
> > >> the
> >  build and benchmark reports to channels, subscribing and re-post
> Flink
> > >>> blog
> >  posts.
> >  As far as I can see, these concerns for the ASF slack have not
> changed
> >  since the previous decision.
> > 
> >  On the other hand, could you explain a bit more about what are the
> > >>> problems
> >  / drawbacks that you see in the current Flink Slack?
> >  - I assume having to join too many workspaces counts as one
> > 
> >  Best,
> > 
> >  Xintong
> > 
> > 
> >  [1]
> https://lists.apache.org/thread/n43r4qmwprhdmzrj494dbbwr9w7bbdcv
> > 
> >  On Tue, Nov 8, 2022 at 4:51 PM Martijn Visser <
> > >> martijnvis...@apache.org>
> >  wrote:
> > 
> > > If you click on the link from Beam via an incognito window/logged
> out
> > >>> of
> > > Slack, you will be prompted to provide the workspace URL of the
> ASF.
> > >> If
> >  you
> > > do that, you're prompted for a login screen or you can create an
> > >>> account.
> > > Creating an account prompts you to have an @apache.org email
> > >> address.
> >  See
> > > https://imgur.com/a/jXvr5Ai
> > >
> > > So for me that's a -1 for switching to the ASF workspace.
> > >
> > > On Mon, Nov 7, 2022 at 10:52 PM Austin 

Re: [ANNOUNCE] Apache Flink Elasticsearch Connector 3.0.0 released

2022-11-10 Thread Ryan Skraba
Excellent news -- welcome to the new era of easier, more timely and more
feature-rich releases for everyone!

Great job!  Ryan

On Thu, Nov 10, 2022 at 3:15 PM Leonard Xu  wrote:

> Thanks Chesnay and Martijn for the great work!   I believe the
> flink-connector-shared-utils[1] you built will help Flink connector
> developers a lot.
>
>
> Best,
> Leonard
> [1] https://github.com/apache/flink-connector-shared-utils
>
> 2022年11月10日 下午9:53,Martijn Visser  写道:
>
> Really happy with the first externalized connector for Flink. Thanks a lot
> to all of you involved!
>
> On Thu, Nov 10, 2022 at 12:51 PM Chesnay Schepler 
> wrote:
>
>> The Apache Flink community is very happy to announce the release of
>> Apache Flink Elasticsearch Connector 3.0.0.
>>
>> Apache Flink® is an open-source stream processing framework for
>> distributed, high-performing, always-available, and accurate data
>> streaming applications.
>>
>> The release is available for download at:
>> https://flink.apache.org/downloads.html
>>
>> This release marks the first time we have released a connector
>> separately from the main Flink release.
>> Over time more connectors will be migrated to this release model.
>>
>> This release is equivalent to the connector version released alongside
>> Flink 1.16.0 and acts as a drop-in replacement.
>>
>> The full release notes are available in Jira:
>> https://issues.apache.org/jira/projects/FLINK/versions/12352291
>>
>> We would like to thank all contributors of the Apache Flink community
>> who made this release possible!
>>
>> Regards,
>> Chesnay
>>
>
>


Re: [Cassandra] source connector

2022-10-21 Thread Ryan Skraba
There's definitely consensus on externalizing the flink connectors!  I've
been tracking the progress and I'd be happy to provide support on Cassandra
if you'd like.

There's some new information at
https://cwiki.apache.org/confluence/display/FLINK/Externalized+Connector+development

The first step to externalizing Cassandra would be for a PMC member to
create the flink-connector-cassandra repository for the project.

If I understand correctly, this shouldn't block your PR, since the core
connector (inside apache/flink) and the external one
(apache/flink-connector-cassandra) should be kept in sync for one major
release cycle.  For my own benefit, I can start by reviewing your PR!

All my best, Ryan

On Fri, Oct 21, 2022 at 11:51 AM Etienne Chauchot 
wrote:

> Hi,
>
> Yes sure, if there is a consensus on moving, just tell me where to move
> my PR to.
>
> Best
>
> Etienne
>
> Le 19/10/2022 à 18:20, Alexander Fedulov a écrit :
> > Hi Etienne,
> >
> > thanks for your contribution. In light of the current efforts to
> > externalize connectors, do you think we could maybe combine the new
> > implementation with moving it into an external repository instead of
> > merging into Flink main?
> >
> > Best,
> > Alexander Fedulov
> >
> > On Fri, Oct 14, 2022 at 4:18 PM Etienne Chauchot 
> > wrote:
> >
> >> Hi all,
> >>
> >> As promised, I have developed the Cassandra source connector based on
> >> FLIP-27. I've just submitted the
> >> PR:https://github.com/apache/flink/pull/21073
> >>
> >>
> >> Best
> >>
> >> Etienne
> >>
>


Re: [VOTE] Externalized connector release details​

2022-10-13 Thread Ryan Skraba
+1 non-binding!  I've been following (and generally agreeing) with the
thread -- it's a perfectly reasonable way to start, and I'm sure we can
adjust the process if it turns out to be unsuitable or unexpected as the
connectors evolve in their external repositories.

On Thu, Oct 13, 2022 at 12:37 PM Thomas Weise  wrote:

> +1 (binding) for the vote and thanks for the explanation
>
> On Thu, Oct 13, 2022 at 5:58 AM Chesnay Schepler 
> wrote:
>
> > @Thomas:
> > Version-specific modules that either contain a connector or shims to
> > support that Flink version.
> > Alternatively, since the addition of such code (usually) goes beyond a
> > patch release you'd create a new minor version and could have that only
> > support the later version.
> >
> > On 13/10/2022 02:05, Thomas Weise wrote:
> > > "Branches are not specific to a Flink version. (i.e., no v3.2-1.15)"
> > >
> > > Sorry for the late question. I could not find in the discussion thread
> > how
> > > a connector can make use of features of the latest Flink version that
> > were
> > > not present in the previous Flink version, when branches cannot be
> Flink
> > > version specific?
> > >
> > > Thanks,
> > > Thomas
> > >
> > > On Wed, Oct 12, 2022 at 4:09 PM Ferenc Csaky
>  > >
> > > wrote:
> > >
> > >> +1 from my side (non-binding)
> > >>
> > >> Best,
> > >> F
> > >>
> > >>
> > >> --- Original Message ---
> > >> On Wednesday, October 12th, 2022 at 15:47, Martijn Visser <
> > >> martijnvis...@apache.org> wrote:
> > >>
> > >>
> > >>>
> > >>> +1 (binding), I am indeed assuming that Chesnay meant the last two
> > minor
> > >>> versions as supported.
> > >>>
> > >>> Op wo 12 okt. 2022 om 20:18 schreef Danny Cranmer
> > >> dannycran...@apache.org
> >  Thanks for the concise summary Chesnay.
> > 
> >  +1 from me (binding)
> > 
> >  Just one clarification, for "3.1) The Flink versions supported by
> the
> >  project (last 2 major Flink versions) must be supported.". Do we
> > >> actually
> >  mean major here, as in Flink 1.x.x and 2.x.x? Right now we would
> only
> >  support Flink 1.15.x and not 1.14.x? I would be inclined to support
> > the
> >  latest 2 minor Flink versions (major.minor.patch) given that we only
> > >> have 1
> >  active major Flink version.
> > 
> >  Danny
> > 
> >  On Wed, Oct 12, 2022 at 2:12 PM Chesnay Schepler ches...@apache.org
> >  wrote:
> > 
> > > Since the discussion
> > > (https://lists.apache.org/thread/mpzzlpob9ymkjfybm96vz2y2m5fjyvfo)
> > >> has
> > > stalled a bit but we need a conclusion to move forward I'm opening
> a
> > > vote.
> > >
> > > Proposal summary:
> > >
> > > 1) Branch model
> > > 1.1) The default branch is called "main" and used for the next
> major
> > > iteration.
> > > 1.2) Remaining branches are called "vmajor.minor". (e.g., v3.2)
> > > 1.3) Branches are not specific to a Flink version. (i.e., no
> > >> v3.2-1.15)
> > > 2) Versioning
> > > 2.1) Source releases: major.minor.patch
> > > 2.2) Jar artifacts: major.minor.patch-flink-major.flink-minor
> > > (This may imply releasing the exact same connector jar multiple
> times
> > > under different versions)
> > >
> > > 3) Flink compatibility
> > > 3.1) The Flink versions supported by the project (last 2 major
> Flink
> > > versions) must be supported.
> > > 3.2) How this is achieved is left to the connector, as long as it
> > > conforms to the rest of the proposal.
> > >
> > > 4) Support
> > > 4.1) The last 2 major connector releases are supported with only
> the
> > > latter receiving additional features, with the following
> exceptions:
> > > 4.1.a) If the older major connector version does not support any
> > > currently supported Flink version, then it is no longer supported.
> > > 4.1.b) If the last 2 major versions do not cover all supported
> Flink
> > > versions, then the latest connector version that supports the older
> > > Flink version /additionally /gets patch support.
> > > 4.2) For a given major connector version only the latest minor
> > >> version
> > > is supported.
> > > (This means if 1.1.x is released there will be no more 1.0.x
> release)
> > >
> > > I'd like to clarify that these won't be set in stone for eternity.
> > > We should re-evaluate how well this model works over time and
> adjust
> > >> it
> > > accordingly, consistently across all connectors.
> > > I do believe that as is this strikes a good balance between
> > > maintainability for us and clarity to users.
> > >
> > > Voting schema:
> > >
> > > Consensus, committers have binding votes, open for at least 72
> hours.
> > >>> --
> > >>> Martijn
> > >>> https://twitter.com/MartijnVisser82
> > >>> https://github.com/MartijnVisser
> >
> >
> >
>
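
The jar versioning scheme in points 2.1/2.2 of the proposal above can be
sketched as a tiny helper (hypothetical names, only for illustration, not
part of any Flink tooling):

```python
def jar_version(connector: str, flink: str) -> str:
    """Build a jar artifact version per point 2.2:
    connector major.minor.patch, suffixed with the Flink major.minor."""
    flink_major, flink_minor = flink.split(".")[:2]
    return f"{connector}-{flink_major}.{flink_minor}"

# The same connector source release may be published several times,
# once per supported Flink version:
print(jar_version("3.2.0", "1.15.2"))  # 3.2.0-1.15
print(jar_version("3.2.0", "1.16.0"))  # 3.2.0-1.16
```

This also illustrates why the proposal notes that the exact same connector
jar may be released multiple times under different versions.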


Re: [DISCUSS] Externalized connector release details

2022-09-16 Thread Ryan Skraba
I had to write down a diagram to fully understand the discussion :D

If I recall correctly, during the externalization discussion, the "price to
pay" for the (many) advantages of taking connectors out of the main repo
was to maintain and manually consult a compatibility matrix per connector.
I'm not a fan of that approach, and your example of diverging code between
2.1.0-1.15 and 2.1.0-1.16 is a good reason why.

b) I think your proposal here is a viable alternative.

c) In my experience, the extra effort of correctly cherry-picking to the
"right" branches adds a small burden to each commit and release event.

The biggest challenge will be for committers for each connector to be
mindful of which versions are "still in the game" (but this is also true
for the compatibility matrix approach).  Two major versions of connectors
multiplied by two versions of Flink is up to three cherry-picks per commit
-- plus one if the connector is currently being migrated and exists
simultaneously inside and outside the main repo, plus another for the
previous still-supported version of flink.  It's going to take some
education effort!

Weighing in on the shim approach: this might be something to leave up to
each connector -- I can see it being easier or more relevant for some
connectors than others to use dedicated branches versus dedicated modules
per flink version, and this might evolve with the connector.

d) Socially, it's a nice convention to have the `main` branch point to the
"newest and best" variant for new users to check out, test, build and
create a PR for quick fixes without diving deep into the policies of the
project, even if it is qualified for just the newest version of Flink.

Danny: I like the common connector parent approach too.  Looks tidy!

All my best, Ryan




On Fri, Sep 16, 2022 at 10:41 AM Chesnay Schepler 
wrote:

> a) 2 Flink versions would be the obvious answer. I don't think anything
> else makes much sense.
>
> I don't want us to increment versions just because the Flink versions
> change, so in your example I'd go with 2.0.0-1.16.
>
> c)
>
> Generally speaking I would love to avoid the Flink versions in branch
> names, because it simplifies the git branching (everything, really) and
> as you said makes main useful.
>
> However the devil is in the details.
>
> Imagine we have 2.0.0 for 1.15, and 1.16 is released with now a new API
> (maybe even just a test util) that we want to use.
>
> For the sake of arguments let's say we go with 2.1.0-1.16,
> and at the same time we also had some pending changes for the 1.15
> connector (let's say exclusive to 1.15; some workaround for a bug or smth),
> so we also have 2.1.0-1.15.
>
> We can't have 1 module use 2 different Flink API versions to satisfy
> this. Build profiles wouldn't solve this.
>
> We could do something like adding Flink-version specific shims though,
> somewhat similar to how we support ES 6/7.
>
>
> On 15/09/2022 23:23, Danny Cranmer wrote:
> > Thanks for starting this discussion. I am working on the early stages of
> > the new DynamoDB connector and have been pondering the same thing.
> >
> > a) Makes sense. On the flip side, how many Flink versions will we
> > support? Right now we support 2 versions for Flink, so it makes sense to
> > follow this rule.
> >
> > For example if the latest connector version is 2.0.0, we would only
> publish
> > 2.0.0-1.15 and 2.0.0-1.14.
> > Then once we want to ship connector 2.1.0 if Flink 1.16 is out, we would
> > publish 2.1.0-1.16 and 2.1.0-1.15.
> > Which leaves the case when a new Flink version is released (1.16 for
> > example) and connector 2.0.0 is already published. We do not have any
> > connector changes so could consider adding 2.0.0-1.16 (resulting in
> > 2.0.0-1.16, 2.0.0-1.15 and 2.0.0-1.14 (no longer supported)) or
> requiring a
> > version bump to 2.1.0-1.16. I would prefer adding 2.0.0-1.16 if there are
> > no changes to the code, and this is possible. If a connector code never
> > changes, we would end up with 2.0.0-1.14, 2.0.0-1.15 ... 2.0.0-n.m
> >
> > b) I like this suggestion, even though the Flink dependencies are usually
> > "provided" and therefore the builds are identical. It gives the users a
> > clear indicator of what is supported, and allows us to target tests
> against
> > different Flink versions consistently.
> >
> > c) Instead of having a branch per Flink version can we have multiple
> build
> > profiles like the Scala variable? Having 1.0-1.15 and 1.0-1.16 branches
> > would likely be duplicate code and increase the maintenance burden
> (having
> > to merge PRs into multiple branches). If the connector code is not
> > compatible with both Flink versions we can bump the connector version at
> > this point. I would propose following the Flink branching strategy
> > "release-1.0" unless this will not work.
> >
> > d) If we remove the Flink qualifier from the branch as discussed above,
> > then main can be the next major version, like in Flink.
> >
>
>
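
Danny's preference above, republishing an unchanged connector against each
newly supported Flink version, can be sketched as a small enumeration (a
hypothetical helper, only for illustration):

```python
def published_artifacts(connector_version, supported_flink_minors):
    """List the jar versions to publish: one per supported Flink minor."""
    return [f"{connector_version}-{flink}" for flink in supported_flink_minors]

# Connector 2.0.0 with no code changes, as Flink versions roll forward:
print(published_artifacts("2.0.0", ["1.15", "1.16"]))  # ['2.0.0-1.15', '2.0.0-1.16']
```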


Re: [DISCUSS] Restructure FLIP pages with page properties

2022-09-16 Thread Ryan Skraba
This sounds like an obvious improvement to save tedious labour -- at the
same time, could this also be used to organize lists of FLIP pages around
topics?  For example, if I'm interested only in connector-related FLIPs.

+1 and thanks, Ryan

On Fri, Sep 16, 2022 at 12:53 PM Yun Tang  wrote:

> +1, thanks for driving this, Chesnay!
>
> Best
> Yun Tang
> 
> From: Matthias Pohl 
> Sent: Friday, September 16, 2022 18:21
> To: dev@flink.apache.org 
> Subject: Re: [DISCUSS] Restructure FLIP pages with page properties
>
> +1 Thanks for proposing that, Chesnay. I guess, it's a good idea to reduce
> efforts around documenting by removing duplicated content.
>
> On Fri, Sep 16, 2022 at 11:56 AM Chesnay Schepler 
> wrote:
>
> > Hello,
> >
> > The current FLIP overview page [1] is kind of a mess and inconvenient to
> > modify. It requires a fair amount of manual work to get right, requiring
> > updates in multiple places.
> >
> > I'd like to improve that, and experiment with the Page properties
> > (Reports) and labels to auto-generate the overview page.
> >
> > I've already applied the changes to the Discarded FLIPs to that end, so
> > have a look at the bottom of the overview page.
> >
> > In short, we can categorize FLIPs (what's currently done with the
> > "Status" line in the FLIP) via labels, filter them in the overview based
> > on that, and then extract page properties (aka, a special table in the
> > FLIP page) to populate a listing.
> >
> > The end goal is that we will never have to update the overview page
> > manually.
> >
> > I'd volunteer to migrate all existing FLIP pages of course.
> >
> > - [1]
> >
> >
> https://cwiki.apache.org/confluence/display/FLINK/Flink+Improvement+Proposals
> >
> >
>


Re: [VOTE] FLIP-254 Redis Streams connector

2022-09-12 Thread Ryan Skraba
Hello!  There's quite a bit of existing code and it looks like there's
interest and community willing to contribute to this connector with 2
implementations already in the flink-connector-redis repo[1].

There's a couple of points that should probably be fixed in the FLIP: some
typos such as "provide at-least guarantees" and the initial version should
not be 1.0.0 given that version 1.1.5 was already released in its previous
incarnation[2].

In principle: +1 (non-binding)

All my best, Ryan

[1]: https://github.com/apache/flink-connector-redis/pulls
[2]:
https://mvnrepository.com/artifact/org.apache.flink/flink-connector-redis



On Mon, Sep 12, 2022 at 10:20 AM Zheng Yu Chen  wrote:

> +1 (non-binding)
>
> Martijn Visser  于2022年9月12日周一 15:58写道:
>
> > Hi everyone,
> >
> > With no comments provided in the discussion thread, I'm opening a vote
> > thread on FLIP-254: Redis Streams connector:
> >
> > FLIP:
> >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-254%3A+Redis+Streams+Connector
> >
> >
> > The vote will be open for at least 72h.
> >
> > Best regards,
> >
> > Martijn
> > https://twitter.com/MartijnVisser82
> > https://github.com/MartijnVisser
> >
>


[jira] [Created] (FLINK-29036) Code examples on the Data Sources page have errors

2022-08-18 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-29036:
---

 Summary: Code examples on the Data Sources page have errors
 Key: FLINK-29036
 URL: https://issues.apache.org/jira/browse/FLINK-29036
 Project: Flink
  Issue Type: Bug
Reporter: Ryan Skraba


While reviewing the [Data 
Sources|https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/dev/datastream/sources/]
 page, I noticed that some examples are slightly out of date.

As an example, FutureNotifier doesn't exist any more.

This page (as well as some javadoc) could be reviewed for correctness.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29017) Some github links in released doc point to master

2022-08-17 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-29017:
---

 Summary: Some github links in released doc point to master
 Key: FLINK-29017
 URL: https://issues.apache.org/jira/browse/FLINK-29017
 Project: Flink
  Issue Type: Bug
Reporter: Ryan Skraba


For example:
{code:java}
The following example uses the example schema 
[testdata.avsc](https://github.com/apache/flink/blob/master/flink-formats/flink-parquet/src/test/resources/avro/testdata.avsc):
{code}
should be using the {{gh_link}} shortcode to automatically create the link to 
the correct GitHub branch:
{code:java}
The following example uses the example schema {{< gh_link 
file="flink-formats/flink-parquet/src/test/resources/avro/testdata.avsc" 
name="testdata.avsc" >}}:
{code}
 





Re: [VOTE] FLIP-243: Dedicated Opensearch connectors

2022-07-27 Thread Ryan Skraba
Hello!

I'd like to add my non-binding +1 for this FLIP.

Full disclosure: as a colleague of Andriy, I sometimes hear the gory details of 
divergence between Elasticsearch and OpenSearch.  Objectively, this is a good 
reason to create independent OpenSearch connectors.

As a side comment, while Elasticsearch as a trademark and service mark never 
has an internal capital S, OpenSearch always does.

All my best, Ryan

On 2022/07/13 20:22:11 Andriy Redko wrote:
> Hey Folks,
> 
> Thanks a lot for all the feedback and comments so far. Based on the 
> discussion [1], 
> it seems like there is a genuine interest in supporting OpenSearch [2] 
> natively. With 
> that being said, I would like to start a vote on FLIP-243 [3].
> 
> The vote will last for at least 72 hours unless there is an objection or
> insufficient votes.
> 
> Thank you!
> 
> [1] https://lists.apache.org/thread/jls0vqc7jb84jp14j4jok1pqfgo2cl30
> [2] https://opensearch.org/
> [3] 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-243%3A+Dedicated+Opensearch+connectors
> 
> 
> Best Regards,
> Andriy Redko
> 
> 


[jira] [Created] (FLINK-28542) [JUnit5 Migration] FileSystemBehaviorTestSuite

2022-07-13 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-28542:
---

 Summary: [JUnit5 Migration] FileSystemBehaviorTestSuite
 Key: FLINK-28542
 URL: https://issues.apache.org/jira/browse/FLINK-28542
 Project: Flink
  Issue Type: Sub-task
  Components: FileSystems
Reporter: Ryan Skraba


The FileSystemBehaviorTestSuite in flink-core has an implementation in most 
modules in flink-filesystems.  All of these implementations (one for each 
filesystem) should be migrated together.






[jira] [Created] (FLINK-28522) [JUnit5 Migration] Module: flink-sequence-file

2022-07-12 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-28522:
---

 Summary: [JUnit5 Migration] Module: flink-sequence-file
 Key: FLINK-28522
 URL: https://issues.apache.org/jira/browse/FLINK-28522
 Project: Flink
  Issue Type: Sub-task
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Reporter: Ryan Skraba








[jira] [Created] (FLINK-28449) [JUnit5 Migration] Module: flink-parquet

2022-07-07 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-28449:
---

 Summary: [JUnit5 Migration] Module: flink-parquet
 Key: FLINK-28449
 URL: https://issues.apache.org/jira/browse/FLINK-28449
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / FileSystem
Reporter: Ryan Skraba








[jira] [Created] (FLINK-27971) [JUnit5 Migration] Module: flink-json

2022-06-09 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-27971:
---

 Summary: [JUnit5 Migration] Module: flink-json
 Key: FLINK-27971
 URL: https://issues.apache.org/jira/browse/FLINK-27971
 Project: Flink
  Issue Type: Sub-task
Reporter: Ryan Skraba








[jira] [Created] (FLINK-27970) [JUnit5 Migration] Module: flink-hadoop-bulk

2022-06-09 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-27970:
---

 Summary: [JUnit5 Migration] Module: flink-hadoop-bulk
 Key: FLINK-27970
 URL: https://issues.apache.org/jira/browse/FLINK-27970
 Project: Flink
  Issue Type: Sub-task
Reporter: Ryan Skraba








[jira] [Created] (FLINK-27885) [JUnit5 Migration] Module: flink-csv

2022-06-02 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-27885:
---

 Summary: [JUnit5 Migration] Module: flink-csv
 Key: FLINK-27885
 URL: https://issues.apache.org/jira/browse/FLINK-27885
 Project: Flink
  Issue Type: Sub-task
  Components: Tests
Reporter: Ryan Skraba








[jira] [Created] (FLINK-27059) [JUnit5 Migration] Module: flink-compress

2022-04-05 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-27059:
---

 Summary: [JUnit5 Migration] Module: flink-compress
 Key: FLINK-27059
 URL: https://issues.apache.org/jira/browse/FLINK-27059
 Project: Flink
  Issue Type: Sub-task
Reporter: Ryan Skraba








[jira] [Created] (FLINK-27046) [JUnit5 Migration] Module: flink-*-glue-schema-registry

2022-04-04 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-27046:
---

 Summary: [JUnit5 Migration] Module: flink-*-glue-schema-registry
 Key: FLINK-27046
 URL: https://issues.apache.org/jira/browse/FLINK-27046
 Project: Flink
  Issue Type: Sub-task
  Components: Tests
Reporter: Ryan Skraba


Migrate the two modules flink-avro-glue-schema-registry and 
flink-json-glue-schema-registry.





[jira] [Created] (FLINK-26736) [JUnit5 Migration] Module: flink-avro-confluent-registry

2022-03-18 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-26736:
---

 Summary: [JUnit5 Migration] Module: flink-avro-confluent-registry
 Key: FLINK-26736
 URL: https://issues.apache.org/jira/browse/FLINK-26736
 Project: Flink
  Issue Type: Sub-task
Reporter: Ryan Skraba








Re: [ANNOUNCE] New Apache Flink Committer - David Morávek

2022-03-07 Thread Ryan Skraba
Congratulations David!

On Mon, Mar 7, 2022 at 9:54 AM Jan Lukavský  wrote:

> Congratulations David!
>
>   Jan
>
> On 3/7/22 09:44, Etienne Chauchot wrote:
> > Congrats David !
> >
> > Well deserved !
> >
> > Etienne
> >
> > Le 07/03/2022 à 08:47, David Morávek a écrit :
> >> Thanks everyone!
> >>
> >> Best,
> >> D.
> >>
> >> On Sun 6. 3. 2022 at 9:07, Yuan Mei  wrote:
> >>
> >>> Congratulations, David!
> >>>
> >>> Best Regards,
> >>> Yuan
> >>>
> >>> On Sat, Mar 5, 2022 at 8:13 PM Roman Khachatryan 
> >>> wrote:
> >>>
>  Congratulations, David!
> 
>  Regards,
>  Roman
> 
>  On Fri, Mar 4, 2022 at 7:54 PM Austin Cawley-Edwards
>   wrote:
> > Congrats David!
> >
> > On Fri, Mar 4, 2022 at 12:18 PM Zhilong Hong 
>  wrote:
> >> Congratulations, David!
> >>
> >> Best,
> >> Zhilong
> >>
> >> On Sat, Mar 5, 2022 at 1:09 AM Piotr Nowojski  >
> >> wrote:
> >>
> >>> Congratulations :)
> >>>
> >>> pt., 4 mar 2022 o 16:04 Aitozi  napisał(a):
> >>>
>  Congratulations David!
> 
>  Ingo Bürk  于2022年3月4日周五 22:56写道:
> 
> > Congrats, David!
> >
> > On 04.03.22 12:34, Robert Metzger wrote:
> >> Hi everyone,
> >>
> >> On behalf of the PMC, I'm very happy to announce David
> >>> Morávek
>  as a
> >>> new
> >> Flink committer.
> >>
> >> His first contributions to Flink date back to 2019. He has
> >>> been
> >> increasingly active with reviews and driving major
> >>> initiatives
>  in
> >> the
> >> community. David brings valuable experience from being a
>  committer
> >> in
>  the
> >> Apache Beam project to Flink.
> >>
> >>
> >> Please join me in congratulating David for becoming a Flink
> >>> committer!
> >> Cheers,
> >> Robert
> >>
>


Re: [DISCUSS] Enable scala formatting check

2022-03-02 Thread Ryan Skraba
+1 for me -- I've used spotless and scalafmt together in the past, and
especially appreciated how consistent it is between using on the command
line and in the IDE.

All my best, Ryan


On Wed, Mar 2, 2022 at 11:19 AM Marios Trivyzas  wrote:

> +1 from me as well, Having a unified auto-formatter for scala would be
> great.
> Currently we don't have consistency in our code base, and this makes it
> more difficult
> to read and work on the scala code.
>
> Best,
> Marios
>
> On Wed, Mar 2, 2022 at 11:41 AM wenlong.lwl 
> wrote:
>
> > +1. Currently, scalastyle does not work well; there are a lot
> > of style differences in different files. It would be great if the code
> can
> > be auto formatted.
> >
> > Best,
> > Wenlong
> >
> > On Wed, 2 Mar 2022 at 16:34, Jingsong Li  wrote:
> >
> > > +1.
> > >
> > > Thanks for driving.
> > >
> > > I wrote some scala code, and the style of Flink's scala is messy. We
> > > can do better.
> > >
> > > Best,
> > > Jingsong
> > >
> > > On Wed, Mar 2, 2022 at 4:19 PM Yun Tang  wrote:
> > > >
> > > > +1
> > > >
> > > > I also noticed that the project of scalafmt [1] is much more active
> > than
> > > scalastyle [2], which has no release in the past 4 years.
> > > >
> > > >
> > > > [1] https://github.com/scalameta/scalafmt/releases
> > > > [2] https://github.com/scalastyle/scalastyle/tags
> > > >
> > > > Best
> > > > Yun Tang
> > > >
> > > > 
> > > > From: Konstantin Knauf 
> > > > Sent: Wednesday, March 2, 2022 15:01
> > > > To: dev 
> > > > Subject: Re: [DISCUSS] Enable scala formatting check
> > > >
> > > > +1 I've never written any Scala in Flink, but this makes a lot of
> sense
> > > to
> > > > me. Converging on a smaller set of tools and simplifying the build is
> > > > always a good idea and the Community already concluded before that
> > > spotless
> > > > is generally a good approach.
> > > >
> > > > On Tue, Mar 1, 2022 at 5:52 PM Francesco Guardiani <
> > > france...@ververica.com>
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I want to propose to enable the spotless scalafmt integration and
> > > remove
> > > > > the scalastyle plugin.
> > > > >
> > > > > From an initial analysis, scalafmt can do everything scalastyle can
> > > do, and
> > > > > the integration with spotless looks easy to enable:
> > > > > https://github.com/diffplug/spotless/tree/main/plugin-maven#scala.
> > The
> > > > > scalafmt conf file gets picked up automatically from every IDE, and
> > it
> > > can
> > > > > be heavily tuned.
> > > > >
> > > > > This way we can unify the formatting and integrate with our CI
> > without
> > > any
> > > > > additional configurations. And we won't need scalastyle anymore, as
> > > > > scalafmt will take care of the checks:
> > > > >
> > > > > * mvn spotless:check will check both java and scala
> > > > > * mvn spotless:apply will format both java and scala
> > > > >
> > > > > WDYT?
> > > > >
> > > > > FG
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > >
> > > > > Francesco Guardiani | Software Engineer
> > > > >
> > > > > france...@ververica.com
> > > > >
> > > > >
> > > > > 
> > > > >
> > > > > Follow us @VervericaData
> > > > >
> > > > > --
> > > > >
> > > > > Join Flink Forward  - The Apache Flink
> > > > > Conference
> > > > >
> > > > > Stream Processing | Event Driven | Real Time
> > > > >
> > > > > --
> > > > >
> > > > > Ververica GmbH | Invalidenstrasse 115, 10115 Berlin, Germany
> > > > >
> > > > > --
> > > > >
> > > > > Ververica GmbH
> > > > >
> > > > > Registered at Amtsgericht Charlottenburg: HRB 158244 B
> > > > >
> > > > > Managing Directors: Karl Anton Wehner, Holger Temme, Yip Park Tung
> > > Jason,
> > > > > Jinwei (Kevin) Zhang
> > > > >
> > > >
> > > >
> > > > --
> > > >
> > > > Konstantin Knauf
> > > >
> > > > https://twitter.com/snntrable
> > > >
> > > > https://github.com/knaufk
> > >
> >
>
>
> --
> Marios
>


[jira] [Created] (FLINK-26232) [JUnit5 Migration] Module: flink-avro

2022-02-17 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-26232:
---

 Summary: [JUnit5 Migration] Module: flink-avro
 Key: FLINK-26232
 URL: https://issues.apache.org/jira/browse/FLINK-26232
 Project: Flink
  Issue Type: Sub-task
Reporter: Ryan Skraba






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25962) Flink generated Avro schemas can't be parsed using Python

2022-02-04 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-25962:
---

 Summary: Flink generated Avro schemas can't be parsed using Python
 Key: FLINK-25962
 URL: https://issues.apache.org/jira/browse/FLINK-25962
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.14.3
Reporter: Ryan Skraba


Flink currently generates Avro schemas as records with the top-level name 
{{"record"}}.

Unfortunately, there is some inconsistency between Avro implementations in 
different languages that may prevent this record from being read, notably 
Python, which generates the error:
avro.schema.SchemaParseException: record is a reserved type name
(See this comment for the full stack trace).

The Java SDK accepts this name, and there's an [ongoing 
discussion|https://lists.apache.org/thread/0wmgyx6z69gy07lvj9ndko75752b8cn2] 
about what the expected behaviour should be.  This should be clarified and 
fixed in Avro, of course.

Regardless of the resolution, the best practice (which is used almost 
everywhere else in the Flink codebase) is to explicitly specify a top-level 
namespace for an Avro record. We should use a default like 
{{org.apache.flink.avro.generated}}.





Re: [DISCUSS] JUnit 5 Migration

2022-01-05 Thread Ryan Skraba
Hello!  I can help out with the effort -- I've got a bit of experience with
JUnit 4 and 5 migration, and it looks like even with the AssertJ scripts
there's going to be a lot of mechanical and manual work to be done.  The
migration document looks pretty comprehensive!

For the remaining topics to be discussed:

I don't have a strong opinion on what to do about parameterized tests that
use inheritance, although Jing Ge's proposal sounds reasonable and easy to
follow.  I wouldn't be worried about temporarily redundant test code too
much if it simplifies getting us into a good final state, especially since
redundant code would be easy to spot and remove when we get rid of JUnit 4
artifacts.

Getting rid of PowerMock sounds fine to me.

I don't think it's necessary to have a common author for commits, given
that commits will have the [JUnit5 migration] tag.  I guess my preference
would be to have "one or a few" commits per module, merged progressively.

Is there an existing branch on a repo with some of the modules already
migrated?

All my best, Ryan

On Fri, Dec 17, 2021 at 5:19 PM Jing Ge  wrote:

> Thanks Hang and Qingsheng for your effort and starting this discussion. As
> additional information, I've created an umbrella ticket
> (https://issues.apache.org/jira/browse/FLINK-25325). It is recommended to
> create all JUnit5 migration-related tasks under it, so we can track the
> whole migration easily.
>
> I think, for the parameterized test issue, the major problem is that, on
> one hand, JUnit 5 has its own approach to parameterized tests, and it
> does not allow using parameterized fixtures at the class level. This is a huge
> difference compared to JUnit4. On the other hand, currently, there are many
> cross module test class inheritances, which means that the migration could
> not be done in one shot. It must be allowed to run JUnit4 and JUnit5 tests
> simultaneously during the migration process. As long as there are sub
> parameterized test classes in JUnit4, it will be risky to migrate the
> parent class to JUnit5. And if the parent class has to stick with JUnit4
> during the migration, any migrated JUnit5 subclass might need to duplicate
> the test methods defined in the parent class. In this case, I would prefer
> to duplicate the test methods with different names in the parent class for
> both JUnit4 and JUnit5 only during the migration process, as a temporary
> solution, and remove the JUnit4 test methods once the migration process
> is finished, i.e. when all subclasses are JUnit5 tests. It is a trade-off
> solution. Hopefully we could find another better solution during the
> discussion.
>
> Speaking of replacing @Test with @TestTemplate, since I did read all tests,
> do we really need to replace all of them with @TestTemplate w.r.t. the
> parameterized tests?
>
> For the PowerMock tests, this is a good opportunity to remove them.
>
> best regards
> Jing
>
> On Fri, Dec 17, 2021 at 2:14 PM Hang Ruan  wrote:
>
> > Hi, all,
> >
> > Apache Flink uses JUnit widely for unit tests and integration tests in
> > the project; however, it is deeply bound to the legacy JUnit 4. It is
> > important to migrate existing cases to JUnit 5 in order to avoid
> splitting
> > the project into different JUnit versions.
> >
> > Qingsheng Ren and I have conducted some trials about the JUnit 5
> migration,
> > but there are too many modules that need to be migrated. We would like to get
> > more help from the community. It is planned to migrate module by module,
> > and a JUnit 5 migration guide
> > <
> >
> https://docs.google.com/document/d/1514Wa_aNB9bJUen4xm5uiuXOooOJTtXqS_Jqk9KJitU/edit?usp=sharing
> > >[1]
> > has been provided to new helpers on the cooperation method and how to
> > migrate.
> >
> > There are still some problems to discuss:
> >
> > 1. About parameterized test:
> >
> > Some test classes inherit from other base test classes. We have discussed
> > different situations in the guide, but the situation where a
> parameterized
> > test subclass inherits from a non-parameterized parent class has not been
> > resolved.
> >
> > In JUnit 4, the parent test class has some test cases annotated with
> > @Test, and the parameterized subclass will run these inherited test cases
> > in a parameterized way.
> >
> > In JUnit 5, if we want a test case to be invoked multiple times, the test
> > case must be annotated with @TestTemplate. A test case cannot be annotated
> > with both @Test and @TestTemplate, which means a test case cannot be
> > invoked as both a parameterized test and a non-parameterized test.
> >
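[Editor's illustration] The mechanism being discussed can be sketched with a stdlib-only toy runner (no real JUnit here; the @Test annotation below is a local stand-in, and all class and method names are invented for the sketch). It shows how JUnit 4's Parameterized runner re-invokes an inherited test once per parameter, which is the multi-invocation behaviour JUnit 5 reserves for @TestTemplate methods:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.List;

public class MiniParameterizedRunner {

    // Local stand-in for JUnit's @Test annotation (no JUnit on the classpath).
    @Retention(RetentionPolicy.RUNTIME)
    @interface Test {}

    // A "parent" suite: its @Test method knows nothing about parameters.
    static class ParentSuite {
        protected int value; // injected per parameter by the runner below

        @Test
        public void checkNonNegative() {
            if (value < 0) throw new AssertionError("negative: " + value);
        }
    }

    // A "parameterized subclass": in JUnit 4 the Parameterized runner re-runs
    // every inherited @Test once per parameter value.
    static class ParameterizedChild extends ParentSuite {}

    // Mimics the JUnit 4 behaviour: for each parameter, instantiate the suite
    // and invoke every (inherited) method carrying the @Test annotation.
    static int run(Class<? extends ParentSuite> suite, List<Integer> params) throws Exception {
        int invocations = 0;
        for (int p : params) {
            ParentSuite instance = suite.getDeclaredConstructor().newInstance();
            instance.value = p;
            for (Method m : suite.getMethods()) {
                if (m.isAnnotationPresent(Test.class)) {
                    m.invoke(instance);
                    invocations++;
                }
            }
        }
        return invocations;
    }

    public static void main(String[] args) throws Exception {
        // One inherited @Test method, three parameters -> three invocations.
        System.out.println(run(ParameterizedChild.class, List.of(0, 1, 2)));
    }
}
```

The sketch makes the conflict concrete: the same physical method runs once per parameter only because the runner loops, not because of its annotation; JUnit 5 moves that looping behaviour behind @TestTemplate, so a single annotation can no longer serve both the parameterized subclass and a plain parent suite.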
> > We thought of two ways to migrate this situation, but neither is good
> > enough. Both ways will introduce redundant code and make it hard to
> > maintain.
> >
> > The first way is to change the parent class to a parameterized test and
> > replace @Test tests with @TestTemplate tests. For its non-parameterized
> > subclasses, we provide a fake parameter method, which will provide
> > 

Re: [DISCUSS] Looking for maintainers for Google PubSub connector or discuss next step

2022-01-04 Thread Ryan Skraba
Hello,

I'm familiar with the Pub/Sub connectors from the Apache Beam project, but
quite a bit less so with Flink.  This looks like a good learning
opportunity, and I'd be interested in helping out here.

If we decide to keep the connector, I can start taking a look at the next
step: going through the existing PR, fixing conflicts with master.

All my best, Ryan

On Mon, Jan 3, 2022 at 3:41 PM Martijn Visser  wrote:

> Hi everyone,
>
> We're looking for community members, who would like to maintain Flink's
> Google PubSub connector [1] going forward. There are multiple improvement
> tickets open and the original contributors are currently unable to work on
> further improvements.
>
> An overview of some of the open tickets:
>
> * https://issues.apache.org/jira/browse/FLINK-20625 -> Refactor PubSub
> Source to use new Source API (FLIP-27)
> * https://issues.apache.org/jira/browse/FLINK-24298 -> Refactor PubSub
> Sink
> to use new Sink API (FLIP-143 / FLIP-171)
> * https://issues.apache.org/jira/browse/FLINK-24299 -> Make PubSub
> available as Source and Sink for Table/SQL users
>
> Besides these tickets, the connector only supports Java 8, and the tests
> for this connector could be improved.
>
> If you would like to take on this responsibility or can join this effort
> in a supporting role, please reach out!
>
> If we can't find maintainers for this connector, what do you think we
> should do? I would be in favour of dropping the connector from Flink. We
> could also consider moving the connector, either to the new external
> connector repository or Apache Bahir. I'm not sure if that would be
> valuable, since at some point the connector won't work in Flink (as it
> doesn't use the target interfaces), and the source code can still be found
> in Flink's git repo by looking back at previous versions.
>
> I'm looking forward to your thoughts.
>
> Best regards,
>
> Martijn Visser
>
> [1]
>
> https://nightlies.apache.org/flink/flink-docs-stable/docs/connectors/datastream/pubsub/
>


Re: Re:Re: [ANNOUNCE] New Apache Flink Committer - Ingo Bürk

2021-12-03 Thread Ryan Skraba
Congratulations Ingo!

On Fri, Dec 3, 2021 at 8:17 AM Yun Tang  wrote:

> Congratulations, Ingo!
>
> Best
> Yun Tang
> 
> From: Yuepeng Pan 
> Sent: Friday, December 3, 2021 14:14
> To: dev@flink.apache.org 
> Cc: Ingo Bürk 
> Subject: Re:Re: [ANNOUNCE] New Apache Flink Committer - Ingo Bürk
>
>
>
>
> Congratulations, Ingo!
>
>
> Best,
> Yuepeng Pan
>
>
>
>
>
> At 2021-12-03 13:47:38, "Yun Gao"  wrote:
> >Congratulations Ingo!
> >
> >Best,
> >Yun
> >
> >
> >--
> >From:刘建刚 
> >Send Time:2021 Dec. 3 (Fri.) 11:52
> >To:dev 
> >Cc:"Ingo Bürk" 
> >Subject:Re: [ANNOUNCE] New Apache Flink Committer - Ingo Bürk
> >
> >Congratulations!
> >
> >Best,
> >Liu Jiangang
> >
> >Till Rohrmann  wrote on Thursday, December 2, 2021 at 11:24 PM:
> >
> >> Hi everyone,
> >>
> >> On behalf of the PMC, I'm very happy to announce Ingo Bürk as a new
> Flink
> >> committer.
> >>
> >> Ingo has started contributing to Flink since the beginning of this
> year. He
> >> worked mostly on SQL components. He has authored many PRs and helped
> review
> >> a lot of other PRs in this area. He actively reported issues and helped
> our
> >> users on the MLs. His most notable contributions were Support SQL 2016
> JSON
> >> functions in Flink SQL (FLIP-90), Register sources/sinks in Table API
> >> (FLIP-129) and various other contributions in the SQL area. Moreover,
> he is
> >> one of the few people in our community who actually understands Flink's
> >> frontend.
> >>
> >> Please join me in congratulating Ingo for becoming a Flink committer!
> >>
> >> Cheers,
> >> Till
> >>
>


[jira] [Created] (FLINK-24476) Rename all ElasticSearch to Elasticsearch (without camel case)

2021-10-07 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-24476:
---

 Summary: Rename all ElasticSearch to Elasticsearch (without camel 
case)
 Key: FLINK-24476
 URL: https://issues.apache.org/jira/browse/FLINK-24476
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / ElasticSearch
Affects Versions: 1.14.0
Reporter: Ryan Skraba


Elasticsearch is a [trademark and service 
mark|https://www.elastic.co/legal/trademarks].  It's incorrect to use 
CamelCase: it's not two words, nor is the internal capital S part of the brand.

Where possible, we should use the single word without an internal capital S, 
especially in user documentation.

(Luckily, I don't believe there are any user-facing APIs with incorrect 
capitalization.)


