[jira] [Created] (FLINK-23613) Debezium and Canal support reading metadata op and type

2021-08-03 Thread Ward Harris (Jira)
Ward Harris created FLINK-23613:
---

 Summary: Debezium and Canal support reading metadata op and type
 Key: FLINK-23613
 URL: https://issues.apache.org/jira/browse/FLINK-23613
 Project: Flink
  Issue Type: New Feature
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Reporter: Ward Harris


In our scenario, two types of database data are delivered to the data 
warehouse:
 1. the first type is exactly the same as the online table
 2. the second type adds two columns to the previous table, representing 
action_type and action_time respectively, in order to record events

To solve this demand with Flink SQL, it must be possible to read action_type 
and action_time from the Debezium or Canal metadata. action_time can be read 
from the ingestion-timestamp metadata, but action_type cannot be read from 
the metadata today.

The database actions are insert/update/delete, while Flink's RowKind has 
insert/update_before/update_after/delete, so exposing action_type as RowKind 
would work better for us. At the same time, Flink needs to change the RowKind 
to insert when recording into this event table.
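
As a rough sketch of what this would enable in a DDL (assuming the Kafka 
connector with the debezium-json format; 'value.ingestion-timestamp' is an 
existing metadata key, while the op/RowKind column is hypothetical and is 
precisely what this ticket requests):

{code:java}
// Sketch only: 'value.op' below is a HYPOTHETICAL metadata key; exposing
// the change-event type is exactly what this ticket asks for.
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DebeziumMetadataSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        tEnv.executeSql(
                "CREATE TABLE events (" +
                "  id STRING," +
                // existing metadata key of the debezium-json format:
                "  action_time TIMESTAMP(3) METADATA FROM 'value.ingestion-timestamp' VIRTUAL," +
                // hypothetical metadata key for the change-event type:
                "  action_type STRING METADATA FROM 'value.op' VIRTUAL" +
                ") WITH (" +
                "  'connector' = 'kafka'," +
                "  'topic' = 'db.events'," +
                "  'properties.bootstrap.servers' = 'localhost:9092'," +
                "  'value.format' = 'debezium-json'" +
                ")");
    }
}
{code}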



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] FLIP-179: Expose Standardized Operator Metrics

2021-08-03 Thread Becket Qin
Personally speaking, it is intuitive for me to set a gauge in a MetricGroup.
So I am fine with the set*Gauge pattern as long as the method is in the
*MetricGroup class.

Thanks,

Jiangjie (Becket) Qin

On Tue, Aug 3, 2021 at 7:24 PM Arvid Heise  wrote:

> @Becket Qin  @Thomas Weise  would
> you
> also agree to @Chesnay Schepler's proposal?
>
> I think the main intention is to only use a Gauge when the exact metric is
> exposed. For any partial value that may be used in an internal/predefined
> metric, we would only use a supplier instead of a Gauge.
>
> So a connector developer can immediately distinguish the cases: if it's a
> metric class he would see the exact metric corresponding to the setter. If
> it's some Supplier, the developer would expect that the value is used in a
> differently named metric, which we would describe in the JavaDoc.
> Could that also be a solution to the currentFetchEventTimeLag metric?
>
> On Tue, Aug 3, 2021 at 12:54 PM Thomas Weise  wrote:
>
> > +1 (binding)
> >
> > On Tue, Aug 3, 2021 at 12:58 AM Chesnay Schepler 
> > wrote:
> >
> > > +1 (binding)
> > >
> > > Although I still think all the set* methods should accept a Supplier
> > > instead of a Gauge.
> > >
> > > On 02/08/2021 12:36, Becket Qin wrote:
> > > > +1 (binding).
> > > >
> > > > Thanks for driving the efforts, Arvid.
> > > >
> > > > Cheers,
> > > >
> > > > Jiangjie (Becket) Qin
> > > >
> > > > On Sat, Jul 31, 2021 at 12:08 PM Steven Wu 
> > wrote:
> > > >
> > > >> +1 (non-binding)
> > > >>
> > > >> On Fri, Jul 30, 2021 at 3:55 AM Arvid Heise 
> wrote:
> > > >>
> > > >>> Dear devs,
> > > >>>
> > > >>> I'd like to open a vote on FLIP-179: Expose Standardized Operator
> > > Metrics
> > > >>> [1] which was discussed in this thread [2].
> > > >>> The vote will be open for at least 72 hours unless there is an
> > > objection
> > > >>> or not enough votes.
> > > >>>
> > > >>> The proposal excludes the implementation for the
> > > currentFetchEventTimeLag
> > > >>> metric, which caused a bit of discussion without a clear
> convergence.
> > > We
> > > >>> will implement that metric in a generic way at a later point and
> > > >> encourage
> > > >>> sources to implement it themselves in the meantime.
> > > >>>
> > > >>> Best,
> > > >>>
> > > >>> Arvid
> > > >>>
> > > >>> [1]
> > > >>>
> > > >>>
> > > >>
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-179%3A+Expose+Standardized+Operator+Metrics
> > > >>> [2]
> > > >>>
> > > >>>
> > > >>
> > >
> >
> https://lists.apache.org/thread.html/r856920cbfe6a262b521109c5bdb9e904e00a9b3f1825901759c24d85%40%3Cdev.flink.apache.org%3E
> > >
> > >
> > >
> >
>


[jira] [Created] (FLINK-23612) SELECT ROUND(CAST(1.2345 AS FLOAT), 1) cannot compile

2021-08-03 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-23612:
---

 Summary: SELECT ROUND(CAST(1.2345 AS FLOAT), 1) cannot compile
 Key: FLINK-23612
 URL: https://issues.apache.org/jira/browse/FLINK-23612
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Runtime
Affects Versions: 1.14.0
Reporter: Caizhi Weng


Run this SQL {{SELECT ROUND(CAST(1.2345 AS FLOAT), 1)}} and the following 
exception will be thrown:

{code}
java.lang.RuntimeException: Could not instantiate generated class 
'ExpressionReducer$2'

at 
org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:75)
at 
org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:108)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:759)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:699)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:306)
at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
at 
org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
at 
org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
at 
org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
at scala.collection.immutable.List.foreach(List.scala:392)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:46)
at 
org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
at 
org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:282)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1702)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeQueryOperation(TableEnvironmentImpl.java:833)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1301)
at 
org.apache.flink.table.api.internal.TableImpl.execute(TableImpl.java:601)
at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300)
at 
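
// For anyone trying to reproduce: a minimal sketch using a batch
// TableEnvironment (class name illustrative, not from the ticket).
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class RoundFloatRepro {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inBatchMode().build());
        // Expression reduction of ROUND over a FLOAT argument triggers the
        // ExpressionReducer codegen failure shown above.
        tEnv.executeSql("SELECT ROUND(CAST(1.2345 AS FLOAT), 1)").print();
    }
}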

[jira] [Created] (FLINK-23611) YARNSessionCapacitySchedulerITCase.testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots hang

2021-08-03 Thread Xintong Song (Jira)
Xintong Song created FLINK-23611:


 Summary: 
YARNSessionCapacitySchedulerITCase.testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots
 hangs on azure
 Key: FLINK-23611
 URL: https://issues.apache.org/jira/browse/FLINK-23611
 Project: Flink
  Issue Type: Bug
  Components: Deployment / YARN
Affects Versions: 1.12.5
Reporter: Xintong Song
 Fix For: 1.12.6


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21439=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=e7f339b2-a7c3-57d9-00af-3712d4b15354=28959



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23610) DefaultSchedulerTest.testProducedPartitionRegistrationTimeout fails on azure

2021-08-03 Thread Xintong Song (Jira)
Xintong Song created FLINK-23610:


 Summary: 
DefaultSchedulerTest.testProducedPartitionRegistrationTimeout fails on azure
 Key: FLINK-23610
 URL: https://issues.apache.org/jira/browse/FLINK-23610
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.14.0
Reporter: Xintong Song
 Fix For: 1.14.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21438=logs=b0a398c0-685b-599c-eb57-c8c2a771138e=747432ad-a576-5911-1e2a-68c6bedc248a=7834

{code}
Aug 03 23:05:35 [ERROR] Tests run: 40, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 1.43 s <<< FAILURE! - in 
org.apache.flink.runtime.scheduler.DefaultSchedulerTest
Aug 03 23:05:35 [ERROR] 
testProducedPartitionRegistrationTimeout(org.apache.flink.runtime.scheduler.DefaultSchedulerTest)
  Time elapsed: 0.137 s  <<< FAILURE!
Aug 03 23:05:35 java.lang.AssertionError: 
Aug 03 23:05:35 
Aug 03 23:05:35 Expected: a collection with size <2>
Aug 03 23:05:35  but: collection size was <0>
Aug 03 23:05:35 at 
org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
Aug 03 23:05:35 at 
org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
Aug 03 23:05:35 at 
org.apache.flink.runtime.scheduler.DefaultSchedulerTest.testProducedPartitionRegistrationTimeout(DefaultSchedulerTest.java:1391)
Aug 03 23:05:35 at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Aug 03 23:05:35 at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Aug 03 23:05:35 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Aug 03 23:05:35 at 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
Aug 03 23:05:35 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
Aug 03 23:05:35 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Aug 03 23:05:35 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
Aug 03 23:05:35 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Aug 03 23:05:35 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Aug 03 23:05:35 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Aug 03 23:05:35 at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Aug 03 23:05:35 at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
Aug 03 23:05:35 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Aug 03 23:05:35 at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
Aug 03 23:05:35 at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
Aug 03 23:05:35 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
Aug 03 23:05:35 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
Aug 03 23:05:35 at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Aug 03 23:05:35 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Aug 03 23:05:35 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Aug 03 23:05:35 at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
Aug 03 23:05:35 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
Aug 03 23:05:35 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
Aug 03 23:05:35 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
Aug 03 23:05:35 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Aug 03 23:05:35 at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
Aug 03 23:05:35 at org.junit.runners.Suite.runChild(Suite.java:128)
Aug 03 23:05:35 at org.junit.runners.Suite.runChild(Suite.java:27)
Aug 03 23:05:35 at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Aug 03 23:05:35 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Aug 03 23:05:35 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Aug 03 23:05:35 at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
Aug 03 23:05:35 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
Aug 03 23:05:35 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Aug 03 23:05:35 at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
Aug 03 23:05:35 at 
org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
Aug 03 

[jira] [Created] (FLINK-23609) Codegen error of "Infinite or NaN at java.math.BigDecimal.&lt;init&gt;(BigDecimal.java:898)"

2021-08-03 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-23609:
--

 Summary: Codegen error of "Infinite or NaN  at 
java.math.BigDecimal.&lt;init&gt;(BigDecimal.java:898)"
 Key: FLINK-23609
 URL: https://issues.apache.org/jira/browse/FLINK-23609
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.14.0
 Environment: java.lang.NumberFormatException: Infinite or NaN

at java.math.BigDecimal.&lt;init&gt;(BigDecimal.java:898)
at java.math.BigDecimal.&lt;init&gt;(BigDecimal.java:875)
at 
org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:202)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:759)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:699)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule$FilterReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:152)
at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
at 
org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
at 
org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
at 
org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
at scala.collection.immutable.List.foreach(List.scala:392)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:46)
at 
org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
at 
org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:282)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1702)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:781)
at 
org.apache.flink.table.planner.utils.TestingStatementSet.execute(TableTestBase.scala:1509)
at 
org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableXiaojin(TableSourceITCase.scala:317)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 

[jira] [Created] (FLINK-23608) org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory

2021-08-03 Thread Jira
张祥兵 created FLINK-23608:
---

 Summary: 
org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory
 Key: FLINK-23608
 URL: https://issues.apache.org/jira/browse/FLINK-23608
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.9.0
Reporter: 张祥兵


It executes normally in IDEA, but fails when run on Flink:
org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: findAndCreateTableSource failed.
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:593)
at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
at 
org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83)
at 
org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:80)
at 
org.apache.flink.runtime.webmonitor.handlers.utils.JarHandlerUtils$JarHandlerContext.toJobGraph(JarHandlerUtils.java:126)
at 
org.apache.flink.runtime.webmonitor.handlers.JarPlanHandler.lambda$handleRequest$1(JarPlanHandler.java:100)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.flink.table.api.TableException: findAndCreateTableSource 
failed.
at 
org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:67)
at 
org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:54)
at 
org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.java:69)
at com.bing.flink.controller.TestKafkaFlink.main(TestKafkaFlink.java:45)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
... 9 more
Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could 
not find a suitable table factory for 
'org.apache.flink.table.factories.TableSourceFactory' in
the classpath.

Reason: No context matches.

The following properties are requested:
connector.properties.0.key=group.id
connector.properties.0.value=test
connector.properties.1.key=bootstrap.servers
connector.properties.1.value=localhost:9092
connector.property-version=1
connector.topic=test
connector.type=kafka
connector.version=universal
format.derive-schema=true
format.fail-on-missing-field=true
format.property-version=1
format.type=json
schema.0.name=error_time
schema.0.type=VARCHAR
schema.1.name=error_id
schema.1.type=VARCHAR
schema.2.name=task_type
schema.2.type=VARCHAR
update-mode=append

The following factories have been considered:
org.apache.flink.table.catalog.GenericInMemoryCatalogFactory
org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
org.apache.flink.table.sinks.CsvBatchTableSinkFactory
org.apache.flink.table.sinks.CsvAppendTableSinkFactory
org.apache.flink.table.planner.delegation.BlinkPlannerFactory
org.apache.flink.table.planner.delegation.BlinkExecutorFactory
org.apache.flink.table.planner.StreamPlannerFactory
org.apache.flink.table.executor.StreamExecutorFactory
at 
org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:283)
at 
org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:191)
at 
org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:144)
at 
org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:97)
at 
org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:64)
... 17 more
2021-08-03 19:06:55,821 WARN  akka.remote.transport.netty.NettyTransport
- Remote connection to [/127.0.0.1:7513] failed with 
java.io.IOException: An existing connection was forcibly closed by the remote host.
2021-08-03 19:06:55,828 WARN  akka.remote.ReliableDeliverySupervisor
- Association with remote system [akka.tcp://flink@127.0.0.1:7457] 
has failed, address is now gated for [50] ms. Reason: [Disassociated] 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Unable to read state written by a Beam application with the Flink runner using Flink's State Processor API

2021-08-03 Thread Kathula, Sandeep
Hi David,
Thanks for the reply. I tried with Beam 2.29 and Flink 1.12 and am 
still getting a NullPointerException as before. I changed the code a bit to 
remove all the proprietary software used in our company and was able to 
reproduce the issue with local Kafka and Beam with the Flink runner running 
locally.

   Beam Flink runner code: 
https://github.com/kathulasandeep/beam-flink-stateful/blob/master/src/main/java/org/sandeep/processor/Processor.java
   Local Kafka producer: 
https://github.com/kathulasandeep/beam-flink-stateful/blob/master/src/main/java/org/sandeep/kafka/SimpleKafkaProducer.java
Reading state using State processor API: 
https://github.com/kathulasandeep/beam-flink-stateful/blob/master/src/main/java/org/sandeep/readstate/StateReader.java

Thanks,
Sandeep  
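
(For reference, a generic keyed-state read with the State Processor API on 
Flink 1.12 looks roughly like the sketch below; the savepoint path, operator 
uid, and state descriptor are illustrative, not taken from the linked repo.)

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.state.api.ExistingSavepoint;
import org.apache.flink.state.api.Savepoint;
import org.apache.flink.state.api.functions.KeyedStateReaderFunction;
import org.apache.flink.util.Collector;

public class StateReaderSketch {

    // Reads a per-key ValueState named "count" registered under the given uid.
    static class CountReader extends KeyedStateReaderFunction<String, Long> {
        private transient ValueState<Long> count;

        @Override
        public void open(Configuration parameters) {
            count = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("count", Long.class));
        }

        @Override
        public void readKey(String key, Context ctx, Collector<Long> out) throws Exception {
            out.collect(count.value());
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        ExistingSavepoint savepoint = Savepoint.load(
                env, "file:///tmp/savepoint-xxxx", new MemoryStateBackend());
        DataSet<Long> counts =
                savepoint.readKeyedState("stateful-operator-uid", new CountReader());
        System.out.println(counts.count());
    }
}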


On 7/27/21, 10:10 AM, "David Morávek"  wrote:

This email is from an external sender.


Hi Sandeep,

In general I'd say it will be tricky to read Beam state this way, as Beam
doesn't use Flink primitives but writes state in a custom binary format (it
can be deserialized, but it's not easy to put all of the pieces together).

Can you please share example code showing how you're reading the state? Also,
can you please try this with the latest Beam / Flink versions (the ones you're
using are no longer supported)?

Best,
D.

On Tue, Jul 27, 2021 at 5:46 PM Kathula, Sandeep
 wrote:

> Hi,
>  We have a simple Beam application, like a word count, running with the
> Flink runner (Beam 2.26 and Flink 1.9). We are using Beam’s value state. I
> am trying to read the state from a savepoint using Flink's State Processor
> API but am getting a NullPointerException. I converted the whole code into
> a pure Flink application, created a savepoint, and tried to read the state,
> and there we were able to read the state successfully.
>
> Exception Stack trace:
>
> Exception in thread "main"
> org.apache.flink.runtime.client.JobExecutionException: Job execution 
failed.
> at
> 
org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
> at
> 
org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:631)
> at
> org.apache.flink.client.LocalExecutor.executePlan(LocalExecutor.java:222)
> at
> 
org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:91)
> at
> 
org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:820)
> at
> org.apache.flink.api.java.DataSet.count(DataSet.java:398)
> at
> com.intuit.spp.example.StateReader.main(StateReader.java:34)
> Caused by: java.io.IOException: Failed to restore state backend
> at
> 
org.apache.flink.state.api.input.KeyedStateInputFormat.getStreamOperatorStateContext(KeyedStateInputFormat.java:231)
> at
> 
org.apache.flink.state.api.input.KeyedStateInputFormat.open(KeyedStateInputFormat.java:177)
> at
> 
org.apache.flink.state.api.input.KeyedStateInputFormat.open(KeyedStateInputFormat.java:79)
> at
> 
org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:173)
> at
> org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
> at
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.Exception: Exception while creating
> StreamOperatorStateContext.
> at
> 
org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:195)
> at
> 
org.apache.flink.state.api.input.KeyedStateInputFormat.getStreamOperatorStateContext(KeyedStateInputFormat.java:223)
> ... 6 more
> Caused by: org.apache.flink.util.FlinkException: Could not restore keyed
> state backend for
> f25cb861abbd020d3595d47c5d53d3fd_f25cb861abbd020d3595d47c5d53d3fd_(1/1)
> from any of the 1 provided restore options.
> at
> 
org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:135)
> at
> 
org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.keyedStatedBackend(StreamTaskStateInitializerImpl.java:307)
> at
> 
org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:135)
> ... 7 more
> Caused by: org.apache.flink.runtime.state.BackendBuildingException: Failed
> when trying to restore heap backend
>   

[jira] [Created] (FLINK-23607) Cleanup unnecessary dependencies in dstl pom.xml

2021-08-03 Thread Roman Khachatryan (Jira)
Roman Khachatryan created FLINK-23607:
-

 Summary: Cleanup unnecessary dependencies in dstl pom.xml
 Key: FLINK-23607
 URL: https://issues.apache.org/jira/browse/FLINK-23607
 Project: Flink
  Issue Type: Technical Debt
  Components: Runtime / State Backends
Affects Versions: 1.14.0
Reporter: Roman Khachatryan
 Fix For: 1.14.0


- check whether some dependencies (e.g. flink-streaming-java, shaded guava, 
flink-test-utils-junit) are indeed necessary
- fix the scope of flink-runtime (and make it consistent with the flink-core 
scope)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23606) Add safety guards in StreamTask(s) if a global failover for a synchronous savepoint should have happened

2021-08-03 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-23606:


 Summary: Add safety guards in StreamTask(s) if a global failover 
for a synchronous savepoint should have happened
 Key: FLINK-23606
 URL: https://issues.apache.org/jira/browse/FLINK-23606
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Checkpointing
Reporter: Dawid Wysakowicz


We should fail hard if 
* we receive a {{notifyCheckpointAborted}} for a synchronous savepoint
* we receive a newer barrier than a synchronous savepoint



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23605) An exception was thrown when the metric was reported by PrometheusPushGatewayReporter

2021-08-03 Thread JasonLee (Jira)
JasonLee created FLINK-23605:


 Summary: An exception was thrown when the metric was reported by 
PrometheusPushGatewayReporter
 Key: FLINK-23605
 URL: https://issues.apache.org/jira/browse/FLINK-23605
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Metrics
Affects Versions: 1.13.1, 1.13.0, 1.12.0
Reporter: JasonLee
 Fix For: 1.13.2


The exception is as follows:
{code:java}
java.lang.NoSuchMethodError: org.apache.commons.math3.stat.descriptive.rank.Percentile.withNaNStrategy(Lorg/apache/commons/math3/stat/ranking/NaNStrategy;)Lorg/apache/commons/math3/stat/descriptive/rank/Percentile;
    at org.apache.flink.runtime.metrics.DescriptiveStatisticsHistogramStatistics$CommonMetricsSnapshot.&lt;init&gt;(DescriptiveStatisticsHistogramStatistics.java:96) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.metrics.DescriptiveStatisticsHistogramStatistics$CommonMetricsSnapshot.&lt;init&gt;(DescriptiveStatisticsHistogramStatistics.java:90) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.metrics.DescriptiveStatisticsHistogramStatistics.&lt;init&gt;(DescriptiveStatisticsHistogramStatistics.java:40) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.metrics.DescriptiveStatisticsHistogram.getStatistics(DescriptiveStatisticsHistogram.java:49) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.metrics.prometheus.AbstractPrometheusReporter$HistogramSummaryProxy.addSamples(AbstractPrometheusReporter.java:362) ~[?:?]
    at org.apache.flink.metrics.prometheus.AbstractPrometheusReporter$HistogramSummaryProxy.collect(AbstractPrometheusReporter.java:335) ~[?:?]
    at io.prometheus.client.CollectorRegistry.collectorNames(CollectorRegistry.java:100) ~[?:?]
    at io.prometheus.client.CollectorRegistry.register(CollectorRegistry.java:50) ~[?:?]
    at io.prometheus.client.Collector.register(Collector.java:139) ~[?:?]
    at io.prometheus.client.Collector.register(Collector.java:132) ~[?:?]
    at org.apache.flink.metrics.prometheus.AbstractPrometheusReporter.notifyOfAddedMetric(AbstractPrometheusReporter.java:135) ~[?:?]
    at org.apache.flink.runtime.metrics.MetricRegistryImpl.register(MetricRegistryImpl.java:390) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.metrics.groups.AbstractMetricGroup.addMetric(AbstractMetricGroup.java:414) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.metrics.groups.AbstractMetricGroup.histogram(AbstractMetricGroup.java:367) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.streaming.util.LatencyStats.reportLatency(LatencyStats.java:65) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator.reportOrForwardLatencyMarker(AbstractStreamOperator.java:580) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator.processLatencyMarker(AbstractStreamOperator.java:566) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitLatencyMarker(OneInputStreamTask.java:216) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:139) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:66) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:423) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:204) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:681) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.executeInvoke(StreamTask.java:636) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:647) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:620) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:779) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
    at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_111]
{code}
This looks like a JAR conflict on commons-math3.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23604) 'csv.disable-quote-character' cannot take effect during deserialization for the old CSV format

2021-08-03 Thread hehuiyuan (Jira)
hehuiyuan created FLINK-23604:
-

 Summary: 'csv.disable-quote-character' cannot take effect during 
deserialization for the old CSV format
 Key: FLINK-23604
 URL: https://issues.apache.org/jira/browse/FLINK-23604
 Project: Flink
  Issue Type: Improvement
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Reporter: hehuiyuan


https://issues.apache.org/jira/browse/FLINK-21207
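
For context, the option in question is set in the format properties, along 
the lines of this minimal sketch (connector, path, and schema are 
illustrative, not from the ticket):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CsvQuoteSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        tEnv.executeSql(
                "CREATE TABLE t (" +
                "  a STRING," +
                "  b STRING" +
                ") WITH (" +
                "  'connector' = 'filesystem'," +
                "  'path' = '/tmp/input.csv'," +
                "  'format' = 'csv'," +
                // per this ticket, the option below is ignored during
                // deserialization in the old CSV format:
                "  'csv.disable-quote-character' = 'true'" +
                ")");
    }
}
{code}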




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23603) After a dynamic SQL query, converting the dynamic table to a stream with toAppendStream throws org.apache.flink.table.api.TableException

2021-08-03 Thread liuhong (Jira)
liuhong created FLINK-23603:
---

 Summary: After a dynamic SQL query, converting the dynamic table to a 
stream with toAppendStream throws org.apache.flink.table.api.TableException
 Key: FLINK-23603
 URL: https://issues.apache.org/jira/browse/FLINK-23603
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.13.1
 Environment:  
{code:xml}
<!-- pom.xml -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-java</artifactId>
    <version>1.13.1</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.12</artifactId>
    <version>1.13.1</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients_2.12</artifactId>
    <version>1.13.1</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner-blink_2.12</artifactId>
    <version>1.13.1</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-scala_2.12</artifactId>
    <version>1.13.1</version>
    <scope>provided</scope>
</dependency>
{code}
{code:java}
import com.atguigu.chapter05.bean.Water1;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import static org.apache.flink.table.api.Expressions.$;

public class Flink08_Time_ProcessingTime_DDL {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        tEnv.executeSql("create table sensor(" +
                "id string," +
                "ts bigint," +
                "vc int" +
                //"pt as proctime()" +
                ") with (" +
                " 'connector' = 'filesystem' ," +
                " 'path' = 'input/water.txt' ," +
                " 'format' = 'csv' " +
                ")");
        //tEnv.sqlQuery("select * from sensor").execute().print();
        //Table t1 = tEnv.sqlQuery("select id,ts,vc hight from sensor");
        Table t1 = tEnv.from("sensor");

        Table t2 = t1.select($("id"), $("ts"), $("vc").as("height"));
        /*t2.execute().print();
        t2.printSchema();*/
        tEnv.toAppendStream(t2, Water1.class).print();
        env.execute();
    }
}
{code}
{code:java}
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@NoArgsConstructor
@AllArgsConstructor
public class Water1 {

    private String id;
    private Long ts;
    private Integer height;
}
{code}
{panel:title=water.txt}
sensor_1,1,1
sensor_1,2,2
sensor_2,3,45
sensor_1,4,4
sensor_2,6,9
sensor_1,7,6
sensor_3,8,7
{panel}
Reporter: liuhong


When Flink08_Time_ProcessingTime_DDL.main is executed, the exception below is 
thrown. If, in Flink08_Time_ProcessingTime_DDL,

Table t2 = t1.select($("id"), $("ts"), $("vc").as("height"));

is changed to

Table t2 = t1.select($("id"), $("vc").as("height"), $("ts"));

then the result is printed correctly.

Exception in thread "main" org.apache.flink.table.api.TableException: height is not found in id, ts, vc
    at org.apache.flink.table.planner.codegen.SinkCodeGenerator$.$anonfun$generateRowConverterOperator$1(SinkCodeGenerator.scala:83)
    at org.apache.flink.table.planner.codegen.SinkCodeGenerator$.$anonfun$generateRowConverterOperator$1$adapted(SinkCodeGenerator.scala:79)
    at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
    at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:32)
    at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:29)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:194)
    at scala.collection.TraversableLike.map(TraversableLike.scala:233)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
    at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:194)
    at org.apache.flink.table.planner.codegen.SinkCodeGenerator$.generateRowConverterOperator(SinkCodeGenerator.scala:79)
    at org.apache.flink.table.planner.codegen.SinkCodeGenerator.generateRowConverterOperator(SinkCodeGenerator.scala)
    at org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecLegacySink.translateToTransformation(CommonExecLegacySink.java:190)
    at org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecLegacySink.translateToPlanInternal(CommonExecLegacySink.java:141)
    at org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:134)
    at org.apache.flink.table.planner.delegation.StreamPlanner.$anonfun$translateToPlan$1(StreamPlanner.scala:70)
    at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
    at scala.collection.Iterator.foreach(Iterator.scala:937)
    at scala.collection.Iterator.foreach$(Iterator.scala:937)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
    at scala.collection.IterableLike.foreach(IterableLike.scala:70)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
    at

Re: [VOTE] FLIP-179: Expose Standardized Operator Metrics

2021-08-03 Thread Arvid Heise
@Becket Qin @Thomas Weise would you
also agree to @Chesnay Schepler's proposal?

I think the main intention is to only use a Gauge when the exact metric is
exposed. For any partial value that may be used in an internal/predefined
metric, we would only use a supplier instead of a Gauge.

So a connector developer can immediately distinguish the cases: if it's a
metric class he would see the exact metric corresponding to the setter. If
it's some Supplier, the developer would expect that the value is used in a
differently named metric, which we would describe in the JavaDoc.
Could that also be a solution to the currentFetchEventTimeLag metric?
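
To make the two styles concrete, a hypothetical sketch (these are not the
final FLIP-179 signatures):

import java.util.function.LongSupplier;
import org.apache.flink.metrics.Gauge;

// Illustrative only: the names are made up for this discussion.
interface OperatorMetricGroupSketch {
    // Exposed verbatim as a metric, so a Gauge is the natural argument.
    void setPendingRecordsGauge(Gauge<Long> pendingRecords);

    // Only feeds a differently named, predefined metric, so a plain
    // supplier signals that this exact value is never exposed as-is.
    void setLastFetchTimeSupplier(LongSupplier lastFetchTime);
}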

On Tue, Aug 3, 2021 at 12:54 PM Thomas Weise  wrote:

> +1 (binding)
>
> On Tue, Aug 3, 2021 at 12:58 AM Chesnay Schepler 
> wrote:
>
> > +1 (binding)
> >
> > Although I still think all the set* methods should accept a Supplier
> > instead of a Gauge.
> >
> > On 02/08/2021 12:36, Becket Qin wrote:
> > > +1 (binding).
> > >
> > > Thanks for driving the efforts, Arvid.
> > >
> > > Cheers,
> > >
> > > Jiangjie (Becket) Qin
> > >
> > > On Sat, Jul 31, 2021 at 12:08 PM Steven Wu 
> wrote:
> > >
> > >> +1 (non-binding)
> > >>
> > >> On Fri, Jul 30, 2021 at 3:55 AM Arvid Heise  wrote:
> > >>
> > >>> Dear devs,
> > >>>
> > >>> I'd like to open a vote on FLIP-179: Expose Standardized Operator
> > Metrics
> > >>> [1] which was discussed in this thread [2].
> > >>> The vote will be open for at least 72 hours unless there is an
> > objection
> > >>> or not enough votes.
> > >>>
> > >>> The proposal excludes the implementation for the
> > currentFetchEventTimeLag
> > >>> metric, which caused a bit of discussion without a clear convergence.
> > We
> > >>> will implement that metric in a generic way at a later point and
> > >> encourage
> > >>> sources to implement it themselves in the meantime.
> > >>>
> > >>> Best,
> > >>>
> > >>> Arvid
> > >>>
> > >>> [1]
> > >>>
> > >>>
> > >>
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-179%3A+Expose+Standardized+Operator+Metrics
> > >>> [2]
> > >>>
> > >>>
> > >>
> >
> https://lists.apache.org/thread.html/r856920cbfe6a262b521109c5bdb9e904e00a9b3f1825901759c24d85%40%3Cdev.flink.apache.org%3E
> >
> >
> >
>


Re: [VOTE] FLIP-179: Expose Standardized Operator Metrics

2021-08-03 Thread Thomas Weise
+1 (binding)

On Tue, Aug 3, 2021 at 12:58 AM Chesnay Schepler  wrote:

> +1 (binding)
>
> Although I still think all the set* methods should accept a Supplier
> instead of a Gauge.
>
> On 02/08/2021 12:36, Becket Qin wrote:
> > +1 (binding).
> >
> > Thanks for driving the efforts, Arvid.
> >
> > Cheers,
> >
> > Jiangjie (Becket) Qin
> >
> > On Sat, Jul 31, 2021 at 12:08 PM Steven Wu  wrote:
> >
> >> +1 (non-binding)
> >>
> >> On Fri, Jul 30, 2021 at 3:55 AM Arvid Heise  wrote:
> >>
> >>> Dear devs,
> >>>
> >>> I'd like to open a vote on FLIP-179: Expose Standardized Operator
> Metrics
> >>> [1] which was discussed in this thread [2].
> >>> The vote will be open for at least 72 hours unless there is an
> objection
> >>> or not enough votes.
> >>>
> >>> The proposal excludes the implementation for the
> currentFetchEventTimeLag
> >>> metric, which caused a bit of discussion without a clear convergence.
> We
> >>> will implement that metric in a generic way at a later point and
> >> encourage
> >>> sources to implement it themselves in the meantime.
> >>>
> >>> Best,
> >>>
> >>> Arvid
> >>>
> >>> [1]
> >>>
> >>>
> >>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-179%3A+Expose+Standardized+Operator+Metrics
> >>> [2]
> >>>
> >>>
> >>
> https://lists.apache.org/thread.html/r856920cbfe6a262b521109c5bdb9e904e00a9b3f1825901759c24d85%40%3Cdev.flink.apache.org%3E
>
>
>


Re: Compilation error - Execution of spotless-check goal failed in flink-annotations project

2021-08-03 Thread Muhammad Haseeb Asif
Thanks, Appreciate the quick turnaround.


I had to set $JAVA_HOME to the 1.8 version so that Maven picks up that 
version, and it started building for me. Thanks


From: Chesnay Schepler 
Sent: 03 August 2021 12:05:49
To: dev@flink.apache.org; Muhammad Haseeb Asif
Subject: Re: Compilation error - Execution of spotless-check goal failed in 
flink-annotations project

You are not using Java 8, as shown by your Maven output.
We have not yet made sure that Flink can be built on Java 16.

On 03/08/2021 12:00, Muhammad Haseeb Asif wrote:
> I am trying to build the Apache Flink project on my local machine, and it 
> seems to be failing due to Spotless issues.
>
>
> I am running the build on mac with java 8
>
>
>   xyz% java -version
> java version "1.8.0_301"
> Java(TM) SE Runtime Environment (build 1.8.0_301-b09)
> Java HotSpot(TM) 64-Bit Server VM (build 25.301-b09, mixed mode)
>
> Following is the error
>
>
> [ERROR] Failed to execute goal 
> com.diffplug.spotless:spotless-maven-plugin:2.4.2:check (spotless-check) on 
> project flink-annotations: Execution spotless-check of goal 
> com.diffplug.spotless:spotless-maven-plugin:2.4.2:check failed: 
> java.lang.reflect.InvocationTargetException: class 
> com.google.googlejavaformat.java.RemoveUnusedImports (in unnamed module 
> @0x4bc9389) cannot access class com.sun.tools.javac.util.Context (in module 
> jdk.compiler) because module jdk.compiler does not export 
> com.sun.tools.javac.util to unnamed module @0x4bc9389 -> [Help 1]
>
>
> We are getting the issue due to unused imports, so we could either remove 
> Spotless entirely or somehow configure it to ignore the warning for specific 
> projects. Any ideas on building the project locally would be helpful.
>
>
> And maven version is as follows
>
>   xyz% mvn -version
> Apache Maven 3.8.1 (05c21c65bdfed0f71a2f2ada8b84da59348c4c5d)
> Maven home: /usr/local/Cellar/maven/3.8.1/libexec
> Java version: 16.0.1, vendor: Homebrew, runtime: 
> /usr/local/Cellar/openjdk/16.0.1/libexec/openjdk.jdk/Contents/Home
> Default locale: en_GB, platform encoding: UTF-8
> OS name: "mac os x", version: "11.5.1", arch: "x86_64", family: "mac"
>
>
> Some other details are
>
> [INFO] 
> 
> [INFO] Detecting the operating system and CPU architecture
> [INFO] 
> 
> [INFO] os.detected.name: osx
> [INFO] os.detected.arch: x86_64
> [INFO] os.detected.bitness: 64
> [INFO] os.detected.version: 11.5
> [INFO] os.detected.version.major: 11
> [INFO] os.detected.version.minor: 5
> [INFO] os.detected.classifier: osx-x86_64
> [INFO] 
> 
>
> Any suggestions or thoughts will be helpful. Thanks
>
>



Re: why i am receiving every email after unsubscribed

2021-08-03 Thread Chesnay Schepler
Follow the instructions at 
https://www.apache.org/foundation/mailinglists.html#request-addresses-for-unsubscribing 
to figure out which mail address was recorded for your subscription, 
and then send mail to dev-unsubscribe-user=@flink.apache.org.


On 03/08/2021 12:16, R Bhaaagi wrote:

Hi Team,

Every email is still hitting my inbox, bhagi.ramaha...@gmail.com.
I have already unsubscribed from all support email IDs.

Please check and resolve this issue.





[jira] [Created] (FLINK-23602) org.codehaus.commons.compiler.CompileException: Line 84, Column 78: No applicable constructor/method found for actual parameters "org.apache.flink.table.data.DecimalData

2021-08-03 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-23602:
--

 Summary: org.codehaus.commons.compiler.CompileException: Line 84, 
Column 78: No applicable constructor/method found for actual parameters 
"org.apache.flink.table.data.DecimalData
 Key: FLINK-23602
 URL: https://issues.apache.org/jira/browse/FLINK-23602
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.14.0
Reporter: xiaojin.wy



{code:java}
CREATE TABLE database5_t2 (
  `c0` DECIMAL , `c1` BIGINT
) WITH (
  'connector' = 'filesystem',
  'format' = 'testcsv',
  'path' = '$resultPath33'
)
INSERT OVERWRITE database5_t2(c0, c1) VALUES(-120229892, 790169221), 
(-1070424438, -1787215649)
SELECT COUNT(CAST ((database5_t2.c0) BETWEEN ((REVERSE(CAST ('1969-12-08' AS 
STRING  AND
(('-727278084') IN (database5_t2.c0, '0.9996987230442536')) AS DOUBLE )) AS ref0
FROM database5_t2 GROUP BY database5_t2.c1  ORDER BY database5_t2.c1
{code}

Running the SQL above will generate the following error:


{code:java}
java.util.concurrent.ExecutionException: 
org.apache.flink.table.api.TableException: Failed to wait job finish

at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
at 
org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:129)
at 
org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:92)
at 
org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableXiaojin(TableSourceITCase.scala:482)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at 
com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
at 
com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:230)
at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:58)
Caused by: org.apache.flink.table.api.TableException: Failed to wait job finish
at 
org.apache.flink.table.api.internal.InsertResultIterator.hasNext(InsertResultIterator.java:56)
at 
org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
at 
org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.isFirstRowReady(TableResultImpl.java:383)
at 
org.apache.flink.table.api.internal.TableResultImpl.lambda$awaitInternal$1(TableResultImpl.java:116)
at 

why i am receiving every email after unsubscribed

2021-08-03 Thread R Bhaaagi
Hi Team,

Every email is still hitting my inbox, bhagi.ramaha...@gmail.com.
I have already unsubscribed from all support email IDs.

Please check and resolve this issue.


Re: Compilation error - Execution of spotless-check goal failed in flink-annotations project

2021-08-03 Thread Chesnay Schepler

You are not using Java 8, as shown by your Maven output.
We have not yet made sure that Flink can be built on Java 16.

On 03/08/2021 12:00, Muhammad Haseeb Asif wrote:

I am trying to build the Apache Flink project on my local machine, and it 
seems to be failing due to Spotless issues.


I am running the build on mac with java 8


  xyz% java -version
java version "1.8.0_301"
Java(TM) SE Runtime Environment (build 1.8.0_301-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.301-b09, mixed mode)

Following is the error


[ERROR] Failed to execute goal 
com.diffplug.spotless:spotless-maven-plugin:2.4.2:check (spotless-check) on 
project flink-annotations: Execution spotless-check of goal 
com.diffplug.spotless:spotless-maven-plugin:2.4.2:check failed: 
java.lang.reflect.InvocationTargetException: class 
com.google.googlejavaformat.java.RemoveUnusedImports (in unnamed module 
@0x4bc9389) cannot access class com.sun.tools.javac.util.Context (in module 
jdk.compiler) because module jdk.compiler does not export com.sun.tools.javac.util 
to unnamed module @0x4bc9389 -> [Help 1]


We are getting the issue due to unused imports, so we could either remove 
Spotless entirely or somehow configure it to ignore the warning for specific 
projects. Any ideas on building the project locally would be helpful.


And maven version is as follows

  xyz% mvn -version
Apache Maven 3.8.1 (05c21c65bdfed0f71a2f2ada8b84da59348c4c5d)
Maven home: /usr/local/Cellar/maven/3.8.1/libexec
Java version: 16.0.1, vendor: Homebrew, runtime: 
/usr/local/Cellar/openjdk/16.0.1/libexec/openjdk.jdk/Contents/Home
Default locale: en_GB, platform encoding: UTF-8
OS name: "mac os x", version: "11.5.1", arch: "x86_64", family: "mac"


Some other details are

[INFO] 
[INFO] Detecting the operating system and CPU architecture
[INFO] 
[INFO] os.detected.name: osx
[INFO] os.detected.arch: x86_64
[INFO] os.detected.bitness: 64
[INFO] os.detected.version: 11.5
[INFO] os.detected.version.major: 11
[INFO] os.detected.version.minor: 5
[INFO] os.detected.classifier: osx-x86_64
[INFO] 

Any suggestions or thoughts will be helpful. Thanks






Compilation error - Execution of spotless-check goal failed in flink-annotations project

2021-08-03 Thread Muhammad Haseeb Asif
I am trying to build the Apache Flink project on my local machine, and it 
seems to be failing due to Spotless issues.


I am running the build on mac with java 8


 xyz% java -version
java version "1.8.0_301"
Java(TM) SE Runtime Environment (build 1.8.0_301-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.301-b09, mixed mode)

Following is the error


[ERROR] Failed to execute goal 
com.diffplug.spotless:spotless-maven-plugin:2.4.2:check (spotless-check) on 
project flink-annotations: Execution spotless-check of goal 
com.diffplug.spotless:spotless-maven-plugin:2.4.2:check failed: 
java.lang.reflect.InvocationTargetException: class 
com.google.googlejavaformat.java.RemoveUnusedImports (in unnamed module 
@0x4bc9389) cannot access class com.sun.tools.javac.util.Context (in module 
jdk.compiler) because module jdk.compiler does not export 
com.sun.tools.javac.util to unnamed module @0x4bc9389 -> [Help 1]


We are getting the issue due to unused imports, so we could either remove 
Spotless entirely or somehow configure it to ignore the warning for specific 
projects. Any ideas on building the project locally would be helpful.


And maven version is as follows

 xyz% mvn -version
Apache Maven 3.8.1 (05c21c65bdfed0f71a2f2ada8b84da59348c4c5d)
Maven home: /usr/local/Cellar/maven/3.8.1/libexec
Java version: 16.0.1, vendor: Homebrew, runtime: 
/usr/local/Cellar/openjdk/16.0.1/libexec/openjdk.jdk/Contents/Home
Default locale: en_GB, platform encoding: UTF-8
OS name: "mac os x", version: "11.5.1", arch: "x86_64", family: "mac"


Some other details are

[INFO] 
[INFO] Detecting the operating system and CPU architecture
[INFO] 
[INFO] os.detected.name: osx
[INFO] os.detected.arch: x86_64
[INFO] os.detected.bitness: 64
[INFO] os.detected.version: 11.5
[INFO] os.detected.version.major: 11
[INFO] os.detected.version.minor: 5
[INFO] os.detected.classifier: osx-x86_64
[INFO] 

Any suggestions or thoughts will be helpful. Thanks



[jira] [Created] (FLINK-23601) TPC-DS end-to-end test fail with "File upload failed."

2021-08-03 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-23601:


 Summary: TPC-DS end-to-end test fail with "File upload failed."
 Key: FLINK-23601
 URL: https://issues.apache.org/jira/browse/FLINK-23601
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.14.0
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21345=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=070ff179-953e-5bda-71fa-d6599415701c=18857

{code}
Caused by: org.apache.flink.table.api.TableException: Failed to execute sql
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:819)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:783)
at 
org.apache.flink.table.api.internal.TableImpl.executeInsert(TableImpl.java:574)
at 
org.apache.flink.table.api.internal.TableImpl.executeInsert(TableImpl.java:556)
at 
org.apache.flink.table.tpcds.TpcdsTestProgram.main(TpcdsTestProgram.java:122)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
... 8 more
Caused by: org.apache.flink.util.FlinkException: Failed to execute job 
'insert-into_default_catalog.default_database.query34_sinkTable'.
at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2078)
at 
org.apache.flink.client.program.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:135)
at 
org.apache.flink.table.planner.delegation.DefaultExecutor.executeAsync(DefaultExecutor.java:81)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:801)
... 17 more
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
submit JobGraph.
at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$9(RestClusterClient.java:405)
at 
java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:884)
at 
java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:866)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at 
java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
at 
org.apache.flink.util.concurrent.FutureUtils.lambda$retryOperationWithDelay$9(FutureUtils.java:373)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at 
java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:575)
at 
java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:943)
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.runtime.rest.util.RestClientException: [File upload 
failed.]
at 
org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:486)
at 
org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:466)
at 
java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:966)
at 
java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:940)
... 4 more

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23600) Rework StateFun's remote module parsing and binding

2021-08-03 Thread Tzu-Li (Gordon) Tai (Jira)
Tzu-Li (Gordon) Tai created FLINK-23600:
---

 Summary: Rework StateFun's remote module parsing and binding
 Key: FLINK-23600
 URL: https://issues.apache.org/jira/browse/FLINK-23600
 Project: Flink
  Issue Type: New Feature
  Components: Stateful Functions
Reporter: Tzu-Li (Gordon) Tai
Assignee: Tzu-Li (Gordon) Tai


Currently, we have a {{JsonModule}} class that is responsible for parsing 
users' module YAML specifications and resolving them into application 
components (i.e. function providers, ingresses, routers, and egresses) that 
are then bound to the application universe.

Over time, the {{JsonModule}} class has grown unwieldy through several changes 
as we progressively adapted the YAML format.
* The class handles ALL kinds of components, including ingresses / functions / 
egresses etc. The code is extremely fragile and becoming hard to extend.
* Users have no way to extend this class if they need to plug in custom 
components (e.g. adding an unsupported ingress / egress, custom protocol 
implementations, etc.).

We aim to rework this with the following goals in mind:
# The system should only handle {{module.yaml}} parsing up to the point where 
it extracts a list of JSON objects that individually represent an application 
component.
# The system has no knowledge of what each JSON object contains, other than 
its {{TypeName}}, which maps to a corresponding {{ComponentBinder}}.
# A {{ComponentBinder}} is essentially an extension bound to the system that 
knows how to parse a specific JSON object and bind components to the 
application universe (see the sketch below).
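
To make the intended extension point concrete, here is a hypothetical sketch 
of what such a binder could look like; the names ({{ComponentBinder}}, 
{{ComponentRegistry}}) and signatures are illustrative assumptions, not the 
final API:

{code:java}
import com.fasterxml.jackson.databind.JsonNode;

/** Assumed handle for registering parsed components (illustrative only). */
interface ComponentRegistry {}

/** Hypothetical sketch of the extension point described above. */
interface ComponentBinder {

    /** The TypeName of the JSON objects this binder knows how to parse. */
    String targetTypeName();

    /**
     * Parses one raw component JSON object and binds the resulting
     * component(s) (function provider, ingress, router, egress, ...) to
     * the application universe.
     */
    void bind(JsonNode componentSpec, ComponentRegistry registry);
}
{code}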



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] FLIP-183: Dynamic buffer size adjustment

2021-08-03 Thread Piotr Nowojski
Hi Devs,

In some discussions that popped up during reviewing the code, we decided to
rename this effort from clumsy "Dynamic buffer size adjustment" or
"Automatic in-flight data control", to "Buffer debloat". First of all,
bufferbloat is already an established name for this problem that we are
trying to solve [1], albeit it's used mostly on much lower network stack
layers. Buffer debloating is also an established name for efforts to solve
bufferbloat problem [2]. Secondly, it's just a more catchy name that can be
more easily advertised :) Hence "bufferbloat", "buffer debloating" would be
the terminology that we will be using in the code, the config options, the
documentation and potential blog posts.

Please let us know if you think there is an even better name for this
effort, as we have time until the 1.14 release to rename it.

Best, Piotrek

[1] https://en.wikipedia.org/wiki/Bufferbloat
[2] https://www.google.com/search?q=buffer+%22debloat%22

Wed, 21 Jul 2021 at 13:24 Anton Kalashnikov  wrote:

> Thanks everyone for sharing your opinion. I updated the FLIP according
> to discussion and I'm going to start the vote on this FLIP
>
> --
> Best regards,
> Anton Kalashnikov
>
> 16.07.2021 09:23, Till Rohrmann пишет:
> > I think this is a good idea. +1 for this approach. Are you gonna update
> the
> > FLIP accordingly?
> >
> > Cheers,
> > Till
> >
> > On Thu, Jul 15, 2021 at 9:33 PM Steven Wu  wrote:
> >
> >> I really like the new idea.
> >>
> >> On Thu, Jul 15, 2021 at 11:51 AM Piotr Nowojski 
> >> wrote:
> >>
> >>> Hi Till,
> >>>
>    I assume that buffer sizes are only
>  changed for newly assigned buffers/credits, right? Otherwise, the data
>  could already be on the wire and then it wouldn't fit on the receiver
> >>> side.
>  Or do we have a back channel mechanism to tell the sender that a part
> >> of
> >>> a
>  buffer needs to be resent once more capacity is available?
> >>> Initially, our implementation proposal intended to implement the first
> >>> option. Buffer size would be attached to a credit message, so the
> >>> receiver would first need to allocate a buffer with the updated size,
> >>> send the credit upstream, and the sender would be allowed to send only
> >>> as much data as the credit permits. So there would be no way and no
> >>> problem with changing buffer sizes while something is "on the wire".
> >>>
> >>> However, Anton suggested an even simpler idea to me today. There is
> >>> actually no problem with receivers supporting all buffer sizes up to
> >>> the maximum allowed size (the currently configured memory segment
> >>> size). Thus the new buffer size can be treated as a recommendation by
> >>> the sender. We can announce a new buffer size, and the sender will
> >>> start capping the newly requested buffers to that size, but we can
> >>> still send already filled buffers in chunks of any size, as long as it
> >>> stays below the max memory segment size. In this way we can leave any
> >>> already filled buffers on the sender side untouched, and we do not need
> >>> to partition/slice them before sending them down, making at least the
> >>> initial version even simpler. This way we also do not need to
> >>> differentiate that different credits have different sizes. We just
> >>> announce a single value: the "recommended/requested buffer size".
> >>>
> >>> Piotrek
> >>>
> >>> Thu, 15 Jul 2021 at 17:27 Till Rohrmann 
> >> wrote:
>  Hi everyone,
> 
>  Thanks a lot for creating this FLIP Anton and Piotr. I think it looks
> >>> like
>  a very promising solution for speeding up our checkpoints and being
> >> able
> >>> to
>  create them more reliably.
> 
>  Following up on Steven's question: I assume that buffer sizes are only
>  changed for newly assigned buffers/credits, right? Otherwise, the data
>  could already be on the wire and then it wouldn't fit on the receiver
> >>> side.
>  Or do we have a back channel mechanism to tell the sender that a part
> >> of
> >>> a
>  buffer needs to be resent once more capacity is available?
> 
>  Cheers,
>  Till
> 
>  On Wed, Jul 14, 2021 at 11:16 AM Piotr Nowojski  >
>  wrote:
> 
> > Hi Steven,
> >
> > As downstream/upstream nodes are decoupled, if the downstream node
> > adjusts its buffer size first, there will be a lag until the updated
> > buffer size information reaches the upstream node. It is a problem, but
> > it has a quite simple solution that we described in the FLIP document:
> >
> >> Sending the buffer of the right size.
> >> It is not enough to know just the number of available buffers (credits)
> >> for the downstream because the size of these buffers can be different.
> >> So we are proposing to resolve this problem in the following way: If the
> >> downstream buffer size is changed then the upstream should send
> >> 

[jira] [Created] (FLINK-23599) Remove JobVertex#connectIdInput

2021-08-03 Thread Zhilong Hong (Jira)
Zhilong Hong created FLINK-23599:


 Summary: Remove JobVertex#connectIdInput
 Key: FLINK-23599
 URL: https://issues.apache.org/jira/browse/FLINK-23599
 Project: Flink
  Issue Type: Technical Debt
  Components: Runtime / Coordination
Affects Versions: 1.14.0
Reporter: Zhilong Hong
 Fix For: 1.14.0


{{JobVertex#connectIdInput}} is not used in production anymore. It's only used 
in the unit tests {{testAttachViaIds}} and {{testCannotConnectMissingId}} 
located in {{DefaultExecutionGraphConstructionTest}}. However, these two test 
cases are designed to test this method. Therefore, this method and its test 
cases can be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23598) DataOutputSerializer.writeBytes updates position twice

2021-08-03 Thread nihileon (Jira)
nihileon created FLINK-23598:


 Summary: DataOutputSerializer.writeBytes updates position twice
 Key: FLINK-23598
 URL: https://issues.apache.org/jira/browse/FLINK-23598
 Project: Flink
  Issue Type: Bug
  Components: API / Core
Affects Versions: 1.13.0
Reporter: nihileon
 Attachments: image-2021-08-03-16-07-17-790.png, 
image-2021-08-03-16-07-40-338.png, image-2021-08-03-16-08-09-249.png

DataOutputSerializer.writeBytes updates this.position twice, although it only 
needs to be updated once.

If the initial position is 0 and I write a string of length 10, the position 
will be updated to 20.

!image-2021-08-03-16-07-17-790.png|width=762,height=372!!image-2021-08-03-16-07-40-338.png|width=744,height=166!

!image-2021-08-03-16-08-09-249.png|width=698,height=269!
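
A minimal reconstruction of the described bug pattern (illustrative only, not 
the actual Flink source): the loop already advances {{position}} once per 
byte, and the trailing assignment advances it a second time.

{code:java}
class BuggySerializer {
    private final byte[] buffer = new byte[64];
    private int position = 0;

    void writeBytes(String s) {
        final int sLen = s.length();
        for (int i = 0; i < sLen; i++) {
            buffer[position++] = (byte) s.charAt(i); // first update, per byte
        }
        position += sLen; // second update: position has advanced 2 * sLen
    }
}
{code}

With this pattern, writing a 10-character string from position 0 indeed 
leaves {{position}} at 20, as reported above.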

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] FLIP-179: Expose Standardized Operator Metrics

2021-08-03 Thread Chesnay Schepler

+1 (binding)

Although I still think all the set* methods should accept a Supplier 
instead of a Gauge.
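
For readers following along, the difference boils down to the setter 
signature. A hypothetical sketch (type and method names are illustrative 
assumptions, not the FLIP's final API):

    import java.util.function.LongSupplier;

    /** Simplified stand-in for org.apache.flink.metrics.Gauge. */
    interface Gauge<T> {
        T getValue();
    }

    interface ExampleMetricGroup {

        /** Variant in the FLIP: the caller hands over a metric object. */
        void setPendingRecordsGauge(Gauge<Long> pendingRecords);

        /** Suggested variant: the caller hands over a plain supplier. */
        void setPendingRecords(LongSupplier pendingRecords);
    }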


On 02/08/2021 12:36, Becket Qin wrote:

+1 (binding).

Thanks for driving the efforts, Arvid.

Cheers,

Jiangjie (Becket) Qin

On Sat, Jul 31, 2021 at 12:08 PM Steven Wu  wrote:


+1 (non-binding)

On Fri, Jul 30, 2021 at 3:55 AM Arvid Heise  wrote:


Dear devs,

I'd like to open a vote on FLIP-179: Expose Standardized Operator Metrics
[1] which was discussed in this thread [2].
The vote will be open for at least 72 hours unless there is an objection
or not enough votes.

The proposal excludes the implementation for the currentFetchEventTimeLag
metric, which caused a bit of discussion without a clear convergence. We
will implement that metric in a generic way at a later point and

encourage

sources to implement it themselves in the meantime.

Best,

Arvid

[1]



https://cwiki.apache.org/confluence/display/FLINK/FLIP-179%3A+Expose+Standardized+Operator+Metrics

[2]



https://lists.apache.org/thread.html/r856920cbfe6a262b521109c5bdb9e904e00a9b3f1825901759c24d85%40%3Cdev.flink.apache.org%3E





[jira] [Created] (FLINK-23597) support Add Jar in Table api

2021-08-03 Thread zoucao (Jira)
zoucao created FLINK-23597:
--

 Summary: support Add Jar in Table api
 Key: FLINK-23597
 URL: https://issues.apache.org/jira/browse/FLINK-23597
 Project: Flink
  Issue Type: Improvement
Reporter: zoucao






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23596) flink on k8s can create more than one instance

2021-08-03 Thread zhouwenyang (Jira)
zhouwenyang created FLINK-23596:
---

 Summary: flink on k8s can create more than one instance
 Key: FLINK-23596
 URL: https://issues.apache.org/jira/browse/FLINK-23596
 Project: Flink
  Issue Type: New Feature
Reporter: zhouwenyang


Like Spark's --conf spark.executor.instances=2, I hope Flink can support a 
similar parameter.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23595) Allow JSON format deserialize non-numeric numbers

2021-08-03 Thread loyi (Jira)
loyi created FLINK-23595:


 Summary: Allow JSON format deserialize non-numeric numbers
 Key: FLINK-23595
 URL: https://issues.apache.org/jira/browse/FLINK-23595
 Project: Flink
  Issue Type: Improvement
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Affects Versions: 1.13.1
 Environment: {noformat}
Exception stack:

Caused by: java.io.IOException: Failed to deserialize JSON 
'{"uniqueId":"6974697215254697525","monitorKey":"live-log-list","indicatorMap":{"record-retry-rate":NaN,"request-latency-max":-Infinity,"request-latency-avg":NaN,"buffer-available-bytes":3.3554432E7,"waiting-threads":0.0,"record-error-rate":NaN},"tagMap":{"_aggregate":"RAW","host":"live-log-list-001"},"periodInMs":0,"dataTime":1627903774962,"receiveTime":1627903774965}'.
        at 
org.apache.flink.formats.json.JsonRowDeserializationSchema.deserialize(JsonRowDeserializationSchema.java:149)
 ~[?:?]
        at 
org.apache.flink.formats.json.JsonRowDeserializationSchema.deserialize(JsonRowDeserializationSchema.java:81)
 ~[?:?]
        at 
org.apache.flink.api.common.serialization.DeserializationSchema.deserialize(DeserializationSchema.java:82)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
        at 
org.apache.flink.streaming.connectors.kafka.internals.KafkaDeserializationSchemaWrapper.deserialize(KafkaDeserializationSchemaWrapper.java:58)
 ~[?:?]
        at 
org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher.partitionConsumerRecordsHandler(KafkaFetcher.java:179)
 ~[?:?]
        at 
org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher.runFetchLoop(KafkaFetcher.java:142)
 ~[?:?]
        at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:826)
 ~[?:?]
        at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
        at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:66) 
~[flink-dist_2.11-1.13.1.jar:1.13.1]
        at 
org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:269)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
Caused by: 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonParseException: 
Non-standard token 'NaN': enable JsonParser.Feature.ALLOW_NON_NUMERIC_NUMBERS 
to allow
 at [Source: UNKNOWN; line: 1, column: 310]
        at 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:2337)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
        at 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:710)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
        at 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.json.UTF8StreamJsonParser._handleUnexpectedValue(UTF8StreamJsonParser.java:2669)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
        at 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextFieldName(UTF8StreamJsonParser.java:1094)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
        at 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.std.BaseNodeDeserializer.deserializeObject(JsonNodeDeserializer.java:268)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
        at 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.std.BaseNodeDeserializer.deserializeObject(JsonNodeDeserializer.java:277)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
        at 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.std.JsonNodeDeserializer.deserialize(JsonNodeDeserializer.java:69)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
        at 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.std.JsonNodeDeserializer.deserialize(JsonNodeDeserializer.java:16)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
        at 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:322)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
        at 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper._readTreeAndClose(ObjectMapper.java:4635)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
        at 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:3056)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
        at 
org.apache.flink.formats.json.JsonRowDeserializationSchema.deserialize(JsonRowDeserializationSchema.java:142)
 ~[?:?]
        at 
org.apache.flink.formats.json.JsonRowDeserializationSchema.deserialize(JsonRowDeserializationSchema.java:81)
 ~[?:?]
        at 
org.apache.flink.api.common.serialization.DeserializationSchema.deserialize(DeserializationSchema.java:82)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
        at 
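{noformat}

For context, the Jackson feature named in the exception can be switched on at 
the mapper level. A minimal sketch with plain (unshaded) Jackson, outside of 
Flink:

{code:java}
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.ObjectMapper;

public class NonNumericNumbers {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Allows the non-standard NaN / Infinity / -Infinity tokens.
        mapper.enable(JsonParser.Feature.ALLOW_NON_NUMERIC_NUMBERS);
        double value = mapper.readTree("{\"v\": NaN}").get("v").asDouble();
        System.out.println(value); // prints NaN
    }
}
{code}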

Re: Security Vulnerabilities with Flink OpenJDK Docker Image

2021-08-03 Thread Chesnay Schepler

To answer your questions:

1) yes, see https://issues.apache.org/jira/browse/FLINK-23221
2) Once an upstream image with the fix is released, we will try to 
release new images ASAP.

3) No, there's nothing to do on the Flink side.
4) No, we only have the debian-based images.

On 02/08/2021 16:40, Konstantin Knauf wrote:

Hi Daniel,

sorry for the late reply and thanks for the report. We'll look into this
and get back to you.

Cheers,

Konstantin

On Tue, Jun 15, 2021 at 4:33 AM Daniel Moore
 wrote:


Hello All,

We have been implementing a solution using the Flink image from
https://github.com/apache/flink-docker/blob/master/1.13/scala_2.12-java11-debian/Dockerfile
and it got flagged by our image repository for 3 major security
vulnerabilities:

CVE-2017-8804
CVE-2019-25013
CVE-2021-33574

All of these stem from the `glibc` packages in the `openjdk:11-jre` image.

We have a working image based on building Flink using the Amazon Corretto
image -
https://github.com/corretto/corretto-docker/blob/88df29474df6fc3f3f19daa8c5515d934f706cd0/11/jdk/al2/Dockerfile.
This works, although there are some issues related to linking
`libjemalloc`. Before we fully test this new image, we wanted to reach out
to the community for insight on the following questions:

1. Are these vulnerabilities captured in an issue yet?
2. If so, when could we expect a new official image that contains the
Debian fixes for these issues?
3. If not, how can we help contribute to a solution?
4. Are there officially supported non-Debian based Flink images?

We appreciate the insights and look forward to working with the community
on a solution.






[jira] [Created] (FLINK-23594) Unstable test_from_and_to_data_stream_event_time failed

2021-08-03 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-23594:


 Summary: Unstable test_from_and_to_data_stream_event_time failed
 Key: FLINK-23594
 URL: https://issues.apache.org/jira/browse/FLINK-23594
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.14.0
Reporter: Huang Xingbo
Assignee: Huang Xingbo
 Fix For: 1.14.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21337=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23593) Performance regression on 15.07.2021

2021-08-03 Thread Piotr Nowojski (Jira)
Piotr Nowojski created FLINK-23593:
--

 Summary: Performance regression on 15.07.2021
 Key: FLINK-23593
 URL: https://issues.apache.org/jira/browse/FLINK-23593
 Project: Flink
  Issue Type: Bug
  Components: Benchmarks
Affects Versions: 1.14.0
Reporter: Piotr Nowojski


http://codespeed.dak8s.net:8000/timeline/?ben=sortedMultiInput=2
http://codespeed.dak8s.net:8000/timeline/?ben=sortedTwoInput=2


{noformat}
pnowojski@piotr-mbp: [~/flink -  ((no branch, bisect started on pr/16589))] $ 
git ls f4afbf3e7de..eb8100f7afe
eb8100f7afe [4 weeks ago] (pn/bad, bad, refs/bisect/bad) 
[FLINK-22017][coordination] Allow BLOCKING result partition to be individually 
consumable [Thesharing]
d2005268b1e [4 weeks ago] (HEAD, pn/bisect-4, bisect-4) 
[FLINK-22017][coordination] Get the ConsumedPartitionGroup that 
IntermediateResultPartition and DefaultResultPartition belong to [Thesharing]
d8b1a6fd368 [3 weeks ago] [FLINK-23372][streaming-java] Disable 
AllVerticesInSameSlotSharingGroupByDefault in batch mode [Timo Walther]
4a78097d038 [3 weeks ago] (pn/bisect-3, bisect-3, 
refs/bisect/good-4a78097d0385749daceafd8326930c8cc5f26f1a) 
[FLINK-21928][clients][runtime] Introduce static method constructors of 
DuplicateJobSubmissionException for better readability. [David Moravek]
172b9e32215 [3 weeks ago] [FLINK-21928][clients] JobManager failover should 
succeed, when trying to resubmit already terminated job in application mode. 
[David Moravek]
f483008db86 [3 weeks ago] [FLINK-21928][core] Introduce 
org.apache.flink.util.concurrent.FutureUtils#handleException method, that 
allows future to recover from the specied exception. [David Moravek]
d7ac08c2ac0 [3 weeks ago] (pn/bisect-2, bisect-2, 
refs/bisect/good-d7ac08c2ac06b9ff31707f3b8f43c07817814d4f) 
[FLINK-22843][docs-zh] Document and code are inconsistent [ZhiJie Yang]
16c3ea427df [3 weeks ago] [hotfix] Split the final checkpoint related tests to 
a separate test class. [Yun Gao]
31b3d37a22c [7 weeks ago] [FLINK-21089][runtime] Skip the execution of new 
sources if finished on restore [Yun Gao]
20fe062e1b5 [3 weeks ago] [FLINK-21089][runtime] Skip execution for the legacy 
source task if finished on restore [Yun Gao]
874c627114b [3 weeks ago] [FLINK-21089][runtime] Skip the lifecycle method of 
operators if finished on restore [Yun Gao]
ceaf24b1d88 [3 weeks ago] (pn/bisect-1, bisect-1, 
refs/bisect/good-ceaf24b1d881c2345a43f305d40435519a09cec9) [hotfix] Fix 
isClosed() for operator wrapper and proxy operator close to the operator chain 
[Yun Gao]
41ea591a6db [3 weeks ago] [FLINK-22627][runtime] Remove unused slot request 
protocol [Yangze Guo]
489346b60f8 [3 months ago] [FLINK-22627][runtime] Remove PendingSlotRequest 
[Yangze Guo]
8ffb4d2af36 [3 months ago] [FLINK-22627][runtime] Remove TaskManagerSlot 
[Yangze Guo]
72073741588 [3 months ago] [FLINK-22627][runtime] Remove SlotManagerImpl and 
its related tests [Yangze Guo]
bdb3b7541b3 [3 months ago] [hotfix][yarn] Remove unused internal options in 
YarnConfigOptionsInternal [Yangze Guo]
a6a9b192eac [3 weeks ago] [FLINK-23201][streaming] Reset alignment only for the 
currently processed checkpoint [Anton Kalashnikov]
b35701a35c7 [3 weeks ago] [FLINK-23201][streaming] Calculate checkpoint 
alignment time only for last started checkpoint [Anton Kalashnikov]
3abec22c536 [3 weeks ago] [FLINK-23107][table-runtime] Separate implementation 
of deduplicate rank from other rank functions [Shuo Cheng]
1a195f5cc59 [3 weeks ago] [FLINK-16093][docs-zh] Translate "System Functions" 
page of "Functions" into Chinese (#16348) [ZhiJie Yang]
{noformat}
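
For reference, the listing above comes from bisecting that suspect range; a 
minimal sketch of the workflow (the benchmark invocation itself is assumed):

{noformat}
git bisect start
git bisect bad  eb8100f7afe   # commit with regressed benchmark results
git bisect good f4afbf3e7de   # last known good commit
# build and run the sortedMultiInput / sortedTwoInput benchmarks here,
# then mark each step:
git bisect good   # or: git bisect bad
{noformat}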




--
This message was sent by Atlassian Jira
(v8.3.4#803005)