[GitHub] [flink] MonsterChenzhuo commented on a change in pull request #17994: [FLINK-24456][Connectors / Kafka,Table SQL / Ecosystem] Support bound…

2022-01-11 Thread GitBox


MonsterChenzhuo commented on a change in pull request #17994:
URL: https://github.com/apache/flink/pull/17994#discussion_r782790555



##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/table/KafkaConnectorOptions.java
##
@@ -158,6 +158,13 @@
 .withDescription(
 "Optional offsets used in case of 
\"specific-offsets\" startup mode");
 
+public static final ConfigOption<String> SCAN_BOUNDED_SPECIFIC_OFFSETS =
+ConfigOptions.key("scan.bounded.specific-offsets")
+.stringType()
+.noDefaultValue()
+.withDescription(
+"When all partitions have reached their stop 
offsets, the source will exit");
+
 public static final ConfigOption<Long> SCAN_STARTUP_TIMESTAMP_MILLIS =

Review comment:
   I will make changes as soon as possible
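   For context, the existing `scan.startup.specific-offsets` option takes values of the form `partition:0,offset:42;partition:1,offset:300`; presumably the new bounded option would reuse that format. A minimal, stand-alone parsing sketch (the class and method names below are illustrative, not part of the PR):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch, assuming "scan.bounded.specific-offsets" reuses the format
// of the startup option, e.g. "partition:0,offset:42;partition:1,offset:300".
class BoundedOffsetsParser {
    // Parses the option value into a partition -> stop-offset map.
    static Map<Integer, Long> parse(String value) {
        Map<Integer, Long> offsets = new HashMap<>();
        for (String pair : value.split(";")) {
            String[] fields = pair.split(",");
            int partition = Integer.parseInt(fields[0].split(":")[1].trim());
            long offset = Long.parseLong(fields[1].split(":")[1].trim());
            offsets.put(partition, offset);
        }
        return offsets;
    }
}
```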




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25582) flink sql kafka source cannot special custom parallelism

2022-01-11 Thread venn wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17474331#comment-17474331
 ] 

venn wu commented on FLINK-25582:
-

[~MartijnVisser] thanks for the reminder, I have added a description to the 
issue, please check again.

> flink sql kafka source cannot special custom parallelism
> 
>
> Key: FLINK-25582
> URL: https://issues.apache.org/jira/browse/FLINK-25582
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.0, 1.14.0
>Reporter: venn wu
>Priority: Minor
>  Labels: pull-request-available
>
> when using the Flink SQL API, all operators have the same parallelism, but 
> sometimes we want to specify the source / sink parallelism. I noticed that the 
> Kafka sink already has a "sink.parallelism" parameter to specify the sink 
> parallelism, but the Kafka source does not, so we want the Flink SQL API to 
> have a parameter to specify the Kafka source parallelism, like the sink.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25615) FlinkKafkaProducer fail to correctly migrate pre Flink 1.9 state

2022-01-11 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17474328#comment-17474328
 ] 

Martijn Visser commented on FLINK-25615:


Good morning [~Matthias Schwalbe] :)

Thanks for the extra info! 

> FlinkKafkaProducer fail to correctly migrate pre Flink 1.9 state
> 
>
> Key: FLINK-25615
> URL: https://issues.apache.org/jira/browse/FLINK-25615
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.9.0
>Reporter: Matthias Schwalbe
>Priority: Major
>
> I've found an unnoticed error in FlinkKafkaProducer when migrating state from 
> before Flink 1.9 to versions starting with Flink 1.9:
>  * the operator state for next-transactional-id-hint should be deleted and 
> replaced by operator state next-transactional-id-hint-v2, however
>  * operator state next-transactional-id-hint is never deleted
>  * see here: [1] :
> {quote}        if (context.getOperatorStateStore()
>                 .getRegisteredStateNames()
>                 .contains(NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR)) {
>             migrateNextTransactionalIdHindState(context);
>         }{quote} * migrateNextTransactionalIdHindState is never called, as 
> the condition cannot become true:
>  ** getRegisteredStateNames returns a list of String, whereas 
> NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR is ListStateDescriptor (type mismatch)
> The effect is:
>  * because NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR is for a UnionListState, and
>  * the state is not cleared,
>  * each time the job restarts from a savepoint or checkpoint, the state size 
> is multiplied by the parallelism
>  * then, because each entry leaves an offset in the metadata, akka.framesize 
> becomes too small before we run into memory overflow
>  
> The breaking change has been introduced in commit 
> 70fa80e3862b367be22b593db685f9898a2838ef
>  
> A simple fix would be to change the code to:
> {quote}        if (context.getOperatorStateStore()
>                 .getRegisteredStateNames()
>                 .contains(NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR.getName())) {
>             migrateNextTransactionalIdHindState(context);
>         }
> {quote}
>  
> Although FlinkKafkaProducer is marked as deprecated, it is probably here to 
> stay for a while
>  
> Greetings,
> Matthias (Thias) Schwalbe
>  
> [1] 
> https://github.com/apache/flink/blob/d7cf2c10f8d4fba81173854cbd8be27c657c7c7f/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer.java#L1167-L1171
>  
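The type mismatch is easy to reproduce outside of Flink: `List<String>.contains(Object)` accepts any argument but compares via `equals()`, so passing the descriptor object instead of its name can never match. A minimal sketch, where `Descriptor` is a hypothetical stand-in for the `ListStateDescriptor`:

```java
import java.util.List;

// Hypothetical stand-in for ListStateDescriptor: it carries a name but is
// never equal to a String, so contains(descriptor) is silently always false.
class Descriptor {
    private final String name;

    Descriptor(String name) {
        this.name = name;
    }

    String getName() {
        return name;
    }
}

class ContainsMismatch {
    static final List<String> REGISTERED = List.of("next-transactional-id-hint");

    // Mirrors the buggy condition: compiles (contains takes Object) but can
    // never be true, so the migration branch is never entered.
    static boolean buggyCheck(Descriptor descriptor) {
        return REGISTERED.contains(descriptor);
    }

    // Mirrors the proposed fix: compare the descriptor's name instead.
    static boolean fixedCheck(Descriptor descriptor) {
        return REGISTERED.contains(descriptor.getName());
    }
}
```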





[jira] [Updated] (FLINK-25582) flink sql kafka source cannot special custom parallelism

2022-01-11 Thread venn wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venn wu updated FLINK-25582:

Description: when use flink sql api, all operator have same parallelism, 
but in some times we want specify the source / sink parallelism for kafka 
source, i noticed the kafka sink already have parameter "sink.parallelism" to 
specify the sink parallelism, but kafka source no, so we want flink sql api, 
have a parameter to specify the kafka source parallelism like sink.  (was: when 
use flink sql api, all operator have same parallelism, but in some times we 
want specify the source / sink parallelism for kafka source, i notices the 
kafka sink already have parameter "sink.parallelism" to specify the sink 
parallelism, but kafka source no, so we want flink sql api, have a parameter to 
specify the kafka source parallelism like sink.)

> flink sql kafka source cannot special custom parallelism
> 
>
> Key: FLINK-25582
> URL: https://issues.apache.org/jira/browse/FLINK-25582
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.0, 1.14.0
>Reporter: venn wu
>Priority: Minor
>  Labels: pull-request-available
>
> when use flink sql api, all operator have same parallelism, but in some times 
> we want specify the source / sink parallelism for kafka source, i noticed the 
> kafka sink already have parameter "sink.parallelism" to specify the sink 
> parallelism, but kafka source no, so we want flink sql api, have a parameter 
> to specify the kafka source parallelism like sink.





[jira] [Updated] (FLINK-25582) flink sql kafka source cannot special custom parallelism

2022-01-11 Thread venn wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venn wu updated FLINK-25582:

Description: when use flink sql api, all operator have same parallelism, 
but in some times we want specify the source / sink parallelism for kafka 
source, i notices the kafka sink already have parameter "sink.parallelism" to 
specify the sink parallelism, but kafka source no, so we want flink sql api, 
have a parameter to specify the kafka source parallelism like sink.  (was: 
costom flink sql kafka source parallelism)

> flink sql kafka source cannot special custom parallelism
> 
>
> Key: FLINK-25582
> URL: https://issues.apache.org/jira/browse/FLINK-25582
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.0, 1.14.0
>Reporter: venn wu
>Priority: Minor
>  Labels: pull-request-available
>
> when use flink sql api, all operator have same parallelism, but in some times 
> we want specify the source / sink parallelism for kafka source, i notices the 
> kafka sink already have parameter "sink.parallelism" to specify the sink 
> parallelism, but kafka source no, so we want flink sql api, have a parameter 
> to specify the kafka source parallelism like sink.





[jira] [Commented] (FLINK-25070) FLIP-195: Improve the name and structure of vertex and operator name for job

2022-01-11 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17474326#comment-17474326
 ] 

Martijn Visser commented on FLINK-25070:


[~wenlong.lwl] Does your FLIP also supersede 
https://issues.apache.org/jira/browse/FLINK-20375? 

> FLIP-195: Improve the name and structure of vertex and operator name for job
> 
>
> Key: FLINK-25070
> URL: https://issues.apache.org/jira/browse/FLINK-25070
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Runtime / Web Frontend, Table SQL / 
> Runtime
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
> Fix For: 1.15.0
>
>
> this is an umbrella issue tracking the improvement of operator/vertex names 
> in flink: 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-195%3A+Improve+the+name+and+structure+of+vertex+and+operator+name+for+job





[jira] [Commented] (FLINK-25095) Case when would be translated into different expression in Hive dialect and default dialect

2022-01-11 Thread Shuo Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17474325#comment-17474325
 ] 

Shuo Cheng commented on FLINK-25095:


Similar issue: FLINK-20765

> Case when would be translated into different expression in Hive dialect and 
> default dialect
> ---
>
> Key: FLINK-25095
> URL: https://issues.apache.org/jira/browse/FLINK-25095
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Table SQL / Planner
>Affects Versions: 1.13.0, 1.14.0
>Reporter: xiangqiao
>Priority: Major
>  Labels: pull-request-available
>
> When we {*}use the blink planner{*}'s *batch mode* and set the {*}hive 
> dialect{*}, this exception is reported when a *subquery field* is used in 
> {*}case when{*}.
> {code:java}
> org.apache.flink.table.planner.codegen.CodeGenException: Mismatch of 
> function's argument data type 'BOOLEAN NOT NULL' and actual argument type 
> 'BOOLEAN'.    at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$$anonfun$verifyArgumentTypes$1.apply(BridgingFunctionGenUtil.scala:326)
>     at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$$anonfun$verifyArgumentTypes$1.apply(BridgingFunctionGenUtil.scala:323)
>     at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>     at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$.verifyArgumentTypes(BridgingFunctionGenUtil.scala:323)
>     at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$.generateFunctionAwareCallWithDataType(BridgingFunctionGenUtil.scala:98)
>     at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$.generateFunctionAwareCall(BridgingFunctionGenUtil.scala:65)
>     at 
> org.apache.flink.table.planner.codegen.calls.BridgingSqlFunctionCallGen.generate(BridgingSqlFunctionCallGen.scala:73)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateCallExpression(ExprCodeGenerator.scala:811)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:501)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:56)
>     at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateExpression(ExprCodeGenerator.scala:155)
>     at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$$anonfun$2.apply(CalcCodeGenerator.scala:152)
>     at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$$anonfun$2.apply(CalcCodeGenerator.scala:152)
>     at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>     at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>     at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>     at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>     at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$.produceProjectionCode$1(CalcCodeGenerator.scala:152)
>     at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$.generateProcessCode(CalcCodeGenerator.scala:177)
>     at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$.generateCalcOperator(CalcCodeGenerator.scala:50)
>     at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator.generateCalcOperator(CalcCodeGenerator.scala)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecCalc.translateToPlanInternal(CommonExecCalc.java:95)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:134)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:250)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecHashAggregate.translateToPlanInternal(BatchExecHashAggregate.java:84)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:134)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:250)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecExchange.translateToPlanInternal(BatchExecExchange.java:103)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:134)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:250)
>     

[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #2: [FLINK-25625] Introduce FileFormat

2022-01-11 Thread GitBox


JingsongLi commented on a change in pull request #2:
URL: https://github.com/apache/flink-table-store/pull/2#discussion_r782782673



##
File path: pom.xml
##
@@ -444,7 +439,7 @@ under the License.
 
 
 
-
com.fasterxml.jackson*:*:(,2.12.0]

Review comment:
   flink-avro uses jackson 2.11 (from avro)








[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #2: [FLINK-25625] Introduce FileFormat

2022-01-11 Thread GitBox


JingsongLi commented on a change in pull request #2:
URL: https://github.com/apache/flink-table-store/pull/2#discussion_r782782354



##
File path: pom.xml
##
@@ -153,12 +154,6 @@ under the License.
 1.3.9
 
 
-<dependency>
-    <groupId>org.apache.flink</groupId>
-    <artifactId>flink-shaded-jackson</artifactId>

Review comment:
   Useless dependency.









[GitHub] [flink] ruanhang1993 commented on a change in pull request #17994: [FLINK-24456][Connectors / Kafka,Table SQL / Ecosystem] Support bound…

2022-01-11 Thread GitBox


ruanhang1993 commented on a change in pull request #17994:
URL: https://github.com/apache/flink/pull/17994#discussion_r782782134



##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/table/KafkaConnectorOptions.java
##
@@ -158,6 +158,13 @@
 .withDescription(
 "Optional offsets used in case of 
\"specific-offsets\" startup mode");
 
+public static final ConfigOption<String> SCAN_BOUNDED_SPECIFIC_OFFSETS =
+ConfigOptions.key("scan.bounded.specific-offsets")
+.stringType()
+.noDefaultValue()
+.withDescription(
+"When all partitions have reached their stop 
offsets, the source will exit");
+
 public static final ConfigOption<Long> SCAN_STARTUP_TIMESTAMP_MILLIS =

Review comment:
   `scan.startup.mode` has nothing to do with the end logic.
   
   I suggest providing a `scan.bounded.stop-mode` table option. 
   If it is `timestamp`, use `TimestampOffsetsInitializer` for the 
`stoppingOffsetsInitializer` in the `KafkaSourceBuilder`. 
   If it is `specific-offsets`, use `SpecifiedOffsetsInitializer` for the 
`stoppingOffsetsInitializer` in the `KafkaSourceBuilder`.
   Otherwise it defaults to the `NoStoppingOffsetsInitializer`.
   
   Besides, we need to validate the corresponding settings for the different modes.
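   The suggested dispatch could be sketched as follows; the option name and the initializer names come from the comment above, while the plain-string return values merely stand in for the actual initializer instances a real implementation would construct:

```java
import java.util.Locale;

// Sketch of the proposed dispatch: map a hypothetical "scan.bounded.stop-mode"
// value to the (name of the) offsets initializer that would be handed to
// KafkaSourceBuilder#stoppingOffsetsInitializer. Strings stand in for the
// real initializer objects.
class StopModeDispatch {
    static String initializerFor(String stopMode) {
        if (stopMode == null) {
            return "NoStoppingOffsetsInitializer"; // unbounded by default
        }
        switch (stopMode.toLowerCase(Locale.ROOT)) {
            case "timestamp":
                return "TimestampOffsetsInitializer";
            case "specific-offsets":
                return "SpecifiedOffsetsInitializer";
            default:
                return "NoStoppingOffsetsInitializer";
        }
    }
}
```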








[jira] [Updated] (FLINK-25624) KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee blocked on azure pipeline

2022-01-11 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-25624:

Affects Version/s: (was: 1.14.2)

> KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee blocked on azure pipeline
> --
>
> Key: FLINK-25624
> URL: https://issues.apache.org/jira/browse/FLINK-25624
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> "main" #1 prio=5 os_prio=0 tid=0x7fda8c00b000 nid=0x21b2 waiting on 
> condition [0x7fda92dd7000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x826165c0> (a 
> java.util.concurrent.CompletableFuture$Signaller)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1989)
>   at 
> org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:69)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1951)
>   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testRecoveryWithAssertion(KafkaSinkITCase.java:335)
>   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee(KafkaSinkITCase.java:190)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29285=logs=c5612577-f1f7-5977-6ff6-7432788526f7=ffa8837a-b445-534e-cdf4-db364cf8235d=42106





[jira] [Updated] (FLINK-25625) Introduce FileFormat for table-store

2022-01-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25625:
---
Labels: pull-request-available  (was: )

> Introduce FileFormat for table-store
> 
>
> Key: FLINK-25625
> URL: https://issues.apache.org/jira/browse/FLINK-25625
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.1.0
>
>
> Introduce file format class which creates reader and writer factories for 
> specific file format.





[GitHub] [flink-table-store] JingsongLi opened a new pull request #2: [FLINK-25625] Introduce FileFormat

2022-01-11 Thread GitBox


JingsongLi opened a new pull request #2:
URL: https://github.com/apache/flink-table-store/pull/2


   Introduce file format class which creates reader and writer factories for 
specific file format.
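
   A minimal sketch of the shape such an abstraction might take (all names below are illustrative assumptions, not the actual flink-table-store API):

```java
// Hypothetical sketch: a FileFormat identified by name that supplies
// factories for its own readers and writers. The real class in
// flink-table-store may look quite different.
interface ReaderFactory {
    String describe();
}

interface WriterFactory {
    String describe();
}

abstract class FileFormat {
    private final String identifier;

    protected FileFormat(String identifier) {
        this.identifier = identifier;
    }

    public String identifier() {
        return identifier;
    }

    // Each concrete format creates factories for its specific file format.
    public abstract ReaderFactory createReaderFactory();

    public abstract WriterFactory createWriterFactory();
}

// Illustrative concrete format.
class AvroFileFormat extends FileFormat {
    AvroFileFormat() {
        super("avro");
    }

    @Override
    public ReaderFactory createReaderFactory() {
        return () -> "avro reader factory";
    }

    @Override
    public WriterFactory createWriterFactory() {
        return () -> "avro writer factory";
    }
}
```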






[jira] [Created] (FLINK-25625) Introduce FileFormat for table-store

2022-01-11 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-25625:


 Summary: Introduce FileFormat for table-store
 Key: FLINK-25625
 URL: https://issues.apache.org/jira/browse/FLINK-25625
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Reporter: Jingsong Lee
 Fix For: table-store-0.1.0


Introduce file format class which creates reader and writer factories for 
specific file format.





[jira] [Assigned] (FLINK-25625) Introduce FileFormat for table-store

2022-01-11 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-25625:


Assignee: Jingsong Lee

> Introduce FileFormat for table-store
> 
>
> Key: FLINK-25625
> URL: https://issues.apache.org/jira/browse/FLINK-25625
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: table-store-0.1.0
>
>
> Introduce file format class which creates reader and writer factories for 
> specific file format.





[jira] [Closed] (FLINK-25619) Init flink-table-store repository

2022-01-11 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-25619.

Resolution: Fixed

master:

d0e4138a24ca35b944c0243f97d19c48b2501c59

96de3c96feecc1337af8f7d99aed73e8f948eb73

8098f0a604503795de347bfc27478b53d6bef285

7d01fc003c39046f52a7c1652d93e251d614869b

0a6c41aa844c30627ebcd3b0bc8c662956d9ac28

2bd3dd55c39dfded93d55b2dac42fa19cb8582e8

9d5c08c86cbda10119a0f4da679529974617d637

> Init flink-table-store repository
> -
>
> Key: FLINK-25619
> URL: https://issues.apache.org/jira/browse/FLINK-25619
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.1.0
>
>
> Create:
>  * README.md
>  * NOTICE LICENSE CODE_OF_CONDUCT
>  * .gitignore
>  * maven tools
>  * releasing tools
>  * github build workflow
>  * pom.xml





[jira] [Commented] (FLINK-25523) KafkaSourceITCase$KafkaSpecificTests.testTimestamp fails on AZP

2022-01-11 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17474323#comment-17474323
 ] 

Yun Gao commented on FLINK-25523:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29270=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=35635

> KafkaSourceITCase$KafkaSpecificTests.testTimestamp fails on AZP
> ---
>
> Key: FLINK-25523
> URL: https://issues.apache.org/jira/browse/FLINK-25523
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> The test {{KafkaSourceITCase$KafkaSpecificTests.testTimestamp}} fails on AZP 
> with
> {code}
> 2022-01-05T03:08:57.1647316Z java.util.concurrent.TimeoutException: The topic 
> metadata failed to propagate to Kafka broker.
> 2022-01-05T03:08:57.1660635Z  at 
> org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:214)
> 2022-01-05T03:08:57.1667856Z  at 
> org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:230)
> 2022-01-05T03:08:57.1668778Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:216)
> 2022-01-05T03:08:57.1670072Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:98)
> 2022-01-05T03:08:57.1671078Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:216)
> 2022-01-05T03:08:57.1671942Z  at 
> org.apache.flink.connector.kafka.source.KafkaSourceITCase$KafkaSpecificTests.testTimestamp(KafkaSourceITCase.java:104)
> 2022-01-05T03:08:57.1672619Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-01-05T03:08:57.1673715Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-01-05T03:08:57.1675000Z  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-01-05T03:08:57.1675907Z  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 2022-01-05T03:08:57.1676587Z  at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
> 2022-01-05T03:08:57.1677316Z  at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> 2022-01-05T03:08:57.1678380Z  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> 2022-01-05T03:08:57.1679264Z  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
> 2022-01-05T03:08:57.1680002Z  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
> 2022-01-05T03:08:57.1680776Z  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
> 2022-01-05T03:08:57.1681682Z  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
> 2022-01-05T03:08:57.1682442Z  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
> 2022-01-05T03:08:57.1683450Z  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> 2022-01-05T03:08:57.1685362Z  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> 2022-01-05T03:08:57.1686284Z  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
> 2022-01-05T03:08:57.1687152Z  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
> 2022-01-05T03:08:57.1687818Z  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
> 2022-01-05T03:08:57.1688479Z  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
> 2022-01-05T03:08:57.1689376Z  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210)
> 2022-01-05T03:08:57.1690108Z  at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> 2022-01-05T03:08:57.1690825Z  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206)
> 2022-01-05T03:08:57.1691470Z  at 
> 

[jira] [Commented] (FLINK-24801) "Post-job: Cache Maven local repo" failed on Azure

2022-01-11 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17474321#comment-17474321
 ] 

Yun Gao commented on FLINK-24801:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29276=logs=af184cdd-c6d8-5084-0b69-7e9c67b35f7a=bccfcce4-958a-475a-a593-8a3baa959662=225

> "Post-job: Cache Maven local repo" failed on Azure
> --
>
> Key: FLINK-24801
> URL: https://issues.apache.org/jira/browse/FLINK-24801
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2021-11-05T13:49:20.5298458Z Resolved to: 
> maven|Linux|kL7EJ8TeMrJ0VZs51DUWRqheXcKK2cN2spGtx9IbVxQ=
> 2021-11-05T13:49:21.0445785Z ApplicationInsightsTelemetrySender will 
> correlate events with X-TFS-Session b5cbe6a0-61e7-4345-81b6-efaed3924cbe
> 2021-11-05T13:49:21.0700758Z Getting a pipeline cache artifact with one of 
> the following fingerprints:
> 2021-11-05T13:49:21.0702157Z Fingerprint: 
> `maven|Linux|kL7EJ8TeMrJ0VZs51DUWRqheXcKK2cN2spGtx9IbVxQ=`
> 2021-11-05T13:49:21.3648278Z There is a cache miss.
> 2021-11-05T13:50:26.4782603Z tar: 
> c9692460fbd54c808fca7be315d83578_archive.tar: Wrote only 2048 of 10240 bytes
> 2021-11-05T13:50:26.4784975Z tar: Error is not recoverable: exiting now
> 2021-11-05T13:50:27.0397318Z ApplicationInsightsTelemetrySender correlated 1 
> events with X-TFS-Session b5cbe6a0-61e7-4345-81b6-efaed3924cbe
> 2021-11-05T13:50:27.0531774Z ##[error]Process returned non-zero exit code: 2
> 2021-11-05T13:50:27.0666804Z ##[section]Finishing: Cache Maven local repo
> {code}
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=26018=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=96bd9872-da2e-43b4-b013-1295f1c23a41=220]





[GitHub] [flink] flinkbot edited a comment on pull request #18119: [FLINK-24947] Support hostNetwork for native K8s integration on session mode

2022-01-11 Thread GitBox


flinkbot edited a comment on pull request #18119:
URL: https://github.com/apache/flink/pull/18119#issuecomment-994734000


   
   ## CI report:
   
   * 3dbd53a6bc03cad9a218fdc080d367d85f15ec96 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29291)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25624) KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee blocked on azure pipeline

2022-01-11 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-25624:

Affects Version/s: 1.15.0

> KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee blocked on azure pipeline
> --
>
> Key: FLINK-25624
> URL: https://issues.apache.org/jira/browse/FLINK-25624
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> "main" #1 prio=5 os_prio=0 tid=0x7fda8c00b000 nid=0x21b2 waiting on 
> condition [0x7fda92dd7000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x826165c0> (a 
> java.util.concurrent.CompletableFuture$Signaller)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1989)
>   at 
> org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:69)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1951)
>   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testRecoveryWithAssertion(KafkaSinkITCase.java:335)
>   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee(KafkaSinkITCase.java:190)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29285=logs=c5612577-f1f7-5977-6ff6-7432788526f7=ffa8837a-b445-534e-cdf4-db364cf8235d=42106





[jira] [Commented] (FLINK-25624) KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee blocked on azure pipeline

2022-01-11 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17474320#comment-17474320
 ] 

Yun Gao commented on FLINK-25624:
-

Perhaps cc [~fpaul] ~

> KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee blocked on azure pipeline
> --
>
> Key: FLINK-25624
> URL: https://issues.apache.org/jira/browse/FLINK-25624
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> "main" #1 prio=5 os_prio=0 tid=0x7fda8c00b000 nid=0x21b2 waiting on 
> condition [0x7fda92dd7000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x826165c0> (a 
> java.util.concurrent.CompletableFuture$Signaller)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1989)
>   at 
> org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:69)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1951)
>   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testRecoveryWithAssertion(KafkaSinkITCase.java:335)
>   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee(KafkaSinkITCase.java:190)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29285=logs=c5612577-f1f7-5977-6ff6-7432788526f7=ffa8837a-b445-534e-cdf4-db364cf8235d=42106





[GitHub] [flink] Aireed removed a comment on pull request #13789: [FLINK-19727][table-runtime] Implement ParallelismProvider for sink i…

2022-01-11 Thread GitBox


Aireed removed a comment on pull request #13789:
URL: https://github.com/apache/flink/pull/13789#issuecomment-1009713407


   @shouweikun hello, what's the purpose of this check? Must a connector that 
supports CDC have the same parallelism as the source, or have primary keys, 
even if it's not in changelog mode?
   
![image](https://user-images.githubusercontent.com/8862395/148908370-f9c8ce7a-0f22-438b-a3a0-d78cc0669fb7.png)
   






[jira] [Updated] (FLINK-25624) KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee blocked on azure pipeline

2022-01-11 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-25624:

Labels: test-stability  (was: )

> KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee blocked on azure pipeline
> --
>
> Key: FLINK-25624
> URL: https://issues.apache.org/jira/browse/FLINK-25624
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.2
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> "main" #1 prio=5 os_prio=0 tid=0x7fda8c00b000 nid=0x21b2 waiting on 
> condition [0x7fda92dd7000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x826165c0> (a 
> java.util.concurrent.CompletableFuture$Signaller)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1989)
>   at 
> org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:69)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1951)
>   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testRecoveryWithAssertion(KafkaSinkITCase.java:335)
>   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee(KafkaSinkITCase.java:190)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29285=logs=c5612577-f1f7-5977-6ff6-7432788526f7=ffa8837a-b445-534e-cdf4-db364cf8235d=42106





[jira] [Created] (FLINK-25624) KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee blocked on azure pipeline

2022-01-11 Thread Yun Gao (Jira)
Yun Gao created FLINK-25624:
---

 Summary: KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee 
blocked on azure pipeline
 Key: FLINK-25624
 URL: https://issues.apache.org/jira/browse/FLINK-25624
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.14.2
Reporter: Yun Gao


{code:java}
"main" #1 prio=5 os_prio=0 tid=0x7fda8c00b000 nid=0x21b2 waiting on 
condition [0x7fda92dd7000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x826165c0> (a 
java.util.concurrent.CompletableFuture$Signaller)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
at 
java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
at 
java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1989)
at 
org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:69)
at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1951)
at 
org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testRecoveryWithAssertion(KafkaSinkITCase.java:335)
at 
org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee(KafkaSinkITCase.java:190)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
 {code}
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29285=logs=c5612577-f1f7-5977-6ff6-7432788526f7=ffa8837a-b445-534e-cdf4-db364cf8235d=42106





[GitHub] [flink] flinkbot edited a comment on pull request #18306: [FLINK-25445] No need to create local recovery dirs when disabled loc…

2022-01-11 Thread GitBox


flinkbot edited a comment on pull request #18306:
URL: https://github.com/apache/flink/pull/18306#issuecomment-1008234006


   
   ## CI report:
   
   * 32883ca0240ef8ef0f3fb7162035a3aff23cdb7d Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29293)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-6757) Investigate Apache Atlas integration

2022-01-11 Thread HideOnBush (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17474316#comment-17474316
 ] 

HideOnBush commented on FLINK-6757:
---

I think this is very meaningful, especially in the construction of real-time 
data warehouses: through hooks, many manual recording tasks can be avoided. We 
are already using this feature.

> Investigate Apache Atlas integration
> 
>
> Key: FLINK-6757
> URL: https://issues.apache.org/jira/browse/FLINK-6757
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Till Rohrmann
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> Users asked for an integration of Apache Flink with Apache Atlas. It might be 
> worthwhile to investigate what is necessary to achieve this task.
> References:
> http://atlas.incubator.apache.org/StormAtlasHook.html





[jira] [Commented] (FLINK-25374) Azure pipeline get stalled on scanning project

2022-01-11 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17474314#comment-17474314
 ] 

Yun Gao commented on FLINK-25374:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29285=logs=e92ecf6d-e207-5a42-7ff7-528ff0c5b259=40fc352e-9b4c-5fd8-363f-628f24b01ec2=8314

> Azure pipeline get stalled on scanning project
> --
>
> Key: FLINK-25374
> URL: https://issues.apache.org/jira/browse/FLINK-25374
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.15.0, 1.13.5, 1.14.2
>Reporter: Yun Gao
>Assignee: Martijn Visser
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.15.0
>
> Attachments: network2.jpg
>
>
> {code:java}
> 2021-12-18T02:01:01.8980373Z Dec 18 02:01:01 RUNNING 'run_mvn 
> -Dflink.forkCount=2 -Dflink.forkCountTestPackage=2 -Dfast -Pskip-webui-build 
> -Dlog.dir=/__w/_temp/debug_files 
> -Dlog4j.configurationFile=file:///__w/2/s/tools/ci/log4j.properties 
> -DskipTests -Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.11  
> install'.
> 2021-12-18T02:01:01.885Z Dec 18 02:01:01 Invoking mvn with 'mvn 
> -Dmaven.wagon.http.pool=false -Dorg.slf4j.simpleLogger.showDateTime=true 
> -Dorg.slf4j.simpleLogger.dateTimeFormat=HH:mm:ss.SSS 
> -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn
>  --no-snapshot-updates -B -Dhadoop.version=2.8.3 -Dinclude_hadoop_aws 
> -Dscala-2.11  --settings /__w/2/s/tools/ci/alibaba-mirror-settings.xml  
> -Dflink.forkCount=2 -Dflink.forkCountTestPackage=2 -Dfast -Pskip-webui-build 
> -Dlog.dir=/__w/_temp/debug_files 
> -Dlog4j.configurationFile=file:///__w/2/s/tools/ci/log4j.properties 
> -DskipTests -Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.11 install'
> 2021-12-18T02:01:01.9407169Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2021-12-18T02:01:02.8291019Z Dec 18 02:01:02 [INFO] Scanning for projects...
> 2021-12-18T02:16:02.4676481Z Dec 18 02:16:02 
> ==
> 2021-12-18T02:16:02.4679732Z Dec 18 02:16:02 Process produced no output for 
> 900 seconds.
> 2021-12-18T02:16:02.4680416Z Dec 18 02:16:02 
> ==
> 2021-12-18T02:16:02.4681062Z Dec 18 02:16:02 
> ==
> 2021-12-18T02:16:02.4681601Z Dec 18 02:16:02 The following Java processes are 
> running (JPS)
> 2021-12-18T02:16:02.4682191Z Dec 18 02:16:02 
> ==
> 2021-12-18T02:16:02.4743659Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2021-12-18T02:16:03.1019936Z Dec 18 02:16:03 354 Launcher
> 2021-12-18T02:16:03.1020514Z Dec 18 02:16:03 857 Jps
> 2021-12-18T02:16:03.1052014Z Dec 18 02:16:03 
> ==
> 2021-12-18T02:16:03.1052803Z Dec 18 02:16:03 Printing stack trace of Java 
> process 354
> 2021-12-18T02:16:03.1053382Z Dec 18 02:16:03 
> ==
> 2021-12-18T02:16:03.1123385Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2021-12-18T02:16:03.4416679Z Dec 18 02:16:03 2021-12-18 02:16:03
> 2021-12-18T02:16:03.4417639Z Dec 18 02:16:03 Full thread dump OpenJDK 64-Bit 
> Server VM (25.292-b10 mixed mode):
> 2021-12-18T02:16:03.4418277Z Dec 18 02:16:03 
> 2021-12-18T02:16:03.4419452Z Dec 18 02:16:03 "Attach Listener" #22 daemon 
> prio=9 os_prio=0 tid=0x7fbc9c001000 nid=0x3b8 waiting on condition 
> [0x]
> 2021-12-18T02:16:03.4420652Z Dec 18 02:16:03java.lang.Thread.State: 
> RUNNABLE
> 2021-12-18T02:16:03.4421479Z Dec 18 02:16:03 
> 2021-12-18T02:16:03.4422239Z Dec 18 02:16:03 "Service Thread" #20 daemon 
> prio=9 os_prio=0 tid=0x7fbd4810c800 nid=0x196 runnable 
> [0x]
> 2021-12-18T02:16:03.4422936Z Dec 18 02:16:03java.lang.Thread.State: 
> RUNNABLE
> 2021-12-18T02:16:03.4423280Z Dec 18 02:16:03 
> 2021-12-18T02:16:03.4423900Z Dec 18 02:16:03 "C1 CompilerThread14" #19 daemon 
> prio=9 os_prio=0 tid=0x7fbd48109800 nid=0x195 waiting on condition 
> [0x]
> 2021-12-18T02:16:03.4424648Z Dec 18 02:16:03java.lang.Thread.State: 
> RUNNABLE
> 2021-12-18T02:16:03.4425089Z Dec 18 02:16:03 
> 2021-12-18T02:16:03.4425734Z Dec 18 02:16:03 "C1 CompilerThread13" #18 daemon 
> prio=9 os_prio=0 tid=0x7fbd48107800 nid=0x194 waiting on condition 
> [0x]
> 2021-12-18T02:16:03.4426339Z Dec 18 02:16:03java.lang.Thread.State: 
> RUNNABLE
> 

[GitHub] [flink] wsry commented on a change in pull request #17936: [FLINK-24954][network] Reset read buffer request timeout on buffer recycling for sort-shuffle

2022-01-11 Thread GitBox


wsry commented on a change in pull request #17936:
URL: https://github.com/apache/flink/pull/17936#discussion_r782747038



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/disk/BatchShuffleReadBufferPoolTest.java
##
@@ -98,6 +98,34 @@ public void testRecycle() throws Exception {
 assertEquals(bufferPool.getNumTotalBuffers(), 
bufferPool.getAvailableBuffers());
 }
 
+@Test
+public void testBufferOperationTimestampUpdated() throws Exception {
+BatchShuffleReadBufferPool bufferPool = new 
BatchShuffleReadBufferPool(1024, 1024);
+long oldTimestamp = bufferPool.getLastBufferOperationTimestamp();
+List<MemorySegment> buffers = bufferPool.requestBuffers();
+assertEquals(1, buffers.size());
+long nowTimestamp = bufferPool.getLastBufferOperationTimestamp();
+// The timestamp is updated when requesting buffers successfully
+assertTrue(nowTimestamp > oldTimestamp);
+assertEquals(nowTimestamp, 
bufferPool.getLastBufferOperationTimestamp());
+
+oldTimestamp = nowTimestamp;
+bufferPool.recycle(buffers);

Review comment:
   It would be better to sleep, for example for 100 ms, before recycling the buffers.

##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/disk/BatchShuffleReadBufferPoolTest.java
##
@@ -98,6 +98,34 @@ public void testRecycle() throws Exception {
 assertEquals(bufferPool.getNumTotalBuffers(), 
bufferPool.getAvailableBuffers());
 }
 
+@Test
+public void testBufferOperationTimestampUpdated() throws Exception {
+BatchShuffleReadBufferPool bufferPool = new 
BatchShuffleReadBufferPool(1024, 1024);
+long oldTimestamp = bufferPool.getLastBufferOperationTimestamp();
+List<MemorySegment> buffers = bufferPool.requestBuffers();

Review comment:
   It would be better to sleep, for example for 100 ms, before requesting the buffers.

##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/disk/BatchShuffleReadBufferPoolTest.java
##
@@ -98,6 +98,34 @@ public void testRecycle() throws Exception {
 assertEquals(bufferPool.getNumTotalBuffers(), 
bufferPool.getAvailableBuffers());
 }
 
+@Test
+public void testBufferOperationTimestampUpdated() throws Exception {
+BatchShuffleReadBufferPool bufferPool = new 
BatchShuffleReadBufferPool(1024, 1024);
+long oldTimestamp = bufferPool.getLastBufferOperationTimestamp();
+List<MemorySegment> buffers = bufferPool.requestBuffers();
+assertEquals(1, buffers.size());
+long nowTimestamp = bufferPool.getLastBufferOperationTimestamp();
+// The timestamp is updated when requesting buffers successfully
+assertTrue(nowTimestamp > oldTimestamp);
+assertEquals(nowTimestamp, 
bufferPool.getLastBufferOperationTimestamp());
+
+oldTimestamp = nowTimestamp;

Review comment:
   I guess we can replace the above two lines of code with this:
   ```oldTimestamp = bufferPool.getLastBufferOperationTimestamp();```

##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/disk/BatchShuffleReadBufferPoolTest.java
##
@@ -98,6 +98,34 @@ public void testRecycle() throws Exception {
 assertEquals(bufferPool.getNumTotalBuffers(), 
bufferPool.getAvailableBuffers());
 }
 
+@Test
+public void testBufferOperationTimestampUpdated() throws Exception {
+BatchShuffleReadBufferPool bufferPool = new 
BatchShuffleReadBufferPool(1024, 1024);
+long oldTimestamp = bufferPool.getLastBufferOperationTimestamp();
+List<MemorySegment> buffers = bufferPool.requestBuffers();
+assertEquals(1, buffers.size());
+long nowTimestamp = bufferPool.getLastBufferOperationTimestamp();
+// The timestamp is updated when requesting buffers successfully
+assertTrue(nowTimestamp > oldTimestamp);
+assertEquals(nowTimestamp, 
bufferPool.getLastBufferOperationTimestamp());
+
+oldTimestamp = nowTimestamp;
+bufferPool.recycle(buffers);
+// The timestamp is updated when recycling buffers
+assertTrue(bufferPool.getLastBufferOperationTimestamp() > 
oldTimestamp);
+
+bufferPool.requestBuffers();
+
+oldTimestamp = bufferPool.getLastBufferOperationTimestamp();
+buffers = bufferPool.requestBuffers();
+assertEquals(0, buffers.size());

Review comment:
   The variable `buffers` should not be updated; if it is updated to an empty 
list, the ```bufferPool.recycle(buffers);``` below will recycle nothing. I 
think we can replace the above two lines of code with ```assertEquals(0, 
bufferPool.requestBuffers().size());```.

##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/SortMergeResultPartitionReadSchedulerTest.java
##
@@ -192,6 +199,131 @@ public void testOnReadBufferRequestError() throws 
Exception {
 assertAllResourcesReleased();
 }
 
+@Test
+public void 

[GitHub] [flink] flinkbot edited a comment on pull request #18268: [FLINK-14902][connector] Supports jdbc async lookup join

2022-01-11 Thread GitBox


flinkbot edited a comment on pull request #18268:
URL: https://github.com/apache/flink/pull/18268#issuecomment-1005479356


   
   ## CI report:
   
   * fe7d298f1ac142a2fa53241df09eaf93b44ec4be Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29290)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] MonsterChenzhuo commented on a change in pull request #17994: [FLINK-24456][Connectors / Kafka,Table SQL / Ecosystem] Support bound…

2022-01-11 Thread GitBox


MonsterChenzhuo commented on a change in pull request #17994:
URL: https://github.com/apache/flink/pull/17994#discussion_r782766430



##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/table/KafkaConnectorOptions.java
##
@@ -158,6 +158,13 @@
 .withDescription(
 "Optional offsets used in case of 
\"specific-offsets\" startup mode");
 
+public static final ConfigOption<String> SCAN_BOUNDED_SPECIFIC_OFFSETS =
+ConfigOptions.key("scan.bounded.specific-offsets")
+.stringType()
+.noDefaultValue()
+.withDescription(
+"When all partitions have reached their stop 
offsets, the source will exit");
+
 public static final ConfigOption<Long> SCAN_STARTUP_TIMESTAMP_MILLIS =

Review comment:
   @ruanhang1993 Do you think the above logic is OK? If so, I will make the 
changes.








[GitHub] [flink] MonsterChenzhuo commented on a change in pull request #17994: [FLINK-24456][Connectors / Kafka,Table SQL / Ecosystem] Support bound…

2022-01-11 Thread GitBox


MonsterChenzhuo commented on a change in pull request #17994:
URL: https://github.com/apache/flink/pull/17994#discussion_r782766011



##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/table/KafkaConnectorOptions.java
##
@@ -158,6 +158,13 @@
 .withDescription(
 "Optional offsets used in case of 
\"specific-offsets\" startup mode");
 
+public static final ConfigOption<String> SCAN_BOUNDED_SPECIFIC_OFFSETS =
+ConfigOptions.key("scan.bounded.specific-offsets")
+.stringType()
+.noDefaultValue()
+.withDescription(
+"When all partitions have reached their stop 
offsets, the source will exit");
+
 public static final ConfigOption<Long> SCAN_STARTUP_TIMESTAMP_MILLIS =

Review comment:
   Hi @ruanhang1993, please check whether my idea is correct:
   First, there are two options for the start read position: an offset or a 
timestamp.
   When the start read position is an offset, it can be either specified 
explicitly or taken from the currently consumed position, and the end position 
corresponds to `scan.bounded.stop-offsets`.
   When the start position is a timestamp, the end position should also use the 
corresponding timestamp.
   So the logic can be changed to:
   when a bounded offset is set, first determine whether `scan.startup.mode` is 
set. If not, use the `scan.bounded.stop-offsets` logic. If it is set, check 
whether it is a timestamp: if not, use the `scan.bounded.stop-offsets` logic; 
if it is a timestamp, use the `scan.bounded.stop-timestamp` logic.








[GitHub] [flink] xintongsong commented on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-01-11 Thread GitBox


xintongsong commented on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1010713720


   Thanks for working on this, @zjureel & @KarmaGYZ.
   
   I'm still not entirely sure about exposing two executors (or two sets of 
scheduling interfaces) from the RPC endpoint. It's not easy to maintain as it 
requires callers to understand the differences between the two executors (tasks 
will be canceled as soon as the RPC endpoint shuts down for one executor, while 
not being canceled immediately for the other).
   
   I understand this is in response to the previous CI problems, where we 
suspect some clean-up operations are not completed due to tasks being canceled, 
and it's not easy to identify which clean-up operations are causing the 
problems. However, I'm not comfortable with sacrificing the new design just to 
cover up the real problems.
   
   I'd suggest trying a few more things before we go for the two-executors 
approach.
   - I noticed there are some calls to `schedule` with a zero delay (e.g., 
`ApplicationDispatcherBootstrap#runApplicationAsync`). I think the purpose of 
these calls is to process asynchronously, rather than to schedule a future 
task. It might be helpful to check the delay value in 
`MainThreadExecutor#schedule`, and only use the scheduling thread pool when the 
delay is positive. My gut feeling is that most clean-up tasks are async tasks 
that need to be executed immediately rather than in the future.
   - Another thing we may want to try is to replace 
`ExecutorService#shutdownNow` with `Executor#awaitTermination`, to leave a 
short time for the clean-up tasks to complete.
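   A minimal sketch of the two suggestions above, assuming a plain 
`ScheduledExecutorService`; the class and method names here are illustrative 
and not Flink's actual `MainThreadExecutor` API:

   ```java
   import java.util.concurrent.Executor;
   import java.util.concurrent.Executors;
   import java.util.concurrent.ScheduledExecutorService;
   import java.util.concurrent.TimeUnit;

   // Illustrative sketch only; names are invented, not flink-runtime types.
   public class ScheduleSketch {

       private final ScheduledExecutorService scheduledPool =
               Executors.newSingleThreadScheduledExecutor();
       private final Executor directExecutor = Runnable::run;

       /** Only use the scheduling thread pool when the delay is positive. */
       public void schedule(Runnable task, long delayMillis) {
           if (delayMillis <= 0) {
               // "schedule with zero delay" really means "run asynchronously now"
               directExecutor.execute(task);
           } else {
               scheduledPool.schedule(task, delayMillis, TimeUnit.MILLISECONDS);
           }
       }

       /** Graceful shutdown: leave a short time for clean-up tasks to finish. */
       public void shutdownGracefully(long timeoutMillis) throws InterruptedException {
           scheduledPool.shutdown(); // accept no new tasks, but let queued ones run
           if (!scheduledPool.awaitTermination(timeoutMillis, TimeUnit.MILLISECONDS)) {
               scheduledPool.shutdownNow(); // last resort after the grace period
           }
       }
   }
   ```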






[jira] [Commented] (FLINK-25441) ProducerFailedException will cause task status switch from RUNNING to CANCELED, which will cause the job to hang.

2022-01-11 Thread Lijie Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17474304#comment-17474304
 ] 

Lijie Wang commented on FLINK-25441:


[~kevin.cyj] I would. I will prepare a fix according to your suggestion.

> ProducerFailedException will cause task status switch from RUNNING to 
> CANCELED, which will cause the job to hang.
> -
>
> Key: FLINK-25441
> URL: https://issues.apache.org/jira/browse/FLINK-25441
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.15.0
>Reporter: Lijie Wang
>Priority: Major
> Fix For: 1.15.0
>
>
> The {{ProducerFailedException}} extends {{{}CancelTaskException{}}}, which 
> will cause the task status switched from RUNNING to CANCELED. As described in 
> FLINK-17726, if a task is directly CANCELED by TaskManager due to its own 
> runtime issue, the task will not be recovered by JM and thus the job would 
> hang.
> Note that it will not cause problems before FLINK-24182 (it unifies the 
> failureCause handling, changes the check of CancelTaskException from 
> "{{instanceof CancelTaskException}}" to "{{ExceptionUtils.findThrowable}}"), 
> because the {{ProducerFailedException}} is always wrapped by 
> {{{}RemoteTransportException{}}}.
> The example log is as follows:
> {code:java}
> 2021-12-23 21:20:14,965 DEBUG org.apache.flink.runtime.taskmanager.Task   
>  [] - MultipleInput[945] [Source: 
> HiveSource-tpcds_bin_orc_1.catalog_sales, Source: 
> HiveSource-tpcds_bin_orc_1.store_sales, Source: 
> HiveSource-tpcds_bin_orc_1.catalog_sales, Source: 
> HiveSource-tpcds_bin_orc_1.store_sales, Source: 
> HiveSource-tpcds_bin_orc_1.store_sales, Source: 
> HiveSource-tpcds_bin_orc_1.item, Source: 
> HiveSource-tpcds_bin_orc_1.web_sales, Source: 
> HiveSource-tpcds_bin_orc_1.web_sales] - Calc[885] (143/1024)#0 
> (8a883116ab601dd5b9ad5d2717d18918) switched from RUNNING to CANCELED due to 
> CancelTaskException: 
> org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException: 
> Error at remote task manager 
> 'k28b09250.eu95sqa.tbsite.net/100.69.96.154:47459'.
>   at 
> org.apache.flink.runtime.io.network.netty.CreditBasedPartitionRequestClientHandler.decodeMsg(CreditBasedPartitionRequestClientHandler.java:301)
>   at 
> org.apache.flink.runtime.io.network.netty.CreditBasedPartitionRequestClientHandler.channelRead(CreditBasedPartitionRequestClientHandler.java:190)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   at 
> org.apache.flink.runtime.io.network.netty.NettyMessageClientDecoderDelegate.channelRead(NettyMessageClientDecoderDelegate.java:112)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>   at java.lang.Thread.run(Thread.java:834)
> Caused by: 

[GitHub] [flink-table-store] JingsongLi merged pull request #1: [FLINK-25619] Init flink-table-store repository

2022-01-11 Thread GitBox


JingsongLi merged pull request #1:
URL: https://github.com/apache/flink-table-store/pull/1


   






[GitHub] [flink] flinkbot edited a comment on pull request #17777: [FLINK-24886][core] TimeUtils supports the form of m.

2022-01-11 Thread GitBox


flinkbot edited a comment on pull request #1:
URL: https://github.com/apache/flink/pull/1#issuecomment-966971402


   
   ## CI report:
   
   * b00b49e806baf132df4f13124b98469cef9ebb91 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29288)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Closed] (FLINK-25209) SQLClientSchemaRegistryITCase#testReading is broken

2022-01-11 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-25209.

Resolution: Fixed

Fixed in master: 2d98608122574a9e56c00012be24824577ff47e6

The test has been re-opened.

> SQLClientSchemaRegistryITCase#testReading is broken
> ---
>
> Key: FLINK-25209
> URL: https://issues.apache.org/jira/browse/FLINK-25209
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka, Table SQL / Client, Tests
>Affects Versions: 1.13.3
>Reporter: Chesnay Schepler
>Assignee: Qingsheng Ren
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> https://dev.azure.com/chesnay/flink/_build/results?buildId=1880=logs=0e31ee24-31a6-528c-a4bf-45cde9b2a14e=ff03a8fa-e84e-5199-efb2-5433077ce8e2
> {code:java}
> Dec 06 11:33:16 [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 236.417 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase
> Dec 06 11:33:16 [ERROR] 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testReading  
> Time elapsed: 152.789 s  <<< ERROR!
> Dec 06 11:33:16 java.io.IOException: Could not read expected number of 
> messages.
> Dec 06 11:33:16   at 
> org.apache.flink.tests.util.kafka.KafkaContainerClient.readMessages(KafkaContainerClient.java:115)
> Dec 06 11:33:16   at 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testReading(SQLClientSchemaRegistryITCase.java:165)
> {code}
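The fix that closed this issue waits until the consuming topic is ready before polling records. Stripped of the Kafka specifics, that amounts to a poll-until-ready loop; a generic, hypothetical sketch (not the actual `KafkaContainerClient` code) looks like this:

```java
import java.time.Duration;
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

// Generic retry loop in the spirit of the fix: poll a readiness check
// (e.g. "does the topic have partition metadata yet?") until it passes
// or a deadline expires, instead of reading records straight away.
final class ReadinessWait {
    static void waitUntilReady(BooleanSupplier ready, Duration timeout)
            throws TimeoutException, InterruptedException {
        long deadlineNanos = System.nanoTime() + timeout.toNanos();
        while (!ready.getAsBoolean()) {
            if (System.nanoTime() > deadlineNanos) {
                throw new TimeoutException("Condition not met within " + timeout);
            }
            Thread.sleep(50);
        }
    }
}
```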



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-table-store] LadyForest commented on pull request #1: [FLINK-25619] Init flink-table-store repository

2022-01-11 Thread GitBox


LadyForest commented on pull request #1:
URL: https://github.com/apache/flink-table-store/pull/1#issuecomment-1010710055


   LGTM :)






[GitHub] [flink] JingsongLi merged pull request #18326: [FLINK-25209][tests] Wait until the consuming topic is ready before polling records in KafkaContainerClient#readMessages

2022-01-11 Thread GitBox


JingsongLi merged pull request #18326:
URL: https://github.com/apache/flink/pull/18326


   






[jira] [Created] (FLINK-25623) TPC-DS end-to-end test (Blink planner) failed on azure due to download file tpcds.idx failed.

2022-01-11 Thread Yun Gao (Jira)
Yun Gao created FLINK-25623:
---

 Summary: TPC-DS end-to-end test (Blink planner) failed on azure 
due to download file tpcds.idx failed. 
 Key: FLINK-25623
 URL: https://issues.apache.org/jira/browse/FLINK-25623
 Project: Flink
  Issue Type: Bug
  Components: Build System / Azure Pipelines
Affects Versions: 1.13.5
Reporter: Yun Gao


 
{code:java}
Jan 12 02:22:14 [WARN] Download file tpcds.idx failed.
Jan 12 02:22:14 Command: download_and_validate tpcds.idx ../target/generator 
https://raw.githubusercontent.com/ververica/tpc-ds-generators/f5d6c11681637908ce15d697ae683676a5383641/generators/tpcds.idx
 376152c9aa150c59a386b148f954c47d linux failed. Retrying...
Jan 12 02:22:19 Command: download_and_validate tpcds.idx ../target/generator 
https://raw.githubusercontent.com/ververica/tpc-ds-generators/f5d6c11681637908ce15d697ae683676a5383641/generators/tpcds.idx
 376152c9aa150c59a386b148f954c47d linux failed 3 times.
Jan 12 02:22:19 [WARN] Download file tpcds.idx failed.
Jan 12 02:22:19 [ERROR] Download and validate data generator files fail, please 
check the network.
Jan 12 02:22:19 [FAIL] Test script contains errors.
Jan 12 02:22:19 Checking for errors...
Jan 12 02:22:19 No errors in log files.
Jan 12 02:22:19 Checking for exceptions...
Jan 12 02:22:19 No exceptions in log files.
Jan 12 02:22:19 Checking for non-empty .out files...
grep: /home/vsts/work/_temp/debug_files/flink-logs/*.out: No such file or 
directory
Jan 12 02:22:19 No non-empty .out files.
Jan 12 02:22:19 
Jan 12 02:22:19 [FAIL] 'TPC-DS end-to-end test (Blink planner)' failed after 1 
minutes and 12 seconds! Test exited with exit code 1
Jan 12 02:22:19 
 {code}
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29284=logs=08866332-78f7-59e4-4f7e-49a56faa3179=7f606211-1454-543c-70ab-c7a028a1ce8c=19463
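The failing step retries a checksummed download three times before giving up. A hypothetical sketch of such a helper (the function name, warning text, and retry count mirror the log above, but this is not the actual Flink e2e script):

```shell
# Hypothetical download-with-retry helper mirroring the log above
# (not the actual Flink e2e script): fetch a file, verify its MD5
# checksum, and retry a fixed number of times before giving up.
download_and_validate() {
  local file="$1" target_dir="$2" url="$3" expected_md5="$4"
  mkdir -p "$target_dir"
  for attempt in 1 2 3; do
    if curl -fsSL "$url" -o "$target_dir/$file" &&
       echo "$expected_md5  $target_dir/$file" | md5sum -c --quiet -; then
      return 0
    fi
    echo "[WARN] Download file $file failed. Retrying..."
  done
  return 1
}
```

The checksum guard matters here: a flaky network can return a truncated file with HTTP 200, which the MD5 comparison catches.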

 





[jira] [Commented] (FLINK-22411) Checkpoint failed caused by Mkdirs failed to create file, the path for Flink state.checkpoints.dir in docker-compose can not work from Flink Operations Playground

2022-01-11 Thread Serge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17474293#comment-17474293
 ] 

Serge commented on FLINK-22411:
---

That is really confusing. Maybe you can try `chmod 777 
/tmp/flink-checkpoints-directory` [~lvshuang]

 

> Checkpoint failed caused by Mkdirs failed to create file, the path for Flink 
> state.checkpoints.dir in docker-compose can not work from Flink Operations 
> Playground
> --
>
> Key: FLINK-22411
> URL: https://issues.apache.org/jira/browse/FLINK-22411
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.12.2
>Reporter: Serge
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Attachments: screenshot-1.png
>
>
> docker-compose starts correctly, but after several minutes of work Apache 
> Flink has to create checkpoints, and there is a problem with access to the 
> file system. The next step in [Observing Failure & 
> Recovery|https://ci.apache.org/projects/flink/flink-docs-release-1.12/try-flink/flink-operations-playground.html#observing-failure–recovery]
>  cannot be performed.
> Exception:
> {code:java}
> org.apache.flink.runtime.checkpoint.CheckpointException: Could not finalize 
> the pending checkpoint 104. Failure reason: Failure to finalize checkpoint.
> at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:1216)
>  ~[flink-dist_2.11-1.12.1.jar:1.12.1]
> …..
> Caused by: org.apache.flink.util.SerializedThrowable: Mkdirs failed to create 
> file:/tmp/flink-checkpoints-directory/d73c2f87b0d7ea6748a1913ee4b50afe/chk-104
> at 
> org.apache.flink.core.fs.local.LocalFileSystem.create(LocalFileSystem.java:262)
>  ~[flink-dist_2.11-1.12.1.jar:1.12.1]
> {code}
> It works after adding a step:
> Create the checkpoint and savepoint directories on the Docker host machine 
> (these volumes are mounted by the jobmanager and taskmanager, as specified in 
> docker-compose.yaml):
> {code:bash}
> mkdir -p /tmp/flink-checkpoints-directory
> mkdir -p /tmp/flink-savepoints-directory
> {code}
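Combining the quoted fix with the `chmod` suggestion above gives the following sketch. The `777` mode is the blunt instrument from the comment; a tighter mode scoped to the container's user would also work.

```shell
# Sketch: create the host directories mounted by the playground's
# docker-compose.yaml and make them writable for the container user.
mkdir -p /tmp/flink-checkpoints-directory /tmp/flink-savepoints-directory
chmod 777 /tmp/flink-checkpoints-directory /tmp/flink-savepoints-directory
```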





[GitHub] [flink] flinkbot edited a comment on pull request #18326: [FLINK-25209][tests] Wait until the consuming topic is ready before polling records in KafkaContainerClient#readMessages

2022-01-11 Thread GitBox


flinkbot edited a comment on pull request #18326:
URL: https://github.com/apache/flink/pull/18326#issuecomment-1009773258


   
   ## CI report:
   
   * cf2b79036082e166b7682ec9d420301a8a2e4b50 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29287)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #1: [FLINK-25619] Init flink-table-store repository

2022-01-11 Thread GitBox


JingsongLi commented on a change in pull request #1:
URL: https://github.com/apache/flink-table-store/pull/1#discussion_r782760863



##
File path: pom.xml
##
@@ -0,0 +1,697 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+
+  

Review comment:
   I will use 4 spaces instead of 2 spaces








[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #1: [FLINK-25619] Init flink-table-store repository

2022-01-11 Thread GitBox


JingsongLi commented on a change in pull request #1:
URL: https://github.com/apache/flink-table-store/pull/1#discussion_r782760863



##
File path: pom.xml
##
@@ -0,0 +1,697 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+
+  

Review comment:
   I will use 4 spaces








[GitHub] [flink] Thesharing edited a comment on pull request #17777: [FLINK-24886][core] TimeUtils supports the form of m.

2022-01-11 Thread GitBox


Thesharing edited a comment on pull request #1:
URL: https://github.com/apache/flink/pull/1#issuecomment-1010705038


   > hi @Thesharing, the original commits were 4; I just reset the first commit 
and rebased the code, then CI is running and you can see the log
   
   Thank you for rebasing the code. I've checked the failed test cases. If we 
make sure the order of the minute units stays the same, the broken tests will 
pass.
   
   Would you mind changing the line
   ```
   MINUTES(ChronoUnit.MINUTES, singular("m"), singular("min"), plural("minute"))
   ```
   into
   ```
   MINUTES(ChronoUnit.MINUTES, singular("min"), singular("m"), plural("minute"))
   ```
   and run the CI again?
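Why the label order matters can be shown with a toy greedy matcher. This is not Flink's actual `TimeUtils` parsing code, only a sketch of the shared-prefix problem between the `"m"` and `"min"` labels:

```java
// Toy greedy suffix matcher (not Flink's TimeUtils): labels are tried
// in declaration order, so when one label is a prefix of another
// ("m" vs "min"), the longer one must come first or it never matches.
final class UnitMatch {
    static String matchUnit(String unitSuffix, String... labelsInOrder) {
        for (String label : labelsInOrder) {
            if (unitSuffix.startsWith(label)) {
                return label;
            }
        }
        return null; // no label matched
    }
}
```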






[jira] [Created] (FLINK-25622) Throws NPE in Python UDTF

2022-01-11 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-25622:


 Summary: Throws NPE in Python UDTF
 Key: FLINK-25622
 URL: https://issues.apache.org/jira/browse/FLINK-25622
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.14.3
Reporter: Huang Xingbo


The failed case is

{code:python}
   source_table = """
CREATE TABLE ad_track_android_source (
  `Rows` ARRAY>,
  `id` AS CAST(`Rows`[CASE WHEN `Rows`[2].`id` > 0 THEN 2 ELSE 1 
END].`id` AS INTEGER)
) WITH (
'connector' = 'datagen'
)
"""
self.t_env.execute_sql(source_table)

@udf(result_type=DataTypes.INT())
def ug(id):
return id

self.t_env.create_temporary_function("ug", ug)
res = self.t_env.sql_query(
"select id ,ug(cast(id as int)) as s from `ad_track_android_source` 
where id>0")
print(res.to_pandas())
{code}

The traceback is 

{code:java}
E   : java.lang.NullPointerException
E   at 
org.apache.calcite.rex.RexFieldAccess.checkValid(RexFieldAccess.java:74)
E   at 
org.apache.calcite.rex.RexFieldAccess.(RexFieldAccess.java:62)
E   at 
org.apache.calcite.rex.RexShuttle.visitFieldAccess(RexShuttle.java:205)
E   at 
org.apache.calcite.rex.RexProgramBuilder$RegisterShuttle.visitFieldAccess(RexProgramBuilder.java:904)
E   at 
org.apache.calcite.rex.RexProgramBuilder$RegisterShuttle.visitFieldAccess(RexProgramBuilder.java:887)
E   at 
org.apache.calcite.rex.RexFieldAccess.accept(RexFieldAccess.java:92)
E   at 
org.apache.calcite.rex.RexProgramBuilder.registerInput(RexProgramBuilder.java:295)
E   at 
org.apache.calcite.rex.RexProgramBuilder.addProject(RexProgramBuilder.java:206)
E   at 
org.apache.calcite.rex.RexProgram.create(RexProgram.java:224)
E   at 
org.apache.flink.table.planner.plan.rules.logical.PythonCalcSplitRuleBase.onMatch(PythonCalcSplitRule.scala:84)
E   at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
E   at 
org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
E   at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
E   at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
E   at 
org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
E   at 
org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
E   at 
org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
E   at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
E   at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
E   at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:62)
E   at 
scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:156)
E   at 
scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:156)
E   at scala.collection.Iterator.foreach(Iterator.scala:937)
E   at 
scala.collection.Iterator.foreach$(Iterator.scala:937)
E   at 
scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
E   at 
scala.collection.IterableLike.foreach(IterableLike.scala:70)
E   at 
scala.collection.IterableLike.foreach$(IterableLike.scala:69)
E   at 
scala.collection.AbstractIterable.foreach(Iterable.scala:54)
E   at 
scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:156)
E   at 
scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:154)
E   at 
scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
E   at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:58)
E   at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeTree(StreamCommonSubGraphBasedOptimizer.scala:165)
E   at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.doOptimize(StreamCommonSubGraphBasedOptimizer.scala:83)
E   at 

[GitHub] [flink] Thesharing commented on pull request #17777: [FLINK-24886][core] TimeUtils supports the form of m.

2022-01-11 Thread GitBox


Thesharing commented on pull request #1:
URL: https://github.com/apache/flink/pull/1#issuecomment-1010705038


   > hi @Thesharing, the original commits were 4; I just reset the first commit 
and rebased the code, then CI is running and you can see the log
   
   Thank you for rebasing the code. I've checked the failed test cases. If we 
make sure the order of the minute units stays the same, the broken tests will 
pass.
   
   Would you mind changing the line
   ```
   MINUTES(ChronoUnit.MINUTES, singular("m"), singular("min"), plural("minute"))
   ```
   into
   ```
   MINUTES(ChronoUnit.MINUTES, singular("min"), singular("m"), plural("minute"))
   ```
   and run the CI again?






[jira] [Updated] (FLINK-25621) LegacyStatefulJobSavepointMigrationITCase failed on azure with exit code 127

2022-01-11 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-25621:

Labels: test-stability  (was: )

> LegacyStatefulJobSavepointMigrationITCase failed on azure with exit code 127
> 
>
> Key: FLINK-25621
> URL: https://issues.apache.org/jira/browse/FLINK-25621
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.14.2
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> Jan 12 05:37:09 [WARNING] The requested profile "skip-webui-build" could not 
> be activated because it does not exist.
> Jan 12 05:37:09 [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.2:test 
> (integration-tests) on project flink-tests: There are test failures.
> Jan 12 05:37:09 [ERROR] 
> Jan 12 05:37:09 [ERROR] Please refer to 
> /__w/1/s/flink-tests/target/surefire-reports for the individual test results.
> Jan 12 05:37:09 [ERROR] Please refer to dump files (if any exist) 
> [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
> Jan 12 05:37:09 [ERROR] ExecutionException The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> Jan 12 05:37:09 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-tests/target 
> && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
> /__w/1/s/flink-tests/target/surefire/surefirebooter6476664513703430996.jar 
> /__w/1/s/flink-tests/target/surefire 2022-01-12T04-37-00_265-jvmRun2 
> surefire8749628241391231873tmp surefire_1763452000794129753394tmp
> Jan 12 05:37:09 [ERROR] Error occurred in starting fork, check output in log
> Jan 12 05:37:09 [ERROR] Process Exit Code: 127
> Jan 12 05:37:09 [ERROR] Crashed tests:
> Jan 12 05:37:09 [ERROR] 
> org.apache.flink.test.checkpointing.utils.LegacyStatefulJobSavepointMigrationITCase
> Jan 12 05:37:09 [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> Jan 12 05:37:09 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-tests/target 
> && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
> /__w/1/s/flink-tests/target/surefire/surefirebooter6476664513703430996.jar 
> /__w/1/s/flink-tests/target/surefire 2022-01-12T04-37-00_265-jvmRun2 
> surefire8749628241391231873tmp surefire_1763452000794129753394tmp
> Jan 12 05:37:09 [ERROR] Error occurred in starting fork, check output in log
> Jan 12 05:37:09 [ERROR] Process Exit Code: 127
> Jan 12 05:37:09 [ERROR] Crashed tests:
> Jan 12 05:37:09 [ERROR] 
> org.apache.flink.test.checkpointing.utils.LegacyStatefulJobSavepointMigrationITCase
> Jan 12 05:37:09 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:510)
> Jan 12 05:37:09 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:457)
> Jan 12 05:37:09 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:298)
> Jan 12 05:37:09 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:246)
> Jan 12 05:37:09 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
> Jan 12 05:37:09 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
> Jan 12 05:37:09 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
> Jan 12 05:37:09 [ERROR] at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
> Jan 12 05:37:09 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> Jan 12 05:37:09 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> Jan 12 05:37:09 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29283=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=0c010d0c-3dec-5bf1-d408-7b18988b1b2b=5423





[jira] [Created] (FLINK-25621) LegacyStatefulJobSavepointMigrationITCase failed on azure with exit code 127

2022-01-11 Thread Yun Gao (Jira)
Yun Gao created FLINK-25621:
---

 Summary: LegacyStatefulJobSavepointMigrationITCase failed on azure 
with exit code 127
 Key: FLINK-25621
 URL: https://issues.apache.org/jira/browse/FLINK-25621
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.14.2
Reporter: Yun Gao


{code:java}
Jan 12 05:37:09 [WARNING] The requested profile "skip-webui-build" could not be 
activated because it does not exist.
Jan 12 05:37:09 [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.22.2:test (integration-tests) 
on project flink-tests: There are test failures.
Jan 12 05:37:09 [ERROR] 
Jan 12 05:37:09 [ERROR] Please refer to 
/__w/1/s/flink-tests/target/surefire-reports for the individual test results.
Jan 12 05:37:09 [ERROR] Please refer to dump files (if any exist) [date].dump, 
[date]-jvmRun[N].dump and [date].dumpstream.
Jan 12 05:37:09 [ERROR] ExecutionException The forked VM terminated without 
properly saying goodbye. VM crash or System.exit called?
Jan 12 05:37:09 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-tests/target 
&& /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
-Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
/__w/1/s/flink-tests/target/surefire/surefirebooter6476664513703430996.jar 
/__w/1/s/flink-tests/target/surefire 2022-01-12T04-37-00_265-jvmRun2 
surefire8749628241391231873tmp surefire_1763452000794129753394tmp
Jan 12 05:37:09 [ERROR] Error occurred in starting fork, check output in log
Jan 12 05:37:09 [ERROR] Process Exit Code: 127
Jan 12 05:37:09 [ERROR] Crashed tests:
Jan 12 05:37:09 [ERROR] 
org.apache.flink.test.checkpointing.utils.LegacyStatefulJobSavepointMigrationITCase
Jan 12 05:37:09 [ERROR] 
org.apache.maven.surefire.booter.SurefireBooterForkException: 
ExecutionException The forked VM terminated without properly saying goodbye. VM 
crash or System.exit called?
Jan 12 05:37:09 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-tests/target 
&& /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
-Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
/__w/1/s/flink-tests/target/surefire/surefirebooter6476664513703430996.jar 
/__w/1/s/flink-tests/target/surefire 2022-01-12T04-37-00_265-jvmRun2 
surefire8749628241391231873tmp surefire_1763452000794129753394tmp
Jan 12 05:37:09 [ERROR] Error occurred in starting fork, check output in log
Jan 12 05:37:09 [ERROR] Process Exit Code: 127
Jan 12 05:37:09 [ERROR] Crashed tests:
Jan 12 05:37:09 [ERROR] 
org.apache.flink.test.checkpointing.utils.LegacyStatefulJobSavepointMigrationITCase
Jan 12 05:37:09 [ERROR] at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:510)
Jan 12 05:37:09 [ERROR] at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:457)
Jan 12 05:37:09 [ERROR] at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:298)
Jan 12 05:37:09 [ERROR] at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:246)
Jan 12 05:37:09 [ERROR] at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
Jan 12 05:37:09 [ERROR] at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
Jan 12 05:37:09 [ERROR] at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
Jan 12 05:37:09 [ERROR] at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
Jan 12 05:37:09 [ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
Jan 12 05:37:09 [ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
Jan 12 05:37:09 [ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
 {code}
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29283=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=0c010d0c-3dec-5bf1-d408-7b18988b1b2b=5423





[GitHub] [flink-table-store] LadyForest commented on a change in pull request #1: [FLINK-25619] Init flink-table-store repository

2022-01-11 Thread GitBox


LadyForest commented on a change in pull request #1:
URL: https://github.com/apache/flink-table-store/pull/1#discussion_r782745882



##
File path: README.md
##
@@ -0,0 +1,19 @@
+# Flink Table Store
+
+Flink Table Store is a unified streaming and batch store for building dynamic 
tables on Apache Flink.
+
+Flink Table Store is developed under the umbrella of [Apache 
Flink](https://flink.apache.org/).
+
+## Building the Project
+
+Run the `mvn clean package` command.
+
Then you will find a JAR file that contains your application, plus any 
libraries that you may have added as dependencies to the application: 
`target/<artifact-id>-<version>.jar`.
+
+## Contributing
+
+You can learn more about how to contribute in the [Apache Flink 
website](https://flink.apache.org/contributing/how-to-contribute.html). For 
code contributions, please read carefully the [Contributing 
Code](https://flink.apache.org/contributing/contribute-code.html) section for 
an overview of ongoing community work.

Review comment:
   Nit: **on** the website is more accurate.

##
File path: pom.xml
##
@@ -0,0 +1,697 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+
+  

Review comment:
   It seems like Flink uses Tab as indentation for the pom file (but code 
indentation is four spaces). Should we align?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23944) PulsarSourceITCase.testTaskManagerFailure is instable

2022-01-11 Thread Yufei Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17474282#comment-17474282
 ] 

Yufei Zhang commented on FLINK-23944:
-

I'll try to investigate

> PulsarSourceITCase.testTaskManagerFailure is instable
> -
>
> Key: FLINK-23944
> URL: https://issues.apache.org/jira/browse/FLINK-23944
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.14.0
>Reporter: Dian Fu
>Assignee: Yufan Sheng
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> [https://dev.azure.com/dianfu/Flink/_build/results?buildId=430=logs=f3dc9b18-b77a-55c1-591e-264c46fe44d1=2d3cd81e-1c37-5c31-0ee4-f5d5cdb9324d]
> It's from my personal azure pipeline, however, I'm pretty sure that I have 
> not touched any code related to this. 
> {code:java}
> Aug 24 10:44:13 [ERROR] testTaskManagerFailure{TestEnvironment, 
> ExternalContext, ClusterControllable}[1] Time elapsed: 258.397 s <<< FAILURE! 
> Aug 24 10:44:13 java.lang.AssertionError: Aug 24 10:44:13 Aug 24 10:44:13 
> Expected: Records consumed by Flink should be identical to test data and 
> preserve the order in split Aug 24 10:44:13 but: Mismatched record at 
> position 7: Expected '0W6SzacX7MNL4xLL3BZ8C3ljho4iCydbvxIl' but was 
> 'wVi5JaJpNvgkDEOBRC775qHgw0LyRW2HBxwLmfONeEmr' Aug 24 10:44:13 at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) Aug 24 10:44:13 
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8) Aug 24 
> 10:44:13 at 
> org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testTaskManagerFailure(SourceTestSuiteBase.java:271)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] ruanhang1993 commented on a change in pull request #17994: [FLINK-24456][Connectors / Kafka,Table SQL / Ecosystem] Support bound…

2022-01-11 Thread GitBox


ruanhang1993 commented on a change in pull request #17994:
URL: https://github.com/apache/flink/pull/17994#discussion_r782755829



##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/table/KafkaConnectorOptions.java
##
@@ -158,6 +158,13 @@
 .withDescription(
 "Optional offsets used in case of 
\"specific-offsets\" startup mode");
 
+public static final ConfigOption<String> SCAN_BOUNDED_SPECIFIC_OFFSETS =
+ConfigOptions.key("scan.bounded.specific-offsets")
+.stringType()
+.noDefaultValue()
+.withDescription(
+"When all partitions have reached their stop 
offsets, the source will exit");
+
 public static final ConfigOption<Long> SCAN_STARTUP_TIMESTAMP_MILLIS =

Review comment:
   `latest-offset` may be covered by `timestamp`, so `timestamp` and 
`specific-offsets` should be enough.
   
   IMO `latest-offset` is designed to consume up to the latest offsets at the 
job start time, so that behavior may already be covered by `timestamp`.
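
For reference, the existing `scan.startup.specific-offsets` option takes values of the form `partition:0,offset:42;partition:1,offset:300`, and `scan.bounded.specific-offsets` would presumably reuse that format. A minimal parsing sketch under that assumption (the class name and error handling are illustrative, not Flink's actual connector code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SpecificOffsetsParser {
    // Parses a value like "partition:0,offset:42;partition:1,offset:300"
    // into a partition -> offset map, preserving declaration order.
    public static Map<Integer, Long> parse(String value) {
        Map<Integer, Long> offsets = new LinkedHashMap<>();
        for (String entry : value.split(";")) {
            String[] parts = entry.split(",");
            if (parts.length != 2
                    || !parts[0].startsWith("partition:")
                    || !parts[1].startsWith("offset:")) {
                throw new IllegalArgumentException("Invalid entry: " + entry);
            }
            int partition = Integer.parseInt(parts[0].substring("partition:".length()));
            long offset = Long.parseLong(parts[1].substring("offset:".length()));
            offsets.put(partition, offset);
        }
        return offsets;
    }

    public static void main(String[] args) {
        // Prints {0=42, 1=300}
        System.out.println(parse("partition:0,offset:42;partition:1,offset:300"));
    }
}
```

With such a map in hand, a bounded source would stop reading each partition once its consumed offset reaches the configured stop offset.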




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #1: [FLINK-25619] Init flink-table-store repository

2022-01-11 Thread GitBox


JingsongLi commented on a change in pull request #1:
URL: https://github.com/apache/flink-table-store/pull/1#discussion_r782755360



##
File path: pom.xml
##
@@ -0,0 +1,697 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+
+  

Review comment:
   Flink's old style used tabs everywhere; later only the code indentation was 
changed to 4 spaces, and the pom files were left unchanged. I think we can 
directly use spaces.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25615) FlinkKafkaProducer fail to correctly migrate pre Flink 1.9 state

2022-01-11 Thread Matthias Schwalbe (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17474283#comment-17474283
 ] 

Matthias Schwalbe commented on FLINK-25615:
---

Good morning Martijn :),

 

The error remains in all versions after 1.9 including the soon to be released 
1.15. In the user list I see reports of people still using pre 1.9 versions.

The trouble happens once they migrate pre 1.9 jobs to a current version (We had 
long-running 1.8 jobs that we only recently migrated).

Within a couple of days, _metadata (savepoint/checkpoint) files grew from an 
average of 10MB to gigabytes, impeding the Akka communication and memory 
consumption, as well as other jobs on the machines. (I spent around 50 hours 
tracking this down, not wanting to risk the long-term collected state (customer 
money involved).)

I guess the fix is a small one to integrate, pending some testing.

 

The other solution is to change the uid() of the kafka producer which clears 
the pre 1.9 state but also leaves kafka transactions dangling ...

 

I leave the decision to continue with the ticket to more experienced folks...

 

Thias

> FlinkKafkaProducer fail to correctly migrate pre Flink 1.9 state
> 
>
> Key: FLINK-25615
> URL: https://issues.apache.org/jira/browse/FLINK-25615
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.9.0
>Reporter: Matthias Schwalbe
>Priority: Major
>
> I've found an unnoticed error in FlinkKafkaProducer when migrating from pre 
> Flink 1.9 state to versions starting with Flink 1.9:
>  * the operator state for next-transactional-id-hint should be deleted and 
> replaced by operator state next-transactional-id-hint-v2, however
>  * operator state next-transactional-id-hint is never deleted
>  * see here: [1] :
> {quote}        if (context.getOperatorStateStore()
>                 .getRegisteredStateNames()
>                 .contains(NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR)) {
>             migrateNextTransactionalIdHindState(context);
>         }{quote} * migrateNextTransactionalIdHindState is never called, as 
> the condition cannot become true:
>  ** getRegisteredStateNames returns a list of String, whereas 
> NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR is ListStateDescriptor (type mismatch)
> The Effect is:
>  * because NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR is for a UnionListState, and
>  * the state is not cleared,
>  * each time the job restarts from a savepoint or checkpoint the size 
> multiplies times the parallelism
>  * then because each entry leaves an offset in metadata, akka.framesize 
> becomes too small, before we run into memory overflow
>  
> The breaking change has been introduced in commit 
> 70fa80e3862b367be22b593db685f9898a2838ef
>  
> A simple fix would be to change the code to:
> {quote}        if (context.getOperatorStateStore()
>                 .getRegisteredStateNames()
>                 .contains(NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR.getName())) {
>             migrateNextTransactionalIdHindState(context);
>         }
> {quote}
>  
> Although FlinkKafkaProducer is marked as deprecated, it is probably here to 
> stay for a while
>  
> Greeting
> Matthias (Thias) Schwalbe
>  
> [1] 
> https://github.com/apache/flink/blob/d7cf2c10f8d4fba81173854cbd8be27c657c7c7f/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer.java#L1167-L1171
>  
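
The type mismatch described in the issue can be reproduced in isolation: `List<String>.contains(Object)` compiles for any argument type, but a `String` can never equal a descriptor object, so the check is always false. A minimal sketch, where `StateDescriptor` is a hypothetical stand-in for Flink's `ListStateDescriptor`:

```java
import java.util.List;

public class ContainsTypeMismatch {
    // Hypothetical stand-in for Flink's ListStateDescriptor; only the
    // name matters for this demonstration.
    static final class StateDescriptor {
        private final String name;
        StateDescriptor(String name) { this.name = name; }
        String getName() { return name; }
    }

    static final List<String> REGISTERED = List.of("next-transactional-id-hint");
    static final StateDescriptor DESCRIPTOR =
            new StateDescriptor("next-transactional-id-hint");

    // The buggy check: contains(Object) accepts the descriptor, but a String
    // element never equals a StateDescriptor, so this is always false and the
    // migration branch is never entered.
    static boolean buggyCheck() {
        return REGISTERED.contains(DESCRIPTOR);
    }

    // The proposed fix: compare name to name.
    static boolean fixedCheck() {
        return REGISTERED.contains(DESCRIPTOR.getName());
    }

    public static void main(String[] args) {
        System.out.println("buggy: " + buggyCheck()); // prints "buggy: false"
        System.out.println("fixed: " + fixedCheck()); // prints "fixed: true"
    }
}
```

This is why appending `.getName()` to the `contains` argument, as the report suggests, makes the migration condition reachable.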



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18220: [FLINK-25410] Flink CLI should exit when app is accepted with detach …

2022-01-11 Thread GitBox


flinkbot edited a comment on pull request #18220:
URL: https://github.com/apache/flink/pull/18220#issuecomment-1001978981


   
   ## CI report:
   
   * f80e3635619ddb32f054090170799ee6cd49313a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29140)
 
   * fba405054a8ac366cebddce7b9b7f263bc5ea1d9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29295)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18308: [FLINK-25471][streaming-java] Wrong result if table transforms to Dat…

2022-01-11 Thread GitBox


flinkbot edited a comment on pull request #18308:
URL: https://github.com/apache/flink/pull/18308#issuecomment-1008462182


   
   ## CI report:
   
   * 1192049c76b9c5648ebc13f422b48098ad442a14 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29286)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18220: [FLINK-25410] Flink CLI should exit when app is accepted with detach …

2022-01-11 Thread GitBox


flinkbot edited a comment on pull request #18220:
URL: https://github.com/apache/flink/pull/18220#issuecomment-1001978981


   
   ## CI report:
   
   * f80e3635619ddb32f054090170799ee6cd49313a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29140)
 
   * fba405054a8ac366cebddce7b9b7f263bc5ea1d9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-25620) Upload artifacts to S3 failed on azure pipeline

2022-01-11 Thread Yun Gao (Jira)
Yun Gao created FLINK-25620:
---

 Summary: Upload artifacts to S3 failed on azure pipeline
 Key: FLINK-25620
 URL: https://issues.apache.org/jira/browse/FLINK-25620
 Project: Flink
  Issue Type: Bug
  Components: Build System / Azure Pipelines
Affects Versions: 1.14.2
Reporter: Yun Gao


{code:java}
/bin/bash --noprofile --norc /__w/_temp/fdaf679a-209f-4720-8709-5c591f730c61.sh
Installing artifacts deployment script
+ upload_to_s3 ./tools/releasing/release
+ local FILES_DIR=./tools/releasing/release
+ echo 'Installing artifacts deployment script'
+ export ARTIFACTS_DEST=/home/vsts_azpcontainer/bin/artifacts
+ ARTIFACTS_DEST=/home/vsts_azpcontainer/bin/artifacts
+ curl -sL https://raw.githubusercontent.com/travis-ci/artifacts/master/install
+ bash
bash: line 1: !DOCTYPE: No such file or directory
bash: line 2: !--
: No such file or directory
bash: line 3: $'\r': command not found
bash: line 4: Hello: command not found
bash: line 6: $'\r': command not found
bash: line 7: unexpected EOF while looking for matching `''
bash: line 83: syntax error: unexpected end of file
##[error]Bash exited with code '2'.
Finishing: Upload artifacts to S3
 {code}
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29283=logs=585d8b77-fa33-51bc-8163-03e54ba9ce5b=68e20e55-906c-5c49-157c-3005667723c9=18



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25619) Init flink-table-store repository

2022-01-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25619:
---
Labels: pull-request-available  (was: )

> Init flink-table-store repository
> -
>
> Key: FLINK-25619
> URL: https://issues.apache.org/jira/browse/FLINK-25619
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.1.0
>
>
> Create:
>  * README.md
>  * NOTICE LICENSE CODE_OF_CONDUCT
>  * .gitignore
>  * maven tools
>  * releasing tools
>  * github build workflow
>  * pom.xml



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-table-store] JingsongLi opened a new pull request #1: [FLINK-25619] Init flink-table-store repository

2022-01-11 Thread GitBox


JingsongLi opened a new pull request #1:
URL: https://github.com/apache/flink-table-store/pull/1


   Create:
   - README.md
   - NOTICE LICENSE CODE_OF_CONDUCT
   - .gitignore
   - maven tools
   - releasing tools
   - github build workflow
   - pom.xml


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25152) FLIP-188: Introduce Built-in Dynamic Table Storage

2022-01-11 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-25152:
-
Fix Version/s: 1.15.0

> FLIP-188: Introduce Built-in Dynamic Table Storage
> --
>
> Key: FLINK-25152
> URL: https://issues.apache.org/jira/browse/FLINK-25152
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API, Table SQL / Ecosystem, Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: 1.15.0, table-store-0.1.0
>
>
> Introduce built-in storage support for dynamic tables, a truly unified 
> changelog & table representation from Flink SQL’s perspective. The storage 
> will improve the usability a lot.
> For more detail, see: 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-188%3A+Introduce+Built-in+Dynamic+Table+Storage]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25618) Data quality by apache flink

2022-01-11 Thread tanjialiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tanjialiang updated FLINK-25618:

Description: 
This is a discussion about how to support data quality checks through Apache 
Flink.

For example, I have a SQL job in which a table has a column named phone, and 
the data in the phone column must match a telephone-number pattern. If a value 
does not match, I can choose to drop it or ignore it, and we can mark it in the 
metrics, so that users can monitor the data quality in the source and sink.

After this, users can know about the data quality from the source and sink, 
which is useful for the downstream.

  was:
This is discussing about how to support data quality through apache flink.

For example, I has a sql job, a table in this job has a column named phone, and 
the data of the column phone must match the pattern of telephone, if not match, 
i can choose drop it or ignored, and we can mark it in the metrics, so that 
user can monitor the data of quality in source and sink.

After this, user can kown about the data quality of the source and sink, it is 
useful for the downstream.


> Data quality by apache flink
> 
>
> Key: FLINK-25618
> URL: https://issues.apache.org/jira/browse/FLINK-25618
> Project: Flink
>  Issue Type: New Feature
>Reporter: tanjialiang
>Priority: Not a Priority
>
> This is a discussion about how to support data quality checks through Apache 
> Flink.
> For example, I have a SQL job in which a table has a column named phone, and 
> the data in the phone column must match a telephone-number pattern. If a value 
> does not match, I can choose to drop it or ignore it, and we can mark it in 
> the metrics, so that users can monitor the data quality in the source and sink.
> After this, users can know about the data quality from the source and sink, 
> which is useful for the downstream.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18335: [hotfix][Pattern Recognition][docs] Fixed an issue that the example o…

2022-01-11 Thread GitBox


flinkbot edited a comment on pull request #18335:
URL: https://github.com/apache/flink/pull/18335#issuecomment-1010670800


   
   ## CI report:
   
   * ee4eb2955410919042822b61fdf01bfbfed5067d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29294)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] jxjgssylsg closed pull request #18307: Update match_recognize.md

2022-01-11 Thread GitBox


jxjgssylsg closed pull request #18307:
URL: https://github.com/apache/flink/pull/18307


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] jxjgssylsg commented on pull request #18307: Update match_recognize.md

2022-01-11 Thread GitBox


jxjgssylsg commented on pull request #18307:
URL: https://github.com/apache/flink/pull/18307#issuecomment-1010672860


   I have merged the two PRs according to the guidelines; please help review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] jxjgssylsg commented on pull request #18313: Update match_recognize.md

2022-01-11 Thread GitBox


jxjgssylsg commented on pull request #18313:
URL: https://github.com/apache/flink/pull/18313#issuecomment-1010672407


   I have merged the two PRs according to the guidelines; please help review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18335: [hotfix][Pattern Recognition][docs] Fixed an issue that the example o…

2022-01-11 Thread GitBox


flinkbot commented on pull request #18335:
URL: https://github.com/apache/flink/pull/18335#issuecomment-1010671290


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit ee4eb2955410919042822b61fdf01bfbfed5067d (Wed Jan 12 
06:07:59 UTC 2022)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18335: [hotfix][Pattern Recognition][docs] Fixed an issue that the example o…

2022-01-11 Thread GitBox


flinkbot commented on pull request #18335:
URL: https://github.com/apache/flink/pull/18335#issuecomment-1010670800


   
   ## CI report:
   
   * ee4eb2955410919042822b61fdf01bfbfed5067d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] jxjgssylsg closed pull request #18313: Update match_recognize.md

2022-01-11 Thread GitBox


jxjgssylsg closed pull request #18313:
URL: https://github.com/apache/flink/pull/18313


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] jxjgssylsg opened a new pull request #18335: [hotfix][Pattern Recognition][docs] Fixed an issue that the example o…

2022-01-11 Thread GitBox


jxjgssylsg opened a new pull request #18335:
URL: https://github.com/apache/flink/pull/18335


   …f “Time constraint” omits a comma.
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-25619) Init flink-table-store repository

2022-01-11 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-25619:


Assignee: Jingsong Lee

> Init flink-table-store repository
> -
>
> Key: FLINK-25619
> URL: https://issues.apache.org/jira/browse/FLINK-25619
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: table-store-0.1.0
>
>
> Create:
>  * README.md
>  * NOTICE LICENSE CODE_OF_CONDUCT
>  * .gitignore
>  * maven tools
>  * releasing tools
>  * github build workflow
>  * pom.xml



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25619) Init flink-table-store repository

2022-01-11 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-25619:


 Summary: Init flink-table-store repository
 Key: FLINK-25619
 URL: https://issues.apache.org/jira/browse/FLINK-25619
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Reporter: Jingsong Lee
 Fix For: table-store-0.1.0


Create:
 * README.md
 * NOTICE LICENSE CODE_OF_CONDUCT
 * .gitignore
 * maven tools
 * releasing tools
 * github build workflow
 * pom.xml



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25152) FLIP-188: Introduce Built-in Dynamic Table Storage

2022-01-11 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated FLINK-25152:
-
Fix Version/s: table-store-0.1.0
   (was: 1.15.0)

> FLIP-188: Introduce Built-in Dynamic Table Storage
> --
>
> Key: FLINK-25152
> URL: https://issues.apache.org/jira/browse/FLINK-25152
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API, Table SQL / Ecosystem, Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: table-store-0.1.0
>
>
> Introduce built-in storage support for dynamic tables, a truly unified 
> changelog & table representation from Flink SQL’s perspective. The storage 
> will improve the usability a lot.
> For more detail, see: 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-188%3A+Introduce+Built-in+Dynamic+Table+Storage]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25171) When the DDL statement was executed, the column names of the Derived Columns were not validated

2022-01-11 Thread shouzuo meng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17474266#comment-17474266
 ] 

shouzuo meng commented on FLINK-25171:
--

[~godfreyhe] Hi, the solution to this problem has been provided; please spare 
some time to do the final check.

> When the DDL statement was executed, the column names of the Derived Columns 
> were not validated
> ---
>
> Key: FLINK-25171
> URL: https://issues.apache.org/jira/browse/FLINK-25171
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.14.0
>Reporter: shouzuo meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
> Attachments: 5261638775663_.pic.jpg
>
>
> When I execute a DDL statement, I mistakenly use a duplicate field name in 
> the SQL, but the program does not throw any exception or prompt. Adding a 
> repeated TableColumn() in MergeTableLikeUtilTest.java#mergePhysicalColumns 
> also does not throw any exception. Reviewing the code logic, I found that 
> only the source table's schema fields are checked for duplicates; the derived 
> table's fields are not validated against the source table, and duplicate 
> fields in the derived table are not checked at all, so when they are added to 
> physicalFieldNamesToTypes a repeated field silently overwrites the earlier 
> one. The following are the execution statements and the results.
> DDL sql:
> CREATE TABLE test1 (
>   `log_version` string COMMENT 'log version',
>   `log_version` INTEGER COMMENT 'log version',
>   `pv_time` string COMMENT 'log time'
> ) with(
>         'connector' = 'kafka',
>         'topic' = 'xxx',
>         'properties.bootstrap.servers' = 'xxx:9110',
>         'scan.startup.mode'='latest-offset',
>         'format' = 'json',
> )
> {code:java}
> StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
> TableResult result = tEnv.executeSql(
> CREATE TABLE test1 (
>   `log_version` string COMMENT 'log version',
>   `log_version` INTEGER COMMENT 'log version',
>   `pv_time` string COMMENT 'log time'
> ) with(
>         'connector' = 'kafka',
>         'topic' = 'xxx',
>         'properties.bootstrap.servers' = 'xxx:9110',
>         'scan.startup.mode'='latest-offset',
>         'format' = 'json',
> )
> ) 
> {code}
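The missing check described above amounts to rejecting a schema whose column names repeat. A minimal, framework-independent sketch of such a validation (the class and method names here are illustrative, not Flink's actual MergeTableLikeUtil API):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ColumnNameValidator {

    /** Throws if any column name appears more than once, mirroring the
     *  validation the ticket asks for. */
    public static void validateUnique(List<String> columnNames) {
        Set<String> seen = new HashSet<>();
        for (String name : columnNames) {
            if (!seen.add(name)) {
                throw new IllegalArgumentException(
                        "Duplicate column name in DDL: '" + name + "'");
            }
        }
    }

    public static void main(String[] args) {
        // The schema from the ticket: `log_version` is declared twice.
        List<String> columns = Arrays.asList("log_version", "log_version", "pv_time");
        try {
            validateUnique(columns);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With such a check in place, the DDL above would fail fast instead of silently overwriting the first `log_version` column.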



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25441) ProducerFailedException will cause task status switch from RUNNING to CANCELED, which will cause the job to hang.

2022-01-11 Thread Yingjie Cao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yingjie Cao updated FLINK-25441:

Fix Version/s: 1.15.0

> ProducerFailedException will cause task status switch from RUNNING to 
> CANCELED, which will cause the job to hang.
> -
>
> Key: FLINK-25441
> URL: https://issues.apache.org/jira/browse/FLINK-25441
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.15.0
>Reporter: Lijie Wang
>Priority: Major
> Fix For: 1.15.0
>
>
> The {{ProducerFailedException}} extends {{{}CancelTaskException{}}}, which 
> causes the task status to be switched from RUNNING to CANCELED. As described 
> in FLINK-17726, if a task is directly CANCELED by the TaskManager due to its 
> own runtime issue, the task will not be recovered by the JM and thus the job 
> would hang.
> Note that it will not cause problems before FLINK-24182 (it unifies the 
> failureCause handling, changes the check of CancelTaskException from 
> "{{instanceof CancelTaskException}}" to "{{ExceptionUtils.findThrowable}}"), 
> because the {{ProducerFailedException}} is always wrapped by 
> {{{}RemoteTransportException{}}}.
> The example log is as follows:
> {code:java}
> 2021-12-23 21:20:14,965 DEBUG org.apache.flink.runtime.taskmanager.Task   
>  [] - MultipleInput[945] [Source: 
> HiveSource-tpcds_bin_orc_1.catalog_sales, Source: 
> HiveSource-tpcds_bin_orc_1.store_sales, Source: 
> HiveSource-tpcds_bin_orc_1.catalog_sales, Source: 
> HiveSource-tpcds_bin_orc_1.store_sales, Source: 
> HiveSource-tpcds_bin_orc_1.store_sales, Source: 
> HiveSource-tpcds_bin_orc_1.item, Source: 
> HiveSource-tpcds_bin_orc_1.web_sales, Source: 
> HiveSource-tpcds_bin_orc_1.web_sales] - Calc[885] (143/1024)#0 
> (8a883116ab601dd5b9ad5d2717d18918) switched from RUNNING to CANCELED due to 
> CancelTaskException: 
> org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException: 
> Error at remote task manager 
> 'k28b09250.eu95sqa.tbsite.net/100.69.96.154:47459'.
>   at 
> org.apache.flink.runtime.io.network.netty.CreditBasedPartitionRequestClientHandler.decodeMsg(CreditBasedPartitionRequestClientHandler.java:301)
>   at 
> org.apache.flink.runtime.io.network.netty.CreditBasedPartitionRequestClientHandler.channelRead(CreditBasedPartitionRequestClientHandler.java:190)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   at 
> org.apache.flink.runtime.io.network.netty.NettyMessageClientDecoderDelegate.channelRead(NettyMessageClientDecoderDelegate.java:112)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>   at java.lang.Thread.run(Thread.java:834)
> Caused by: 
> org.apache.flink.runtime.io.network.partition.ProducerFailedException: 
> 
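The behavioral difference the ticket describes, checking only the top-level exception with `instanceof` versus searching the whole cause chain, can be illustrated with a small self-contained sketch (the exception classes and the `findThrowable` helper below are simplified stand-ins for Flink's real types, not its actual API):

```java
public class CauseChainDemo {

    // Simplified stand-ins for Flink's exception hierarchy.
    static class CancelTaskException extends RuntimeException {
        CancelTaskException(String msg) { super(msg); }
    }
    static class ProducerFailedException extends CancelTaskException {
        ProducerFailedException(String msg) { super(msg); }
    }
    static class RemoteTransportException extends RuntimeException {
        RemoteTransportException(String msg, Throwable cause) { super(msg, cause); }
    }

    /** Walks the cause chain, in the spirit of Flink's ExceptionUtils.findThrowable. */
    static boolean findThrowable(Throwable t, Class<? extends Throwable> type) {
        while (t != null) {
            if (type.isInstance(t)) {
                return true;
            }
            t = t.getCause();
        }
        return false;
    }

    public static void main(String[] args) {
        Throwable wrapped = new RemoteTransportException(
                "Error at remote task manager",
                new ProducerFailedException("producer failed"));

        // Before FLINK-24182: only the top-level exception was checked.
        System.out.println(wrapped instanceof CancelTaskException);            // false
        // After FLINK-24182: the whole cause chain is searched, so the wrapped
        // ProducerFailedException now triggers the CANCELED code path.
        System.out.println(findThrowable(wrapped, CancelTaskException.class)); // true
    }
}
```

This is why the problem only surfaces after FLINK-24182: the `ProducerFailedException` is always wrapped in a `RemoteTransportException`, so the old `instanceof` check never saw it.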

[jira] [Commented] (FLINK-25441) ProducerFailedException will cause task status switch from RUNNING to CANCELED, which will cause the job to hang.

2022-01-11 Thread Yingjie Cao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17474262#comment-17474262
 ] 

Yingjie Cao commented on FLINK-25441:
-

[~wanglijie95] Would you prepare a fix for it?

> ProducerFailedException will cause task status switch from RUNNING to 
> CANCELED, which will cause the job to hang.
> -
>
> Key: FLINK-25441
> URL: https://issues.apache.org/jira/browse/FLINK-25441
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.15.0
>Reporter: Lijie Wang
>Priority: Major
>
> The {{ProducerFailedException}} extends {{{}CancelTaskException{}}}, which 
> causes the task status to be switched from RUNNING to CANCELED. As described 
> in FLINK-17726, if a task is directly CANCELED by the TaskManager due to its 
> own runtime issue, the task will not be recovered by the JM and thus the job 
> would hang.
> Note that it will not cause problems before FLINK-24182 (it unifies the 
> failureCause handling, changes the check of CancelTaskException from 
> "{{instanceof CancelTaskException}}" to "{{ExceptionUtils.findThrowable}}"), 
> because the {{ProducerFailedException}} is always wrapped by 
> {{{}RemoteTransportException{}}}.
> The example log is as follows:
> {code:java}
> 2021-12-23 21:20:14,965 DEBUG org.apache.flink.runtime.taskmanager.Task   
>  [] - MultipleInput[945] [Source: 
> HiveSource-tpcds_bin_orc_1.catalog_sales, Source: 
> HiveSource-tpcds_bin_orc_1.store_sales, Source: 
> HiveSource-tpcds_bin_orc_1.catalog_sales, Source: 
> HiveSource-tpcds_bin_orc_1.store_sales, Source: 
> HiveSource-tpcds_bin_orc_1.store_sales, Source: 
> HiveSource-tpcds_bin_orc_1.item, Source: 
> HiveSource-tpcds_bin_orc_1.web_sales, Source: 
> HiveSource-tpcds_bin_orc_1.web_sales] - Calc[885] (143/1024)#0 
> (8a883116ab601dd5b9ad5d2717d18918) switched from RUNNING to CANCELED due to 
> CancelTaskException: 
> org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException: 
> Error at remote task manager 
> 'k28b09250.eu95sqa.tbsite.net/100.69.96.154:47459'.
>   at 
> org.apache.flink.runtime.io.network.netty.CreditBasedPartitionRequestClientHandler.decodeMsg(CreditBasedPartitionRequestClientHandler.java:301)
>   at 
> org.apache.flink.runtime.io.network.netty.CreditBasedPartitionRequestClientHandler.channelRead(CreditBasedPartitionRequestClientHandler.java:190)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   at 
> org.apache.flink.runtime.io.network.netty.NettyMessageClientDecoderDelegate.channelRead(NettyMessageClientDecoderDelegate.java:112)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>   at java.lang.Thread.run(Thread.java:834)
> Caused by: 
> 

[GitHub] [flink] deadwind4 commented on pull request #18323: [FLINK-25601][state backends][config] Update the 'state.backend' option

2022-01-11 Thread GitBox


deadwind4 commented on pull request #18323:
URL: https://github.com/apache/flink/pull/18323#issuecomment-1010640613


   @Myasuka Thanks, Yun Tang, for merging this into master. Could I close this PR?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18306: [FLINK-25445] No need to create local recovery dirs when disabled loc…

2022-01-11 Thread GitBox


flinkbot edited a comment on pull request #18306:
URL: https://github.com/apache/flink/pull/18306#issuecomment-1008234006


   
   ## CI report:
   
   * 7d972d43be4188ad9dad3d53e45f5710fdb8b754 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29144)
 
   * 32883ca0240ef8ef0f3fb7162035a3aff23cdb7d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29293)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18306: [FLINK-25445] No need to create local recovery dirs when disabled loc…

2022-01-11 Thread GitBox


flinkbot edited a comment on pull request #18306:
URL: https://github.com/apache/flink/pull/18306#issuecomment-1008234006


   
   ## CI report:
   
   * 7d972d43be4188ad9dad3d53e45f5710fdb8b754 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29144)
 
   * 32883ca0240ef8ef0f3fb7162035a3aff23cdb7d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25566) Fail to cancel task if disk is bad for java.lang.NoClassDefFoundError

2022-01-11 Thread Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17474230#comment-17474230
 ] 

Liu commented on FLINK-25566:
-

Thanks, [~trohrmann]. Our Flink version is 1.10.0, but I think the problem is 
still present in the latest code. From the first picture, we can see that the 
class ExceptionUtils cannot be found, so I guess the call to 
ExceptionUtils.isJvmFatalOrOutOfMemoryError(exception) in TaskManagerRunner's 
onFatalError method fails. Maybe we should catch that exception and call 
terminateJVM() in a finally block. What do you think?
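The suggestion above, making sure the JVM is still terminated even when the error-classification call itself fails with `NoClassDefFoundError`, boils down to a try/catch/finally around the fatal-error handling. A hedged sketch (the method names and the forced failure are illustrative stand-ins, not the exact TaskManagerRunner code):

```java
public class FatalErrorHandlerSketch {

    static boolean jvmTerminated = false;

    /** Stand-in for ExceptionUtils.isJvmFatalOrOutOfMemoryError, which can
     *  itself fail with NoClassDefFoundError when the disk holding the class
     *  files has gone bad. */
    static boolean classifyError(Throwable t) {
        throw new NoClassDefFoundError("org/apache/flink/util/ExceptionUtils");
    }

    static void terminateJVM() {
        // In the real runner this would be System.exit / Runtime.halt.
        jvmTerminated = true;
    }

    static void onFatalError(Throwable exception) {
        try {
            boolean fatal = classifyError(exception);
            System.out.println("classified fatal=" + fatal);
        } catch (Throwable t) {
            System.out.println("classification itself failed: " + t);
        } finally {
            // Terminate unconditionally so a bad disk cannot leave the task stuck.
            terminateJVM();
        }
    }

    public static void main(String[] args) {
        onFatalError(new RuntimeException("disk error"));
        System.out.println("terminated=" + jvmTerminated);
    }
}
```

The key point is the `finally` block: even when loading ExceptionUtils fails, the process still goes down instead of hanging on the bad disk.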

> Fail to cancel task if disk is bad for java.lang.NoClassDefFoundError
> -
>
> Key: FLINK-25566
> URL: https://issues.apache.org/jira/browse/FLINK-25566
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task
>Reporter: Liu
>Priority: Major
> Attachments: image-2022-01-07-19-07-10-968.png, 
> image-2022-01-07-19-08-49-038.png, image-2022-01-07-19-11-39-448.png
>
>
> When disk error, the related task will stuck for 
> java.lang.NoClassDefFoundError.
> !image-2022-01-07-19-08-49-038.png|width=743,height=157!
> In the TaskManagerRunner's method onFatalError, it will not terminateJVM at 
> once. The process will stuck in the disk.
> !image-2022-01-07-19-11-39-448.png|width=1085,height=400!
> In this case, maybe we should terminate the container at once.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25152) FLIP-188: Introduce Built-in Dynamic Table Storage

2022-01-11 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated FLINK-25152:
-
Component/s: Table Store

> FLIP-188: Introduce Built-in Dynamic Table Storage
> --
>
> Key: FLINK-25152
> URL: https://issues.apache.org/jira/browse/FLINK-25152
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API, Table SQL / Ecosystem, Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: 1.15.0
>
>
> Introduce built-in storage support for dynamic tables: a truly unified 
> changelog and table representation from Flink SQL's perspective. This storage 
> will greatly improve usability.
> More detail see: 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-188%3A+Introduce+Built-in+Dynamic+Table+Storage]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25618) Data quality by apache flink

2022-01-11 Thread tanjialiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tanjialiang updated FLINK-25618:

Description: 
This discusses how to support data quality checks in Apache Flink.

For example, I have a SQL job in which a table has a column named phone, and 
the data in that column must match a telephone pattern. If a record does not 
match, I can choose to drop it or ignore it, and we can count it in the 
metrics so that users can monitor data quality at the source and the sink.

After this, users will know the data quality of the source and sink, which is 
useful for downstream consumers.

  was:
This discusses how to support data quality checks in Apache Flink.

For example, I have a SQL job in which a table has a column named phone, and 
the data in that column must match a telephone pattern. If a record does not 
match, I can choose to drop it or ignore it, and we can count it in the 
metrics so that users can monitor data quality at the source and the sink.


> Data quality by apache flink
> 
>
> Key: FLINK-25618
> URL: https://issues.apache.org/jira/browse/FLINK-25618
> Project: Flink
>  Issue Type: New Feature
>Reporter: tanjialiang
>Priority: Not a Priority
>
> This discusses how to support data quality checks in Apache Flink.
> For example, I have a SQL job in which a table has a column named phone, and 
> the data in that column must match a telephone pattern. If a record does not 
> match, I can choose to drop it or ignore it, and we can count it in the 
> metrics so that users can monitor data quality at the source and the sink.
> After this, users will know the data quality of the source and sink, which 
> is useful for downstream consumers.
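The drop-or-ignore behavior described in the ticket can be sketched as a simple regex filter that also counts dropped records, which is what a quality metric would expose (the phone pattern, class name, and plain counter are illustrative assumptions; a real Flink job would use a configured pattern and a metric counter):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class PhoneQualityFilter {

    // Illustrative pattern: 11-digit numbers; a real job would configure this.
    private static final Pattern PHONE = Pattern.compile("\\d{11}");

    // Would be a Flink metric counter in practice.
    static long droppedCount = 0;

    /** Keeps records whose phone column matches; drops (and counts) the rest. */
    static List<String> filter(List<String> phones) {
        List<String> kept = new ArrayList<>();
        for (String phone : phones) {
            if (PHONE.matcher(phone).matches()) {
                kept.add(phone);
            } else {
                droppedCount++;
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<String> kept = filter(Arrays.asList("13800138000", "not-a-phone"));
        System.out.println(kept.size() + " kept, " + droppedCount + " dropped");
    }
}
```

Exposing `droppedCount` as a metric is what lets downstream users monitor the source and sink quality the ticket asks for.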



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25618) Data quality by apache flink

2022-01-11 Thread tanjialiang (Jira)
tanjialiang created FLINK-25618:
---

 Summary: Data quality by apache flink
 Key: FLINK-25618
 URL: https://issues.apache.org/jira/browse/FLINK-25618
 Project: Flink
  Issue Type: New Feature
Reporter: tanjialiang


This discusses how to support data quality checks in Apache Flink.

For example, I have a SQL job in which a table has a column named phone, and 
the data in that column must match a telephone pattern. If a record does not 
match, I can choose to drop it or ignore it, and we can count it in the 
metrics so that users can monitor data quality at the source and the sink.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] paul8263 commented on pull request #16108: [FLINK-22821][core] Stabilize NetUtils#getAvailablePort in order to avoid wrongly allocating any used ports

2022-01-11 Thread GitBox


paul8263 commented on pull request #16108:
URL: https://github.com/apache/flink/pull/16108#issuecomment-1010591513


   Thank you @fapaul . I will do it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18334: [BP-1.14][FLINK-22821][core] Stabilize NetUtils#getAvailablePort in order to a…

2022-01-11 Thread GitBox


flinkbot edited a comment on pull request #18334:
URL: https://github.com/apache/flink/pull/18334#issuecomment-1010588736


   
   ## CI report:
   
   * 96533113ed380aa0a8d5edc10ab61cc32620be0f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29292)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] spoon-lz commented on a change in pull request #18119: [FLINK-24947] Support hostNetwork for native K8s integration on session mode

2022-01-11 Thread GitBox


spoon-lz commented on a change in pull request #18119:
URL: https://github.com/apache/flink/pull/18119#discussion_r782661091



##
File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/utils/KubernetesUtils.java
##
@@ -486,6 +487,26 @@ public static String 
tryToGetPrettyPrintYaml(KubernetesResource kubernetesResour
 }
 }
 
+/** Checks if hostNetwork is enabled. */
+public static boolean isHostNetwork(Configuration configuration) {
+return 
configuration.getBoolean(KubernetesConfigOptions.KUBERNETES_HOSTNETWORK_ENABLED);
+}
+
+/** Parse port from webUrl. */
+public static Integer gerRestBindPort(String webUrl) {

Review comment:
   @wangyang0918   Create a new class like `ResourceManagerUtils` or add 
this method in `AbstractResourceManagerDriver`?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18334: [BP-1.14][FLINK-22821][core] Stabilize NetUtils#getAvailablePort in order to a…

2022-01-11 Thread GitBox


flinkbot commented on pull request #18334:
URL: https://github.com/apache/flink/pull/18334#issuecomment-1010589300


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 96533113ed380aa0a8d5edc10ab61cc32620be0f (Wed Jan 12 
03:14:31 UTC 2022)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-22821).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18334: [BP-1.14][FLINK-22821][core] Stabilize NetUtils#getAvailablePort in order to a…

2022-01-11 Thread GitBox


flinkbot commented on pull request #18334:
URL: https://github.com/apache/flink/pull/18334#issuecomment-1010588736


   
   ## CI report:
   
   * 96533113ed380aa0a8d5edc10ab61cc32620be0f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] paul8263 opened a new pull request #18334: [BP-1.14][FLINK-22821][core] Stabilize NetUtils#getAvailablePort in order to a…

2022-01-11 Thread GitBox


paul8263 opened a new pull request #18334:
URL: https://github.com/apache/flink/pull/18334


   Unchanged backport of [#16108](https://github.com/apache/flink/pull/16108) 
on release-1.14


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18119: [FLINK-24947] Support hostNetwork for native K8s integration on session mode

2022-01-11 Thread GitBox


flinkbot edited a comment on pull request #18119:
URL: https://github.com/apache/flink/pull/18119#issuecomment-994734000


   
   ## CI report:
   
   * 40781260086d5780ad5c7eba1bed5e24e7a10464 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29257)
 
   * 3dbd53a6bc03cad9a218fdc080d367d85f15ec96 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29291)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18119: [FLINK-24947] Support hostNetwork for native K8s integration on session mode

2022-01-11 Thread GitBox


flinkbot edited a comment on pull request #18119:
URL: https://github.com/apache/flink/pull/18119#issuecomment-994734000


   
   ## CI report:
   
   * 40781260086d5780ad5c7eba1bed5e24e7a10464 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29257)
 
   * 3dbd53a6bc03cad9a218fdc080d367d85f15ec96 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18268: [FLINK-14902][connector] Supports jdbc async lookup join

2022-01-11 Thread GitBox


flinkbot edited a comment on pull request #18268:
URL: https://github.com/apache/flink/pull/18268#issuecomment-1005479356


   
   ## CI report:
   
   * 3c8576341becb677e123b27eb9efbc280211b7a1 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28964)
 
   * fe7d298f1ac142a2fa53241df09eaf93b44ec4be Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29290)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18268: [FLINK-14902][connector] Supports jdbc async lookup join

2022-01-11 Thread GitBox


flinkbot edited a comment on pull request #18268:
URL: https://github.com/apache/flink/pull/18268#issuecomment-1005479356


   
   ## CI report:
   
   * 3c8576341becb677e123b27eb9efbc280211b7a1 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28964)
 
   * fe7d298f1ac142a2fa53241df09eaf93b44ec4be UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-25617) Support VectorAssembler in FlinkML

2022-01-11 Thread weibo zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weibo zhao closed FLINK-25617.
--
Resolution: Duplicate

> Support VectorAssembler in FlinkML
> --
>
> Key: FLINK-25617
> URL: https://issues.apache.org/jira/browse/FLINK-25617
> Project: Flink
>  Issue Type: New Feature
>  Components: Library / Machine Learning
>Reporter: weibo zhao
>Priority: Major
>
> Support VectorAssembler in FlinkML



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (FLINK-25601) Update 'state.backend' in flink-conf.yaml

2022-01-11 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang resolved FLINK-25601.
--
Resolution: Fixed

merged in master: 41e96d2488e4caba17eef5172bb5ae493eb9c742

> Update 'state.backend' in flink-conf.yaml
> -
>
> Key: FLINK-25601
> URL: https://issues.apache.org/jira/browse/FLINK-25601
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.14.2
>Reporter: Ada Wong
>Assignee: Ada Wong
>Priority: Major
>  Labels: pull-request-available
>
> The value and comments of 'state.backend' in flink-conf.yaml is deprecated.
> {code:java}
> # Supported backends are 'jobmanager', 'filesystem', 'rocksdb', or the
> # .
> #
> # state.backend: filesystem{code}
> We should update to this following.
>  
> {code:java}
> # Supported backends are 'hashmap', 'rocksdb', or the
> # .
> #
> # state.backend: hashmap {code}
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25617) Support VectorAssembler in FlinkML

2022-01-11 Thread weibo zhao (Jira)
weibo zhao created FLINK-25617:
--

 Summary: Support VectorAssembler in FlinkML
 Key: FLINK-25617
 URL: https://issues.apache.org/jira/browse/FLINK-25617
 Project: Flink
  Issue Type: New Feature
  Components: Library / Machine Learning
Reporter: weibo zhao


Support VectorAssembler in FlinkML





[jira] [Created] (FLINK-25616) Support VectorAssembler in FlinkML

2022-01-11 Thread weibo zhao (Jira)
weibo zhao created FLINK-25616:
--

 Summary: Support VectorAssembler in FlinkML
 Key: FLINK-25616
 URL: https://issues.apache.org/jira/browse/FLINK-25616
 Project: Flink
  Issue Type: New Feature
  Components: Library / Machine Learning
Reporter: weibo zhao


Support VectorAssembler in FlinkML





[GitHub] [flink] flinkbot edited a comment on pull request #17777: [FLINK-24886][core] TimeUtils supports the form of m.

2022-01-11 Thread GitBox


flinkbot edited a comment on pull request #1:
URL: https://github.com/apache/flink/pull/1#issuecomment-966971402


   
   ## CI report:
   
   * 405416d6b36b90cdfebc094af887eecd0a6707fb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26475)
 
   * b00b49e806baf132df4f13124b98469cef9ebb91 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29288)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] xintongsong commented on a change in pull request #15599: [FLINK-11838][flink-gs-fs-hadoop] Create Google Storage file system with recoverable writer support

2022-01-11 Thread GitBox


xintongsong commented on a change in pull request #15599:
URL: https://github.com/apache/flink/pull/15599#discussion_r782666723



##
File path: 
flink-filesystems/flink-gs-fs-hadoop/src/test/java/org/apache/flink/fs/gs/writer/GSRecoverableWriterTest.java
##
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.fs.gs.writer;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.fs.gs.GSFileSystemOptions;
+import org.apache.flink.fs.gs.storage.GSBlobIdentifier;
+import org.apache.flink.fs.gs.storage.MockBlobStorage;
+
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.List;
+import java.util.UUID;
+
+import static org.junit.Assert.assertArrayEquals;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+/** Test {@link GSRecoverableWriter}. */
+@RunWith(Parameterized.class)
+public class GSRecoverableWriterTest {
+
+@Parameterized.Parameter(value = 0)
+public long position = 16;
+
+@Parameterized.Parameter(value = 1)
+public boolean closed = false;
+
+@Parameterized.Parameter(value = 2)
+public int componentCount;
+
+@Parameterized.Parameters(name = "position={0}, closed={1}, 
componentCount={2}")
+public static Collection<Object[]> data() {
+return Arrays.asList(
+new Object[][] {
+// position 0, not closed, component count = 0
+{0, false, 0},
+// position 16, not closed, component count = 2
+{16, false, 2},
+// position 0, closed, component count = 0
+{0, true, 0},
+// position 16, closed, component count = 2
+{16, true, 2},
+});
+}
+
+private GSFileSystemOptions options;
+
+private GSRecoverableWriter writer;
+
+private List<UUID> componentObjectIds;
+
+private GSResumeRecoverable resumeRecoverable;
+
+private GSCommitRecoverable commitRecoverable;
+
+private GSBlobIdentifier blobIdentifier;
+
+@Before
+public void before() {
+MockBlobStorage storage = new MockBlobStorage();
+blobIdentifier = new GSBlobIdentifier("foo", "bar");
+
+Configuration flinkConfig = new Configuration();
+options = new GSFileSystemOptions(flinkConfig);
+writer = new GSRecoverableWriter(storage, options);
+
+componentObjectIds = new ArrayList<>();
+for (int i = 0; i < componentCount; i++) {
+componentObjectIds.add(UUID.randomUUID());
+}
+
+resumeRecoverable =
+new GSResumeRecoverable(blobIdentifier, componentObjectIds, 
position, closed);
+commitRecoverable = new GSCommitRecoverable(blobIdentifier, 
componentObjectIds);
+}
+
+@Test
+public void testRequiresCleanupOfRecoverableState() {
+assertFalse(writer.requiresCleanupOfRecoverableState());
+}
+
+@Test
+public void testSupportsResume() {
+assertTrue(writer.supportsResume());
+}
+
+@Test
+public void testOpen() throws IOException {
+Path path = new Path("gs://foo/bar");
+GSRecoverableFsDataOutputStream stream =
+(GSRecoverableFsDataOutputStream) writer.open(path);
+assertEquals("foo", stream.finalBlobIdentifier.bucketName);
+assertEquals("bar", stream.finalBlobIdentifier.objectName);
+assertEquals(options, stream.options);

Review comment:
   For `shouldConstructStream`, I think we don't need to remove anything, 
as `getPos` is a public interface declared by `FSDataOutputStream`.
   
   For `writeContent`, if I'm not mistaken we only need to remove assertions on 
`currentWriteChannel`, which is internal to the testee 
`GSRecoverableFsDataOutputStream`. The behavior whether contents are written 
properly can still be verified by assertions on the blob 

[GitHub] [flink] flinkbot edited a comment on pull request #17777: [FLINK-24886][core] TimeUtils supports the form of m.

2022-01-11 Thread GitBox


flinkbot edited a comment on pull request #1:
URL: https://github.com/apache/flink/pull/1#issuecomment-966971402


   
   ## CI report:
   
   * 405416d6b36b90cdfebc094af887eecd0a6707fb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26475)
 
   * b00b49e806baf132df4f13124b98469cef9ebb91 UNKNOWN
   
   






[GitHub] [flink] liuyongvs commented on pull request #17777: [FLINK-24886][core] TimeUtils supports the form of m.

2022-01-11 Thread GitBox


liuyongvs commented on pull request #1:
URL: https://github.com/apache/flink/pull/1#issuecomment-1010567298


   hi @Thesharing , the original commits were 4; i just reset the first commit and 
rebased the code, then CI is running and you can see the log


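The TimeUtils change discussed in this thread (FLINK-24886) concerns accepting minute-style duration strings such as `10m`. As background, here is a minimal, self-contained sketch of that kind of parsing using only the JDK; the class name, the supported unit set, and the millisecond default are illustrative assumptions, not Flink's actual `TimeUtils` implementation:

```java
import java.time.Duration;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DurationParseSketch {

    // Matches a number followed by an optional unit, e.g. "30", "10m", "1 h".
    private static final Pattern DURATION =
            Pattern.compile("\\s*(\\d+)\\s*([a-z]*)\\s*");

    public static Duration parse(String text) {
        Matcher m = DURATION.matcher(text.toLowerCase());
        if (!m.matches()) {
            throw new IllegalArgumentException("Invalid duration: " + text);
        }
        long value = Long.parseLong(m.group(1));
        switch (m.group(2)) {
            case "":                    // no unit: default to milliseconds (assumption)
            case "ms":
                return Duration.ofMillis(value);
            case "s":
                return Duration.ofSeconds(value);
            case "m":
            case "min":
                return Duration.ofMinutes(value);
            case "h":
                return Duration.ofHours(value);
            case "d":
                return Duration.ofDays(value);
            default:
                throw new IllegalArgumentException("Unknown unit: " + m.group(2));
        }
    }

    public static void main(String[] args) {
        System.out.println(parse("10m").toMillis());     // 600000
        System.out.println(parse("30 s").getSeconds());  // 30
    }
}
```

Usage: `parse("10m")` yields a `Duration` of ten minutes, while a bare number such as `"500"` falls back to milliseconds under the assumption above.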



