Re: [PR] [FLINK-34383][Documentation] Modify the comment with incorrect syntax [flink]

2024-02-05 Thread via GitHub


lxliyou001 commented on PR #24273:
URL: https://github.com/apache/flink/pull/24273#issuecomment-1928963785

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26088][Connectors/ElasticSearch] Add Elasticsearch 8.0 support [flink-connector-elasticsearch]

2024-02-05 Thread via GitHub


snuyanzin commented on code in PR #53:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/53#discussion_r1479350601


##
flink-connector-elasticsearch8/pom.xml:
##
@@ -0,0 +1,139 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+		xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+		xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+
+	<modelVersion>4.0.0</modelVersion>
+
+	<parent>
+		<groupId>org.apache.flink</groupId>
+		<artifactId>flink-connector-elasticsearch-parent</artifactId>
+		<version>4.0-SNAPSHOT</version>
+	</parent>
+
+	<artifactId>flink-connector-elasticsearch8</artifactId>
+	<name>Flink : Connectors : Elasticsearch 8</name>
+
+	<packaging>jar</packaging>
+
+	<properties>
+		<elasticsearch.version>8.7.0</elasticsearch.version>
+	</properties>
+
+	<dependencies>
+
+		<!-- Core -->
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-streaming-java</artifactId>
+			<version>${flink.version}</version>
+			<scope>provided</scope>
+		</dependency>
+
+		<!-- Table ecosystem -->
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-table-api-java-bridge</artifactId>
+			<version>${flink.version}</version>
+			<scope>provided</scope>
+			<optional>true</optional>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-connector-base</artifactId>
+			<version>${flink.version}</version>
+		</dependency>
+
+		<dependency>
+			<groupId>com.fasterxml.jackson.core</groupId>
+			<artifactId>jackson-databind</artifactId>
+			<version>2.9.0</version>

Review Comment:
   why are we stuck with such an old Jackson version, which even has several CVEs?
   I guess we could use 2.15.x or 2.14.x
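
For illustration (editor's sketch, not part of the original review thread): the bump the reviewer suggests would be a one-line version change in the dependency flagged above; 2.15.3 is just an example pick from the 2.15.x line.

```xml
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <!-- example: a 2.15.x release instead of 2.9.0, which has known CVEs -->
    <version>2.15.3</version>
</dependency>
```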



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26088][Connectors/ElasticSearch] Add Elasticsearch 8.0 support [flink-connector-elasticsearch]

2024-02-05 Thread via GitHub


snuyanzin commented on PR #53:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/53#issuecomment-1928954595

   good to know, thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-34324) s3_setup is called in test_file_sink.sh even if the common_s3.sh is not sourced

2024-02-05 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reassigned FLINK-34324:
-

Assignee: Matthias Pohl

> s3_setup is called in test_file_sink.sh even if the common_s3.sh is not 
> sourced
> ---
>
> Key: FLINK-34324
> URL: https://issues.apache.org/jira/browse/FLINK-34324
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hadoop Compatibility, Tests
>Affects Versions: 1.17.2, 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> See example CI run from the FLINK-34150 PR:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56570=logs=af184cdd-c6d8-5084-0b69-7e9c67b35f7a=0f3adb59-eefa-51c6-2858-3654d9e0749d=3191
> {code}
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/test_file_sink.sh: 
> line 38: s3_setup: command not found
> {code}
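
A minimal defensive guard (an editor's sketch under assumptions; the actual fix in the linked PR may differ, and any arguments the real call site passes are omitted here) would be to invoke `s3_setup` only when the function is defined:

```shell
# Guard for test_file_sink.sh (sketch): only call s3_setup when common_s3.sh
# has been sourced and the function therefore exists; otherwise skip it
# instead of failing with "s3_setup: command not found".
if declare -F s3_setup > /dev/null; then
  s3_setup
else
  echo "common_s3.sh was not sourced; skipping s3_setup" >&2
fi
```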



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34362][docs] Add argument to skip integrate connector docs in setup_docs.sh to improve build times [flink]

2024-02-05 Thread via GitHub


MartijnVisser commented on code in PR #24271:
URL: https://github.com/apache/flink/pull/24271#discussion_r1479341371


##
docs/setup_docs.sh:
##
@@ -36,24 +36,34 @@ function integrate_connector_docs {
 }
 
 
+SKIP_INTEGRATE_CONNECTOR_DOCS=false
+for arg in "$@"; do
+  if [ "$arg" == "--skip-integrate-connector-docs" ]; then
+SKIP_INTEGRATE_CONNECTOR_DOCS=true
+break
+  fi
+done
+
 # Integrate the connector documentation
+if [ "$SKIP_INTEGRATE_CONNECTOR_DOCS" = false ]; then
+  echo "Integrate connector docs"
+  rm -rf themes/connectors/*
+  rm -rf tmp
+  mkdir tmp
+  cd tmp
+
+  integrate_connector_docs elasticsearch v3.0
+  integrate_connector_docs aws v4.2
+  integrate_connector_docs cassandra v3.1
+  integrate_connector_docs pulsar v4.0
+  integrate_connector_docs jdbc v3.1
+  integrate_connector_docs rabbitmq v3.0
+  integrate_connector_docs gcp-pubsub v3.0
+  integrate_connector_docs mongodb v1.0
+  integrate_connector_docs opensearch v1.1
+  integrate_connector_docs kafka v3.0
+  integrate_connector_docs hbase v3.0

Review Comment:
   Have you checked whether the Hugo build fails if nothing has been cloned yet 
from the external connectors but `--skip-integrate-connector-docs` is used? If 
the Hugo build doesn't fail at that point, then we should be OK. If it does 
fail, it might make sense to first check whether the repos have already been 
cloned and otherwise emit a warning.
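
The check the reviewer proposes could look roughly like this (editor's sketch; the flag name and the `themes/connectors` path are taken from the diff above, everything else is an assumption):

```shell
# Parse the skip flag as in the diff above.
SKIP_INTEGRATE_CONNECTOR_DOCS=false
for arg in "$@"; do
  if [ "$arg" = "--skip-integrate-connector-docs" ]; then
    SKIP_INTEGRATE_CONNECTOR_DOCS=true
  fi
done

# If skipping was requested but the connector docs were never integrated
# (e.g. a fresh checkout), warn instead of letting the Hugo build fail later.
if [ "$SKIP_INTEGRATE_CONNECTOR_DOCS" = true ] && \
   [ -z "$(ls -A themes/connectors 2>/dev/null)" ]; then
  echo "WARNING: connector docs missing; run setup_docs.sh without the skip flag first." >&2
fi
```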



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34324) s3_setup is called in test_file_sink.sh even if the common_s3.sh is not sourced

2024-02-05 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814659#comment-17814659
 ] 

Matthias Pohl commented on FLINK-34324:
---

I merged the 1.18 backport first to not interrupt the 1.19 release planning 
(because it's planned to create the release branch soon). Let's check the 
[corresponding CI run on 
release-1.18|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57335=results]
 before merging the {{master}} and 1.17 change. 

> s3_setup is called in test_file_sink.sh even if the common_s3.sh is not 
> sourced
> ---
>
> Key: FLINK-34324
> URL: https://issues.apache.org/jira/browse/FLINK-34324
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hadoop Compatibility, Tests
>Affects Versions: 1.17.2, 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> See example CI run from the FLINK-34150 PR:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56570=logs=af184cdd-c6d8-5084-0b69-7e9c67b35f7a=0f3adb59-eefa-51c6-2858-3654d9e0749d=3191
> {code}
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/test_file_sink.sh: 
> line 38: s3_setup: command not found
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-27539) support consuming update and delete changes In Windowing TVFs

2024-02-05 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814658#comment-17814658
 ] 

Martijn Visser commented on FLINK-27539:


Thank you [~qingyue]!

> support consuming update and delete changes In Windowing TVFs
> -
>
> Key: FLINK-27539
> URL: https://issues.apache.org/jira/browse/FLINK-27539
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.15.0
>Reporter: hjw
>Priority: Major
>
> custom_kafka is a cdc table
> sql:
> {code:java}
> select DATE_FORMAT(window_end,'-MM-dd') as date_str,sum(money) as 
> total,name
> from TABLE(CUMULATE(TABLE custom_kafka,descriptor(createtime),interval '1' 
> MINUTES,interval '1' DAY ))
> where status='1'
> group by name,window_start,window_end;
> {code}
> Error
> {code:java}
> Exception in thread "main" org.apache.flink.table.api.TableException: 
> StreamPhysicalWindowAggregate doesn't support consuming update and delete 
> changes which is produced by node TableSourceScan(table=[[default_catalog, 
> default_database, custom_kafka, watermark=[-(createtime, 5000:INTERVAL 
> SECOND)]]], fields=[name, money, status, createtime, operation_ts])
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.createNewNode(FlinkChangelogModeInferenceProgram.scala:396)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.visit(FlinkChangelogModeInferenceProgram.scala:315)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.visitChild(FlinkChangelogModeInferenceProgram.scala:353)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.$anonfun$visitChildren$1(FlinkChangelogModeInferenceProgram.scala:342)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.$anonfun$visitChildren$1$adapted(FlinkChangelogModeInferenceProgram.scala:341)
>  at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>  at scala.collection.immutable.Range.foreach(Range.scala:155)
>  at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>  at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>  at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.visitChildren(FlinkChangelogModeInferenceProgram.scala:341)
> {code}
> But I found Group Window Aggregation is works when use cdc table
> {code:java}
> select DATE_FORMAT(TUMBLE_END(createtime,interval '10' MINUTES),'-MM-dd') 
> as date_str,sum(money) as total,name
> from custom_kafka
> where status='1'
> group by name,TUMBLE(createtime,interval '10' MINUTES)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [BP-1.18][FLINK-34324][test] Makes all s3 related operations being declared and called in a single location [flink]

2024-02-05 Thread via GitHub


XComp merged PR #24268:
URL: https://github.com/apache/flink/pull/24268


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-1.18][FLINK-34324][test] Makes all s3 related operations being declared and called in a single location [flink]

2024-02-05 Thread via GitHub


XComp commented on PR #24268:
URL: https://github.com/apache/flink/pull/24268#issuecomment-1928945790

   I'm gonna merge 1.18 first to verify that the changes also work on the 
release branches, with the secrets being available there as well.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33958) Implement restore tests for IntervalJoin node

2024-02-05 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814656#comment-17814656
 ] 

lincoln lee commented on FLINK-33958:
-

[~mapohl] After looking at the commit history of the SQL TimeIntervalJoin 
operator, the test failure above was most probably caused by a bug (or 
by-design behavior) in a previous version.

> Implement restore tests for IntervalJoin node
> -
>
> Key: FLINK-33958
> URL: https://issues.apache.org/jira/browse/FLINK-33958
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-05 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan reassigned FLINK-34384:
---

Assignee: Cancai Cai

> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Cancai Cai
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-15-05-05-331.png, screenshot-1.png
>
>
> Test suggestion:
>  # Prepare a datastream job in which all tasks throw an exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the cluster: ./bin/start-cluster.sh
>  # Run the job: ./bin/flink run -c className jarName
>  # Check the result:
>  ** Check whether the job will be retried 7 times
>  ** Check the exception history: the list has 7 exceptions
>  ** For each retry except the last, the 5 subtasks are visible (they are 
> concurrent exceptions).
> !image-2024-02-06-15-05-05-331.png|width=1624,height=797!  
>  
> Note: Set the options mentioned in step 2 at each of the 2 levels separately:
>  * Cluster level: set them in the config.yaml
>  * Job level: Set them in the code
>  
> Job level demo:
> {code:java}
> public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     conf.setString("restart-strategy", "exponential-delay");
>     conf.setString(
>             "restart-strategy.exponential-delay.attempts-before-reset-backoff", "6");
>     StreamExecutionEnvironment env =
>             StreamExecutionEnvironment.getExecutionEnvironment(conf);
>     env.setParallelism(5);
>     DataGeneratorSource<Long> generatorSource =
>             new DataGeneratorSource<>(
>                     value -> value,
>                     300,
>                     RateLimiterStrategy.perSecond(10),
>                     Types.LONG);
>     env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data Generator")
>             .map(new RichMapFunction<Long, Long>() {
>                 @Override
>                 public Long map(Long value) {
>                     throw new RuntimeException(
>                             "Expected testing exception, subtaskIndex: "
>                                     + getRuntimeContext().getIndexOfThisSubtask());
>                 }
>             })
>             .print();
>     env.execute();
> } {code}
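
The cluster-level counterpart of the job-level demo is the same pair of options in config.yaml (values taken from the test steps above; note the job-level demo uses 6 attempts while step 2 suggests 7):

```yaml
restart-strategy.type: exponential-delay
restart-strategy.exponential-delay.attempts-before-reset-backoff: 7
```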



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34382) Release Testing: Verify FLINK-33625 Support System out and err to be redirected to LOG or discarded

2024-02-05 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan reassigned FLINK-34382:
---

Assignee: Cancai Cai

> Release Testing: Verify FLINK-33625 Support System out and err to be 
> redirected to LOG or discarded
> ---
>
> Key: FLINK-34382
> URL: https://issues.apache.org/jira/browse/FLINK-34382
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Cancai Cai
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> Test suggestion:
> Prepare a Flink SQL job and a Flink DataStream job;
> they can use the print sink or call System.out.println inside a UDF.
> Add this config to the config.yaml:
> taskmanager.system-out.mode : LOG
> Run the job.
> Check whether the printed output is redirected to the log file.
>  
> SQL demo:
> {code}
> ./bin/sql-client.sh
> CREATE TABLE orders (
>   id   INT,
>   app  INT,
>   channel  INT,
>   user_id  STRING,
>   ts   TIMESTAMP(3),
>   WATERMARK FOR ts AS ts
> ) WITH (
>'connector' = 'datagen',
>'rows-per-second'='20',
>'fields.app.min'='1',
>'fields.app.max'='10',
>'fields.channel.min'='21',
>'fields.channel.max'='30',
>'fields.user_id.length'='10'
> );
> create table print_sink ( 
>   id   INT,
>   app  INT,
>   channel  INT,
>   user_id  STRING,
>   ts   TIMESTAMP(3)
> ) with ('connector' = 'print' );
> insert into print_sink
> select id   ,app   ,channel   ,user_id   ,ts   from orders 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34334) Add sub-task level RocksDB file count metric

2024-02-05 Thread Hangxiang Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814655#comment-17814655
 ] 

Hangxiang Yu commented on FLINK-34334:
--

I think we could add such a metric.

But I'd suggest reporting it as a RocksDB property, just like the other native 
metrics: 
[https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#rocksdb-native-metrics]

RocksDB already supports some related metrics, e.g. num-files-at-level.
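
As a concrete example of the suggested approach, the existing native metrics are switched on via configuration options like the following (editor's sketch based on the linked configuration page):

```yaml
state.backend.rocksdb.metrics.num-files-at-level: true
```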

> Add sub-task level RocksDB file count metric
> 
>
> Key: FLINK-34334
> URL: https://issues.apache.org/jira/browse/FLINK-34334
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.18.0
>Reporter: Jufang He
>Priority: Major
> Attachments: img_v3_027i_7ed0b8ba-3f12-48eb-aab3-cc368ac47cdg.jpg
>
>
> In our production environment, we encountered task deploy failures. The root 
> cause was that too many sst files in a single sub-task led to too much task 
> deployment information (OperatorSubtaskState), which then caused an akka 
> request timeout in the task deploy phase. Therefore, I wanted to add a 
> sub-task level RocksDB file count metric, which makes it convenient to catch 
> performance problems caused by too many sst files in time.
> RocksDB provides the JNI method 
> (https://javadoc.io/doc/org.rocksdb/rocksdbjni/6.20.3/org/rocksdb/RocksDB.html#getColumnFamilyMetaData
>  ()), so we can easily retrieve the file count and report it via the metrics reporter.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34202) python tests take suspiciously long in some of the cases

2024-02-05 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814651#comment-17814651
 ] 

Matthias Pohl commented on FLINK-34202:
---

1.17: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57324=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=27713

> python tests take suspiciously long in some of the cases
> 
>
> Key: FLINK-34202
> URL: https://issues.apache.org/jira/browse/FLINK-34202
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.2, 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> [This release-1.18 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603=logs=3e4dd1a2-fe2f-5e5d-a581-48087e718d53=b4612f28-e3b5-5853-8a8b-610ae894217a]
>  has the python stage running into a timeout without any obvious reason. The 
> [python stage run for 
> JDK17|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603=logs=b53e1644-5cb4-5a3b-5d48-f523f39bcf06]
>  was also getting close to the 4h timeout.
> I'm creating this issue for documentation purposes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34095] Adds restore tests for StreamExecAsyncCalc [flink]

2024-02-05 Thread via GitHub


AlanConfluent commented on code in PR #24220:
URL: https://github.com/apache/flink/pull/24220#discussion_r1479328861


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/AsyncCalcTestPrograms.java:
##
@@ -0,0 +1,297 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.stream;
+
+import org.apache.flink.table.annotation.DataTypeHint;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.functions.AsyncScalarFunction;
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.test.program.SinkTestStep;
+import org.apache.flink.table.test.program.SourceTestStep;
+import org.apache.flink.table.test.program.TableTestProgram;
+import org.apache.flink.types.Row;
+
+import java.sql.Timestamp;
+import java.time.Duration;
+import java.time.LocalDateTime;
+import java.util.TimeZone;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import static 
org.apache.flink.table.api.config.ExecutionConfigOptions.TABLE_EXEC_ASYNC_SCALAR_BUFFER_CAPACITY;
+import static 
org.apache.flink.table.api.config.ExecutionConfigOptions.TABLE_EXEC_ASYNC_SCALAR_MAX_ATTEMPTS;
+import static 
org.apache.flink.table.api.config.ExecutionConfigOptions.TABLE_EXEC_ASYNC_SCALAR_RETRY_DELAY;
+import static org.assertj.core.api.Assertions.fail;
+
+/** {@link TableTestProgram} definitions for testing {@link 
StreamExecAsyncCalc}. */
+public class AsyncCalcTestPrograms {
+
+static final TableTestProgram ASYNC_CALC_UDF_SIMPLE =
+TableTestProgram.of("async-calc-simple", "validates async calc 
node with simple UDF")
+.setupTemporaryCatalogFunction("udf1", 
AsyncJavaFunc0.class)
+.setupTableSource(
+SourceTestStep.newBuilder("source_t")
+.addSchema("a INT")
+.producedBeforeRestore(Row.of(5))
+.producedAfterRestore(Row.of(5))
+.build())
+.setupTableSink(
+SinkTestStep.newBuilder("sink_t")
+.addSchema("a INT", "a1 BIGINT")
+.consumedBeforeRestore(Row.of(5, 6L))
+.consumedAfterRestore(Row.of(5, 6L))
+.build())
+.runSql("INSERT INTO sink_t SELECT a, udf1(a) FROM 
source_t")
+.build();
+
+static final TableTestProgram ASYNC_CALC_UDF_COMPLEX =
+TableTestProgram.of("async-calc-complex", "validates calc node 
with complex UDFs")
+.setupTemporaryCatalogFunction("udf1", 
AsyncJavaFunc0.class)
+.setupTemporaryCatalogFunction("udf2", 
AsyncJavaFunc1.class)
+.setupTemporarySystemFunction("udf3", AsyncJavaFunc2.class)
+.setupTemporarySystemFunction("udf4", 
AsyncUdfWithOpen.class)
+.setupCatalogFunction("udf5", AsyncJavaFunc5.class)
+.setupTableSource(
+SourceTestStep.newBuilder("source_t")
+.addSchema(
+"a BIGINT, b INT NOT NULL, c 
VARCHAR, d TIMESTAMP(3)")
+.producedBeforeRestore(
+Row.of(
+5L,
+11,
+"hello world",
+LocalDateTime.of(2023, 12, 
16, 1, 1, 1, 123)))
+.producedAfterRestore(
+Row.of(
+5L,
+11,
+"hello world",
+  

Re: [PR] [FLINK-33634] Add Conditions to Flink CRD's Status field [flink-kubernetes-operator]

2024-02-05 Thread via GitHub


gyfora commented on PR #749:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/749#issuecomment-1928930237

   > > > @gyfora yes, I am having JIRA account using which I can login to 
https://issues.apache.org/jira/projects/FLINK/.
   > > 
   > > 
   > > okay then can you please tell me the account name? :D
   > 
   > account name : **lajithk**
   
   It seems like you need to create a Confluence account (cwiki.apache.org); 
once you have that, I can give you permissions to create a FLIP page


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-30629) ClientHeartbeatTest.testJobRunningIfClientReportHeartbeat is unstable

2024-02-05 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814649#comment-17814649
 ] 

Matthias Pohl commented on FLINK-30629:
---

1.17: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57324=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=125e07e7-8de0-5c6c-a541-a567415af3ef=9747

> ClientHeartbeatTest.testJobRunningIfClientReportHeartbeat is unstable
> -
>
> Key: FLINK-30629
> URL: https://issues.apache.org/jira/browse/FLINK-30629
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission
>Affects Versions: 1.17.0, 1.18.0
>Reporter: Xintong Song
>Assignee: Liu
>Priority: Critical
>  Labels: pull-request-available, stale-assigned, test-stability
> Fix For: 1.19.0
>
> Attachments: ClientHeartbeatTestLog.txt, 
> logs-cron_azure-test_cron_azure_core-1685497478.zip
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=44690=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=125e07e7-8de0-5c6c-a541-a567415af3ef=10819
> {code:java}
> Jan 11 04:32:39 [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 21.02 s <<< FAILURE! - in 
> org.apache.flink.client.ClientHeartbeatTest
> Jan 11 04:32:39 [ERROR] 
> org.apache.flink.client.ClientHeartbeatTest.testJobRunningIfClientReportHeartbeat
>   Time elapsed: 9.157 s  <<< ERROR!
> Jan 11 04:32:39 java.lang.IllegalStateException: MiniCluster is not yet 
> running or has already been shut down.
> Jan 11 04:32:39   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:193)
> Jan 11 04:32:39   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getDispatcherGatewayFuture(MiniCluster.java:1044)
> Jan 11 04:32:39   at 
> org.apache.flink.runtime.minicluster.MiniCluster.runDispatcherCommand(MiniCluster.java:917)
> Jan 11 04:32:39   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getJobStatus(MiniCluster.java:841)
> Jan 11 04:32:39   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.getJobStatus(MiniClusterJobClient.java:91)
> Jan 11 04:32:39   at 
> org.apache.flink.client.ClientHeartbeatTest.testJobRunningIfClientReportHeartbeat(ClientHeartbeatTest.java:79)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34035) when flinksql with group by partition field, some errors occured in jobmanager.log

2024-02-05 Thread hansonhe (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814646#comment-17814646
 ] 

hansonhe commented on FLINK-34035:
--

[~walls.flink.m] 
No special reason for using the same column 'dt', it was just for a test.
If I run select dt, count(*) from bidwhive.test.dws_test where dt >= '2024-01-02' 
group by dt, jobmanager.log also shows the same error.

> when flinksql with group by partition field, some errors occured in 
> jobmanager.log
> --
>
> Key: FLINK-34035
> URL: https://issues.apache.org/jira/browse/FLINK-34035
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.17.1
>Reporter: hansonhe
>Priority: Major
>
> flink.version=1.17.1
> kyuubi.version=1.8.0
> hive.version=3.1.2
> when run some hive sql as followings:
> CREATE CATALOG bidwhive WITH ('type' = 'hive', 'hive-version' = '3.1.2', 
> 'default-database' = 'test');
> (1) select count(*) from bidwhive.test.dws_test where dt='2024-01-02';
> +--------+
> | EXPR$0 |
> +--------+
> |   1317 |
> +--------+
> It's OK. There are no errors anywhere.
> (2) select dt, count(*) from bidwhive.test.dws_test where dt='2024-01-02' 
> group by dt;
> +------------+--------+
> | dt         | EXPR$1 |
> +------------+--------+
> | 2024-01-02 |   1317 |
> +------------+--------+
> It can get the correct result. But when I check jobmanager.log, I found some 
> errors appearing repeatedly, as follows. Sometimes the errors also appear on 
> the client terminal. I don't know whether these errors will affect the task at 
> runtime. Can somebody take a look?
> '''
> 2024-01-09 14:03:25,979 WARN 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher [] - An 
> exception occurred when fetching query results
> java.util.concurrent.ExecutionException: 
> org.apache.flink.util.FlinkException: Coordinator of operator 
> e9a3cbdf90f308bdf13b34acd6410e2b does not exist or the job vertex this 
> operator belongs to is not initialized. at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 
> ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) 
> ~[?:1.8.0_191]
> at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sendRequest(CollectResultFetcher.java:170)
>  ~[flink-dist-1.17.1.jar:1.17.1]
> at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:129)
>  [flink-dist-1.17.1.jar:1.17.1]
> at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
>  [flink-dist-1.17.1.jar:1.17.1]
> at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
>  [flink-dist-1.17.1.jar:1.17.1]
> at 
> org.apache.flink.table.planner.connectors.CollectDynamicSink$CloseableRowIteratorWrapper.hasNext(CollectDynamicSink.java:222)
>  [flink-table-planner_b1e58bff-c004-4dba-b7d4-fff4e8145073.jar:1.17.1]
> at 
> org.apache.flink.table.gateway.service.result.ResultStore$ResultRetrievalThread.run(ResultStore.java:155)
>  [flink-sql-gateway-1.17.1.jar:1.17.1]
> Caused by: 
> org.apache.flink.util.FlinkException: Coordinator of operator 
> e9a3cbdf90f308bdf13b34acd6410e2b does not exist or the job vertex this 
> operator belongs to is not initialized.
> at 
> org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.deliverCoordinationRequestToCoordinator(DefaultOperatorCoordinatorHandler.java:135)
>  ~[flink-dist-1.17.1.jar:1.17.1]
> at 
> org.apache.flink.runtime.scheduler.SchedulerBase.deliverCoordinationRequestToCoordinator(SchedulerBase.java:1048)
>  ~[flink-dist-1.17.1.jar:1.17.1]
> at 
> org.apache.flink.runtime.jobmaster.JobMaster.sendRequestToCoordinator(JobMaster.java:602)
>  ~[flink-dist-1.17.1.jar:1.17.1]
> at 
> org.apache.flink.runtime.jobmaster.JobMaster.deliverCoordinationRequestToCoordinator(JobMaster.java:918)
>  ~[flink-dist-1.17.1.jar:1.17.1]
> at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) ~[?:?]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_191]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:309)
>  ~[?:?]
> at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
>  ~[?:?]
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:307)
>  ~[?:?]
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:222)
>  ~[?:?]
> at 
> 

Re: [PR] [FLINK-34383][Documentation] Modify the comment with incorrect syntax [flink]

2024-02-05 Thread via GitHub


lxliyou001 commented on PR #24273:
URL: https://github.com/apache/flink/pull/24273#issuecomment-1928920588

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34383][Documentation] Modify the comment with incorrect syntax [flink]

2024-02-05 Thread via GitHub


flinkbot commented on PR #24273:
URL: https://github.com/apache/flink/pull/24273#issuecomment-1928919062

   
   ## CI report:
   
   * fc224d86ae4433d522fe20f9ac10474ea2080654 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33625) FLIP-390: Support System out and err to be redirected to LOG or discarded

2024-02-05 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-33625:

Release Note: System.out and System.err write their content to the 
taskmanager.out and taskmanager.err files. In a production environment, if 
Flink users print a lot of content through them, the limits of YARN or 
Kubernetes may be exceeded, eventually causing the TaskManager to be killed. 
Flink now supports redirecting System.out and System.err to the log file, and 
the log file can be rolled to avoid unlimited disk usage.
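The redirection described in this release note is driven by TaskManager
configuration. A minimal, hypothetical config.yaml fragment follows; the
option name `taskmanager.system-out.mode` and its values are taken from
FLIP-390 and should be verified against the Flink 1.19 configuration
reference before use:

```yaml
# Hypothetical fragment for config.yaml (names per FLIP-390; verify against
# the Flink 1.19 configuration docs).
# DEFAULT keeps the current behavior (taskmanager.out / taskmanager.err),
# LOG redirects System.out/System.err into the task manager log file,
# IGNORE discards the output entirely.
taskmanager.system-out.mode: LOG
```

With LOG mode, the regular log4j rolling policy then bounds the disk usage of anything printed via System.out.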

> FLIP-390: Support System out and err to be redirected to LOG or discarded
> -
>
> Key: FLINK-33625
> URL: https://issues.apache.org/jira/browse/FLINK-33625
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Configuration
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Get more details from https://cwiki.apache.org/confluence/x/4guZE



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34375) Complete work for syntax `DESCRIBE EXTENDED tableName`

2024-02-05 Thread dalongliu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dalongliu reassigned FLINK-34375:
-

Assignee: Yunhong Zheng

> Complete work for syntax `DESCRIBE EXTENDED tableName`
> --
>
> Key: FLINK-34375
> URL: https://issues.apache.org/jira/browse/FLINK-34375
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.10.0, 1.19.0
>Reporter: xuyang
>Assignee: Yunhong Zheng
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34373) Complete work for syntax `DESCRIBE DATABASE databaseName`

2024-02-05 Thread dalongliu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dalongliu reassigned FLINK-34373:
-

Assignee: xuyang

> Complete work for syntax `DESCRIBE DATABASE databaseName`
> -
>
> Key: FLINK-34373
> URL: https://issues.apache.org/jira/browse/FLINK-34373
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.10.0, 1.19.0
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34372) Complete work for syntax `DESCRIBE CATALOG catalogName`

2024-02-05 Thread dalongliu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dalongliu reassigned FLINK-34372:
-

Assignee: xuyang

> Complete work for syntax `DESCRIBE CATALOG catalogName`
> ---
>
> Key: FLINK-34372
> URL: https://issues.apache.org/jira/browse/FLINK-34372
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.10.0, 1.19.0
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34374) Complete work for syntax `DESCRIBE EXTENDED DATABASE databaseName`

2024-02-05 Thread dalongliu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dalongliu reassigned FLINK-34374:
-

Assignee: Yunhong Zheng

> Complete work for syntax `DESCRIBE EXTENDED DATABASE databaseName`
> --
>
> Key: FLINK-34374
> URL: https://issues.apache.org/jira/browse/FLINK-34374
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.10.0, 1.19.0
>Reporter: xuyang
>Assignee: Yunhong Zheng
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34370) [Umbrella] Complete work and improve about enhanced Flink SQL DDL

2024-02-05 Thread dalongliu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dalongliu reassigned FLINK-34370:
-

Assignee: xuyang

> [Umbrella] Complete work and improve about enhanced Flink SQL DDL 
> --
>
> Key: FLINK-34370
> URL: https://issues.apache.org/jira/browse/FLINK-34370
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.10.0, 1.19.0
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>
> This is an umbrella Jira for completing the work of 
> [FLIP-69|https://cwiki.apache.org/confluence/display/FLINK/FLIP-69%3A+Flink+SQL+DDL+Enhancement]
>  about the enhanced Flink SQL DDL.
> With FLINK-34254 (https://issues.apache.org/jira/browse/FLINK-34254), it seems 
> that this FLIP is not finished yet.
> The matrix is below:
> |DDL|can be used in sql-client|
> |_SHOW CATALOGS_|YES|
> |_DESCRIBE CATALOG catalogName_|{color:#de350b}NO{color}|
> |_USE CATALOG catalogName_|YES|
> |_CREATE DATABASE dataBaseName_|YES|
> |_DROP DATABASE dataBaseName_|YES|
> |_DROP IF EXISTS DATABASE dataBaseName_|YES|
> |_DROP DATABASE dataBaseName RESTRICT_|YES|
> |_DROP DATABASE dataBaseName CASCADE_|YES|
> |_ALTER DATABASE dataBaseName SET ( name=value [, name=value]* )_|YES|
> |_USE dataBaseName_|YES|
> |_SHOW DATABASES_|YES|
> |_DESCRIBE DATABASE dataBaseName_|{color:#de350b}NO{color}|
> |_DESCRIBE EXTENDED DATABASE dataBaseName_|{color:#de350b}NO{color}|
> |_SHOW TABLES_|YES|
> |_DESCRIBE tableName_|YES|
> |_DESCRIBE EXTENDED tableName_|{color:#de350b}NO{color}|
> |_ALTER TABLE tableName RENAME TO newTableName_|YES|
> |_ALTER TABLE tableName SET ( name=value [, name=value]* )_|YES|
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34383) Modify the comment with incorrect syntax

2024-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34383:
---
Labels: pull-request-available  (was: )

> Modify the comment with incorrect syntax
> 
>
> Key: FLINK-34383
> URL: https://issues.apache.org/jira/browse/FLINK-34383
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: li you
>Priority: Major
>  Labels: pull-request-available
>
> There is an error in the syntax of the comment for the class 
> PermanentBlobCache



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34383][Documentation] Modify the comment with incorrect syntax [flink]

2024-02-05 Thread via GitHub


lxliyou001 opened a new pull request, #24273:
URL: https://github.com/apache/flink/pull/24273

   
   
   ## What is the purpose of the change
   
   There is an error in the syntax of the comment for the class 
PermanentBlobCache
   
   ## Brief change log
   
   modify the comment in class PermanentBlobCache 
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no) no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no) no
 - The serializers: (yes / no / don't know) no
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know) no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know) 
no
 - The S3 file system connector: (yes / no / don't know) no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no) no
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34200) AutoRescalingITCase#testCheckpointRescalingInKeyedState fails

2024-02-05 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814644#comment-17814644
 ] 

Matthias Pohl commented on FLINK-34200:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57323=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=b78d9d30-509a-5cea-1fef-db7abaa325ae=8318

> AutoRescalingITCase#testCheckpointRescalingInKeyedState fails
> -
>
> Key: FLINK-34200
> URL: https://issues.apache.org/jira/browse/FLINK-34200
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Assignee: Rui Fan
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Attachments: FLINK-34200.failure.log.gz, debug-34200.log
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56601=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=ea7cf968-e585-52cb-e0fc-f48de023a7ca=8200]
> {code:java}
> Jan 19 02:31:53 02:31:53.954 [ERROR] Tests run: 32, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 1050 s <<< FAILURE! -- in 
> org.apache.flink.test.checkpointing.AutoRescalingITCase
> Jan 19 02:31:53 02:31:53.954 [ERROR] 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState[backend
>  = rocksdb, buffersPerChannel = 2] -- Time elapsed: 59.10 s <<< FAILURE!
> Jan 19 02:31:53 java.lang.AssertionError: expected:<[(0,8000), (0,32000), 
> (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), (0,1), 
> (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), (1,16000), 
> (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), (0,52000), 
> (0,6), (0,68000), (0,76000), (1,18000), (1,26000), (1,34000), (1,42000), 
> (1,58000), (0,6000), (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), 
> (0,7), (1,4000), (1,2), (1,36000), (1,44000)]> but was:<[(0,8000), 
> (0,32000), (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), 
> (0,1), (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), 
> (1,16000), (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), 
> (0,52000), (0,6), (0,68000), (0,76000), (0,1000), (0,25000), (0,33000), 
> (0,41000), (1,18000), (1,26000), (1,34000), (1,42000), (1,58000), (0,6000), 
> (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), (0,7), (1,4000), 
> (1,2), (1,36000), (1,44000)]>
> Jan 19 02:31:53   at org.junit.Assert.fail(Assert.java:89)
> Jan 19 02:31:53   at org.junit.Assert.failNotEquals(Assert.java:835)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:120)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:146)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingKeyedState(AutoRescalingITCase.java:296)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState(AutoRescalingITCase.java:196)
> Jan 19 02:31:53   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 19 02:31:53   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34310) Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform powerful java profiler

2024-02-05 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814641#comment-17814641
 ] 

Rui Fan edited comment on FLINK-34310 at 2/6/24 7:11 AM:
-

{quote}[~fanrui] Thanks for reporting! Would you like to take this testing work?
{quote}
Sure, I can test it after [~Yu Chen] has provided the testing instructions.

BTW, as we know, the Chinese New Year is coming. I may be unavailable during 
the vacation.


was (Author: fanrui):
{quote}[~fanrui] Thanks for reporting! Would you like to take this testing work?
{quote}
Sure, I can test it after [~Yu Chen]  provided the test progress.

BTW, as we know, the Chinese New Year is coming. I may be unavailable during 
the vacation.

> Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform 
> powerful java profiler
> ---
>
> Key: FLINK-34310
> URL: https://issues.apache.org/jira/browse/FLINK-34310
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST, Runtime / Web Frontend
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Yu Chen
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-09-39-874.png, screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34310) Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform powerful java profiler

2024-02-05 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814643#comment-17814643
 ] 

lincoln lee commented on FLINK-34310:
-

[~fanrui] Great! Let's just wait for the testing instructions ready. 
Happy Chinese New Year! :)

> Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform 
> powerful java profiler
> ---
>
> Key: FLINK-34310
> URL: https://issues.apache.org/jira/browse/FLINK-34310
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST, Runtime / Web Frontend
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Yu Chen
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-09-39-874.png, screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34288) Release Testing Instructions: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-05 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814638#comment-17814638
 ] 

Rui Fan edited comment on FLINK-34288 at 2/6/24 7:06 AM:
-

Thanks [~lincoln.86xy]  for the reminder.

 

Test suggestion:
 # Prepare a DataStream job in which all tasks throw an exception directly.
 ## Set the parallelism to 5 or above
 # Prepare some configuration options:
 ** restart-strategy.type : exponential-delay
 ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
 # Start the cluster: ./bin/start-cluster.sh
 # Run the job: ./bin/flink run -c className jarName
 # Check the result
 ** Check whether the job is retried 7 times
 ** Check the exception history; the list has 7 exceptions
 ** Each retry except the last one shows the 5 subtasks (they are concurrent 
exceptions).

!image-2024-02-06-14-51-11-256.png!

 

Note: Set the options mentioned in step 2 at two levels separately:
 * Cluster level: set them in the config.yaml
 * Job level: set them in the code

 

Job level demo:
{code:java}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();

conf.setString("restart-strategy", "exponential-delay");

conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
 "6");
StreamExecutionEnvironment env =  
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(5);

DataGeneratorSource<Long> generatorSource =
new DataGeneratorSource<>(
value -> value,
300,
RateLimiterStrategy.perSecond(10),
Types.LONG);

env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data 
Generator")
.map(new RichMapFunction<Long, Long>() {
@Override
public Long map(Long value) {
throw new RuntimeException(
"Excepted testing exception, subtaskIndex: " + 
getRuntimeContext().getIndexOfThisSubtask());
}
})
.print();

env.execute();
} {code}
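As a rough illustration of the delay schedule an exponential-delay restart
strategy produces between the retries counted above, here is a minimal,
self-contained sketch. The parameter values (1 s initial backoff, x2.0
multiplier, 60 s cap) are chosen for illustration only and are not
necessarily Flink's defaults.

```java
public class ExponentialDelayDemo {

    // Delay before restart attempt `attempt` (0-based): the initial backoff
    // grows geometrically by `multiplier` and is capped at `maxMs`.
    static long delayMs(int attempt, long initialMs, double multiplier, long maxMs) {
        double delay = initialMs * Math.pow(multiplier, attempt);
        return (long) Math.min(delay, maxMs);
    }

    public static void main(String[] args) {
        // Print the delay before each of the first 8 restart attempts.
        for (int attempt = 0; attempt < 8; attempt++) {
            System.out.println("attempt " + attempt + " -> "
                    + delayMs(attempt, 1_000L, 2.0, 60_000L) + " ms");
        }
    }
}
```

With these illustrative values the sequence is 1 s, 2 s, 4 s, 8 s, ... until it saturates at the 60 s cap; `attempts-before-reset-backoff` bounds how many such attempts are made.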


was (Author: fanrui):
Thanks [~lincoln.86xy]  for the reminder.

 

Test suggestion:
 # Prepare a datastream job that all tasks throw exception directly.
 ## Set the parallelism to 5 or above
 # Prepare some configuration options:
 ** restart-strategy.type : exponential-delay
 ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
 # Start the cluster: ./bin/start-cluster.sh
 # Run the job: ./bin/flink run -c className jarName
 # Check the result
 ** Check whether job will be retried 7 times
 ** Check the exception history, the list has 2 exceptions
 ** Each retries except the last one can see the 5 subtasks(They are concurrent 
exceptions).

!image-2024-02-06-14-51-11-256.png!

 

Note: Set these options mentioned at step2 at 2 level separately
 * Cluster level: set them in the config.yaml
 * Job level: Set them in the code

 

Job level demo:
{code:java}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();

conf.setString("restart-strategy", "exponential-delay");

conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
 "6");
StreamExecutionEnvironment env =  
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(5);

DataGeneratorSource<Long> generatorSource =
new DataGeneratorSource<>(
value -> value,
300,
RateLimiterStrategy.perSecond(10),
Types.LONG);

env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data 
Generator")
.map(new RichMapFunction<Long, Long>() {
@Override
public Long map(Long value) {
throw new RuntimeException(
"Excepted testing exception, subtaskIndex: " + 
getRuntimeContext().getIndexOfThisSubtask());
}
})
.print();

env.execute();
} {code}

> Release Testing Instructions: Verify FLINK-33735 Improve the 
> exponential-delay restart-strategy 
> 
>
> Key: FLINK-34288
> URL: https://issues.apache.org/jira/browse/FLINK-34288
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Rui Fan
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-51-11-256.png, screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-05 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-34384:

Description: 
Test suggestion:
 # Prepare a DataStream job in which all tasks throw an exception directly.
 ## Set the parallelism to 5 or above
 # Prepare some configuration options:
 ** restart-strategy.type : exponential-delay
 ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
 # Start the cluster: ./bin/start-cluster.sh
 # Run the job: ./bin/flink run -c className jarName
 # Check the result
 ** Check whether the job is retried 7 times
 ** Check the exception history; the list has 7 exceptions
 ** Each retry except the last one shows the 5 subtasks (they are concurrent 
exceptions).

!image-2024-02-06-15-05-05-331.png|width=1624,height=797!  
 

Note: Set the options mentioned in step 2 at two levels separately:
 * Cluster level: set them in the config.yaml
 * Job level: set them in the code

 

Job level demo:
{code:java}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();

conf.setString("restart-strategy", "exponential-delay");

conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
 "6");
StreamExecutionEnvironment env =  
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(5);

DataGeneratorSource<Long> generatorSource =
new DataGeneratorSource<>(
value -> value,
300,
RateLimiterStrategy.perSecond(10),
Types.LONG);

env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data 
Generator")
.map(new RichMapFunction<Long, Long>() {
@Override
public Long map(Long value) {
throw new RuntimeException(
"Excepted testing exception, subtaskIndex: " + 
getRuntimeContext().getIndexOfThisSubtask());
}
})
.print();

env.execute();
} {code}

  was:
Test suggestion:
 # Prepare a datastream job that all tasks throw exception directly.
 ## Set the parallelism to 5 or above
 # Prepare some configuration options:
 ** restart-strategy.type : exponential-delay
 ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
 # Start the cluster: ./bin/start-cluster.sh
 # Run the job: ./bin/flink run -c className jarName
 # Check the result
 ** Check whether job will be retried 7 times
 ** Check the exception history, the list has 2 exceptions
 ** Each retries except the last one can see the 5 subtasks(They are concurrent 
exceptions).

!image-2024-02-06-15-05-05-331.png|width=1624,height=797!  
 

Note: Set these options mentioned at step2 at 2 level separately
 * Cluster level: set them in the config.yaml
 * Job level: Set them in the code

 

Job level demo:
{code:java}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();

conf.setString("restart-strategy", "exponential-delay");

conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
 "6");
StreamExecutionEnvironment env =  
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(5);

DataGeneratorSource<Long> generatorSource =
new DataGeneratorSource<>(
value -> value,
300,
RateLimiterStrategy.perSecond(10),
Types.LONG);

env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data 
Generator")
.map(new RichMapFunction<Long, Long>() {
@Override
public Long map(Long value) {
throw new RuntimeException(
"Excepted testing exception, subtaskIndex: " + 
getRuntimeContext().getIndexOfThisSubtask());
}
})
.print();

env.execute();
} {code}


> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-15-05-05-331.png, screenshot-1.png
>
>
> Test suggestion:
>  # Prepare a DataStream job in which all tasks throw an exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the 

Re: [PR] [FLINK-33634] Add Conditions to Flink CRD's Status field [flink-kubernetes-operator]

2024-02-05 Thread via GitHub


lajith2006 commented on PR #749:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/749#issuecomment-1928903183

   > > @gyfora yes, I am having JIRA account using which I can login to 
https://issues.apache.org/jira/projects/FLINK/.
   > 
   > okay then can you please tell me the account name? :D
   
   account name : **lajithk**


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-05 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-34384:

Attachment: (was: image-2024-02-06-14-57-51-386.png)

> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-15-05-05-331.png, screenshot-1.png
>
>
> Test suggestion:
>  # Prepare a DataStream job in which all tasks throw an exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the cluster: ./bin/start-cluster.sh
>  # Run the job: ./bin/flink run -c className jarName
>  # Check the result
>  ** Check whether the job is retried 7 times
>  ** Check the exception history; the list has 2 exceptions
>  ** Each retry except the last one shows the 5 subtasks (they are 
> concurrent exceptions).
> !image-2024-02-06-15-05-05-331.png|width=1624,height=797!  
>  
> Note: Set the options mentioned in step 2 at two levels separately:
>  * Cluster level: set them in the config.yaml
>  * Job level: set them in the code
>  
> Job level demo:
> {code:java}
> public static void main(String[] args) throws Exception {
> Configuration conf = new Configuration();
> conf.setString("restart-strategy", "exponential-delay");
> 
> conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
>  "6");
> StreamExecutionEnvironment env =  
> StreamExecutionEnvironment.getExecutionEnvironment(conf);
> env.setParallelism(5);
> DataGeneratorSource<Long> generatorSource =
> new DataGeneratorSource<>(
> value -> value,
> 300,
> RateLimiterStrategy.perSecond(10),
> Types.LONG);
> env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data 
> Generator")
> .map(new RichMapFunction<Long, Long>() {
> @Override
> public Long map(Long value) {
> throw new RuntimeException(
> "Excepted testing exception, subtaskIndex: " + 
> getRuntimeContext().getIndexOfThisSubtask());
> }
> })
> .print();
> env.execute();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-05 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-34384:

Description: 
Test suggestion:
 # Prepare a DataStream job in which all tasks throw an exception directly.
 ## Set the parallelism to 5 or above
 # Prepare some configuration options:
 ** restart-strategy.type : exponential-delay
 ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
 # Start the cluster: ./bin/start-cluster.sh
 # Run the job: ./bin/flink run -c className jarName
 # Check the result
 ** Check whether the job is retried 7 times
 ** Check the exception history; the list has 2 exceptions
 ** Each retry except the last one shows the 5 subtasks (they are concurrent 
exceptions).

!image-2024-02-06-15-05-05-331.png|width=1624,height=797!  
 

Note: Set the options mentioned in step 2 at two levels separately:
 * Cluster level: set them in the config.yaml
 * Job level: set them in the code

 

Job level demo:
{code:java}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();

conf.setString("restart-strategy", "exponential-delay");

conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
 "6");
StreamExecutionEnvironment env =  
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(5);

DataGeneratorSource<Long> generatorSource =
new DataGeneratorSource<>(
value -> value,
300,
RateLimiterStrategy.perSecond(10),
Types.LONG);

env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data 
Generator")
.map(new RichMapFunction<Long, Long>() {
@Override
public Long map(Long value) {
throw new RuntimeException(
"Excepted testing exception, subtaskIndex: " + 
getRuntimeContext().getIndexOfThisSubtask());
}
})
.print();

env.execute();
} {code}

  was:
Test suggestion:
 # Prepare a datastream job that all tasks throw exception directly.
 ## Set the parallelism to 5 or above
 # Prepare some configuration options:
 ** restart-strategy.type : exponential-delay
 ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
 # Start the cluster: ./bin/start-cluster.sh
 # Run the job: ./bin/flink run -c className jarName
 # Check the result
 ** Check whether job will be retried 7 times
 ** Check the exception history, the list has 2 exceptions
 ** Each retries except the last one can see the 5 subtasks(They are concurrent 
exceptions).

 !screenshot-1.png! 
 

Note: Set these options mentioned at step2 at 2 level separately
 * Cluster level: set them in the config.yaml
 * Job level: Set them in the code

 

Job level demo:
{code:java}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();

conf.setString("restart-strategy", "exponential-delay");

conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
 "6");
StreamExecutionEnvironment env =  
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(5);

DataGeneratorSource<Long> generatorSource =
new DataGeneratorSource<>(
value -> value,
300,
RateLimiterStrategy.perSecond(10),
Types.LONG);

env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data Generator")
.map(new RichMapFunction<Long, Long>() {
@Override
public Long map(Long value) {
throw new RuntimeException(
"Expected testing exception, subtaskIndex: "
+ getRuntimeContext().getIndexOfThisSubtask());
}
})
.print();

env.execute();
} {code}


> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-15-05-05-331.png, screenshot-1.png
>
>
> Test suggestion:
>  # Prepare a datastream job that all tasks throw exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the cluster: ./bin/start-cluster.sh
>  # Run the 

[jira] [Updated] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-05 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-34384:

Attachment: image-2024-02-06-15-05-05-331.png

> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-15-05-05-331.png, screenshot-1.png
>
>
> Test suggestion:
>  # Prepare a DataStream job in which all tasks throw an exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the cluster: ./bin/start-cluster.sh
>  # Run the job: ./bin/flink run -c className jarName
>  # Check the result
>  ** Check whether the job is retried 7 times
>  ** Check the exception history; the list has 2 exceptions
>  ** Each retry except the last one shows the 5 subtasks (they are concurrent exceptions).
>  !screenshot-1.png! 
>  
> Note: Set the options mentioned in step 2 at two levels separately:
>  * Cluster level: set them in config.yaml
>  * Job level: set them in the code
>  
> Job level demo:
> {code:java}
> public static void main(String[] args) throws Exception {
> Configuration conf = new Configuration();
> conf.setString("restart-strategy", "exponential-delay");
> 
> conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
>  "6");
> StreamExecutionEnvironment env =  
> StreamExecutionEnvironment.getExecutionEnvironment(conf);
> env.setParallelism(5);
> DataGeneratorSource<Long> generatorSource =
> new DataGeneratorSource<>(
> value -> value,
> 300,
> RateLimiterStrategy.perSecond(10),
> Types.LONG);
> env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data Generator")
> .map(new RichMapFunction<Long, Long>() {
> @Override
> public Long map(Long value) {
> throw new RuntimeException(
> "Expected testing exception, subtaskIndex: "
> + getRuntimeContext().getIndexOfThisSubtask());
> }
> })
> .print();
> env.execute();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33634] Add Conditions to Flink CRD's Status field [flink-kubernetes-operator]

2024-02-05 Thread via GitHub


gyfora commented on PR #749:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/749#issuecomment-1928896292

   > @gyfora yes, I am having JIRA account using which I can login to 
https://issues.apache.org/jira/projects/FLINK/.
   
   okay then can you please tell me the account name? :D 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-05 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34384:

Description: 
Test suggestion:
 # Prepare a DataStream job in which all tasks throw an exception directly.
 ## Set the parallelism to 5 or above
 # Prepare some configuration options:
 ** restart-strategy.type : exponential-delay
 ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
 # Start the cluster: ./bin/start-cluster.sh
 # Run the job: ./bin/flink run -c className jarName
 # Check the result
 ** Check whether the job is retried 7 times
 ** Check the exception history; the list has 2 exceptions
 ** Each retry except the last one shows the 5 subtasks (they are concurrent exceptions).

 !screenshot-1.png!

Note: Set the options mentioned in step 2 at two levels separately:
 * Cluster level: set them in config.yaml
 * Job level: set them in the code

 

Job level demo:
{code:java}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();

conf.setString("restart-strategy", "exponential-delay");

conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
 "6");
StreamExecutionEnvironment env =  
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(5);

DataGeneratorSource<Long> generatorSource =
new DataGeneratorSource<>(
value -> value,
300,
RateLimiterStrategy.perSecond(10),
Types.LONG);

env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data Generator")
.map(new RichMapFunction<Long, Long>() {
@Override
public Long map(Long value) {
throw new RuntimeException(
"Expected testing exception, subtaskIndex: "
+ getRuntimeContext().getIndexOfThisSubtask());
}
})
.print();

env.execute();
} {code}

  was:
Test suggestion:
 # Prepare a DataStream job in which all tasks throw an exception directly.
 ## Set the parallelism to 5 or above
 # Prepare some configuration options:
 ** restart-strategy.type : exponential-delay
 ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
 # Start the cluster: ./bin/start-cluster.sh
 # Run the job: ./bin/flink run -c className jarName
 # Check the result
 ** Check whether the job is retried 7 times
 ** Check the exception history; the list has 2 exceptions
 ** Each retry except the last one shows the 5 subtasks (they are concurrent exceptions).

!image-2024-02-06-14-57-51-386.png!

Note: Set the options mentioned in step 2 at two levels separately:
 * Cluster level: set them in config.yaml
 * Job level: set them in the code

 

Job level demo:
{code:java}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();

conf.setString("restart-strategy", "exponential-delay");

conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
 "6");
StreamExecutionEnvironment env =  
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(5);

DataGeneratorSource<Long> generatorSource =
new DataGeneratorSource<>(
value -> value,
300,
RateLimiterStrategy.perSecond(10),
Types.LONG);

env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data Generator")
.map(new RichMapFunction<Long, Long>() {
@Override
public Long map(Long value) {
throw new RuntimeException(
"Expected testing exception, subtaskIndex: "
+ getRuntimeContext().getIndexOfThisSubtask());
}
})
.print();

env.execute();
} {code}


> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-57-51-386.png, screenshot-1.png
>
>
> Test suggestion:
>  # Prepare a datastream job that all tasks throw exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the cluster: ./bin/start-cluster.sh
>  # Run the job: ./bin/flink 

[jira] [Updated] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-05 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34384:

Attachment: screenshot-1.png

> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-57-51-386.png, screenshot-1.png
>
>
> Test suggestion:
>  # Prepare a DataStream job in which all tasks throw an exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the cluster: ./bin/start-cluster.sh
>  # Run the job: ./bin/flink run -c className jarName
>  # Check the result
>  ** Check whether the job is retried 7 times
>  ** Check the exception history; the list has 2 exceptions
>  ** Each retry except the last one shows the 5 subtasks (they are concurrent exceptions).
> !image-2024-02-06-14-57-51-386.png!
>  
> Note: Set the options mentioned in step 2 at two levels separately:
>  * Cluster level: set them in config.yaml
>  * Job level: set them in the code
>  
> Job level demo:
> {code:java}
> public static void main(String[] args) throws Exception {
> Configuration conf = new Configuration();
> conf.setString("restart-strategy", "exponential-delay");
> 
> conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
>  "6");
> StreamExecutionEnvironment env =  
> StreamExecutionEnvironment.getExecutionEnvironment(conf);
> env.setParallelism(5);
> DataGeneratorSource<Long> generatorSource =
> new DataGeneratorSource<>(
> value -> value,
> 300,
> RateLimiterStrategy.perSecond(10),
> Types.LONG);
> env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data Generator")
> .map(new RichMapFunction<Long, Long>() {
> @Override
> public Long map(Long value) {
> throw new RuntimeException(
> "Expected testing exception, subtaskIndex: "
> + getRuntimeContext().getIndexOfThisSubtask());
> }
> })
> .print();
> env.execute();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34288) Release Testing Instructions: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-05 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee closed FLINK-34288.
---
Resolution: Fixed

[~fanrui] Thanks for updating the instructions!
Testing tracked by https://issues.apache.org/jira/browse/FLINK-34384

> Release Testing Instructions: Verify FLINK-33735 Improve the 
> exponential-delay restart-strategy 
> 
>
> Key: FLINK-34288
> URL: https://issues.apache.org/jira/browse/FLINK-34288
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Rui Fan
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-51-11-256.png, screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-05 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-34384:

Description: 
Test suggestion:
 # Prepare a DataStream job in which all tasks throw an exception directly.
 ## Set the parallelism to 5 or above
 # Prepare some configuration options:
 ** restart-strategy.type : exponential-delay
 ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
 # Start the cluster: ./bin/start-cluster.sh
 # Run the job: ./bin/flink run -c className jarName
 # Check the result
 ** Check whether the job is retried 7 times
 ** Check the exception history; the list has 2 exceptions
 ** Each retry except the last one shows the 5 subtasks (they are concurrent exceptions).

!image-2024-02-06-14-57-51-386.png!

Note: Set the options mentioned in step 2 at two levels separately:
 * Cluster level: set them in config.yaml
 * Job level: set them in the code

 

Job level demo:
{code:java}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();

conf.setString("restart-strategy", "exponential-delay");

conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
 "6");
StreamExecutionEnvironment env =  
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(5);

DataGeneratorSource<Long> generatorSource =
new DataGeneratorSource<>(
value -> value,
300,
RateLimiterStrategy.perSecond(10),
Types.LONG);

env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data Generator")
.map(new RichMapFunction<Long, Long>() {
@Override
public Long map(Long value) {
throw new RuntimeException(
"Expected testing exception, subtaskIndex: "
+ getRuntimeContext().getIndexOfThisSubtask());
}
})
.print();

env.execute();
} {code}

  was:
Test suggestion:
 # Prepare a DataStream job in which all tasks throw an exception directly.
 ## Set the parallelism to 5 or above
 # Prepare some configuration options:
 ** restart-strategy.type : exponential-delay
 ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
 # Start the cluster: ./bin/start-cluster.sh
 # Run the job: ./bin/flink run -c className jarName
 # Check the result
 ** Check whether the job is retried 7 times
 ** Check the exception history; the list has 2 exceptions
 ** Each retry except the last one shows the 5 subtasks (they are concurrent exceptions).

!image-2024-02-06-14-51-11-256.png!

Note: Set the options mentioned in step 2 at two levels separately:
 * Cluster level: set them in config.yaml
 * Job level: set them in the code

 

Job level demo:
{code:java}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();

conf.setString("restart-strategy", "exponential-delay");

conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
 "6");
StreamExecutionEnvironment env =  
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(5);

DataGeneratorSource<Long> generatorSource =
new DataGeneratorSource<>(
value -> value,
300,
RateLimiterStrategy.perSecond(10),
Types.LONG);

env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data Generator")
.map(new RichMapFunction<Long, Long>() {
@Override
public Long map(Long value) {
throw new RuntimeException(
"Expected testing exception, subtaskIndex: "
+ getRuntimeContext().getIndexOfThisSubtask());
}
})
.print();

env.execute();
} {code}


> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-57-51-386.png
>
>
> Test suggestion:
>  # Prepare a datastream job that all tasks throw exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the cluster: ./bin/start-cluster.sh
>  # Run the job: ./bin/flink run -c 

[jira] [Updated] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-05 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34384:

Description: 
Test suggestion:
 # Prepare a DataStream job in which all tasks throw an exception directly.
 ## Set the parallelism to 5 or above
 # Prepare some configuration options:
 ** restart-strategy.type : exponential-delay
 ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
 # Start the cluster: ./bin/start-cluster.sh
 # Run the job: ./bin/flink run -c className jarName
 # Check the result
 ** Check whether the job is retried 7 times
 ** Check the exception history; the list has 2 exceptions
 ** Each retry except the last one shows the 5 subtasks (they are concurrent exceptions).

!image-2024-02-06-14-51-11-256.png!

Note: Set the options mentioned in step 2 at two levels separately:
 * Cluster level: set them in config.yaml
 * Job level: set them in the code

 

Job level demo:
{code:java}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();

conf.setString("restart-strategy", "exponential-delay");

conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
 "6");
StreamExecutionEnvironment env =  
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(5);

DataGeneratorSource<Long> generatorSource =
new DataGeneratorSource<>(
value -> value,
300,
RateLimiterStrategy.perSecond(10),
Types.LONG);

env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data Generator")
.map(new RichMapFunction<Long, Long>() {
@Override
public Long map(Long value) {
throw new RuntimeException(
"Expected testing exception, subtaskIndex: "
+ getRuntimeContext().getIndexOfThisSubtask());
}
})
.print();

env.execute();
} {code}

> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Rui Fan
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-57-51-386.png
>
>
> Test suggestion:
>  # Prepare a DataStream job in which all tasks throw an exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the cluster: ./bin/start-cluster.sh
>  # Run the job: ./bin/flink run -c className jarName
>  # Check the result
>  ** Check whether the job is retried 7 times
>  ** Check the exception history; the list has 2 exceptions
>  ** Each retry except the last one shows the 5 subtasks (they are concurrent exceptions).
> !image-2024-02-06-14-51-11-256.png!
>  
> Note: Set the options mentioned in step 2 at two levels separately:
>  * Cluster level: set them in config.yaml
>  * Job level: set them in the code
>  
> Job level demo:
> {code:java}
> public static void main(String[] args) throws Exception {
> Configuration conf = new Configuration();
> conf.setString("restart-strategy", "exponential-delay");
> 
> conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
>  "6");
> StreamExecutionEnvironment env =  
> StreamExecutionEnvironment.getExecutionEnvironment(conf);
> env.setParallelism(5);
> DataGeneratorSource<Long> generatorSource =
> new DataGeneratorSource<>(
> value -> value,
> 300,
> RateLimiterStrategy.perSecond(10),
> Types.LONG);
> env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data Generator")
> .map(new RichMapFunction<Long, Long>() {
> @Override
> public Long map(Long value) {
> throw new RuntimeException(
> "Expected testing exception, subtaskIndex: "
> + getRuntimeContext().getIndexOfThisSubtask());
> }
> })
> .print();
> env.execute();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-05 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-34384:

Attachment: image-2024-02-06-14-57-51-386.png

> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-57-51-386.png
>
>
> Test suggestion:
>  # Prepare a DataStream job in which all tasks throw an exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the cluster: ./bin/start-cluster.sh
>  # Run the job: ./bin/flink run -c className jarName
>  # Check the result
>  ** Check whether the job is retried 7 times
>  ** Check the exception history; the list has 2 exceptions
>  ** Each retry except the last one shows the 5 subtasks (they are concurrent exceptions).
> !image-2024-02-06-14-51-11-256.png!
>  
> Note: Set the options mentioned in step 2 at two levels separately:
>  * Cluster level: set them in config.yaml
>  * Job level: set them in the code
>  
> Job level demo:
> {code:java}
> public static void main(String[] args) throws Exception {
> Configuration conf = new Configuration();
> conf.setString("restart-strategy", "exponential-delay");
> 
> conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
>  "6");
> StreamExecutionEnvironment env =  
> StreamExecutionEnvironment.getExecutionEnvironment(conf);
> env.setParallelism(5);
> DataGeneratorSource<Long> generatorSource =
> new DataGeneratorSource<>(
> value -> value,
> 300,
> RateLimiterStrategy.perSecond(10),
> Types.LONG);
> env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data Generator")
> .map(new RichMapFunction<Long, Long>() {
> @Override
> public Long map(Long value) {
> throw new RuntimeException(
> "Expected testing exception, subtaskIndex: "
> + getRuntimeContext().getIndexOfThisSubtask());
> }
> })
> .print();
> env.execute();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-05 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-34384:
---

Assignee: (was: Rui Fan)

> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-57-51-386.png
>
>
> Test suggestion:
>  # Prepare a DataStream job in which all tasks throw an exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the cluster: ./bin/start-cluster.sh
>  # Run the job: ./bin/flink run -c className jarName
>  # Check the result
>  ** Check whether the job is retried 7 times
>  ** Check the exception history; the list has 2 exceptions
>  ** Each retry except the last one shows the 5 subtasks (they are concurrent exceptions).
> !image-2024-02-06-14-51-11-256.png!
>  
> Note: Set the options mentioned in step 2 at two levels separately:
>  * Cluster level: set them in config.yaml
>  * Job level: set them in the code
>  
> Job level demo:
> {code:java}
> public static void main(String[] args) throws Exception {
> Configuration conf = new Configuration();
> conf.setString("restart-strategy", "exponential-delay");
> 
> conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
>  "6");
> StreamExecutionEnvironment env =  
> StreamExecutionEnvironment.getExecutionEnvironment(conf);
> env.setParallelism(5);
> DataGeneratorSource<Long> generatorSource =
> new DataGeneratorSource<>(
> value -> value,
> 300,
> RateLimiterStrategy.perSecond(10),
> Types.LONG);
> env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data Generator")
> .map(new RichMapFunction<Long, Long>() {
> @Override
> public Long map(Long value) {
> throw new RuntimeException(
> "Expected testing exception, subtaskIndex: "
> + getRuntimeContext().getIndexOfThisSubtask());
> }
> })
> .print();
> env.execute();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34310) Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform powerful java profiler

2024-02-05 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814641#comment-17814641
 ] 

Rui Fan commented on FLINK-34310:
-

{quote}[~fanrui] Thanks for reporting! Would you like to take this testing work?
{quote}
Sure, I can test it after [~Yu Chen] provides the testing instructions.

BTW, as we know, Chinese New Year is coming, so I may be unavailable during 
the vacation.

> Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform 
> powerful java profiler
> ---
>
> Key: FLINK-34310
> URL: https://issues.apache.org/jira/browse/FLINK-34310
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST, Runtime / Web Frontend
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Yu Chen
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-09-39-874.png, screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-05 Thread lincoln lee (Jira)
lincoln lee created FLINK-34384:
---

 Summary: Release Testing: Verify FLINK-33735 Improve the 
exponential-delay restart-strategy 
 Key: FLINK-34384
 URL: https://issues.apache.org/jira/browse/FLINK-34384
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Rui Fan
 Fix For: 1.19.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34383) Modify the comment with incorrect syntax

2024-02-05 Thread li you (Jira)
li you created FLINK-34383:
--

 Summary: Modify the comment with incorrect syntax
 Key: FLINK-34383
 URL: https://issues.apache.org/jira/browse/FLINK-34383
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: li you


There is a syntax error in the comment of the class PermanentBlobCache.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34288) Release Testing Instructions: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-05 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814638#comment-17814638
 ] 

Rui Fan commented on FLINK-34288:
-

Thanks [~lincoln.86xy]  for the reminder.

 

Test suggestion:
 # Prepare a DataStream job in which all tasks throw an exception directly.
 ## Set the parallelism to 5 or above
 # Prepare some configuration options:
 ** restart-strategy.type : exponential-delay
 ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
 # Start the cluster: ./bin/start-cluster.sh
 # Run the job: ./bin/flink run -c className jarName
 # Check the result
 ** Check whether the job is retried 7 times
 ** Check the exception history; the list has 2 exceptions
 ** Each retry except the last one shows the 5 subtasks (they are concurrent exceptions).

!image-2024-02-06-14-51-11-256.png!

Note: Set the options mentioned in step 2 at two levels separately:
 * Cluster level: set them in config.yaml
 * Job level: set them in the code

 

Job level demo:
{code:java}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();

conf.setString("restart-strategy", "exponential-delay");

conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
 "6");
StreamExecutionEnvironment env =  
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(5);

DataGeneratorSource<Long> generatorSource =
new DataGeneratorSource<>(
value -> value,
300,
RateLimiterStrategy.perSecond(10),
Types.LONG);

env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data Generator")
.map(new RichMapFunction<Long, Long>() {
@Override
public Long map(Long value) {
throw new RuntimeException(
"Expected testing exception, subtaskIndex: "
+ getRuntimeContext().getIndexOfThisSubtask());
}
})
.print();

env.execute();
} {code}

> Release Testing Instructions: Verify FLINK-33735 Improve the 
> exponential-delay restart-strategy 
> 
>
> Key: FLINK-34288
> URL: https://issues.apache.org/jira/browse/FLINK-34288
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Rui Fan
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-51-11-256.png, screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34373) Complete work for syntax `DESCRIBE DATABASE databaseName`

2024-02-05 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814636#comment-17814636
 ] 

xuyang commented on FLINK-34373:


I'll try to fix it.

> Complete work for syntax `DESCRIBE DATABASE databaseName`
> -
>
> Key: FLINK-34373
> URL: https://issues.apache.org/jira/browse/FLINK-34373
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.10.0, 1.19.0
>Reporter: xuyang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34382) Release Testing: Verify FLINK-33625 Support System out and err to be redirected to LOG or discarded

2024-02-05 Thread lincoln lee (Jira)
lincoln lee created FLINK-34382:
---

 Summary: Release Testing: Verify FLINK-33625 Support System out 
and err to be redirected to LOG or discarded
 Key: FLINK-34382
 URL: https://issues.apache.org/jira/browse/FLINK-34382
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Configuration
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Rui Fan
 Fix For: 1.19.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34382) Release Testing: Verify FLINK-33625 Support System out and err to be redirected to LOG or discarded

2024-02-05 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-34382:
---

Assignee: (was: Rui Fan)

> Release Testing: Verify FLINK-33625 Support System out and err to be 
> redirected to LOG or discarded
> ---
>
> Key: FLINK-34382
> URL: https://issues.apache.org/jira/browse/FLINK-34382
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34288) Release Testing Instructions: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-05 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-34288:

Attachment: image-2024-02-06-14-51-11-256.png

> Release Testing Instructions: Verify FLINK-33735 Improve the 
> exponential-delay restart-strategy 
> 
>
> Key: FLINK-34288
> URL: https://issues.apache.org/jira/browse/FLINK-34288
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Rui Fan
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-51-11-256.png, screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34308) Release Testing Instructions: Verify FLINK-33625 Support System out and err to be redirected to LOG or discarded

2024-02-05 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee closed FLINK-34308.
---
Resolution: Fixed

Testing tracked by https://issues.apache.org/jira/browse/FLINK-34382

> Release Testing Instructions: Verify FLINK-33625 Support System out and err 
> to be redirected to LOG or discarded
> 
>
> Key: FLINK-34308
> URL: https://issues.apache.org/jira/browse/FLINK-34308
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Rui Fan
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34382) Release Testing: Verify FLINK-33625 Support System out and err to be redirected to LOG or discarded

2024-02-05 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34382:

Description: 
Test suggestion:

 # Prepare a Flink SQL job and a flink datastream job
 ** they can use the print sink or call System.out.println inside of the UDF
 # Add this config to the config.yaml
 ** taskmanager.system-out.mode : LOG
 # Run the job
 # Check whether the print log is redirected to log file
 

SQL demo:

./bin/sql-client.sh

CREATE TABLE orders (
  id   INT,
  app  INT,
  channel  INT,
  user_id  STRING,
  ts   TIMESTAMP(3),
  WATERMARK FOR ts AS ts
) WITH (
   'connector' = 'datagen',
   'rows-per-second'='20',
   'fields.app.min'='1',
   'fields.app.max'='10',
   'fields.channel.min'='21',
   'fields.channel.max'='30',
   'fields.user_id.length'='10'
);

create table print_sink ( 
  id   INT,
  app  INT,
  channel  INT,
  user_id  STRING,
  ts   TIMESTAMP(3)
) with ('connector' = 'print' );

insert into print_sink
select id   ,app   ,channel   ,user_id   ,ts   from orders 
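The configuration change from the test steps above, as a config.yaml fragment (a sketch; the key and the LOG value are taken from the steps, the comment is an assumption about the option's effect):

```yaml
# Redirect System.out / System.err of task managers into the TaskManager log file.
taskmanager.system-out.mode: LOG
```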

> Release Testing: Verify FLINK-33625 Support System out and err to be 
> redirected to LOG or discarded
> ---
>
> Key: FLINK-34382
> URL: https://issues.apache.org/jira/browse/FLINK-34382
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> Test suggestion:
> Prepare a Flink SQL job and a flink datastream job
> they can use the print sink or call System.out.println inside of the UDF
> Add this config to the config.yaml
> taskmanager.system-out.mode : LOG
> Run the job
> Check whether the print log is redirected to log file
>  
> SQL demo:
> ./bin/sql-client.sh
> CREATE TABLE orders (
>   id   INT,
>   app  INT,
>   channel  INT,
>   user_id  STRING,
>   ts   TIMESTAMP(3),
>   WATERMARK FOR ts AS ts
> ) WITH (
>'connector' = 'datagen',
>'rows-per-second'='20',
>'fields.app.min'='1',
>'fields.app.max'='10',
>'fields.channel.min'='21',
>'fields.channel.max'='30',
>'fields.user_id.length'='10'
> );
> create table print_sink ( 
>   id   INT,
>   app  INT,
>   channel  INT,
>   user_id  STRING,
>   ts   TIMESTAMP(3)
> ) with ('connector' = 'print' );
> insert into print_sink
> select id   ,app   ,channel   ,user_id   ,ts   from orders 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34382) Release Testing: Verify FLINK-33625 Support System out and err to be redirected to LOG or discarded

2024-02-05 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34382:

Description: 
Test suggestion:

 # Prepare a Flink SQL job and a flink datastream job
 ** they can use the print sink or call System.out.println inside of the UDF
 # Add this config to the config.yaml
 ** taskmanager.system-out.mode : LOG
 # Run the job
 # Check whether the print log is redirected to log file
 

SQL demo:
{code}
./bin/sql-client.sh

CREATE TABLE orders (
  id   INT,
  app  INT,
  channel  INT,
  user_id  STRING,
  ts   TIMESTAMP(3),
  WATERMARK FOR ts AS ts
) WITH (
   'connector' = 'datagen',
   'rows-per-second'='20',
   'fields.app.min'='1',
   'fields.app.max'='10',
   'fields.channel.min'='21',
   'fields.channel.max'='30',
   'fields.user_id.length'='10'
);

create table print_sink ( 
  id   INT,
  app  INT,
  channel  INT,
  user_id  STRING,
  ts   TIMESTAMP(3)
) with ('connector' = 'print' );

insert into print_sink
select id   ,app   ,channel   ,user_id   ,ts   from orders 
{code}

  was:
Test suggestion:

 # Prepare a Flink SQL job and a flink datastream job
 ** they can use the print sink or call System.out.println inside of the UDF
 # Add this config to the config.yaml
 ** taskmanager.system-out.mode : LOG
 # Run the job
 # Check whether the print log is redirected to log file
 

SQL demo:

./bin/sql-client.sh

CREATE TABLE orders (
  id   INT,
  app  INT,
  channel  INT,
  user_id  STRING,
  ts   TIMESTAMP(3),
  WATERMARK FOR ts AS ts
) WITH (
   'connector' = 'datagen',
   'rows-per-second'='20',
   'fields.app.min'='1',
   'fields.app.max'='10',
   'fields.channel.min'='21',
   'fields.channel.max'='30',
   'fields.user_id.length'='10'
);

create table print_sink ( 
  id   INT,
  app  INT,
  channel  INT,
  user_id  STRING,
  ts   TIMESTAMP(3)
) with ('connector' = 'print' );

insert into print_sink
select id   ,app   ,channel   ,user_id   ,ts   from orders 


> Release Testing: Verify FLINK-33625 Support System out and err to be 
> redirected to LOG or discarded
> ---
>
> Key: FLINK-34382
> URL: https://issues.apache.org/jira/browse/FLINK-34382
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> Test suggestion:
> Prepare a Flink SQL job and a flink datastream job
> they can use the print sink or call System.out.println inside of the UDF
> Add this config to the config.yaml
> taskmanager.system-out.mode : LOG
> Run the job
> Check whether the print log is redirected to log file
>  
> SQL demo:
> {code}
> ./bin/sql-client.sh
> CREATE TABLE orders (
>   id   INT,
>   app  INT,
>   channel  INT,
>   user_id  STRING,
>   ts   TIMESTAMP(3),
>   WATERMARK FOR ts AS ts
> ) WITH (
>'connector' = 'datagen',
>'rows-per-second'='20',
>'fields.app.min'='1',
>'fields.app.max'='10',
>'fields.channel.min'='21',
>'fields.channel.max'='30',
>'fields.user_id.length'='10'
> );
> create table print_sink ( 
>   id   INT,
>   app  INT,
>   channel  INT,
>   user_id  STRING,
>   ts   TIMESTAMP(3)
> ) with ('connector' = 'print' );
> insert into print_sink
> select id   ,app   ,channel   ,user_id   ,ts   from orders 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34310) Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform powerful java profiler

2024-02-05 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814634#comment-17814634
 ] 

lincoln lee commented on FLINK-34310:
-

[~Yu Chen] Could you estimate when the user doc 
(https://issues.apache.org/jira/browse/FLINK-33436) can be finished?

> Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform 
> powerful java profiler
> ---
>
> Key: FLINK-34310
> URL: https://issues.apache.org/jira/browse/FLINK-34310
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST, Runtime / Web Frontend
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Yu Chen
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-09-39-874.png, screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34381) `RelDataType#getFullTypeString` should be used to print in `RelTreeWriterImpl` if `withRowType` is true instead of `Object#toString`

2024-02-05 Thread xuyang (Jira)
xuyang created FLINK-34381:
--

 Summary: `RelDataType#getFullTypeString` should be used to print 
in `RelTreeWriterImpl` if `withRowType` is true instead of `Object#toString`
 Key: FLINK-34381
 URL: https://issues.apache.org/jira/browse/FLINK-34381
 Project: Flink
  Issue Type: Technical Debt
  Components: Table SQL / Planner
Affects Versions: 1.9.0, 1.19.0
Reporter: xuyang


Currently `RelTreeWriterImpl` uses `rel.getRowType.toString` to print the row type.
{code:java}
if (withRowType) {
  s.append(", rowType=[").append(rel.getRowType.toString).append("]")
} {code}
However, looking deeper into the code, we should use 
`rel.getRowType.getFullTypeString` to print the row type, because 
`getFullTypeString` prints richer type information, such as nullability. Taking 
`StructuredRelDataType` as an example, the diff is below:
{code:java}
// source
util.addTableSource[(Long, Int, String)]("MyTable", 'a, 'b, 'c)

// sql
SELECT a, c FROM MyTable

// rel.getRowType.toString
RecordType(BIGINT a, VARCHAR(2147483647) c)

// rel.getRowType.getFullTypeString
RecordType(BIGINT a, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" c) NOT NULL{code}
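A minimal sketch of the proposed change in `RelTreeWriterImpl` (assuming the surrounding code stays as quoted above; only the accessor on `rel.getRowType` changes):

```scala
if (withRowType) {
  // getFullTypeString keeps nullability and character-set details
  // that plain toString drops.
  s.append(", rowType=[").append(rel.getRowType.getFullTypeString).append("]")
}
```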



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34310) Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform powerful java profiler

2024-02-05 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814633#comment-17814633
 ] 

lincoln lee commented on FLINK-34310:
-

[~fanrui] Thanks for reporting! Would you like to take this testing work?

> Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform 
> powerful java profiler
> ---
>
> Key: FLINK-34310
> URL: https://issues.apache.org/jira/browse/FLINK-34310
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST, Runtime / Web Frontend
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Yu Chen
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-09-39-874.png, screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34308) Release Testing Instructions: Verify FLINK-33625 Support System out and err to be redirected to LOG or discarded

2024-02-05 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814627#comment-17814627
 ] 

Rui Fan edited comment on FLINK-34308 at 2/6/24 6:26 AM:
-

Test suggestion:
 # Prepare a Flink SQL job and a flink datastream job
 ** they can use the print sink or call System.out.println inside of the UDF
 # Add this config to the config.yaml
 ** taskmanager.system-out.mode : LOG
 # Run the job
 # Check whether the print log is redirected to log file

 

SQL demo:
{code:java}
./bin/sql-client.sh

CREATE TABLE orders (
  id           INT,
  app          INT,
  channel      INT,
  user_id      STRING,
  ts           TIMESTAMP(3),
  WATERMARK FOR ts AS ts
) WITH (
   'connector' = 'datagen',
   'rows-per-second'='20',
   'fields.app.min'='1',
   'fields.app.max'='10',
   'fields.channel.min'='21',
   'fields.channel.max'='30',
   'fields.user_id.length'='10'
);

create table print_sink ( 
  id           INT,
  app          INT,
  channel      INT,
  user_id      STRING,
  ts           TIMESTAMP(3)
) with ('connector' = 'print' );

insert into print_sink
select id       ,app       ,channel       ,user_id       ,ts   from orders 
{code}


was (Author: fanrui):
Test suggestion:
 # Prepare a Flink SQL job and a flink datastream job
 ** they can use the print sink or call System.out.println inside of the UDF
 # Add this config to the config.yaml
 ** taskmanager.system-out.mode : LOG
 # Run the job
 # Check whether the print log is redirected to log file

 

SQL demo:
{code:java}
CREATE TABLE orders (
  id           INT,
  app          INT,
  channel      INT,
  user_id      STRING,
  ts           TIMESTAMP(3),
  WATERMARK FOR ts AS ts
) WITH (
   'connector' = 'datagen',
   'rows-per-second'='20',
   'fields.app.min'='1',
   'fields.app.max'='10',
   'fields.channel.min'='21',
   'fields.channel.max'='30',
   'fields.user_id.length'='10'
);

create table print_sink ( 
  id           INT,
  app          INT,
  channel      INT,
  user_id      STRING,
  ts           TIMESTAMP(3)
) with ('connector' = 'print' );

insert into print_sink
select id       ,app       ,channel       ,user_id       ,ts   from orders 
{code}

> Release Testing Instructions: Verify FLINK-33625 Support System out and err 
> to be redirected to LOG or discarded
> 
>
> Key: FLINK-34308
> URL: https://issues.apache.org/jira/browse/FLINK-34308
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Rui Fan
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34365) [docs] Delete repeated pages in Chinese Flink website and correct the Paimon url

2024-02-05 Thread Lijie Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijie Wang reassigned FLINK-34365:
--

Assignee: Waterking

> [docs] Delete repeated pages in Chinese Flink website and correct the Paimon 
> url
> 
>
> Key: FLINK-34365
> URL: https://issues.apache.org/jira/browse/FLINK-34365
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Waterking
>Assignee: Waterking
>Priority: Major
>  Labels: pull-request-available
> Attachments: 微信截图_20240205214854.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The "教程" column on the [Flink 
> 中文网|https://flink.apache.org/zh/how-to-contribute/contribute-documentation/] 
> currently has two "[With Paimon(incubating) (formerly Flink Table 
> Store)|https://paimon.apache.org/docs/master/engines/flink/%22];.
> Therefore, I delete one for brevity.
> Also, the current link is wrong and I correct it with this link "[With 
> Paimon(incubating) (formerly Flink Table 
> Store)|https://paimon.apache.org/docs/master/engines/flink];



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34308) Release Testing Instructions: Verify FLINK-33625 Support System out and err to be redirected to LOG or discarded

2024-02-05 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814627#comment-17814627
 ] 

Rui Fan commented on FLINK-34308:
-

Test suggestion:
 # Prepare a Flink SQL job and a flink datastream job
 ** they can use the print sink or call System.out.println inside of the UDF
 # Add this config to the config.yaml
 ** taskmanager.system-out.mode : LOG
 # Run the job
 # Check whether the print log is redirected to log file

 

SQL demo:
{code:java}
CREATE TABLE orders (
  id           INT,
  app          INT,
  channel      INT,
  user_id      STRING,
  ts           TIMESTAMP(3),
  WATERMARK FOR ts AS ts
) WITH (
   'connector' = 'datagen',
   'rows-per-second'='20',
   'fields.app.min'='1',
   'fields.app.max'='10',
   'fields.channel.min'='21',
   'fields.channel.max'='30',
   'fields.user_id.length'='10'
);

create table print_sink ( 
  id           INT,
  app          INT,
  channel      INT,
  user_id      STRING,
  ts           TIMESTAMP(3)
) with ('connector' = 'print' );

insert into print_sink
select id       ,app       ,channel       ,user_id       ,ts   from orders 
{code}

> Release Testing Instructions: Verify FLINK-33625 Support System out and err 
> to be redirected to LOG or discarded
> 
>
> Key: FLINK-34308
> URL: https://issues.apache.org/jira/browse/FLINK-34308
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Rui Fan
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34365) [docs] Delete repeated pages in Chinese Flink website and correct the Paimon url

2024-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34365:
---
Labels: pull-request-available  (was: )

> [docs] Delete repeated pages in Chinese Flink website and correct the Paimon 
> url
> 
>
> Key: FLINK-34365
> URL: https://issues.apache.org/jira/browse/FLINK-34365
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Waterking
>Priority: Major
>  Labels: pull-request-available
> Attachments: 微信截图_20240205214854.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The "教程" column on the [Flink 
> 中文网|https://flink.apache.org/zh/how-to-contribute/contribute-documentation/] 
> currently has two "[With Paimon(incubating) (formerly Flink Table 
> Store)|https://paimon.apache.org/docs/master/engines/flink/%22];.
> Therefore, I delete one for brevity.
> Also, the current link is wrong and I correct it with this link "[With 
> Paimon(incubating) (formerly Flink Table 
> Store)|https://paimon.apache.org/docs/master/engines/flink];



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34365) [docs] Delete repeated pages in Chinese Flink website and correct the Paimon url

2024-02-05 Thread Lijie Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814628#comment-17814628
 ] 

Lijie Wang commented on FLINK-34365:


Assigned to you [~waterking] :)

> [docs] Delete repeated pages in Chinese Flink website and correct the Paimon 
> url
> 
>
> Key: FLINK-34365
> URL: https://issues.apache.org/jira/browse/FLINK-34365
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Waterking
>Assignee: Waterking
>Priority: Major
>  Labels: pull-request-available
> Attachments: 微信截图_20240205214854.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The "教程" column on the [Flink 
> 中文网|https://flink.apache.org/zh/how-to-contribute/contribute-documentation/] 
> currently has two "[With Paimon(incubating) (formerly Flink Table 
> Store)|https://paimon.apache.org/docs/master/engines/flink/%22];.
> Therefore, I delete one for brevity.
> Also, the current link is wrong and I correct it with this link "[With 
> Paimon(incubating) (formerly Flink Table 
> Store)|https://paimon.apache.org/docs/master/engines/flink];



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34365] Delete repeated pages in Chinese Flink website and correct the Paimon url [flink-web]

2024-02-05 Thread via GitHub


wanglijie95 commented on code in PR #713:
URL: https://github.com/apache/flink-web/pull/713#discussion_r1479274520


##
docs/content.zh/getting-started/with-paimon-incubating.md:
##
@@ -1,7 +1,7 @@
 ---
 weight: 5
 title: With Paimon(incubating) (formerly Flink Table Store)
-bookHref: https://paimon.apache.org/docs/master/engines/flink/;
+bookHref: https://paimon.apache.org/docs/master/engines/flink/

Review Comment:
   How about change this to 
`"https://paimon.apache.org/docs/master/engines/flink/"` instead of 
`https://paimon.apache.org/docs/master/engines/flink/` ? 
   which will be same as 
`docs/content/getting-started/with-paimon-incubating.md`



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-34309) Release Testing Instructions: Verify FLINK-33315 Optimize memory usage of large StreamOperator

2024-02-05 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan closed FLINK-34309.
---
Resolution: Fixed

> Release Testing Instructions: Verify FLINK-33315 Optimize memory usage of 
> large StreamOperator
> --
>
> Key: FLINK-34309
> URL: https://issues.apache.org/jira/browse/FLINK-34309
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Rui Fan
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34309) Release Testing Instructions: Verify FLINK-33315 Optimize memory usage of large StreamOperator

2024-02-05 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814624#comment-17814624
 ] 

Rui Fan commented on FLINK-34309:
-

This improvement doesn't need cross team testing. Closing it.

> Release Testing Instructions: Verify FLINK-33315 Optimize memory usage of 
> large StreamOperator
> --
>
> Key: FLINK-34309
> URL: https://issues.apache.org/jira/browse/FLINK-34309
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Rui Fan
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34370) [Umbrella] Complete work and improve about enhanced Flink SQL DDL

2024-02-05 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814623#comment-17814623
 ] 

xuyang commented on FLINK-34370:


[~337361...@qq.com] Thanks for volunteering.

> [Umbrella] Complete work and improve about enhanced Flink SQL DDL 
> --
>
> Key: FLINK-34370
> URL: https://issues.apache.org/jira/browse/FLINK-34370
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.10.0, 1.19.0
>Reporter: xuyang
>Priority: Major
>
> This is an umbrella Jira for completing the work for 
> [FLIP-69]([https://cwiki.apache.org/confluence/display/FLINK/FLIP-69%3A+Flink+SQL+DDL+Enhancement])
>  about enhanced Flink SQL DDL.
> With FLINK-34254 (https://issues.apache.org/jira/browse/FLINK-34254), it seems 
> that this FLIP is not finished yet.
> The matrix is below:
> |DDL|can be used in sql-client |
> |_SHOW CATALOGS_|YES|
> |_DESCRIBE_ _CATALOG catalogName_|{color:#de350b}NO{color}|
> |_USE_ _CATALOG catalogName_|YES|
> |_CREATE DATABASE dataBaseName_|YES|
> |_DROP DATABASE dataBaseName_|YES|
> |_DROP IF EXISTS DATABASE dataBaseName_|YES|
> |_DROP DATABASE dataBaseName RESTRICT_|YES|
> |_DROP DATABASE dataBaseName CASCADE_|YES|
> |_ALTER DATABASE dataBaseName SET ( name=value [, name=value]*)_|YES|
> |_USE dataBaseName_|YES|
> |_SHOW_ _DATABASES_|YES|
> |_DESCRIBE DATABASE dataBaseName_|{color:#de350b}NO{color}|
> |_DESCRIBE EXTENDED DATABASE dataBaseName_|{color:#de350b}NO{color}|
> |_SHOW_ _TABLES_|YES|
> |_DESCRIBE tableName_|YES|
> |_DESCRIBE EXTENDED tableName_|{color:#de350b}NO{color}|
> |_ALTER_ _TABLE tableName RENAME TO newTableName_|YES|
> |_ALTER_ _TABLE tableName SET ( name=value [, name=value]*)_|YES|
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34356) Release Testing: Verify FLINK-33768 Support dynamic source parallelism inference for batch jobs

2024-02-05 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34356:

Summary: Release Testing: Verify FLINK-33768  Support dynamic source 
parallelism inference for batch jobs   (was: Release Testin: Verify FLINK-33768 
 Support dynamic source parallelism inference for batch jobs )

> Release Testing: Verify FLINK-33768  Support dynamic source parallelism 
> inference for batch jobs 
> -
>
> Key: FLINK-34356
> URL: https://issues.apache.org/jira/browse/FLINK-34356
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> This issue aims to verify FLIP-379.
> New Source can implement the interface DynamicParallelismInference to enable 
> dynamic parallelism inference. For detailed information, please refer to the 
> documentation.
> We may need to cover the following two types of test cases:
> Test 1: FileSource has implemented the dynamic source parallelism inference. 
> Test the automatic parallelism inference of FileSource.
> Test 2: Test the dynamic source parallelism inference of a custom Source.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34377) Release Testing: Verify FLINK-33297 Support standard YAML for FLINK configuration

2024-02-05 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34377:

Summary: Release Testing: Verify FLINK-33297 Support standard YAML for 
FLINK configuration  (was: Release Testing : Verify FLINK-33297 Support 
standard YAML for FLINK configuration)

> Release Testing: Verify FLINK-33297 Support standard YAML for FLINK 
> configuration
> -
>
> Key: FLINK-34377
> URL: https://issues.apache.org/jira/browse/FLINK-34377
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Affects Versions: 1.19.0
>Reporter: Junrui Li
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> This issue aims to verify FLIP-366.
> Starting with version 1.19, Flink has officially introduced full support for 
> the standard YAML 1.2 syntax. For detailed information, please refer to the 
> Flink 
> website:https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#flink-configuration-file
> We may need to cover the following two types of test cases:
> Test 1: For newly created jobs, utilize a config.yaml file to set up the 
> Flink cluster. We need to verify that the job runs as expected with this new 
> configuration.
> Test 2: For existing jobs, migrate the legacy flink-conf.yaml to the new 
> config.yaml. Test the job runs just like before post-migration.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34310) Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform powerful java profiler

2024-02-05 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814622#comment-17814622
 ] 

Rui Fan commented on FLINK-34310:
-

When I tested other features locally, I found the Profiler page throws some 
exceptions. I'm not sure whether that's expected.

 

Note: I didn't enable it.

 

!image-2024-02-06-14-09-39-874.png!

> Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform 
> powerful java profiler
> ---
>
> Key: FLINK-34310
> URL: https://issues.apache.org/jira/browse/FLINK-34310
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST, Runtime / Web Frontend
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Yu Chen
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-09-39-874.png, screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34310) Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform powerful java profiler

2024-02-05 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-34310:

Attachment: image-2024-02-06-14-09-39-874.png

> Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform 
> powerful java profiler
> ---
>
> Key: FLINK-34310
> URL: https://issues.apache.org/jira/browse/FLINK-34310
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST, Runtime / Web Frontend
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Yu Chen
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-09-39-874.png, screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34379) table.optimizer.dynamic-filtering.enabled lead to OutOfMemoryError

2024-02-05 Thread zhu (Jira)
zhu created FLINK-34379:
---

 Summary: table.optimizer.dynamic-filtering.enabled lead to 
OutOfMemoryError
 Key: FLINK-34379
 URL: https://issues.apache.org/jira/browse/FLINK-34379
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.18.1, 1.17.2
 Environment: 1.17.1
Reporter: zhu


When using batch computing, I union all about 50 tables and then join the 
result with another table. While compiling the execution plan, an 
OutOfMemoryError: Java heap space is thrown, which was not a problem in 
1.15.2. However, both 1.17.2 and 1.18.1 throw the same error, which causes the 
jobmanager to restart. Currently, it has been found that this is caused by 
table.optimizer.dynamic-filtering.enabled, which defaults to true. When I set 
table.optimizer.dynamic-filtering.enabled to false, the plan can be compiled 
and executed normally.
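
A minimal sketch of the workaround described above, as a configuration
fragment (the option name is taken from the report; whether it fully avoids
the planner OOM in your setup is an assumption to verify):

```yaml
# Workaround from the report: disable dynamic filtering for this batch job.
# The option defaults to true in 1.17+; setting it to false avoided the
# planner OutOfMemoryError in the reported case.
table.optimizer.dynamic-filtering.enabled: false
```

The same option can also be set programmatically on the `Configuration`
passed to `EnvironmentSettings`, as in the code snippet below.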

code

TableEnvironment.create(EnvironmentSettings.newInstance()
.withConfiguration(configuration)
.inBatchMode().build())

sql=select att,filename,'table0' as mo_name from table0 UNION All select 
att,filename,'table1' as mo_name from table1 UNION All select 
att,filename,'table2' as mo_name from table2 UNION All select 
att,filename,'table3' as mo_name from table3 UNION All select 
att,filename,'table4' as mo_name from table4 UNION All select 
att,filename,'table5' as mo_name from table5 UNION All select 
att,filename,'table6' as mo_name from table6 UNION All select 
att,filename,'table7' as mo_name from table7 UNION All select 
att,filename,'table8' as mo_name from table8 UNION All select 
att,filename,'table9' as mo_name from table9 UNION All select 
att,filename,'table10' as mo_name from table10 UNION All select 
att,filename,'table11' as mo_name from table11 UNION All select 
att,filename,'table12' as mo_name from table12 UNION All select 
att,filename,'table13' as mo_name from table13 UNION All select 
att,filename,'table14' as mo_name from table14 UNION All select 
att,filename,'table15' as mo_name from table15 UNION All select 
att,filename,'table16' as mo_name from table16 UNION All select 
att,filename,'table17' as mo_name from table17 UNION All select 
att,filename,'table18' as mo_name from table18 UNION All select 
att,filename,'table19' as mo_name from table19 UNION All select 
att,filename,'table20' as mo_name from table20 UNION All select 
att,filename,'table21' as mo_name from table21 UNION All select 
att,filename,'table22' as mo_name from table22 UNION All select 
att,filename,'table23' as mo_name from table23 UNION All select 
att,filename,'table24' as mo_name from table24 UNION All select 
att,filename,'table25' as mo_name from table25 UNION All select 
att,filename,'table26' as mo_name from table26 UNION All select 
att,filename,'table27' as mo_name from table27 UNION All select 
att,filename,'table28' as mo_name from table28 UNION All select 
att,filename,'table29' as mo_name from table29 UNION All select 
att,filename,'table30' as mo_name from table30 UNION All select 
att,filename,'table31' as mo_name from table31 UNION All select 
att,filename,'table32' as mo_name from table32 UNION All select 
att,filename,'table33' as mo_name from table33 UNION All select 
att,filename,'table34' as mo_name from table34 UNION All select 
att,filename,'table35' as mo_name from table35 UNION All select 
att,filename,'table36' as mo_name from table36 UNION All select 
att,filename,'table37' as mo_name from table37 UNION All select 
att,filename,'table38' as mo_name from table38 UNION All select 
att,filename,'table39' as mo_name from table39 UNION All select 
att,filename,'table40' as mo_name from table40 UNION All select 
att,filename,'table41' as mo_name from table41 UNION All select 
att,filename,'table42' as mo_name from table42 UNION All select 
att,filename,'table43' as mo_name from table43 UNION All select 
att,filename,'table44' as mo_name from table44 UNION All select 
att,filename,'table45' as mo_name from table45 UNION All select 
att,filename,'table46' as mo_name from table46 UNION All select 
att,filename,'table47' as mo_name from table47 UNION All select 
att,filename,'table48' as mo_name from table48 UNION All select 
att,filename,'table49' as mo_name from table49 UNION All select 
att,filename,'table50' as mo_name from table50 UNION All select 
att,filename,'table51' as mo_name from table51 UNION All select 
att,filename,'table52' as mo_name from table52 UNION All select 
att,filename,'table53' as mo_name from table53;

Table allUnionTable = tEnv.sqlQuery(sql);
Table res =
allUnionTable.join(
allUnionTable
.groupBy(col("att"))
.select(col("att"), col("att").count().as(COUNT_NAME))
.filter(col(COUNT_NAME).isGreater(1))
.select(col(key).as("l_key")),
col(key).isEqual(col("l_key"))
);

res.printExplain(ExplainDetail.JSON_EXECUTION_PLAN);

 

 

Exception trace

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:3181)
    at 

[jira] [Assigned] (FLINK-34377) Release Testing : Verify FLINK-33297 Support standard YAML for FLINK configuration

2024-02-05 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-34377:
---

Assignee: (was: Junrui Li)

> Release Testing : Verify FLINK-33297 Support standard YAML for FLINK 
> configuration
> --
>
> Key: FLINK-34377
> URL: https://issues.apache.org/jira/browse/FLINK-34377
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Affects Versions: 1.19.0
>Reporter: Junrui Li
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> This issue aims to verify FLIP-366.
> Starting with version 1.19, Flink has officially introduced full support for 
> the standard YAML 1.2 syntax. For detailed information, please refer to the 
> Flink website:
> https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#flink-configuration-file
> We may need to cover the following two types of test cases:
> Test 1: For newly created jobs, utilize a config.yaml file to set up the 
> Flink cluster. We need to verify that the job runs as expected with this new 
> configuration.
> Test 2: For existing jobs, migrate the legacy flink-conf.yaml to the new 
> config.yaml. Test that the job runs just as it did before, post-migration.
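
To make Test 2 concrete, a hedged sketch of the migration (the option names
are common Flink settings used for illustration; the nested form assumes the
new standard-YAML parser described in the linked docs):

```yaml
# Legacy flink-conf.yaml: flat, dot-separated keys only.
taskmanager.numberOfTaskSlots: 2
parallelism.default: 4

# New config.yaml: standard YAML 1.2, so nested mappings are also allowed.
taskmanager:
  numberOfTaskSlots: 2
parallelism:
  default: 4
```

The flat key style remains valid in config.yaml, so an existing
flink-conf.yaml can usually be renamed as a first migration step.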



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33634] Add Conditions to Flink CRD's Status field [flink-kubernetes-operator]

2024-02-05 Thread via GitHub


lajith2006 commented on PR #749:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/749#issuecomment-1928846602

   @gyfora yes, I have a JIRA account with which I can log in to 
https://issues.apache.org/jira/projects/FLINK/.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-34380) Strange RowKind and records about intermediate output when using minibatch join

2024-02-05 Thread xuyang (Jira)
xuyang created FLINK-34380:
--

 Summary: Strange RowKind and records about intermediate output 
when using minibatch join
 Key: FLINK-34380
 URL: https://issues.apache.org/jira/browse/FLINK-34380
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.19.0
Reporter: xuyang
 Fix For: 1.19.0


{code:java}
// Add it in CalcItCase

@Test
  def test(): Unit = {
env.setParallelism(1)
val rows = Seq(
  changelogRow("+I", java.lang.Integer.valueOf(1), "1"),
  changelogRow("-U", java.lang.Integer.valueOf(1), "1"),
  changelogRow("+U", java.lang.Integer.valueOf(1), "99"),
  changelogRow("-D", java.lang.Integer.valueOf(1), "99")
)
val dataId = TestValuesTableFactory.registerData(rows)

val ddl =
  s"""
 |CREATE TABLE t1 (
 |  a int,
 |  b string
 |) WITH (
 |  'connector' = 'values',
 |  'data-id' = '$dataId',
 |  'bounded' = 'false'
 |)
   """.stripMargin
tEnv.executeSql(ddl)

val ddl2 =
  s"""
 |CREATE TABLE t2 (
 |  a int,
 |  b string
 |) WITH (
 |  'connector' = 'values',
 |  'data-id' = '$dataId',
 |  'bounded' = 'false'
 |)
   """.stripMargin
tEnv.executeSql(ddl2)

tEnv.getConfig.getConfiguration
  .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, 
Boolean.box(true))
tEnv.getConfig.getConfiguration
  .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, 
Duration.ofSeconds(5))
tEnv.getConfig.getConfiguration
  .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, Long.box(3L))

println(tEnv.sqlQuery("SELECT * from t1 join t2 on t1.a = t2.a").explain())

tEnv.executeSql("SELECT * from t1 join t2 on t1.a = t2.a").print()
  } {code}
Output:
{code:java}
++-+-+-+-+
| op |           a |               b |          a0 |      b0 |
++-+-+-+-+
| +U |           1 |               1 |           1 |      99 |
| +U |           1 |              99 |           1 |      99 |
| -U |           1 |               1 |           1 |      99 |
| -D |           1 |              99 |           1 |      99 |
++-+-+-+-+{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34371][runtime] Support EndOfStreamTrigger and isOutputOnlyAfterEndOfStream operator attribute to optimize task deployment [flink]

2024-02-05 Thread via GitHub


mohitjain2504 commented on code in PR #24272:
URL: https://github.com/apache/flink/pull/24272#discussion_r1479254174


##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorDeActivator.java:
##
@@ -31,19 +35,31 @@
 public class CheckpointCoordinatorDeActivator implements JobStatusListener {
 
 private final CheckpointCoordinator coordinator;
+private final Map tasks;
 
-public CheckpointCoordinatorDeActivator(CheckpointCoordinator coordinator) 
{
+public CheckpointCoordinatorDeActivator(
+CheckpointCoordinator coordinator, Map tasks) {
 this.coordinator = checkNotNull(coordinator);
+this.tasks = checkNotNull(tasks);
 }
 
 @Override
 public void jobStatusChanges(JobID jobId, JobStatus newJobStatus, long 
timestamp) {
-if (newJobStatus == JobStatus.RUNNING) {
-// start the checkpoint scheduler
+if (newJobStatus == JobStatus.RUNNING && allTasksOutputNonBlocking()) {
+// start the checkpoint scheduler if there is no blocking edge
 coordinator.startCheckpointScheduler();
 } else {
 // anything else should stop the trigger for now
 coordinator.stopCheckpointScheduler();
 }
 }
+
+private boolean allTasksOutputNonBlocking() {
+for (ExecutionJobVertex vertex : tasks.values()) {

Review Comment:
   You could also write it in this manner:
   ```
   return tasks.values().stream()
.noneMatch(vertex -> 
vertex.getJobVertex().isAnyOutputBlocking());
   ```
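   
   A stdlib-only sketch showing that the loop and the suggested `noneMatch`
rewrite are equivalent; `Vertex` here is a hypothetical stand-in for
`ExecutionJobVertex`, not the Flink type:

```java
import java.util.List;

// Demonstrates the equivalence of the explicit loop and the stream rewrite.
class BlockingCheckDemo {
    // Hypothetical stand-in for ExecutionJobVertex#getJobVertex().isAnyOutputBlocking()
    record Vertex(boolean anyOutputBlocking) {}

    static boolean allOutputsNonBlockingLoop(List<Vertex> vertices) {
        for (Vertex v : vertices) {
            if (v.anyOutputBlocking()) {
                return false; // one blocking output is enough to answer "no"
            }
        }
        return true;
    }

    static boolean allOutputsNonBlockingStream(List<Vertex> vertices) {
        // noneMatch short-circuits on the first blocking vertex, like the loop.
        return vertices.stream().noneMatch(Vertex::anyOutputBlocking);
    }
}
```

Both forms short-circuit, so the rewrite is purely stylistic.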



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-34349) Release Testing: Verify FLINK-34219 Introduce a new join operator to support minibatch

2024-02-05 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee closed FLINK-34349.
---
Resolution: Fixed

[~xuyangzhong] Thanks for your testing work and the quick fix! 

> Release Testing: Verify FLINK-34219 Introduce a new join operator to support 
> minibatch
> --
>
> Key: FLINK-34349
> URL: https://issues.apache.org/jira/browse/FLINK-34349
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Affects Versions: 1.19.0
>Reporter: Shuai Xu
>Assignee: xuyang
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> Minibatch join is ready. Users could improve performance in regular stream 
> join scenarios. 
> Someone can verify this feature by following the 
> [doc](https://github.com/apache/flink/pull/24240), although it is still 
> being reviewed.
> If someone finds bugs in this feature, please open a Jira linked to this 
> one to report them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34349) Release Testing: Verify FLINK-34219 Introduce a new join operator to support minibatch

2024-02-05 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814612#comment-17814612
 ] 

xuyang commented on FLINK-34349:


Hi, I have finished this verification. Some unexpected behaviors, such as bugs 
and technical doubts, have been linked to this Jira.

> Release Testing: Verify FLINK-34219 Introduce a new join operator to support 
> minibatch
> --
>
> Key: FLINK-34349
> URL: https://issues.apache.org/jira/browse/FLINK-34349
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Affects Versions: 1.19.0
>Reporter: Shuai Xu
>Assignee: xuyang
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> Minibatch join is ready. Users could improve performance in regular stream 
> join scenarios. 
> Someone can verify this feature by following the 
> [doc](https://github.com/apache/flink/pull/24240), although it is still 
> being reviewed.
> If someone finds bugs in this feature, please open a Jira linked to this 
> one to report them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34294) Release Testing Instructions: Verify FLINK-33297 Support standard YAML for FLINK configuration

2024-02-05 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee closed FLINK-34294.
---
Resolution: Fixed

Testing tracked by https://issues.apache.org/jira/browse/FLINK-34377

> Release Testing Instructions: Verify FLINK-33297 Support standard YAML for 
> FLINK configuration
> --
>
> Key: FLINK-34294
> URL: https://issues.apache.org/jira/browse/FLINK-34294
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Junrui Li
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34336][test] Fix the bug that AutoRescalingITCase may hang sometimes [flink]

2024-02-05 Thread via GitHub


1996fanrui commented on code in PR #24248:
URL: https://github.com/apache/flink/pull/24248#discussion_r1479228550


##
flink-tests/src/test/java/org/apache/flink/test/checkpointing/AutoRescalingITCase.java:
##
@@ -337,7 +337,7 @@ public void 
testCheckpointRescalingNonPartitionedStateCausesException() throws E
 
 restClusterClient.updateJobResourceRequirements(jobID, 
builder.build()).join();
 
-waitForRunningTasks(restClusterClient, jobID, parallelism2);
+waitForRunningTasks(restClusterClient, jobID, 2 * parallelism2);

Review Comment:
   The last line calls the rescale API. IIUC, `waitForRunningTasks` should wait 
for all tasks of the rescaled job. 
   The task number after rescaling should be `2 * parallelism2`.
   
   That's why I think it's an original bug, and it's out-of-scope for FLINK-34336.
   
   Other tests are similar to this one, but they wait for all tasks of the 
rescaled job. 



##
flink-tests/src/test/java/org/apache/flink/test/checkpointing/AutoRescalingITCase.java:
##
@@ -337,7 +337,7 @@ public void 
testCheckpointRescalingNonPartitionedStateCausesException() throws E
 
 restClusterClient.updateJobResourceRequirements(jobID, 
builder.build()).join();
 
-waitForRunningTasks(restClusterClient, jobID, parallelism2);
+waitForRunningTasks(restClusterClient, jobID, 2 * parallelism2);

Review Comment:
   The last line calls the rescale API. IIUC, `waitForRunningTasks` should wait 
for all tasks of the rescaled job. 
   The task number after rescaling should be `2 * parallelism2`.
   
   That's why I think it's an original bug, and it's out-of-scope for 
FLINK-34336.
   
   Other tests are similar to this one, but they wait for all tasks of the 
rescaled job. 
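
   To make the counting explicit: the number to pass to `waitForRunningTasks`
is the sum of every vertex's parallelism after rescaling (here
`2 * parallelism2`, because both vertices run at `parallelism2`). A
stdlib-only sketch with hypothetical vertex names, not Flink identifiers:

```java
import java.util.Map;

// Total RUNNING tasks to wait for = sum of each job vertex's new parallelism.
class TaskCount {
    static int expectedRunningTasks(Map<String, Integer> vertexParallelism) {
        return vertexParallelism.values().stream()
                .mapToInt(Integer::intValue)
                .sum();
    }
}
```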



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34336][test] Fix the bug that AutoRescalingITCase may hang sometimes [flink]

2024-02-05 Thread via GitHub


1996fanrui commented on code in PR #24248:
URL: https://github.com/apache/flink/pull/24248#discussion_r1479230252


##
flink-tests/src/test/java/org/apache/flink/test/checkpointing/AutoRescalingITCase.java:
##
@@ -427,7 +427,8 @@ public void 
testCheckpointRescalingWithKeyedAndNonPartitionedState() throws Exce
 
 restClusterClient.updateJobResourceRequirements(jobID, 
builder.build()).join();
 
-waitForRunningTasks(restClusterClient, jobID, parallelism2);
+// Source is parallelism, the flatMapper & Sink is parallelism2
+waitForRunningTasks(restClusterClient, jobID, parallelism + 
parallelism2);

Review Comment:
   > Can't we also reduce the cooldown phase to make the test complete faster 
similarly to what you did in 
[1996fanrui@ffd713e](https://github.com/1996fanrui/flink/commit/ffd713e24d37db2c103e4cd4361d0cd916d0d2f6)?
 
   
   I have already reduced it in this PR: 
https://github.com/apache/flink/pull/24246; that's why I didn't do it here.
   
   > I tried it locally but it didn't help, though.
   
   Sorry, I don't understand. Do you mean that the test duration is still long, 
or that the test still fails?
   
   - Before reducing the cooldown time, most tests took more than 30s.
   - After reducing the cooldown time, most tests took less than 10s.
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34336][test] Fix the bug that AutoRescalingITCase may hang sometimes [flink]

2024-02-05 Thread via GitHub


1996fanrui commented on code in PR #24248:
URL: https://github.com/apache/flink/pull/24248#discussion_r1479228550


##
flink-tests/src/test/java/org/apache/flink/test/checkpointing/AutoRescalingITCase.java:
##
@@ -337,7 +337,7 @@ public void 
testCheckpointRescalingNonPartitionedStateCausesException() throws E
 
 restClusterClient.updateJobResourceRequirements(jobID, 
builder.build()).join();
 
-waitForRunningTasks(restClusterClient, jobID, parallelism2);
+waitForRunningTasks(restClusterClient, jobID, 2 * parallelism2);

Review Comment:
   The last line calls the rescale API. IIUC, `waitForRunningTasks` should wait 
for all tasks of the rescaled job. 
   The task number after rescaling should be `2 * parallelism2`.
   
   Other tests are similar to this one, but they wait for all tasks of the 
rescaled job. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34378) Minibatch join disrupted the original order of input records

2024-02-05 Thread xuyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyang updated FLINK-34378:
---
Affects Version/s: 1.19.0

> Minibatch join disrupted the original order of input records
> 
>
> Key: FLINK-34378
> URL: https://issues.apache.org/jira/browse/FLINK-34378
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.19.0
>Reporter: xuyang
>Priority: Major
>
> I'm not sure if it's a bug. The following case can reproduce this situation.
> {code:java}
> // add it in CalcITCase
> @Test
> def test(): Unit = {
>   env.setParallelism(1)
>   val rows = Seq(
> row(1, "1"),
> row(2, "2"),
> row(3, "3"),
> row(4, "4"),
> row(5, "5"),
> row(6, "6"),
> row(7, "7"),
> row(8, "8"))
>   val dataId = TestValuesTableFactory.registerData(rows)
>   val ddl =
> s"""
>|CREATE TABLE t1 (
>|  a int,
>|  b string
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$dataId',
>|  'bounded' = 'false'
>|)
>  """.stripMargin
>   tEnv.executeSql(ddl)
>   val ddl2 =
> s"""
>|CREATE TABLE t2 (
>|  a int,
>|  b string
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$dataId',
>|  'bounded' = 'false'
>|)
>  """.stripMargin
>   tEnv.executeSql(ddl2)
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, 
> Boolean.box(true))
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, 
> Duration.ofSeconds(5))
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, Long.box(20L))
>   println(tEnv.sqlQuery("SELECT * from t1 join t2 on t1.a = t2.a").explain())
>   tEnv.executeSql("SELECT * from t1 join t2 on t1.a = t2.a").print()
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34378) Minibatch join disrupted the original order of input records

2024-02-05 Thread xuyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyang updated FLINK-34378:
---
Fix Version/s: 1.19.0

> Minibatch join disrupted the original order of input records
> 
>
> Key: FLINK-34378
> URL: https://issues.apache.org/jira/browse/FLINK-34378
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.19.0
>Reporter: xuyang
>Priority: Major
> Fix For: 1.19.0
>
>
> I'm not sure if it's a bug. The following case can reproduce this situation.
> {code:java}
> // add it in CalcITCase
> @Test
> def test(): Unit = {
>   env.setParallelism(1)
>   val rows = Seq(
> row(1, "1"),
> row(2, "2"),
> row(3, "3"),
> row(4, "4"),
> row(5, "5"),
> row(6, "6"),
> row(7, "7"),
> row(8, "8"))
>   val dataId = TestValuesTableFactory.registerData(rows)
>   val ddl =
> s"""
>|CREATE TABLE t1 (
>|  a int,
>|  b string
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$dataId',
>|  'bounded' = 'false'
>|)
>  """.stripMargin
>   tEnv.executeSql(ddl)
>   val ddl2 =
> s"""
>|CREATE TABLE t2 (
>|  a int,
>|  b string
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$dataId',
>|  'bounded' = 'false'
>|)
>  """.stripMargin
>   tEnv.executeSql(ddl2)
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, 
> Boolean.box(true))
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, 
> Duration.ofSeconds(5))
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, Long.box(20L))
>   println(tEnv.sqlQuery("SELECT * from t1 join t2 on t1.a = t2.a").explain())
>   tEnv.executeSql("SELECT * from t1 join t2 on t1.a = t2.a").print()
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34378) Minibatch join disrupted the original order of input records

2024-02-05 Thread xuyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyang updated FLINK-34378:
---
Component/s: Table SQL / Runtime

> Minibatch join disrupted the original order of input records
> 
>
> Key: FLINK-34378
> URL: https://issues.apache.org/jira/browse/FLINK-34378
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Reporter: xuyang
>Priority: Major
>
> I'm not sure if it's a bug. The following case can reproduce this situation.
> {code:java}
> // add it in CalcITCase
> @Test
> def test(): Unit = {
>   env.setParallelism(1)
>   val rows = Seq(
> row(1, "1"),
> row(2, "2"),
> row(3, "3"),
> row(4, "4"),
> row(5, "5"),
> row(6, "6"),
> row(7, "7"),
> row(8, "8"))
>   val dataId = TestValuesTableFactory.registerData(rows)
>   val ddl =
> s"""
>|CREATE TABLE t1 (
>|  a int,
>|  b string
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$dataId',
>|  'bounded' = 'false'
>|)
>  """.stripMargin
>   tEnv.executeSql(ddl)
>   val ddl2 =
> s"""
>|CREATE TABLE t2 (
>|  a int,
>|  b string
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$dataId',
>|  'bounded' = 'false'
>|)
>  """.stripMargin
>   tEnv.executeSql(ddl2)
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, 
> Boolean.box(true))
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, 
> Duration.ofSeconds(5))
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, Long.box(20L))
>   println(tEnv.sqlQuery("SELECT * from t1 join t2 on t1.a = t2.a").explain())
>   tEnv.executeSql("SELECT * from t1 join t2 on t1.a = t2.a").print()
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34378) Minibatch join disrupted the original order of input records

2024-02-05 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814590#comment-17814590
 ] 

xuyang commented on FLINK-34378:


cc [~xu_shuai_] 

> Minibatch join disrupted the original order of input records
> 
>
> Key: FLINK-34378
> URL: https://issues.apache.org/jira/browse/FLINK-34378
> Project: Flink
>  Issue Type: Technical Debt
>Reporter: xuyang
>Priority: Major
>
> I'm not sure if it's a bug; the following case can reproduce it.
> {code:java}
> // add it in CalcITCase
> @Test
> def test(): Unit = {
>   env.setParallelism(1)
>   val rows = Seq(
> row(1, "1"),
> row(2, "2"),
> row(3, "3"),
> row(4, "4"),
> row(5, "5"),
> row(6, "6"),
> row(7, "7"),
> row(8, "8"))
>   val dataId = TestValuesTableFactory.registerData(rows)
>   val ddl =
> s"""
>|CREATE TABLE t1 (
>|  a int,
>|  b string
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$dataId',
>|  'bounded' = 'false'
>|)
>  """.stripMargin
>   tEnv.executeSql(ddl)
>   val ddl2 =
> s"""
>|CREATE TABLE t2 (
>|  a int,
>|  b string
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$dataId',
>|  'bounded' = 'false'
>|)
>  """.stripMargin
>   tEnv.executeSql(ddl2)
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, 
> Boolean.box(true))
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, 
> Duration.ofSeconds(5))
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, Long.box(20L))
>   println(tEnv.sqlQuery("SELECT * from t1 join t2 on t1.a = t2.a").explain())
>   tEnv.executeSql("SELECT * from t1 join t2 on t1.a = t2.a").print()
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34378) Minibatch join disrupted the original order of input records

2024-02-05 Thread xuyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyang updated FLINK-34378:
---
Description: 
I'm not sure if it's a bug. The following case can reproduce this situation.
{code:java}
// add it in CalcITCase
@Test
def test(): Unit = {
  env.setParallelism(1)
  val rows = Seq(
row(1, "1"),
row(2, "2"),
row(3, "3"),
row(4, "4"),
row(5, "5"),
row(6, "6"),
row(7, "7"),
row(8, "8"))
  val dataId = TestValuesTableFactory.registerData(rows)

  val ddl =
s"""
   |CREATE TABLE t1 (
   |  a int,
   |  b string
   |) WITH (
   |  'connector' = 'values',
   |  'data-id' = '$dataId',
   |  'bounded' = 'false'
   |)
 """.stripMargin
  tEnv.executeSql(ddl)

  val ddl2 =
s"""
   |CREATE TABLE t2 (
   |  a int,
   |  b string
   |) WITH (
   |  'connector' = 'values',
   |  'data-id' = '$dataId',
   |  'bounded' = 'false'
   |)
 """.stripMargin
  tEnv.executeSql(ddl2)

  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, Boolean.box(true))
  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, 
Duration.ofSeconds(5))
  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, Long.box(20L))

  println(tEnv.sqlQuery("SELECT * from t1 join t2 on t1.a = t2.a").explain())

  tEnv.executeSql("SELECT * from t1 join t2 on t1.a = t2.a").print()
}{code}

  was:
I'm not sure if it's a bug, the following case can re-produce this bug.
{code:java}
// add it in CalcITCase
@Test
def test(): Unit = {
  env.setParallelism(1)
  val rows = Seq(
row(1, "1"),
row(2, "2"),
row(3, "3"),
row(4, "4"),
row(5, "5"),
row(6, "6"),
row(7, "7"),
row(8, "8"))
  val dataId = TestValuesTableFactory.registerData(rows)

  val ddl =
s"""
   |CREATE TABLE t1 (
   |  a int,
   |  b string
   |) WITH (
   |  'connector' = 'values',
   |  'data-id' = '$dataId',
   |  'bounded' = 'false'
   |)
 """.stripMargin
  tEnv.executeSql(ddl)

  val ddl2 =
s"""
   |CREATE TABLE t2 (
   |  a int,
   |  b string
   |) WITH (
   |  'connector' = 'values',
   |  'data-id' = '$dataId',
   |  'bounded' = 'false'
   |)
 """.stripMargin
  tEnv.executeSql(ddl2)

  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, Boolean.box(true))
  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, 
Duration.ofSeconds(5))
  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, Long.box(20L))

  println(tEnv.sqlQuery("SELECT * from t1 join t2 on t1.a = t2.a").explain())

  tEnv.executeSql("SELECT * from t1 join t2 on t1.a = t2.a").print()
}{code}


> Minibatch join disrupted the original order of input records
> 
>
> Key: FLINK-34378
> URL: https://issues.apache.org/jira/browse/FLINK-34378
> Project: Flink
>  Issue Type: Technical Debt
>Reporter: xuyang
>Priority: Major
>
> I'm not sure if it's a bug. The following case can reproduce this situation.
> {code:java}
> // add it in CalcITCase
> @Test
> def test(): Unit = {
>   env.setParallelism(1)
>   val rows = Seq(
> row(1, "1"),
> row(2, "2"),
> row(3, "3"),
> row(4, "4"),
> row(5, "5"),
> row(6, "6"),
> row(7, "7"),
> row(8, "8"))
>   val dataId = TestValuesTableFactory.registerData(rows)
>   val ddl =
> s"""
>|CREATE TABLE t1 (
>|  a int,
>|  b string
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$dataId',
>|  'bounded' = 'false'
>|)
>  """.stripMargin
>   tEnv.executeSql(ddl)
>   val ddl2 =
> s"""
>|CREATE TABLE t2 (
>|  a int,
>|  b string
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$dataId',
>|  'bounded' = 'false'
>|)
>  """.stripMargin
>   tEnv.executeSql(ddl2)
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, 
> Boolean.box(true))
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, 
> Duration.ofSeconds(5))
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, Long.box(20L))
>   println(tEnv.sqlQuery("SELECT * from t1 join t2 on t1.a = t2.a").explain())
>   tEnv.executeSql("SELECT * from t1 join t2 on t1.a = t2.a").print()
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34378) Minibatch join disrupted the original order of input records

2024-02-05 Thread xuyang (Jira)
xuyang created FLINK-34378:
--

 Summary: Minibatch join disrupted the original order of input 
records
 Key: FLINK-34378
 URL: https://issues.apache.org/jira/browse/FLINK-34378
 Project: Flink
  Issue Type: Technical Debt
Reporter: xuyang


I'm not sure if it's a bug; the following case can reproduce it.
{code:java}
// add it in CalcITCase
@Test
def test(): Unit = {
  env.setParallelism(1)
  val rows = Seq(
row(1, "1"),
row(2, "2"),
row(3, "3"),
row(4, "4"),
row(5, "5"),
row(6, "6"),
row(7, "7"),
row(8, "8"))
  val dataId = TestValuesTableFactory.registerData(rows)

  val ddl =
s"""
   |CREATE TABLE t1 (
   |  a int,
   |  b string
   |) WITH (
   |  'connector' = 'values',
   |  'data-id' = '$dataId',
   |  'bounded' = 'false'
   |)
 """.stripMargin
  tEnv.executeSql(ddl)

  val ddl2 =
s"""
   |CREATE TABLE t2 (
   |  a int,
   |  b string
   |) WITH (
   |  'connector' = 'values',
   |  'data-id' = '$dataId',
   |  'bounded' = 'false'
   |)
 """.stripMargin
  tEnv.executeSql(ddl2)

  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, Boolean.box(true))
  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, 
Duration.ofSeconds(5))
  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, Long.box(20L))

  println(tEnv.sqlQuery("SELECT * from t1 join t2 on t1.a = t2.a").explain())

  tEnv.executeSql("SELECT * from t1 join t2 on t1.a = t2.a").print()
}{code}
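The reordering can be reasoned about without Flink internals: once a mini-batch operator collects incoming records into a key-indexed buffer and flushes one key group at a time, the emit order follows the key grouping rather than the arrival order. A minimal, hypothetical Java sketch of that buffering pattern (not Flink's actual operator code; the class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class MiniBatchOrderSketch {
    // Hypothetical mini-batch buffer: records are grouped by join key,
    // then flushed one key group at a time instead of in arrival order.
    static List<String> flushByKey(List<Map.Entry<Integer, String>> arrivals) {
        // TreeMap groups by key (sorted here for determinism); any
        // key-indexed buffer loses the original arrival order the same way.
        Map<Integer, List<String>> buffer = new TreeMap<>();
        for (Map.Entry<Integer, String> rec : arrivals) {
            buffer.computeIfAbsent(rec.getKey(), k -> new ArrayList<>())
                  .add(rec.getValue());
        }
        List<String> emitted = new ArrayList<>();
        for (List<String> group : buffer.values()) {
            emitted.addAll(group); // emit order follows the key grouping
        }
        return emitted;
    }
}
```

With arrivals (3,"c"), (1,"a"), (2,"b"), flushing by key emits a, b, c, which no longer matches the order the records came in.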





Re: [PR] [FLINK-33739][doc] Document FLIP-364: Improve the exponential-delay restart-strategy [flink]

2024-02-05 Thread via GitHub


zhuzhurk commented on code in PR #24263:
URL: https://github.com/apache/flink/pull/24263#discussion_r1479181558


##
docs/content.zh/docs/ops/state/task_failure_recovery.md:
##
@@ -339,7 +339,7 @@ env = 
StreamExecutionEnvironment.get_execution_environment(config)
 ### 默认重启策略
 
 当 Checkpoint 开启且用户没有指定重启策略时,[`指数延迟重启策略`]({{< ref 
"docs/ops/state/task_failure_recovery" >}}#exponential-delay-restart-strategy) 
-是当前默认的重启策略。我们强烈推荐 Flink 用户使用指数延迟重启策略并将其设置为默认重启策略是因为它相比其他重启策略可以做到: 
+是当前默认的重启策略。我们强烈推荐 Flink 用户使用指数延迟重启策略因为使用这个策略时

Review Comment:
   因为使用这个策略时 -> ,因为使用这个策略时,



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34374) Complete work for syntax `DESCRIBE EXTENDED DATABASE databaseName`

2024-02-05 Thread Yunhong Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814587#comment-17814587
 ] 

Yunhong Zheng commented on FLINK-34374:
---

Hi, [~xuyangzhong]. I want to take this issue; could you assign it to me? Thanks.

> Complete work for syntax `DESCRIBE EXTENDED DATABASE databaseName`
> --
>
> Key: FLINK-34374
> URL: https://issues.apache.org/jira/browse/FLINK-34374
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.10.0, 1.19.0
>Reporter: xuyang
>Priority: Major
>






[jira] [Commented] (FLINK-34375) Complete work for syntax `DESCRIBE EXTENDED tableName`

2024-02-05 Thread Yunhong Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814586#comment-17814586
 ] 

Yunhong Zheng commented on FLINK-34375:
---

Hi, [~xuyangzhong]. I worked on the DESCRIBE EXTENDED syntax last year, so 
I'll take this issue and continue the work. Thank you.

> Complete work for syntax `DESCRIBE EXTENDED tableName`
> --
>
> Key: FLINK-34375
> URL: https://issues.apache.org/jira/browse/FLINK-34375
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.10.0, 1.19.0
>Reporter: xuyang
>Priority: Major
>






[jira] [Commented] (FLINK-34372) Complete work for syntax `DESCRIBE CATALOG catalogName`

2024-02-05 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814584#comment-17814584
 ] 

xuyang commented on FLINK-34372:


I'll try to do it.

> Complete work for syntax `DESCRIBE CATALOG catalogName`
> ---
>
> Key: FLINK-34372
> URL: https://issues.apache.org/jira/browse/FLINK-34372
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.10.0, 1.19.0
>Reporter: xuyang
>Priority: Major
>






[jira] [Updated] (FLINK-34377) Release Testing : Verify FLINK-33297 Support standard YAML for FLINK configuration

2024-02-05 Thread Junrui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junrui Li updated FLINK-34377:
--
Description: 
This issue aims to verify FLIP-366.

Starting with version 1.19, Flink has officially introduced full support for 
the standard YAML 1.2 syntax. For detailed information, please refer to the 
Flink 
website: https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#flink-configuration-file

We may need to cover the following two types of test cases:

Test 1: For newly created jobs, utilize a config.yaml file to set up the Flink 
cluster. We need to verify that the job runs as expected with this new 
configuration.

Test 2: For existing jobs, migrate the legacy flink-conf.yaml to the new 
config.yaml. Verify that the job runs just as it did before the migration.
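As a hedged illustration of the migration in Test 2 (the keys shown are common examples; consult the linked Flink documentation for the authoritative format and key list), a flat legacy entry and its standard-YAML counterpart might look like:

```yaml
# Legacy flink-conf.yaml used flat, dot-separated keys:
#   jobmanager.rpc.address: localhost
#   taskmanager.numberOfTaskSlots: 2

# The standard-YAML config.yaml nests the same settings (YAML 1.2):
jobmanager:
  rpc:
    address: localhost
taskmanager:
  numberOfTaskSlots: 2
```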

> Release Testing : Verify FLINK-33297 Support standard YAML for FLINK 
> configuration
> --
>
> Key: FLINK-34377
> URL: https://issues.apache.org/jira/browse/FLINK-34377
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Affects Versions: 1.19.0
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> This issue aims to verify FLIP-366.
> Starting with version 1.19, Flink has officially introduced full support for 
> the standard YAML 1.2 syntax. For detailed information, please refer to the 
> Flink 
> website: https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#flink-configuration-file
> We may need to cover the following two types of test cases:
> Test 1: For newly created jobs, utilize a config.yaml file to set up the 
> Flink cluster. We need to verify that the job runs as expected with this new 
> configuration.
> Test 2: For existing jobs, migrate the legacy flink-conf.yaml to the new 
> config.yaml. Verify that the job runs just as it did before the migration.





[jira] [Created] (FLINK-34377) Release Testing : Verify FLINK-33297 Support standard YAML for FLINK configuration

2024-02-05 Thread Junrui Li (Jira)
Junrui Li created FLINK-34377:
-

 Summary: Release Testing : Verify FLINK-33297 Support standard 
YAML for FLINK configuration
 Key: FLINK-34377
 URL: https://issues.apache.org/jira/browse/FLINK-34377
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Configuration
Affects Versions: 1.19.0
Reporter: Junrui Li
Assignee: Junrui Li
 Fix For: 1.19.0








[jira] [Comment Edited] (FLINK-34376) FLINK SQL SUM() causes a precision error

2024-02-05 Thread Fangliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814583#comment-17814583
 ] 

Fangliang Liu edited comment on FLINK-34376 at 2/6/24 3:24 AM:
---

Hi [~matriv], [~twalthr], [~zonli] 

Related issues:

https://issues.apache.org/jira/browse/FLINK-24691


was (Author: liufangliang):
Hi [~matriv] ,[~twalthr] , [~zonli] 

Related issues:

https://issues.apache.org/jira/browse/FLINK-24691

> FLINK SQL SUM() causes a precision error
> 
>
> Key: FLINK-34376
> URL: https://issues.apache.org/jira/browse/FLINK-34376
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.3, 1.18.1
>Reporter: Fangliang Liu
>Priority: Major
> Attachments: image-2024-02-06-11-15-02-669.png, 
> image-2024-02-06-11-17-03-399.png
>
>
> {code:java}
> select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) 
> {code}
> The precision is wrong in the Flink 1.14.3 and master branch
> !image-2024-02-06-11-15-02-669.png!
>  
> The accuracy is correct in the Flink 1.13.2 
> !image-2024-02-06-11-17-03-399.png!
>  





[jira] [Commented] (FLINK-34376) FLINK SQL SUM() causes a precision error

2024-02-05 Thread Fangliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814583#comment-17814583
 ] 

Fangliang Liu commented on FLINK-34376:
---

Hi [~matriv] ,[~twalthr] , [~zonli] 

Related issues:

https://issues.apache.org/jira/browse/FLINK-24691

> FLINK SQL SUM() causes a precision error
> 
>
> Key: FLINK-34376
> URL: https://issues.apache.org/jira/browse/FLINK-34376
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.3, 1.18.1
>Reporter: Fangliang Liu
>Priority: Major
> Attachments: image-2024-02-06-11-15-02-669.png, 
> image-2024-02-06-11-17-03-399.png
>
>
> {code:java}
> select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) 
> {code}
> The precision is wrong in the Flink 1.14.3 and master branch
> !image-2024-02-06-11-15-02-669.png!
>  
> The accuracy is correct in the Flink 1.13.2 
> !image-2024-02-06-11-17-03-399.png!
>  





[jira] [Updated] (FLINK-34376) FLINK SQL SUM() causes a precision error

2024-02-05 Thread Fangliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fangliang Liu updated FLINK-34376:
--
Description: 
{code:java}
select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) 
{code}
The precision is wrong in Flink 1.14.3 and on the master branch.

!image-2024-02-06-11-15-02-669.png!

 

The precision is correct in Flink 1.13.2.

!image-2024-02-06-11-17-03-399.png!

 

  was:
{code:java}
select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) 
{code}
The precision is wrong in the Flink 1.14.3 and master branch

!image-2024-02-06-11-15-02-669.png!

 

The accuracy is correct in the Flink 1.13.2 

!image-2024-02-06-11-17-03-399.png!

 

Related issues:

https://issues.apache.org/jira/browse/FLINK-24691


> FLINK SQL SUM() causes a precision error
> 
>
> Key: FLINK-34376
> URL: https://issues.apache.org/jira/browse/FLINK-34376
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.3, 1.18.1
>Reporter: Fangliang Liu
>Priority: Major
> Attachments: image-2024-02-06-11-15-02-669.png, 
> image-2024-02-06-11-17-03-399.png
>
>
> {code:java}
> select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) 
> {code}
> The precision is wrong in the Flink 1.14.3 and master branch
> !image-2024-02-06-11-15-02-669.png!
>  
> The accuracy is correct in the Flink 1.13.2 
> !image-2024-02-06-11-17-03-399.png!
>  





[jira] [Updated] (FLINK-34376) FLINK SQL SUM() causes a precision error

2024-02-05 Thread Fangliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fangliang Liu updated FLINK-34376:
--
Description: 
{code:java}
select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) 
{code}
The precision is wrong in Flink 1.14.3 and on the master branch.

!image-2024-02-06-11-15-02-669.png!

 

The precision is correct in Flink 1.13.2.

!image-2024-02-06-11-17-03-399.png!

 

Related issues:

https://issues.apache.org/jira/browse/FLINK-24691

  was:
{code:java}
select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) 
{code}
The precision is wrong in the version below

!image-2024-02-06-11-15-02-669.png!

 

The accuracy is correct in the Flink 1.13.2 

!image-2024-02-06-11-17-03-399.png!

 

 


> FLINK SQL SUM() causes a precision error
> 
>
> Key: FLINK-34376
> URL: https://issues.apache.org/jira/browse/FLINK-34376
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.3, 1.18.1
>Reporter: Fangliang Liu
>Priority: Major
> Attachments: image-2024-02-06-11-15-02-669.png, 
> image-2024-02-06-11-17-03-399.png
>
>
> {code:java}
> select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) 
> {code}
> The precision is wrong in the Flink 1.14.3 and master branch
> !image-2024-02-06-11-15-02-669.png!
>  
> The accuracy is correct in the Flink 1.13.2 
> !image-2024-02-06-11-17-03-399.png!
>  
> Related issues:
> https://issues.apache.org/jira/browse/FLINK-24691
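The observed precision shift is consistent with how SQL engines derive the result type of a decimal multiplication: precision grows to p1 + p2 + 1 and scale to s1 + s2, and when the precision overflows the 38-digit cap, the scale gets cut back. The sketch below implements one widely used variant of this rule (Spark/Hive style); Flink's exact derivation may differ, and the constants and names here are illustrative, not Flink's.

```java
public class DecimalTypeDerivation {
    static final int MAX_PRECISION = 38;
    static final int MIN_ADJUSTED_SCALE = 6;

    // Result type of DECIMAL(p1,s1) * DECIMAL(p2,s2) under a common SQL rule
    // (Spark/Hive style). Returns {precision, scale}.
    static int[] multiplyType(int p1, int s1, int p2, int s2) {
        int precision = p1 + p2 + 1;
        int scale = s1 + s2;
        if (precision > MAX_PRECISION) {
            int intDigits = precision - scale;             // digits left of the point
            int minScale = Math.min(scale, MIN_ADJUSTED_SCALE);
            // Keep the integer digits, sacrifice scale down to a floor.
            scale = Math.max(MAX_PRECISION - intDigits, minScale);
            precision = MAX_PRECISION;
        }
        return new int[] {precision, scale};
    }
}
```

Treating the INT literal 10 as DECIMAL(10, 0), this rule gives DECIMAL(38, 18) * 10 the type DECIMAL(38, 7): the scale drops from 18 to 7 before SUM ever runs, which is one plausible source of the version-to-version precision difference reported here.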




