[GitHub] [flink] flinkbot edited a comment on pull request #14464: [FLINK-20385][canal][json] Allow to read metadata for canal-json format
flinkbot edited a comment on pull request #14464: URL: https://github.com/apache/flink/pull/14464#issuecomment-749543461
## CI report:
* e81abd70d44d6ce7221d4cd2f44d31deab78ea9a UNKNOWN
* 6956309e3928df5f4e7717a978bd91eee472f6ed Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11389)
* f868d49d3c71f7b55d1c6a3dc601ac1561bdd9e8 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11428)
* 73d00ec389bcf20486ae71758ab613e8cfe7cb03 UNKNOWN

Bot commands: The @flinkbot bot supports the following commands:
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-20789) Add a metric named `deserializeFailedCount` for formats which has option `ignore-parse-errors`
[ https://issues.apache.org/jira/browse/FLINK-20789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xiaozilong updated FLINK-20789: --- Component/s: Formats (JSON, Avro, Parquet, ORC, SequenceFile) > Add a metric named `deserializeFailedCount` for formats which has option > `ignore-parse-errors` > -- > > Key: FLINK-20789 > URL: https://issues.apache.org/jira/browse/FLINK-20789 > Project: Flink > Issue Type: Improvement > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), > Runtime / Metrics >Affects Versions: 1.12.0 >Reporter: xiaozilong >Priority: Major > > Counts the number of deserialize failed when the option `ignore-parse-errors` > is enabled. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20789) Add a metric named `deserializeFailedCount` for formats which has option `ignore-parse-errors`
[ https://issues.apache.org/jira/browse/FLINK-20789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xiaozilong updated FLINK-20789: --- Summary: Add a metric named `deserializeFailedCount` for formats which has option `ignore-parse-errors` (was: Add a metric named `deserializeFaildCount` for formats which has option `ignore-parse-errors`) > Add a metric named `deserializeFailedCount` for formats which has option > `ignore-parse-errors` > -- > > Key: FLINK-20789 > URL: https://issues.apache.org/jira/browse/FLINK-20789 > Project: Flink > Issue Type: Improvement > Components: Runtime / Metrics >Affects Versions: 1.12.0 >Reporter: xiaozilong >Priority: Major > > Counts the number of deserialize failed when the option `ignore-parse-errors` > is enabled.
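The proposed metric can be pictured with a minimal sketch. This is illustrative only: the class and method names below (`CountingDeserializer`, `getDeserializeFailedCount`) are made up for this example and are not Flink's actual `DeserializationSchema` API. The idea is simply that a failure counter is incremented whenever a record is skipped because `ignore-parse-errors` is enabled, instead of the failure being silently swallowed.

```java
// Hypothetical sketch of the metric proposed in FLINK-20789 -- not Flink code.
// When ignoreParseErrors is true, bad records are skipped but counted, so the
// count of dropped records becomes observable as a metric.
public class CountingDeserializer {
    private final boolean ignoreParseErrors;
    private long deserializeFailedCount = 0; // the proposed metric

    public CountingDeserializer(boolean ignoreParseErrors) {
        this.ignoreParseErrors = ignoreParseErrors;
    }

    public String deserialize(String raw) {
        try {
            if (raw == null || raw.isEmpty()) {
                // stand-in for a real parse failure
                throw new IllegalArgumentException("empty record");
            }
            return raw.trim();
        } catch (RuntimeException e) {
            if (!ignoreParseErrors) {
                throw e; // default behavior: fail the job
            }
            deserializeFailedCount++; // count the skipped record instead
            return null;
        }
    }

    public long getDeserializeFailedCount() {
        return deserializeFailedCount;
    }

    public static void main(String[] args) {
        CountingDeserializer d = new CountingDeserializer(true);
        d.deserialize("ok");
        d.deserialize("");
        System.out.println(d.getDeserializeFailedCount()); // prints 1
    }
}
```

In Flink the counter would be registered on the operator's metric group rather than held as a plain field, but the increment-on-skip shape is the same.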
[jira] [Comment Edited] (FLINK-20777) Default value of property "partition.discovery.interval.ms" is not as documented in new Kafka Source
[ https://issues.apache.org/jira/browse/FLINK-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255853#comment-17255853 ] Qingsheng Ren edited comment on FLINK-20777 at 12/29/20, 7:41 AM: --
Hi [~873925...@qq.com], thanks for your review. I think a better implementation would be to only override the value to -1 when the source is bounded. For unbounded mode it should be assigned the default value of 30 seconds as documented in {{KafkaSourceOptions}} if not passed in explicitly. Actually this is due to the implementation of the {{maybeOverride}} helper function.
was (Author: renqs): Hi [~873925...@qq.com], thanks for your review. I think a better implementation would be to only override the value to -1 when the source is bounded. For unbounded mode it should be assigned the default value of 30 seconds as documented in {KafkaSourceOptions} if not passed in explicitly. Actually this is due to the implementation of the {maybeOverride} helper function.
> Default value of property "partition.discovery.interval.ms" is not as > documented in new Kafka Source > > > Key: FLINK-20777 > URL: https://issues.apache.org/jira/browse/FLINK-20777 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Reporter: Qingsheng Ren >Priority: Major > Fix For: 1.12.1 > > > The default value of property "partition.discovery.interval.ms" is documented > as 30 seconds in {{KafkaSourceOptions}}, but it will be set as -1 in > {{KafkaSourceBuilder}} if user doesn't pass in this property explicitly.
[jira] [Commented] (FLINK-20777) Default value of property "partition.discovery.interval.ms" is not as documented in new Kafka Source
[ https://issues.apache.org/jira/browse/FLINK-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255853#comment-17255853 ] Qingsheng Ren commented on FLINK-20777: ---
Hi [~873925...@qq.com], thanks for your review. I think a better implementation would be to only override the value to -1 when the source is bounded. For unbounded mode it should be assigned the default value of 30 seconds as documented in {KafkaSourceOptions} if not passed in explicitly. Actually this is due to the implementation of the {maybeOverride} helper function.
> Default value of property "partition.discovery.interval.ms" is not as > documented in new Kafka Source > > > Key: FLINK-20777 > URL: https://issues.apache.org/jira/browse/FLINK-20777 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Reporter: Qingsheng Ren >Priority: Major > Fix For: 1.12.1 > > > The default value of property "partition.discovery.interval.ms" is documented > as 30 seconds in {{KafkaSourceOptions}}, but it will be set as -1 in > {{KafkaSourceBuilder}} if user doesn't pass in this property explicitly.
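The bug shape under discussion can be sketched abstractly. The helper below is hypothetical (it is not Flink's actual `KafkaSourceBuilder` or `maybeOverride` code); it only illustrates the difference between unconditionally forcing `partition.discovery.interval.ms` to -1 when the user did not set it, versus forcing -1 only for bounded sources and otherwise falling back to the documented 30-second default.

```java
// Hypothetical illustration of FLINK-20777 -- not Flink's actual builder code.
import java.util.Properties;

public class PartitionDiscoveryDefaults {
    static final String KEY = "partition.discovery.interval.ms";
    static final String DOCUMENTED_DEFAULT = "30000"; // 30 seconds, per docs

    // Buggy shape: the property is forced to -1 whenever the user did not
    // set it, regardless of whether the source is bounded.
    public static String resolveBuggy(Properties userProps, boolean bounded) {
        return userProps.getProperty(KEY, "-1");
    }

    // Fixed shape: only bounded sources disable periodic discovery; unbounded
    // sources fall back to the documented default when the user set nothing.
    public static String resolveFixed(Properties userProps, boolean bounded) {
        if (bounded) {
            return "-1"; // bounded sources must not run periodic discovery
        }
        String userValue = userProps.getProperty(KEY);
        return userValue != null ? userValue : DOCUMENTED_DEFAULT;
    }

    public static void main(String[] args) {
        Properties empty = new Properties();
        System.out.println(resolveBuggy(empty, false)); // -1, contradicting the docs
        System.out.println(resolveFixed(empty, false)); // 30000, as documented
        System.out.println(resolveFixed(empty, true));  // -1, discovery disabled
    }
}
```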
[GitHub] [flink] flinkbot edited a comment on pull request #14514: [FLINK-20793][core][tests] Fix the NamesTest due to code style refactor
flinkbot edited a comment on pull request #14514: URL: https://github.com/apache/flink/pull/14514#issuecomment-751931069
## CI report:
* 0a7cf741cfb092143e2dafcb8587b2294efc5dda UNKNOWN
* a4538c71a6fdd097ad68b374b781b67daad49f83 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11421)
* 9c63e20e6338d01570a3ee9d8f20ec76634b01ef Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11425)
[GitHub] [flink] flinkbot edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.
flinkbot edited a comment on pull request #14508: URL: https://github.com/apache/flink/pull/14508#issuecomment-751734163
## CI report:
* f1332b021d33a6e4681b0a08ad1c5b58f153c417 UNKNOWN
* 9a3e9ba496cc4dade2d716a4821be4df40444863 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11424)
* 0a32b3002af742d2bbb14fa1c3e433bd52cd0f96 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11429)
[GitHub] [flink] flinkbot edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.
flinkbot edited a comment on pull request #14508: URL: https://github.com/apache/flink/pull/14508#issuecomment-751734163
## CI report:
* f1332b021d33a6e4681b0a08ad1c5b58f153c417 UNKNOWN
* 9a3e9ba496cc4dade2d716a4821be4df40444863 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11424)
* 0a32b3002af742d2bbb14fa1c3e433bd52cd0f96 UNKNOWN
[jira] [Comment Edited] (FLINK-20777) Default value of property "partition.discovery.interval.ms" is not as documented in new Kafka Source
[ https://issues.apache.org/jira/browse/FLINK-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255847#comment-17255847 ] jiawen xiao edited comment on FLINK-20777 at 12/29/20, 7:12 AM:
Sorry [~renqs], I think this is not a problem. You can read the code in {{KafkaSourceBuilder}}; there you will find the comment "If the source is bounded, do not run periodic partition discovery." So it is a check for bounded streaming, which can prevent users from enabling dynamic partition discovery in the case of bounded sources.
was (Author: 873925...@qq.com): Sorry [~renqs], I think this is not a problem. You can read the code in {{KafkaSourceBuilder}}; there you will find the comment "If the source is bounded, do not run periodic partition discovery." So it is a check for bounded streaming, which can prevent users from enabling dynamic partition discovery in the case of bounded sources.
> Default value of property "partition.discovery.interval.ms" is not as > documented in new Kafka Source > > > Key: FLINK-20777 > URL: https://issues.apache.org/jira/browse/FLINK-20777 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Reporter: Qingsheng Ren >Priority: Major > Fix For: 1.12.1 > > > The default value of property "partition.discovery.interval.ms" is documented > as 30 seconds in {{KafkaSourceOptions}}, but it will be set as -1 in > {{KafkaSourceBuilder}} if user doesn't pass in this property explicitly.
[jira] [Commented] (FLINK-20777) Default value of property "partition.discovery.interval.ms" is not as documented in new Kafka Source
[ https://issues.apache.org/jira/browse/FLINK-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255847#comment-17255847 ] jiawen xiao commented on FLINK-20777: -
Sorry [~renqs], I think this is not a problem. You can read the code in {{KafkaSourceBuilder}}; there you will find the comment "If the source is bounded, do not run periodic partition discovery." So it is a check for bounded streaming, which can prevent users from enabling dynamic partition discovery in the case of bounded sources.
> Default value of property "partition.discovery.interval.ms" is not as > documented in new Kafka Source > > > Key: FLINK-20777 > URL: https://issues.apache.org/jira/browse/FLINK-20777 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Reporter: Qingsheng Ren >Priority: Major > Fix For: 1.12.1 > > > The default value of property "partition.discovery.interval.ms" is documented > as 30 seconds in {{KafkaSourceOptions}}, but it will be set as -1 in > {{KafkaSourceBuilder}} if user doesn't pass in this property explicitly.
[GitHub] [flink] wuchong commented on a change in pull request #14464: [FLINK-20385][canal][json] Allow to read metadata for canal-json format
wuchong commented on a change in pull request #14464: URL: https://github.com/apache/flink/pull/14464#discussion_r549592196

## File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/table/KafkaChangelogTableITCase.java

@@ -218,4 +218,168 @@ public void testKafkaDebeziumChangelogSource() throws Exception {
         tableResult.getJobClient().get().cancel().get(); // stop the job
         deleteTestTopic(topic);
     }
+
+    @Test
+    public void testKafkaCanalChangelogSource() throws Exception {
+        final String topic = "changelog_canal";
+        createTestTopic(topic, 1, 1);
+
+        // enables MiniBatch processing to verify MiniBatch + FLIP-95, see FLINK-18769
+        Configuration tableConf = tEnv.getConfig().getConfiguration();
+        tableConf.setString("table.exec.mini-batch.enabled", "true");
+        tableConf.setString("table.exec.mini-batch.allow-latency", "1s");
+        tableConf.setString("table.exec.mini-batch.size", "5000");
+        tableConf.setString("table.optimizer.agg-phase-strategy", "TWO_PHASE");
+
+        // -- Write the Canal json into Kafka ---
+        List lines = readLines("canal-data.txt");
+        DataStreamSource stream = env.fromCollection(lines);
+        SerializationSchema serSchema = new SimpleStringSchema();
+        FlinkKafkaPartitioner partitioner = new FlinkFixedPartitioner<>();
+
+        // the producer must not produce duplicates
+        Properties producerProperties =
+                FlinkKafkaProducerBase.getPropertiesFromBrokerList(brokerConnectionStrings);
+        producerProperties.setProperty("retries", "0");
+        producerProperties.putAll(secureProps);
+        kafkaServer.produceIntoKafka(stream, topic, serSchema, producerProperties, partitioner);
+        try {
+            env.execute("Write sequence");
+        } catch (Exception e) {
+            throw new Exception("Failed to write canal data to Kafka.", e);
+        }
+
+        // -- Produce an event time stream into Kafka ---
+        String bootstraps = standardProps.getProperty("bootstrap.servers");
+        String sourceDDL =
+                String.format(
+                        "CREATE TABLE canal_source ("
+                                // test format metadata
+                                + " origin_database STRING METADATA FROM 'value.database' VIRTUAL,"
+                                + " origin_table STRING METADATA FROM 'value.table' VIRTUAL,"
+                                + " origin_sql_type MAP METADATA FROM 'value.sql-type' VIRTUAL,"
+                                + " origin_pk_names ARRAY METADATA FROM 'value.pk-names' VIRTUAL,"
+                                + " origin_ts TIMESTAMP(3) METADATA FROM 'value.ingestion-timestamp' VIRTUAL,"
+                                + " id INT NOT NULL,"
+                                + " name STRING,"
+                                + " description STRING,"
+                                + " weight DECIMAL(10,3),"
+                                // test connector metadata
+                                + " origin_topic STRING METADATA FROM 'topic' VIRTUAL,"
+                                + " origin_partition STRING METADATA FROM 'partition' VIRTUAL" // unused
+                                + ") WITH ("
+                                + " 'connector' = 'kafka',"
+                                + " 'topic' = '%s',"
+                                + " 'properties.bootstrap.servers' = '%s',"
+                                + " 'scan.startup.mode' = 'earliest-offset',"
+                                + " 'value.format' = 'canal-json'"
+                                + ")",
+                        topic, bootstraps);
+        String sinkDDL =
+                "CREATE TABLE sink ("
+                        + " origin_topic STRING,"
+                        + " origin_database STRING,"
+                        + " origin_table STRING,"
+                        + " origin_sql_type MAP,"
+                        + " origin_pk_names ARRAY,"
+                        + " origin_ts TIMESTAMP(3),"
+                        + " name STRING,"
+                        + " PRIMARY KEY (name) NOT ENFORCED"
+                        + ") WITH ("
+                        + " 'connector' = 'values',"
+                        + " 'sink-insert-only' = 'false'"
+                        + ")";
+        tEnv.executeSql(sourceDDL);
+        tEnv.executeSql(sinkDDL);
+        TableResult tableResult =
+                tEnv.executeSql(
+                        "INSERT INTO sink "
+                                + "SELECT origin_topic, origin_database, origin_table, origin_sql_type, "
+                                + "origin_pk_names, origin_ts, name " +
[jira] [Created] (FLINK-20798) Service temporarily unavailable due to an ongoing leader election. Please refresh.
hayden zhou created FLINK-20798: --- Summary: Service temporarily unavailable due to an ongoing leader election. Please refresh. Key: FLINK-20798 URL: https://issues.apache.org/jira/browse/FLINK-20798 Project: Flink Issue Type: Bug Components: Deployment / Kubernetes Affects Versions: 1.12.0 Environment: FLINK 1.12.0 Reporter: hayden zhou

I deployed Flink to Kubernetes using a PVC as the high-availability storage dir. The jobmanager log shows that the leader election succeeded, but the web UI keeps showing that an election is in progress. Below is the jobmanager log:

```
2020-12-29T06:45:54.177850394Z 2020-12-29 14:45:54,177 DEBUG io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - Leader election started
2020-12-29T06:45:54.177855303Z 2020-12-29 14:45:54,177 DEBUG io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - Attempting to acquire leader lease 'ConfigMapLock: default - mta-flink-resourcemanager-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'...
2020-12-29T06:45:54.178668055Z 2020-12-29 14:45:54,178 DEBUG io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - WebSocket successfully opened
2020-12-29T06:45:54.178895963Z 2020-12-29 14:45:54,178 INFO org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - Starting DefaultLeaderRetrievalService with KubernetesLeaderRetrievalDriver{configMapName='mta-flink-resourcemanager-leader'}.
2020-12-29T06:45:54.179327491Z 2020-12-29 14:45:54,179 DEBUG io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - Connecting websocket ... io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager@6d303498
2020-12-29T06:45:54.230081993Z 2020-12-29 14:45:54,229 DEBUG io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - WebSocket successfully opened
2020-12-29T06:45:54.230202329Z 2020-12-29 14:45:54,230 INFO org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - Starting DefaultLeaderRetrievalService with KubernetesLeaderRetrievalDriver{configMapName='mta-flink-dispatcher-leader'}.
2020-12-29T06:45:54.230219281Z 2020-12-29 14:45:54,229 DEBUG io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - WebSocket successfully opened
2020-12-29T06:45:54.230353912Z 2020-12-29 14:45:54,230 INFO org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - Starting DefaultLeaderElectionService with KubernetesLeaderElectionDriver{configMapName='mta-flink-resourcemanager-leader'}.
2020-12-29T06:45:54.237004177Z 2020-12-29 14:45:54,236 DEBUG io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - Leader changed from null to 6f6479c6-86cc-4d62-84f9-37ff968bd0e5
2020-12-29T06:45:54.237024655Z 2020-12-29 14:45:54,236 INFO org.apache.flink.kubernetes.kubeclient.resources.KubernetesLeaderElector [] - New leader elected 6f6479c6-86cc-4d62-84f9-37ff968bd0e5 for mta-flink-restserver-leader.
2020-12-29T06:45:54.237027811Z 2020-12-29 14:45:54,236 DEBUG io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - Successfully Acquired leader lease 'ConfigMapLock: default - mta-flink-restserver-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'
2020-12-29T06:45:54.237297376Z 2020-12-29 14:45:54,237 DEBUG org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - Grant leadership to contender http://mta-flink-jobmanager:8081 with session ID 9587e13f-322f-4cd5-9fff-b4941462be0f.
2020-12-29T06:45:54.237353551Z 2020-12-29 14:45:54,237 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - http://mta-flink-jobmanager:8081 was granted leadership with leaderSessionID=9587e13f-322f-4cd5-9fff-b4941462be0f
2020-12-29T06:45:54.237440354Z 2020-12-29 14:45:54,237 DEBUG org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - Confirm leader session ID 9587e13f-322f-4cd5-9fff-b4941462be0f for leader http://mta-flink-jobmanager:8081.
2020-12-29T06:45:54.254555127Z 2020-12-29 14:45:54,254 DEBUG io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - Leader changed from null to 6f6479c6-86cc-4d62-84f9-37ff968bd0e5
2020-12-29T06:45:54.254588299Z 2020-12-29 14:45:54,254 INFO org.apache.flink.kubernetes.kubeclient.resources.KubernetesLeaderElector [] - New leader elected 6f6479c6-86cc-4d62-84f9-37ff968bd0e5 for mta-flink-resourcemanager-leader.
2020-12-29T06:45:54.254628053Z 2020-12-29 14:45:54,254 DEBUG io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - Successfully Acquired leader lease 'ConfigMapLock: default - mta-flink-resourcemanager-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'
2020-12-29T06:45:54.254871569Z 2020-12-29 14:45:54,254 DEBUG org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - Grant leadership to contender LeaderContender: StandaloneResourceManager with session ID b1730dc6-0f94-49f4-b519-56917f3027b7.
2020-12-29T06:45:54.256608291Z
```
[GitHub] [flink] V1ncentzzZ edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.
V1ncentzzZ edited a comment on pull request #14508: URL: https://github.com/apache/flink/pull/14508#issuecomment-751745476 @flinkbot run azure
[GitHub] [flink] flinkbot edited a comment on pull request #13581: [FLINK-17331] Explicitly get the ByteBuf length of all classes which …
flinkbot edited a comment on pull request #13581: URL: https://github.com/apache/flink/pull/13581#issuecomment-706519397
## CI report:
* cd05d23ea1ac2a9283e03039a3791b1cc2df1a95 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11230)
* 008f1e53122962ab10c83e548b60f0e7da5e6654 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11427)
[GitHub] [flink] flinkbot edited a comment on pull request #14464: [FLINK-20385][canal][json] Allow to read metadata for canal-json format
flinkbot edited a comment on pull request #14464: URL: https://github.com/apache/flink/pull/14464#issuecomment-749543461
## CI report:
* e81abd70d44d6ce7221d4cd2f44d31deab78ea9a UNKNOWN
* 6956309e3928df5f4e7717a978bd91eee472f6ed Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11389)
* f868d49d3c71f7b55d1c6a3dc601ac1561bdd9e8 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #13581: [FLINK-17331] Explicitly get the ByteBuf length of all classes which …
flinkbot edited a comment on pull request #13581: URL: https://github.com/apache/flink/pull/13581#issuecomment-706519397
## CI report:
* cd05d23ea1ac2a9283e03039a3791b1cc2df1a95 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11230)
* 008f1e53122962ab10c83e548b60f0e7da5e6654 UNKNOWN
[GitHub] [flink] SteNicholas commented on pull request #14464: [FLINK-20385][canal][json] Allow to read metadata for canal-json format
SteNicholas commented on pull request #14464: URL: https://github.com/apache/flink/pull/14464#issuecomment-751963694 @wuchong I have applied the Spotless code formatting with Maven to resolve the conflicts and updated the documentation. Please help to review again.
[jira] [Updated] (FLINK-20789) Add a metric named `deserializeFaildCount` for formats which has option `ignore-parse-errors`
[ https://issues.apache.org/jira/browse/FLINK-20789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xiaozilong updated FLINK-20789: --- Summary: Add a metric named `deserializeFaildCount` for formats which has option `ignore-parse-errors` (was: Add a metric named `deserializeFaildCount` for formats which has option ``) > Add a metric named `deserializeFaildCount` for formats which has option > `ignore-parse-errors` > - > > Key: FLINK-20789 > URL: https://issues.apache.org/jira/browse/FLINK-20789 > Project: Flink > Issue Type: Improvement > Components: Runtime / Metrics >Affects Versions: 1.12.0 >Reporter: xiaozilong >Priority: Major > > Counts the number of deserialize failed when the option `ignore-parse-errors` > is enabled.
[jira] [Commented] (FLINK-20789) Add a metric named `deserializeFaildCount` for kafka connectors
[ https://issues.apache.org/jira/browse/FLINK-20789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255827#comment-17255827 ] xiaozilong commented on FLINK-20789: [~jark] Sorry, I updated the title. > Add a metric named `deserializeFaildCount` for kafka connectors > --- > > Key: FLINK-20789 > URL: https://issues.apache.org/jira/browse/FLINK-20789 > Project: Flink > Issue Type: Improvement > Components: Runtime / Metrics >Affects Versions: 1.12.0 >Reporter: xiaozilong >Priority: Major > > Counts the number of deserialize failed when the option `ignore-parse-errors` > is enabled.
[GitHub] [flink] flinkbot edited a comment on pull request #14451: [FLINK-20704][table-planner] Some rel data type does not implement th…
flinkbot edited a comment on pull request #14451: URL: https://github.com/apache/flink/pull/14451#issuecomment-749321360
## CI report:
* d9c3ee87fa55ee7a1f7d4848c8a7dc542f2011a6 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11416) Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11377)
[jira] [Updated] (FLINK-20789) Add a metric named `deserializeFaildCount` for formats
[ https://issues.apache.org/jira/browse/FLINK-20789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xiaozilong updated FLINK-20789: --- Summary: Add a metric named `deserializeFaildCount` for formats (was: Add a metric named `deserializeFaildCount` for kafka connectors) > Add a metric named `deserializeFaildCount` for formats > -- > > Key: FLINK-20789 > URL: https://issues.apache.org/jira/browse/FLINK-20789 > Project: Flink > Issue Type: Improvement > Components: Runtime / Metrics >Affects Versions: 1.12.0 >Reporter: xiaozilong >Priority: Major > > Counts the number of deserialize failed when the option `ignore-parse-errors` > is enabled.
[jira] [Updated] (FLINK-20789) Add a metric named `deserializeFaildCount` for formats which has option ``
[ https://issues.apache.org/jira/browse/FLINK-20789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xiaozilong updated FLINK-20789: --- Summary: Add a metric named `deserializeFaildCount` for formats which has option `` (was: Add a metric named `deserializeFaildCount` for formats) > Add a metric named `deserializeFaildCount` for formats which has option `` > -- > > Key: FLINK-20789 > URL: https://issues.apache.org/jira/browse/FLINK-20789 > Project: Flink > Issue Type: Improvement > Components: Runtime / Metrics >Affects Versions: 1.12.0 >Reporter: xiaozilong >Priority: Major > > Counts the number of deserialize failed when the option `ignore-parse-errors` > is enabled.
[GitHub] [flink] flinkbot edited a comment on pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager
flinkbot edited a comment on pull request #8952: URL: https://github.com/apache/flink/pull/8952#issuecomment-513724324
## CI report:
* d083b630115604e34b0a74498890aedbff61b2a7 UNKNOWN
* d58927642909f50571ed6242605aac564e074f89 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11349)
* 70b43a088878601fa074245f88800b163641d6f0 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11426)
[jira] [Commented] (FLINK-20789) Add a metric named `deserializeFaildCount` for kafka connectors
[ https://issues.apache.org/jira/browse/FLINK-20789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255820#comment-17255820 ] Jark Wu commented on FLINK-20789: - IIUC, this should belong to the specific format modules and is not related to the Kafka connector? > Add a metric named `deserializeFaildCount` for kafka connectors > --- > > Key: FLINK-20789 > URL: https://issues.apache.org/jira/browse/FLINK-20789 > Project: Flink > Issue Type: Improvement > Components: Runtime / Metrics >Affects Versions: 1.12.0 >Reporter: xiaozilong >Priority: Major > > Counts the number of deserialize failed when the option `ignore-parse-errors` > is enabled.
[GitHub] [flink] wuchong commented on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.
wuchong commented on pull request #14508: URL: https://github.com/apache/flink/pull/14508#issuecomment-751954940 cc @fsk119
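For readers unfamiliar with the option proposed in PR #14508: strict JSON requires control characters inside string values (a literal tab, newline, etc.) to be escaped, and an allow-unescaped-control-chars switch relaxes that check. The sketch below is a hypothetical, simplified illustration of that rule only; it is not Flink's or any JSON library's actual parser code, and `isValidJsonStringBody` is a made-up name.

```java
// Hypothetical sketch of what allow-unescaped-control-chars means for a JSON
// string value -- illustrative only, not Flink's implementation.
public class ControlCharCheck {
    // Returns whether the raw characters of a JSON string body are acceptable.
    // Strict JSON rejects any raw character below U+0020; the option allows them.
    public static boolean isValidJsonStringBody(String body, boolean allowUnescapedControlChars) {
        for (int i = 0; i < body.length(); i++) {
            if (body.charAt(i) < 0x20 && !allowUnescapedControlChars) {
                return false; // strict mode: \t, \n, ... must be escaped
            }
        }
        return true;
    }

    public static void main(String[] args) {
        String withTab = "a\tb"; // contains a literal tab character
        System.out.println(isValidJsonStringBody(withTab, false)); // false
        System.out.println(isValidJsonStringBody(withTab, true));  // true
    }
}
```

With the option off (the default), a record containing such a value fails to deserialize; with it on, the value is accepted as-is.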
[GitHub] [flink] flinkbot edited a comment on pull request #14514: [FLINK-20793][core][tests] Fix the NamesTest due to code style refactor
flinkbot edited a comment on pull request #14514: URL: https://github.com/apache/flink/pull/14514#issuecomment-751931069
## CI report:
* 0a7cf741cfb092143e2dafcb8587b2294efc5dda UNKNOWN
* a4538c71a6fdd097ad68b374b781b67daad49f83 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11421)
* 9c63e20e6338d01570a3ee9d8f20ec76634b01ef Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11425)
[GitHub] [flink] flinkbot edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.
flinkbot edited a comment on pull request #14508: URL: https://github.com/apache/flink/pull/14508#issuecomment-751734163 ## CI report: * f1332b021d33a6e4681b0a08ad1c5b58f153c417 UNKNOWN * 9a3e9ba496cc4dade2d716a4821be4df40444863 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11424) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14514: [FLINK-20793][core][tests] Fix the NamesTest due to code style refactor
flinkbot edited a comment on pull request #14514: URL: https://github.com/apache/flink/pull/14514#issuecomment-751931069 ## CI report: * 0a7cf741cfb092143e2dafcb8587b2294efc5dda UNKNOWN * a4538c71a6fdd097ad68b374b781b67daad49f83 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11421) * 9c63e20e6338d01570a3ee9d8f20ec76634b01ef UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14512: FLINK-20359 Added Owner Reference to Job Manager in native kubernetes
flinkbot edited a comment on pull request #14512: URL: https://github.com/apache/flink/pull/14512#issuecomment-751906693 ## CI report: * 86f6502cf9980ff8d7fd15fe058588eb9f5004cf Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11415) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14503: [FLINK-20693][table-planner-blink][python] Port BatchExecPythonCorrelate and StreamExecPythonCorrelate to Java
flinkbot edited a comment on pull request #14503: URL: https://github.com/apache/flink/pull/14503#issuecomment-751622207 ## CI report: * 92d6c706863d0deb3dcfd5c8687f2670a72b7f0e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11418) * b7561fbacc0be6e8aabf155a32756f30e68ebba9 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11423) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14483: [FLINK-18998][Runtime / Web Frontend]No watermark is shown on Flink UI when ProcessingTime is used
flinkbot edited a comment on pull request #14483: URL: https://github.com/apache/flink/pull/14483#issuecomment-750722975 ## CI report: * 3785cb9929bdb3bf7c6dead17b6db7910cf3b5e3 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11413) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Comment Edited] (FLINK-20795) add a parameter to decide whether or not print dirty record when `ignore-parse-errors` is true
[ https://issues.apache.org/jira/browse/FLINK-20795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255804#comment-17255804 ] xiaozilong edited comment on FLINK-20795 at 12/29/20, 5:09 AM: --- Hi [~zoucao], I think it's useful. Maybe we should add another parameter to control the printing frequency also. I am very interested in this. In addition, I think we can add a metric to count the number of parse errors (FLINK-20789). was (Author: xiaozilong): Hi [~zoucao], I think it's useful. Maybe we should add another parameter to control the printing frequency also. I am very interested for this. > add a parameter to decide whether or not print dirty record when > `ignore-parse-errors` is true > -- > > Key: FLINK-20795 > URL: https://issues.apache.org/jira/browse/FLINK-20795 > Project: Flink > Issue Type: Improvement > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) >Affects Versions: 1.13.0 >Reporter: zoucao >Priority: Major > > add a parameter to decide whether or not to print dirty data when > `ignore-parse-errors`=true, some users want to make his task stability and > know the dirty record to fix the upstream, too. -- This message was sent by Atlassian Jira (v8.3.4#803005)
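The two ideas in the comment above, optionally printing dirty records and throttling how often they are printed, can be sketched as follows (plain Python; all names such as `DirtyRecordLogger` are hypothetical, not part of any Flink API):

```python
import time

class DirtyRecordLogger:
    """Sketch of the proposal in FLINK-20795: optionally print dirty records,
    but at most once per `interval` seconds so a flood of bad input cannot
    spam the task logs. All names here are hypothetical."""

    def __init__(self, print_dirty_records=True, interval=60.0, clock=time.monotonic):
        self.print_dirty_records = print_dirty_records
        self.interval = interval
        self.clock = clock
        self._last_emit = float("-inf")
        self.printed = 0
        self.suppressed = 0

    def report(self, record):
        if not self.print_dirty_records:
            return
        now = self.clock()
        if now - self._last_emit >= self.interval:
            self._last_emit = now
            self.printed += 1
            print(f"dirty record skipped: {record!r}")
        else:
            self.suppressed += 1

# Deterministic demo with a fake clock: 5 bad records arriving 0.5 "seconds" apart.
t = [0.0]
logger = DirtyRecordLogger(interval=1.0, clock=lambda: t[0])
for _ in range(5):
    logger.report(b"\x00bad")
    t[0] += 0.5
```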
[jira] [Created] (FLINK-20797) Can Flink on k8s use a PV backed by NFS and a PVC as the high-availability storageDir
hayden zhou created FLINK-20797: --- Summary: Can Flink on k8s use a PV backed by NFS and a PVC as the high-availability storageDir Key: FLINK-20797 URL: https://issues.apache.org/jira/browse/FLINK-20797 Project: Flink Issue Type: New Feature Components: Client / Job Submission Environment: Flink 1.12.0 Reporter: hayden zhou

I want to deploy Flink on k8s in HA mode without deploying an HDFS cluster. I have an NFS server, so I created a PV that uses NFS as the backend storage, and a PVC that the deployment mounts. This is my Flink ConfigMap:

```
kubernetes.cluster-id: mta-flink
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
high-availability.storageDir: file:///opt/flink/nfs/ha
```

And this is my JobManager yaml file:

```
volumeMounts:
  - name: flink-config-volume
    mountPath: /opt/flink/conf
  - name: flink-nfs
    mountPath: /opt/flink/nfs
securityContext:
  runAsUser:  # refers to user _flink_ from official flink image, change if necessary
  #fsGroup:
volumes:
  - name: flink-config-volume
    configMap:
      name: mta-flink-config
      items:
        - key: flink-conf.yaml
          path: flink-conf.yaml
        - key: log4j-console.properties
          path: log4j-console.properties
  - name: flink-nfs
    persistentVolumeClaim:
      claimName: mta-flink-nfs-pvc
```

It deploys successfully, but if I browse the jobmanager:8081 website I get the result below:

```
{"errors": ["Service temporarily unavailable due to an ongoing leader election. Please refresh."]}
```

Can a PVC be used as `high-availability.storageDir`? If it can, how can I fix this error?

-- This message was sent by Atlassian Jira (v8.3.4#803005)
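For reference, a setup like the one described assumes an NFS-backed PV bound to the PVC that every Flink pod mounts at the same path. A hedged sketch of such a PV/PVC pair (the server address, export path, and sizes are placeholders, not values from the report):

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: flink-ha-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany        # HA storage must be reachable from multiple pods
  nfs:
    server: 10.0.0.10      # placeholder NFS server address
    path: /exports/flink   # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mta-flink-nfs-pvc  # claim name taken from the report above
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```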
[GitHub] [flink] flinkbot edited a comment on pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager
flinkbot edited a comment on pull request #8952: URL: https://github.com/apache/flink/pull/8952#issuecomment-513724324 ## CI report: * d083b630115604e34b0a74498890aedbff61b2a7 UNKNOWN * d58927642909f50571ed6242605aac564e074f89 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11349) * 70b43a088878601fa074245f88800b163641d6f0 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-20796) Support unnest in the Table API
Dian Fu created FLINK-20796: --- Summary: Support unnest in the Table API Key: FLINK-20796 URL: https://issues.apache.org/jira/browse/FLINK-20796 Project: Flink Issue Type: Sub-task Components: Table SQL / API Reporter: Dian Fu Fix For: 1.13.0 Currently, there is no corresponding functionality in Table API for the following SQL: {code:java} SELECT a, s FROM T, UNNEST(T.b) as A(s) {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
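For readers unfamiliar with `UNNEST` semantics, the SQL above flattens the array column `T.b` into one output row per element. A minimal model in plain Python (not the proposed Table API):

```python
# Model of: SELECT a, s FROM T, UNNEST(T.b) AS A(s)
# Each row of T carries a scalar `a` and an array column `b`;
# UNNEST emits one output row per array element.
T = [
    {"a": 1, "b": ["x", "y"]},
    {"a": 2, "b": ["z"]},
]

result = [(row["a"], s) for row in T for s in row["b"]]
print(result)  # [(1, 'x'), (1, 'y'), (2, 'z')]
```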
[jira] [Comment Edited] (FLINK-20795) add a parameter to decide whether or not print dirty record when `ignore-parse-errors` is true
[ https://issues.apache.org/jira/browse/FLINK-20795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255804#comment-17255804 ] xiaozilong edited comment on FLINK-20795 at 12/29/20, 5:01 AM: --- Hi [~zoucao], I think it's useful. Maybe we should add another parameter to control the printing frequency also. I am very interested for this. was (Author: xiaozilong): Hi [~zoucao], I think it's useful. Mayby we should control the printing frequency also. I am very interested for this. > add a parameter to decide whether or not print dirty record when > `ignore-parse-errors` is true > -- > > Key: FLINK-20795 > URL: https://issues.apache.org/jira/browse/FLINK-20795 > Project: Flink > Issue Type: Improvement > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) >Affects Versions: 1.13.0 >Reporter: zoucao >Priority: Major > > add a parameter to decide whether or not to print dirty data when > `ignore-parse-errors`=true, some users want to make his task stability and > know the dirty record to fix the upstream, too. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-20791) Support local aggregate push down for Blink batch planner
[ https://issues.apache.org/jira/browse/FLINK-20791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255811#comment-17255811 ] Sebastian Liu commented on FLINK-20791: --- Hi [~lirui], [~godfreyhe], [~jark], Appreciate for your time to take a look, and I have opened a related discussion in Flink dev mailing group, thanks! > Support local aggregate push down for Blink batch planner > - > > Key: FLINK-20791 > URL: https://issues.apache.org/jira/browse/FLINK-20791 > Project: Flink > Issue Type: New Feature > Components: Table SQL / API, Table SQL / Planner >Reporter: Sebastian Liu >Priority: Major > > Aggregate operator of Flink SQL is currently fully done at Flink layer. With > the developing of storage, many downstream storage of Flink SQL has the > ability to deal with Aggregation operator. > Pushing down Aggregate to data source layer will improve performance from the > perspective of the network IO and computation overhead. > > I have drafted a design doc for this new feature. > [https://docs.google.com/document/d/1kGwC_h4qBNxF2eMEz6T6arByOB8yilrPLqDN0QBQXW4/edit?usp=sharing] > > Any comment or discussion is welcome. -- This message was sent by Atlassian Jira (v8.3.4#803005)
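The performance argument (less network IO and computation) comes from producing at most one partial result per group at the source, before the shuffle. A minimal model of local (partial) aggregation followed by a global merge (plain Python, not the planner rule itself):

```python
from collections import Counter

# Two "source splits"; the goal is COUNT(*) GROUP BY dep_id.
split1 = [{"dep_id": 1}, {"dep_id": 1}, {"dep_id": 2}]
split2 = [{"dep_id": 2}, {"dep_id": 2}]

# Local (partial) aggregation per split -- this is what pushing the
# aggregate down toward the source/storage layer would produce, shrinking
# each split to at most one record per group before the network shuffle.
partial1 = Counter(r["dep_id"] for r in split1)   # {1: 2, 2: 1}
partial2 = Counter(r["dep_id"] for r in split2)   # {2: 2}

# Global merge of the partial results.
final = partial1 + partial2
print(dict(final))  # {1: 2, 2: 3}
```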
[GitHub] [flink] flinkbot edited a comment on pull request #14503: [FLINK-20693][table-planner-blink][python] Port BatchExecPythonCorrelate and StreamExecPythonCorrelate to Java
flinkbot edited a comment on pull request #14503: URL: https://github.com/apache/flink/pull/14503#issuecomment-751622207 ## CI report: * 92d6c706863d0deb3dcfd5c8687f2670a72b7f0e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11418) * b7561fbacc0be6e8aabf155a32756f30e68ebba9 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-18019) The configuration specified in TableConfig may not take effect in certain cases
[ https://issues.apache.org/jira/browse/FLINK-18019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255808#comment-17255808 ] Dian Fu commented on FLINK-18019: - This problem should still exist and you can verify it before actually digging into the problem. cc [~dwysakowicz] who may provide more inputs about this issue.
> The configuration specified in TableConfig may not take effect in certain
> cases
> ---
>
> Key: FLINK-18019
> URL: https://issues.apache.org/jira/browse/FLINK-18019
> Project: Flink
> Issue Type: Sub-task
> Components: Table SQL / Legacy Planner, Table SQL / Planner
> Affects Versions: 1.10.0, 1.11.0
> Reporter: Dian Fu
> Priority: Major
>
> Currently, if the following configuration is set in flink-conf.yaml:
> {code:java}
> state.backend: rocksdb
> state.checkpoints.dir: file:///tmp/flink-checkpoints
> {code}
> and the following configuration is set via TableConfig:
> {code:java}
> tableConfig.getConfiguration().setString("state.backend.rocksdb.memory.fixed-per-slot", "200MB")
> tableConfig.getConfiguration().setString("taskmanager.memory.task.off-heap.size", "200MB")
> {code}
> then, when users submit the job via CliFrontend, the configuration set via TableConfig will not take effect.
> Intuitively, the user-specified configuration in TableConfig (which has higher priority) and the configuration in flink-conf.yaml should together determine the configuration of a job. However, this doesn't hold in all cases.
> The root cause is that only the configuration specified in TableConfig is passed to *StreamExecutionEnvironment* during translation to a plan. In the above case, since *state.backend* is not specified in TableConfig, the configuration *state.backend.rocksdb.memory.fixed-per-slot* will not take effect.
> Please note that in the above example, the state backend actually used will be RocksDB, but without the configuration *state.backend.rocksdb.memory.fixed-per-slot* and *taskmanager.memory.task.off-heap.size*.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
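The precedence the issue argues for, with job-level TableConfig entries overriding cluster-level flink-conf.yaml entries and everything else inherited, can be modeled with a simple layered lookup (plain Python sketch, not Flink's actual configuration code):

```python
from collections import ChainMap

# Intended precedence per the issue: TableConfig entries should override
# flink-conf.yaml entries, while unset keys fall through to the cluster config.
flink_conf = {
    "state.backend": "rocksdb",
    "state.checkpoints.dir": "file:///tmp/flink-checkpoints",
}
table_config = {
    "state.backend.rocksdb.memory.fixed-per-slot": "200MB",
    "taskmanager.memory.task.off-heap.size": "200MB",
}

effective = ChainMap(table_config, flink_conf)  # first map wins on conflicts
print(effective["state.backend"])                                # inherited
print(effective["state.backend.rocksdb.memory.fixed-per-slot"])  # job-level
```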
[GitHub] [flink] fsk119 commented on a change in pull request #14437: [FLINK-20458][docs] translate gettingStarted.zh.md and correct spelling errors in gettingStarted.md
fsk119 commented on a change in pull request #14437: URL: https://github.com/apache/flink/pull/14437#discussion_r549564559 ## File path: docs/dev/table/sql/gettingStarted.zh.md ## @@ -129,11 +128,11 @@ FROM employee_information GROUP BY dep_id; {% endhighlight %} -Such queries are considered _stateful_. Flink's advanced fault-tolerance mechanism will maintain internal state and consistency, so queries always return the correct result, even in the face of hardware failure. +这样的查询被认为是 _有状态的_。Flink 的高级容错机制将维持内部状态和一致性,因此即使遇到硬件故障,查询也始终返回正确结果。 -## Sink Tables +## Sink 表 -When running this query, the SQL client provides output in real-time but in a read-only fashion. Storing results - to power a report or dashboard - requires writing out to another table. This can be achieved using an `INSERT INTO` statement. The table referenced in this clause is known as a sink table. An `INSERT INTO` statement will be submitted as a detached query to the Flink cluster. +当运行此查询时,SQL 客户端实时但是以只读方式提供输出。存储结果,作为报表或仪表板的数据来源,需要写到另一个表。这可以使用 `INSERT INTO` 语句来实现。本节中引用的表称为 sink 表。`INSERT INTO` 语句将作为一个独立查询被提交到 Flink 集群中。 {% highlight sql %} INSERT INTO department_counts Review comment: Please remove this blank space. ![image](https://user-images.githubusercontent.com/33114724/103258946-37058300-49d2-11eb-9070-e660eae05f45.png) ## File path: docs/dev/table/sql/gettingStarted.zh.md ## @@ -113,13 +112,13 @@ SELECT * from employee_information WHERE DeptId = 1; {% top %} -## Continuous Queries +## 连续查询 -While not designed initially with streaming semantics in mind, SQL is a powerful tool for building continuous data pipelines. Where Flink SQL differs from traditional database queries is that is continuously consuming rows as the arrives and produces updates to its results. 
+虽然最初设计时没有考虑流语义,但 SQL 是用于构建连续数据流水线的强大工具。Flink SQL 与传统数据库查询的不同之处在于,Flink SQL 持续消费到达的行并对其结果进行更新。 -A [continuous query]({% link dev/table/streaming/dynamic_tables.zh.md %}#continuous-queries) never terminates and produces a dynamic table as a result. [Dynamic tables]({% link dev/table/streaming/dynamic_tables.zh.md %}#continuous-queries) are the core concept of Flink's Table API and SQL support for streaming data. +一个[连续查询]({% link dev/table/streaming/dynamic_tables.zh.md %}#continuous-queries)永远不会终止,并会产生一个动态表作为结果。[动态表]({% link dev/table/streaming/dynamic_tables.zh.md %}#continuous-queries)是 Flink 中 Table API 和 SQL 对流数据支持的核心概念。 -Aggregations on continuous streams need to store aggregated results continuously during the execution of the query. For example, suppose you need to count the number of employees for each department from an incoming data stream. The query needs to maintain the most up to date count for each department to output timely results as new rows are processed. +连续流上的聚合需要在查询执行期间不断地存储聚合的结果。例如,假设你需要从传入的数据流中计算每个部门的员工人数。查询需要维护每个部门最新的计算总数,以便在处理新行时及时输出结果。 {% highlight sql %} SELECT Review comment: Please delete empty space before `SELECT` ![image](https://user-images.githubusercontent.com/33114724/103259026-a3808200-49d2-11eb-8a7a-6a71999a8395.png) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14493: [FLINK-20768][Connectors/ElasticSearch] Support routing field for Elasticsearch connector
flinkbot edited a comment on pull request #14493: URL: https://github.com/apache/flink/pull/14493#issuecomment-751222472 ## CI report: * 877575f48fc2b602a34213c3b9d31e850fe5b130 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11422) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.
flinkbot edited a comment on pull request #14508: URL: https://github.com/apache/flink/pull/14508#issuecomment-751734163 ## CI report: * f1332b021d33a6e4681b0a08ad1c5b58f153c417 UNKNOWN * 8fe82a5ff01ab13c7ba704e3ced181c3a6e4fc15 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11420) * 9a3e9ba496cc4dade2d716a4821be4df40444863 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Comment Edited] (FLINK-20795) add a parameter to decide whether or not print dirty record when `ignore-parse-errors` is true
[ https://issues.apache.org/jira/browse/FLINK-20795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255804#comment-17255804 ] xiaozilong edited comment on FLINK-20795 at 12/29/20, 4:35 AM: --- Hi [~zoucao], I think it's useful. Mayby we should control the printing frequency also. I am very interested for this. was (Author: xiaozilong): Hi [~zoucao], I think it's useful. Mayby we should control the printing frequency also. I want to work for this. > add a parameter to decide whether or not print dirty record when > `ignore-parse-errors` is true > -- > > Key: FLINK-20795 > URL: https://issues.apache.org/jira/browse/FLINK-20795 > Project: Flink > Issue Type: Improvement > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) >Affects Versions: 1.13.0 >Reporter: zoucao >Priority: Major > > add a parameter to decide whether or not to print dirty data when > `ignore-parse-errors`=true, some users want to make his task stability and > know the dirty record to fix the upstream, too. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (FLINK-20795) add a parameter to decide whether or not print dirty record when `ignore-parse-errors` is true
[ https://issues.apache.org/jira/browse/FLINK-20795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255804#comment-17255804 ] xiaozilong edited comment on FLINK-20795 at 12/29/20, 4:33 AM: --- Hi [~zoucao], I think it's useful. Mayby we should control the printing frequency also. I want to work for this. was (Author: xiaozilong): Hi [~zoucao], I think is useful. Mayby we should control the printing frequency also. > add a parameter to decide whether or not print dirty record when > `ignore-parse-errors` is true > -- > > Key: FLINK-20795 > URL: https://issues.apache.org/jira/browse/FLINK-20795 > Project: Flink > Issue Type: Improvement > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) >Affects Versions: 1.13.0 >Reporter: zoucao >Priority: Major > > add a parameter to decide whether or not to print dirty data when > `ignore-parse-errors`=true, some users want to make his task stability and > know the dirty record to fix the upstream, too. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-20795) add a parameter to decide whether or not print dirty record when `ignore-parse-errors` is true
[ https://issues.apache.org/jira/browse/FLINK-20795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255804#comment-17255804 ] xiaozilong commented on FLINK-20795: Hi [~zoucao], I think is useful. Mayby we should control the printing frequency also. > add a parameter to decide whether or not print dirty record when > `ignore-parse-errors` is true > -- > > Key: FLINK-20795 > URL: https://issues.apache.org/jira/browse/FLINK-20795 > Project: Flink > Issue Type: Improvement > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) >Affects Versions: 1.13.0 >Reporter: zoucao >Priority: Major > > add a parameter to decide whether or not to print dirty data when > `ignore-parse-errors`=true, some users want to make his task stability and > know the dirty record to fix the upstream, too. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] V1ncentzzZ edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.
V1ncentzzZ edited a comment on pull request #14508: URL: https://github.com/apache/flink/pull/14508#issuecomment-751745476 @flinkbot run azure. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14493: [FLINK-20768][Connectors/ElasticSearch] Support routing field for Elasticsearch connector
flinkbot edited a comment on pull request #14493: URL: https://github.com/apache/flink/pull/14493#issuecomment-751222472 ## CI report: * 179ea826af4470c6c1d9a94da5f4270f16e81f40 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11326) * 877575f48fc2b602a34213c3b9d31e850fe5b130 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] curcur commented on a change in pull request #14460: [docs/javadoc][hotfix] Explicitly Document task cancellation timeout …
curcur commented on a change in pull request #14460: URL: https://github.com/apache/flink/pull/14460#discussion_r549560968 ## File path: docs/_includes/generated/all_taskmanager_section.html ## @@ -18,7 +18,7 @@ task.cancellation.timeout 18 Long -Timeout in milliseconds after which a task cancellation times out and leads to a fatal TaskManager error. A value of 0 deactivates the watch dog. +Timeout in milliseconds after which a task cancellation times out and leads to a fatal TaskManager error. A value of 0 deactivates the watch dog. Notice that a task cancellation is different from a task failure. So task cancellation timeout does not apply to task closing/clean-up caused by a task failure. Review comment: > > Notice that a task cancellation is different from a task failure. So task cancellation timeout does not apply to task closing/clean-up caused by a task failure > > -> > > > Notice that a task cancellation is different from both a task failure and a clean shutdown. So task cancellation timeout applies only if you manually cancel the job and does not apply to task closing/clean-up caused by a task failure or clean shutdown. > > ? Because as I understand, this notice applies as well for the clean shut down? 1. That's true in the sense that `cleanUpInvoke` is called both in the case of a task failure and a clean shutdown. But I thought `task-cancelation-timeout` naturally can not be considered as applied to a clean shutdown? Would that cause more confusion if we say that? The main confusion in general (and in the ticket FLINK-18983) is that people thought "task-cancelation-timeout" can be applied for "failed tasks" as well. 2. Task cancelation does not always happen "manually", it can happen when a task fails and caused the rest of the tasks canceled by JM. So I would suggest "task cancelation" instead of saying "manually cancel the job" This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
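The semantics under discussion (a watchdog armed only for task cancellation, escalating to a fatal error on timeout, and disabled by a value of 0) can be modeled in a few lines. This is a plain Python sketch with hypothetical names, not Flink's implementation:

```python
import threading

def cancel_with_watchdog(cancel_fn, timeout):
    """Sketch of a cancellation watchdog: run `cancel_fn` and report whether
    it finished within `timeout` seconds. In Flink, overrunning
    task.cancellation.timeout escalates to a fatal TaskManager error;
    a value of 0 deactivates the watchdog entirely."""
    if timeout == 0:
        cancel_fn()
        return True                  # watchdog disabled, never escalates
    worker = threading.Thread(target=cancel_fn, daemon=True)
    worker.start()
    worker.join(timeout)
    return not worker.is_alive()     # False => the watchdog would fire

fast = cancel_with_watchdog(lambda: None, timeout=1.0)                      # completes in time
slow = cancel_with_watchdog(lambda: threading.Event().wait(5), timeout=0.1) # overruns
print(fast, slow)
```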
[GitHub] [flink] curcur commented on a change in pull request #14460: [docs/javadoc][hotfix] Explicitly Document task cancellation timeout …
curcur commented on a change in pull request #14460: URL: https://github.com/apache/flink/pull/14460#discussion_r549560968 ## File path: docs/_includes/generated/all_taskmanager_section.html ## @@ -18,7 +18,7 @@ task.cancellation.timeout 18 Long -Timeout in milliseconds after which a task cancellation times out and leads to a fatal TaskManager error. A value of 0 deactivates the watch dog. +Timeout in milliseconds after which a task cancellation times out and leads to a fatal TaskManager error. A value of 0 deactivates the watch dog. Notice that a task cancellation is different from a task failure. So task cancellation timeout does not apply to task closing/clean-up caused by a task failure. Review comment: > > Notice that a task cancellation is different from a task failure. So task cancellation timeout does not apply to task closing/clean-up caused by a task failure > > -> > > > Notice that a task cancellation is different from both a task failure and a clean shutdown. So task cancellation timeout applies only if you manually cancel the job and does not apply to task closing/clean-up caused by a task failure or clean shutdown. > > ? Because as I understand, this notice applies as well for the clean shut down? 1. That's true in the sense that `cleanUpInvoke` is called both in the case of a task failure and a clean shutdown. But I thought `task-cancelation-timeout` naturally can not be considered as applied to a clean shutdown? Would that cause more confusion if we say that? The main confusion in general (and in the ticket FLINK-18983) is that people thought "task-cancelation-timeout" can be applied for "failed tasks" as well. 2. Task cancelation does not always happen "manually", it can happen when a task fails and caused the rest of the tasks canceled by JM. So I would say "task cancelation" instead of saying "manually cancel the job" This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] curcur commented on a change in pull request #14460: [docs/javadoc][hotfix] Explicitly Document task cancellation timeout …
curcur commented on a change in pull request #14460: URL: https://github.com/apache/flink/pull/14460#discussion_r549560968 ## File path: docs/_includes/generated/all_taskmanager_section.html ## @@ -18,7 +18,7 @@ task.cancellation.timeout 18 Long -Timeout in milliseconds after which a task cancellation times out and leads to a fatal TaskManager error. A value of 0 deactivates the watch dog. +Timeout in milliseconds after which a task cancellation times out and leads to a fatal TaskManager error. A value of 0 deactivates the watch dog. Notice that a task cancellation is different from a task failure. So task cancellation timeout does not apply to task closing/clean-up caused by a task failure. Review comment: > > Notice that a task cancellation is different from a task failure. So task cancellation timeout does not apply to task closing/clean-up caused by a task failure > > -> > > > Notice that a task cancellation is different from both a task failure and a clean shutdown. So task cancellation timeout applies only if you manually cancel the job and does not apply to task closing/clean-up caused by a task failure or clean shutdown. > > ? Because as I understand, this notice applies as well for the clean shut down? 1. That's true in the sense that `cleanUpInvoke` is called both in the case of a task failure and a clean shutdown. But I thought `task-cancelation-timeout` naturally can not be considered as applied to a clean shutdown? The main confusion in general (and in the ticket FLINK-18983) is that people thought "task-cancelation-timeout" can be applied for "failed tasks" as well. 2. Task cancelation does not always happen "manually", it can happen when a task fails and caused the rest of the tasks canceled by JM. So I would say "task cancelation" instead of saying "manually cancel the job" This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] curcur commented on a change in pull request #14460: [docs/javadoc][hotfix] Explicitly Document task cancellation timeout …
curcur commented on a change in pull request #14460: URL: https://github.com/apache/flink/pull/14460#discussion_r549560865 ## File path: docs/_includes/generated/all_taskmanager_section.html ## @@ -18,7 +18,7 @@ task.cancellation.timeout 18 Long -Timeout in milliseconds after which a task cancellation times out and leads to a fatal TaskManager error. A value of 0 deactivates the watch dog. +Timeout in milliseconds after which a task cancellation times out and leads to a fatal TaskManager error. A value of 0 deactivates the watch dog. Notice that a task cancellation is different from a task failure. So task cancellation timeout does not apply to task closing/clean-up caused by a task failure. Review comment: 1. That's true in the sense that `cleanUpInvoke` is called both in the case of a task failure and a clean shutdown. But I thought `task-cancelation-timeout` naturally can not be considered as applied to a clean shutdown? The main confusion in general (and in the ticket FLINK-18983) is that people thought "task-cancelation-timeout" can be applied for "failed tasks" as well. 2. Task cancelation does not always happen "manually", it can happen when a task fails and caused the rest of the tasks canceled by JM. So I would say "task cancelation" instead of saying "manually cancel the job" This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
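The distinction argued for in the review above can be sketched with a toy watchdog (all names here are illustrative, not Flink's internal API): the cancellation timeout is armed only on an explicit cancellation, never on the clean-up that follows a task failure.

```python
import threading
import time

# Toy model of the semantics discussed above: task.cancellation.timeout
# arms a watchdog only for an explicit cancellation; the clean-up that
# follows a task failure is never watched. Illustrative names only.
class Task:
    def __init__(self, cancellation_timeout=0.2):
        # a timeout of 0 deactivates the watchdog, like the real option
        self.cancellation_timeout = cancellation_timeout
        self.fatal_error = False
        self._watchdog = None

    def cancel(self):
        # explicit cancellation: arm the watchdog before cleaning up
        if self.cancellation_timeout > 0:
            self._watchdog = threading.Timer(
                self.cancellation_timeout, self._escalate)
            self._watchdog.start()

    def fail(self):
        # task failure: clean-up runs, but no watchdog is armed, so a
        # slow clean-up here can never escalate
        self.clean_up(duration=0.0)

    def clean_up(self, duration):
        time.sleep(duration)  # simulate closing resources
        if self._watchdog is not None:
            self._watchdog.cancel()

    def _escalate(self):
        # in Flink this would correspond to a fatal TaskManager error
        self.fatal_error = True
```

A cancelled task whose clean-up outlives the timeout trips the watchdog; a failed task with the same slow clean-up does not, which is exactly the wording the doc change tries to pin down.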
[jira] [Commented] (FLINK-20457) Fix the handling of timestamp in DataStream.from_collection
[ https://issues.apache.org/jira/browse/FLINK-20457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255796#comment-17255796 ] zhengyu.lou commented on FLINK-20457: - For example, currently `datetime.datetime(2013, 12, 5, 0, 3, 13, 122000, tzinfo=timezone('Asia/Chongqing'))` will be converted to `1386172633122000` and sent to the Java side. My task is to use Pickle to package the `datetime` object and send it to the Java side, where it is finally used to construct a `java.sql.Timestamp` object. Is my understanding correct? [~dian.fu] > Fix the handling of timestamp in DataStream.from_collection > --- > > Key: FLINK-20457 > URL: https://issues.apache.org/jira/browse/FLINK-20457 > Project: Flink > Issue Type: Bug > Components: API / Python >Affects Versions: 1.12.0 >Reporter: Dian Fu >Assignee: zhengyu.lou >Priority: Major > Fix For: 1.13.0, 1.12.1 > > > Currently, DataStream.from_collection firstly converts date/time/dateTime > objects to int at Python side and then construct the corresponding > Date/Time/Timestamp object at Java side. It will lose the timezone > information. Pickle could handle date/time/datetime properly and the > conversion could be avoided. -- This message was sent by Atlassian Jira (v8.3.4#803005)
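A minimal sketch of the loss being discussed, using a fixed UTC+8 offset from the standard library as a stand-in for pytz's `timezone('Asia/Chongqing')`: reducing a tz-aware datetime to epoch microseconds discards the tzinfo, while a pickle round-trip keeps it.

```python
import pickle
from datetime import datetime, timedelta, timezone

# tz-aware value similar to the one in the comment above; a plain UTC+8
# offset stands in for pytz's timezone('Asia/Chongqing')
tz = timezone(timedelta(hours=8))
dt = datetime(2013, 12, 5, 0, 3, 13, 122000, tzinfo=tz)

# old path: fold the value into epoch microseconds -- the tzinfo is gone
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
as_micros = (dt - epoch) // timedelta(microseconds=1)

# proposed path: pickle the object -- the tzinfo survives the round trip
restored = pickle.loads(pickle.dumps(dt))
```

The integer carries only an instant, so the Java side cannot reconstruct the original zone from it; the pickled object arrives with its `tzinfo` intact.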
[GitHub] [flink] flinkbot edited a comment on pull request #14514: [FLINK-20793][core][tests] Fix the NamesTest due to code style refactor
flinkbot edited a comment on pull request #14514: URL: https://github.com/apache/flink/pull/14514#issuecomment-751931069 ## CI report: * 0a7cf741cfb092143e2dafcb8587b2294efc5dda UNKNOWN * a4538c71a6fdd097ad68b374b781b67daad49f83 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11421) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-20795) add a parameter to decide whether or not print dirty record when `ignore-parse-errors` is true
zoucao created FLINK-20795: -- Summary: add a parameter to decide whether or not print dirty record when `ignore-parse-errors` is true Key: FLINK-20795 URL: https://issues.apache.org/jira/browse/FLINK-20795 Project: Flink Issue Type: Improvement Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) Affects Versions: 1.13.0 Reporter: zoucao Add a parameter to decide whether or not to print dirty records when `ignore-parse-errors`=true: some users want to keep their task stable while still seeing the dirty records, so they can fix the upstream data. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20789) Add a metric named `deserializeFaildCount` for kafka connectors
[ https://issues.apache.org/jira/browse/FLINK-20789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xiaozilong updated FLINK-20789: --- Description: Counts the number of deserialization failures when the option `ignore-parse-errors` is enabled. was:Counts the number of deserialization failures when the option `ignore-parse-errors` is enabled for kafka connectors. > Add a metric named `deserializeFaildCount` for kafka connectors > --- > > Key: FLINK-20789 > URL: https://issues.apache.org/jira/browse/FLINK-20789 > Project: Flink > Issue Type: Improvement > Components: Runtime / Metrics >Affects Versions: 1.12.0 >Reporter: xiaozilong >Priority: Major > > Counts the number of deserialization failures when the option `ignore-parse-errors` > is enabled. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
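A hypothetical sketch of the metric this ticket proposes (plain Python, not Flink's actual DeserializationSchema API): with `ignore-parse-errors` enabled, malformed records are dropped silently today; a counter makes those drops observable.

```python
import json

# Hypothetical wrapper (illustrative names): counts records that fail to
# deserialize instead of losing them silently when errors are ignored.
class CountingJsonDeserializer:
    def __init__(self, ignore_parse_errors=True):
        self.ignore_parse_errors = ignore_parse_errors
        self.deserialize_failed_count = 0  # the metric the ticket asks for

    def deserialize(self, raw):
        try:
            return json.loads(raw)
        except ValueError:
            if not self.ignore_parse_errors:
                raise
            self.deserialize_failed_count += 1
            return None  # dirty record dropped, but the failure is counted
```

The `except` branch is also where a "print dirty record" switch like the one proposed in FLINK-20795 would naturally hook in.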
[GitHub] [flink] flinkbot edited a comment on pull request #14514: [FLINK-20793][core][tests] Fix the NamesTest due to code style refactor
flinkbot edited a comment on pull request #14514: URL: https://github.com/apache/flink/pull/14514#issuecomment-751931069 ## CI report: * 0a7cf741cfb092143e2dafcb8587b2294efc5dda UNKNOWN * a4538c71a6fdd097ad68b374b781b67daad49f83 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.
flinkbot edited a comment on pull request #14508: URL: https://github.com/apache/flink/pull/14508#issuecomment-751734163 ## CI report: * f1332b021d33a6e4681b0a08ad1c5b58f153c417 UNKNOWN * 8fe82a5ff01ab13c7ba704e3ced181c3a6e4fc15 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11420) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14503: [FLINK-20693][table-planner-blink][python] Port BatchExecPythonCorrelate and StreamExecPythonCorrelate to Java
flinkbot edited a comment on pull request #14503: URL: https://github.com/apache/flink/pull/14503#issuecomment-751622207 ## CI report: * 92d6c706863d0deb3dcfd5c8687f2670a72b7f0e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11418) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-18019) The configuration specified in TableConfig may not take effect in certain cases
[ https://issues.apache.org/jira/browse/FLINK-18019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255788#comment-17255788 ] Ting Sun commented on FLINK-18019: -- [~dian.fu] Hello, I'm wondering whether the issue is still valid. If yes, I'd like to work on it. > The configuration specified in TableConfig may not take effect in certain > cases > --- > > Key: FLINK-18019 > URL: https://issues.apache.org/jira/browse/FLINK-18019 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Legacy Planner, Table SQL / Planner >Affects Versions: 1.10.0, 1.11.0 >Reporter: Dian Fu >Priority: Major > > Currently, if the following configuration is configured in flink-conf.yaml: > {code:java} > state.backend: rocksdb > state.checkpoints.dir: file:///tmp/flink-checkpoints > {code} > and the following configuration is configured via TableConfig: > {code:java} > tableConfig.getConfiguration().setString("state.backend.rocksdb.memory.fixed-per-slot", > "200MB") > tableConfig.getConfiguration().setString("taskmanager.memory.task.off-heap.size", > "200MB") > {code} > and users then submit the job via CliFrontend, the configuration set via > TableConfig will not take effect. > Intuitively, the user-specified configuration via > TableConfig (which has higher priority) and the configuration specified via > flink-conf.yaml should together determine the configuration of a job. However, this > doesn't hold in all cases. > The root cause is that only the configuration specified in TableConfig is > passed to *StreamExecutionEnvironment* during translation to plan. In the > above case, since *state.backend* is not specified in TableConfig, the > configuration *state.backend.rocksdb.memory.fixed-per-slot* will not take > effect. 
Please note that in the above example, the state backend actually used > will be RocksDB, but without the configuration > *state.backend.rocksdb.memory.fixed-per-slot* and > *taskmanager.memory.task.off-heap.size*. -- This message was sent by Atlassian Jira (v8.3.4#803005)
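The precedence the reporter expects can be modeled as a layered lookup: job-level TableConfig entries shadow cluster-level flink-conf.yaml entries, and everything else falls through. The option keys below are the real ones from the ticket; the merge itself is a plain-Python stand-in for the behaviour Flink should exhibit, not Flink code.

```python
from collections import ChainMap

# cluster-wide defaults from flink-conf.yaml (per the ticket)
flink_conf = {
    "state.backend": "rocksdb",
    "state.checkpoints.dir": "file:///tmp/flink-checkpoints",
}

# job-level settings made through TableConfig (per the ticket)
table_conf = {
    "state.backend.rocksdb.memory.fixed-per-slot": "200MB",
    "taskmanager.memory.task.off-heap.size": "200MB",
}

# expected behaviour: first match wins, so TableConfig shadows
# flink-conf.yaml, and unshadowed cluster keys still apply
effective = ChainMap(table_conf, flink_conf)
```

The bug is precisely that in some submission paths only the `table_conf` layer reaches the StreamExecutionEnvironment, so the `flink_conf` layer's context (here, that RocksDB is the backend) is decided without the job-level overrides.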
[GitHub] [flink] flinkbot commented on pull request #14514: [FLINK-20793][core][tests] Fix the NamesTest due to code style refactor
flinkbot commented on pull request #14514: URL: https://github.com/apache/flink/pull/14514#issuecomment-751931069 ## CI report: * 0a7cf741cfb092143e2dafcb8587b2294efc5dda UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14503: [FLINK-20693][table-planner-blink][python] Port BatchExecPythonCorrelate and StreamExecPythonCorrelate to Java
flinkbot edited a comment on pull request #14503: URL: https://github.com/apache/flink/pull/14503#issuecomment-751622207 ## CI report: * 2ea58420e5d5ac6d1930f8b847c46a60a2043b38 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11374) * 92d6c706863d0deb3dcfd5c8687f2670a72b7f0e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11418) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] V1ncentzzZ edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.
V1ncentzzZ edited a comment on pull request #14508: URL: https://github.com/apache/flink/pull/14508#issuecomment-751745476 @flinkbot run azure This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-20793) Fix NamesTest due to code style refactor
[ https://issues.apache.org/jira/browse/FLINK-20793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dian Fu updated FLINK-20793: Fix Version/s: 1.12.2 1.11.4 1.13.0 > Fix NamesTest due to code style refactor > > > Key: FLINK-20793 > URL: https://issues.apache.org/jira/browse/FLINK-20793 > Project: Flink > Issue Type: Bug > Components: API / Core >Affects Versions: 1.11.0, 1.12.0, 1.13.0 >Reporter: Huang Xingbo >Assignee: Huang Xingbo >Priority: Major > Labels: pull-request-available, test-stability > Fix For: 1.13.0, 1.11.4, 1.12.2 > > > Due to the [FLINK-20651|https://issues.apache.org/jira/browse/FLINK-20651], > the NameTest failed > [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11403=results] > I will fix it asap -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (FLINK-20793) Fix NamesTest due to code style refactor
[ https://issues.apache.org/jira/browse/FLINK-20793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dian Fu reassigned FLINK-20793: --- Assignee: Huang Xingbo > Fix NamesTest due to code style refactor > > > Key: FLINK-20793 > URL: https://issues.apache.org/jira/browse/FLINK-20793 > Project: Flink > Issue Type: Bug > Components: API / Core >Affects Versions: 1.11.0, 1.12.0, 1.13.0 >Reporter: Huang Xingbo >Assignee: Huang Xingbo >Priority: Major > Labels: pull-request-available, test-stability > > Due to the [FLINK-20651|https://issues.apache.org/jira/browse/FLINK-20651], > the NameTest failed > [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11403=results] > I will fix it asap -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot commented on pull request #14514: [FLINK-20793][core] Fix the NamesTest due to code style refactor
flinkbot commented on pull request #14514: URL: https://github.com/apache/flink/pull/14514#issuecomment-751929562 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 0a7cf741cfb092143e2dafcb8587b2294efc5dda (Tue Dec 29 03:02:36 UTC 2020) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! * **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-20793).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. 
For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-20793) Fix NamesTest due to code style refactor
[ https://issues.apache.org/jira/browse/FLINK-20793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-20793: --- Labels: pull-request-available test-stability (was: test-stability) > Fix NamesTest due to code style refactor > > > Key: FLINK-20793 > URL: https://issues.apache.org/jira/browse/FLINK-20793 > Project: Flink > Issue Type: Bug > Components: API / Core >Affects Versions: 1.11.0, 1.12.0, 1.13.0 >Reporter: Huang Xingbo >Priority: Major > Labels: pull-request-available, test-stability > > Due to the [FLINK-20651|https://issues.apache.org/jira/browse/FLINK-20651], > the NameTest failed > [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11403=results] > I will fix it asap -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] HuangXingBo opened a new pull request #14514: [FLINK-20793][core] Fix the NamesTest due to code style refactor
HuangXingBo opened a new pull request #14514: URL: https://github.com/apache/flink/pull/14514 ## What is the purpose of the change *This pull request will Fix the NamesTest due to code style refactor* ## Brief change log - *Fix the NamesTest due to code style refactor* ## Verifying this change - *Original Tests* ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] HuangXingBo closed pull request #14513: [hotfix] Fix the NamesTest due to code style refactor
HuangXingBo closed pull request #14513: URL: https://github.com/apache/flink/pull/14513 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #14513: [hotfix] Fix the NamesTest due to code style refactor
flinkbot commented on pull request #14513: URL: https://github.com/apache/flink/pull/14513#issuecomment-751928383 ## CI report: * 9f854369bee8f8e62999d4be28cc6c027172dd26 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14503: [FLINK-20693][table-planner-blink][python] Port BatchExecPythonCorrelate and StreamExecPythonCorrelate to Java
flinkbot edited a comment on pull request #14503: URL: https://github.com/apache/flink/pull/14503#issuecomment-751622207 ## CI report: * 2ea58420e5d5ac6d1930f8b847c46a60a2043b38 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11374) * 92d6c706863d0deb3dcfd5c8687f2670a72b7f0e UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.
flinkbot edited a comment on pull request #14508: URL: https://github.com/apache/flink/pull/14508#issuecomment-751734163 ## CI report: * f1332b021d33a6e4681b0a08ad1c5b58f153c417 UNKNOWN * 8b19d3eb2fc61752ac94cb2b6c50b3306ab68730 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11417) * 8fe82a5ff01ab13c7ba704e3ced181c3a6e4fc15 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-20794) Support to select distinct columns in the Table API
Dian Fu created FLINK-20794: --- Summary: Support to select distinct columns in the Table API Key: FLINK-20794 URL: https://issues.apache.org/jira/browse/FLINK-20794 Project: Flink Issue Type: Sub-task Components: Table SQL / API Reporter: Dian Fu Fix For: 1.13.0 Currently, there is no corresponding functionality in the Table API for the following SQL: {code:java} SELECT DISTINCT users FROM Orders {code} For example, for the following job: {code:java} table.select("distinct a") {code} It will throw the following exception: {code:java} org.apache.flink.table.api.ExpressionParserException: Could not parse expression at column 10: ',' expected but 'a' found
distinct a
         ^
 at org.apache.flink.table.expressions.PlannerExpressionParserImpl$.throwError(PlannerExpressionParserImpl.scala:726)
 at org.apache.flink.table.expressions.PlannerExpressionParserImpl$.parseExpressionList(PlannerExpressionParserImpl.scala:710)
 at org.apache.flink.table.expressions.PlannerExpressionParserImpl.parseExpressionList(PlannerExpressionParserImpl.scala:47)
 at org.apache.flink.table.expressions.ExpressionParser.parseExpressionList(ExpressionParser.java:40)
 at org.apache.flink.table.api.internal.TableImpl.select(TableImpl.java:121){code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
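Until the expression parser supports it, `SELECT DISTINCT` is semantically a group-by on the selected columns with no aggregates. A plain-Python model of the expected result (the sample rows are made up for illustration):

```python
# rows of the hypothetical Orders table from the ticket's SQL example
orders = [{"users": "alice"}, {"users": "bob"}, {"users": "alice"}]

# SELECT DISTINCT users FROM Orders: deduplicate while keeping the
# first-seen order; dict.fromkeys preserves insertion order and drops
# repeated keys
distinct_users = list(dict.fromkeys(row["users"] for row in orders))
```

In the Table API itself, a common workaround with the same semantics is grouping by the column and selecting only the grouping key.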
[jira] [Created] (FLINK-20793) Fix NamesTest due to code style refactor
Huang Xingbo created FLINK-20793: Summary: Fix NamesTest due to code style refactor Key: FLINK-20793 URL: https://issues.apache.org/jira/browse/FLINK-20793 Project: Flink Issue Type: Bug Components: API / Core Affects Versions: 1.12.0, 1.11.0, 1.13.0 Reporter: Huang Xingbo Due to the [FLINK-20651|https://issues.apache.org/jira/browse/FLINK-20651], the NameTest failed [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11403=results] I will fix it asap -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot commented on pull request #14513: [hotfix] Fix the NamesTest due to code style refactor
flinkbot commented on pull request #14513: URL: https://github.com/apache/flink/pull/14513#issuecomment-751926896 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 9f854369bee8f8e62999d4be28cc6c027172dd26 (Tue Dec 29 02:46:38 UTC 2020) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-20457) Fix the handling of timestamp in DataStream.from_collection
[ https://issues.apache.org/jira/browse/FLINK-20457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255779#comment-17255779 ] Dian Fu commented on FLINK-20457: - Per my understanding, the timezone info should only be valid for timestamp type. PS: It would be great if you could describe a bit on how you want to fix this issue. > Fix the handling of timestamp in DataStream.from_collection > --- > > Key: FLINK-20457 > URL: https://issues.apache.org/jira/browse/FLINK-20457 > Project: Flink > Issue Type: Bug > Components: API / Python >Affects Versions: 1.12.0 >Reporter: Dian Fu >Assignee: zhengyu.lou >Priority: Major > Fix For: 1.13.0, 1.12.1 > > > Currently, DataStream.from_collection firstly converts date/time/dateTime > objects to int at Python side and then construct the corresponding > Date/Time/Timestamp object at Java side. It will lose the timezone > information. Pickle could handle date/time/datetime properly and the > conversion could be avoided. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] HuangXingBo opened a new pull request #14513: [hotfix] Fix the NamesTest due to code style refactor
HuangXingBo opened a new pull request #14513: URL: https://github.com/apache/flink/pull/14513 ## What is the purpose of the change *This pull request will Fix the NamesTest due to code style refactor* ## Brief change log - *Fix the NamesTest due to code style refactor* ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] zhisheng17 closed pull request #14495: [hotfix][runtime]fix typo in HistoryServerUtils class
zhisheng17 closed pull request #14495: URL: https://github.com/apache/flink/pull/14495 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] zhisheng17 commented on pull request #14495: [hotfix][runtime]fix typo in HistoryServerUtils class
zhisheng17 commented on pull request #14495: URL: https://github.com/apache/flink/pull/14495#issuecomment-751926256 thanks @tillrohrmann , I get it, I will close this PR. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-20457) Fix the handling of timestamp in DataStream.from_collection
[ https://issues.apache.org/jira/browse/FLINK-20457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255775#comment-17255775 ] zhengyu.lou commented on FLINK-20457: - re: [~dian.fu] It seems that in from_collection all time types in Python (Date, Time, Timestamp) do not retain time zone information; do I need to modify them all? > Fix the handling of timestamp in DataStream.from_collection > --- > > Key: FLINK-20457 > URL: https://issues.apache.org/jira/browse/FLINK-20457 > Project: Flink > Issue Type: Bug > Components: API / Python >Affects Versions: 1.12.0 >Reporter: Dian Fu >Assignee: zhengyu.lou >Priority: Major > Fix For: 1.13.0, 1.12.1 > > > Currently, DataStream.from_collection firstly converts date/time/dateTime > objects to int at Python side and then construct the corresponding > Date/Time/Timestamp object at Java side. It will lose the timezone > information. Pickle could handle date/time/datetime properly and the > conversion could be avoided. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] V1ncentzzZ edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.
V1ncentzzZ edited a comment on pull request #14508: URL: https://github.com/apache/flink/pull/14508#issuecomment-751745476 @flinkbot run azure.
[jira] [Updated] (FLINK-20787) Improve the Table API to make it usable
[ https://issues.apache.org/jira/browse/FLINK-20787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dian Fu updated FLINK-20787: Description: Currently, there are quite a few bugs in the Table API which make it difficult to use. Users will encounter all kinds of problems when using the Table API and have to find various workarounds from time to time. This is an umbrella JIRA for all the issues specific to the Table API, aiming to make the Table API smooth to use. (was: Currently, there are quite a few bugs in the Table API which makes it difficult to use. Users will encounter all kinds of problems when using Table API and have to fall back to SQL from time to time. This is an umbrella JIRA for all the issues specific in the Table API.) > Improve the Table API to make it usable > --- > > Key: FLINK-20787 > URL: https://issues.apache.org/jira/browse/FLINK-20787 > Project: Flink > Issue Type: Improvement > Components: Table SQL / API >Reporter: Dian Fu >Priority: Major > Fix For: 1.13.0 > > > Currently, there are quite a few bugs in the Table API which make it difficult to use. Users will encounter all kinds of problems when using the Table API and have to find various workarounds from time to time. This is an umbrella JIRA for all the issues specific to the Table API, aiming to make the Table API smooth to use.
[GitHub] [flink] flinkbot edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.
flinkbot edited a comment on pull request #14508: URL: https://github.com/apache/flink/pull/14508#issuecomment-751734163 ## CI report: * f1332b021d33a6e4681b0a08ad1c5b58f153c417 UNKNOWN * 514d2afb5520388488b7189f18ee1a97d39ea386 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11398) * 8b19d3eb2fc61752ac94cb2b6c50b3306ab68730 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #14451: [FLINK-20704][table-planner] Some rel data type does not implement th…
flinkbot edited a comment on pull request #14451: URL: https://github.com/apache/flink/pull/14451#issuecomment-749321360 ## CI report: * d9c3ee87fa55ee7a1f7d4848c8a7dc542f2011a6 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11416) Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11377)
[jira] [Commented] (FLINK-20789) Add a metric named `deserializeFailedCount` for kafka connectors
[ https://issues.apache.org/jira/browse/FLINK-20789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255766#comment-17255766 ] xiaozilong commented on FLINK-20789: cc [~jark] > Add a metric named `deserializeFailedCount` for kafka connectors > --- > > Key: FLINK-20789 > URL: https://issues.apache.org/jira/browse/FLINK-20789 > Project: Flink > Issue Type: Improvement > Components: Runtime / Metrics >Affects Versions: 1.12.0 >Reporter: xiaozilong >Priority: Major > > Counts the number of deserialization failures when the option `ignore-parse-errors` > is enabled for kafka connectors.
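The proposed metric boils down to a counter incremented each time a record is dropped under `ignore-parse-errors`. A rough standalone sketch of the idea; `CountingDeserializer` and `deserialize_failed_count` are illustrative names, not Flink API:

```python
import json

class CountingDeserializer:
    """Wraps a parse step; skips bad records and counts the failures
    when ignore_parse_errors is enabled (otherwise re-raises)."""

    def __init__(self, ignore_parse_errors=True):
        self.ignore_parse_errors = ignore_parse_errors
        self.deserialize_failed_count = 0  # the metric being proposed

    def deserialize(self, raw):
        try:
            return json.loads(raw)
        except ValueError:
            if not self.ignore_parse_errors:
                raise
            self.deserialize_failed_count += 1
            return None  # record is still dropped, but now observable

d = CountingDeserializer()
records = [d.deserialize(b) for b in ['{"a": 1}', 'not-json', '{"b": 2}']]
print(d.deserialize_failed_count)  # 1
```

Today a malformed record vanishes silently; exposing the counter as a registered metric would let operators alert on a rising failure rate without turning the option off.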
[GitHub] [flink] flinkbot edited a comment on pull request #14451: [FLINK-20704][table-planner] Some rel data type does not implement th…
flinkbot edited a comment on pull request #14451: URL: https://github.com/apache/flink/pull/14451#issuecomment-749321360 ## CI report: * d9c3ee87fa55ee7a1f7d4848c8a7dc542f2011a6 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11377) Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11416)
[GitHub] [flink] godfreyhe commented on pull request #14451: [FLINK-20704][table-planner] Some rel data type does not implement th…
godfreyhe commented on pull request #14451: URL: https://github.com/apache/flink/pull/14451#issuecomment-751915927 @flinkbot run azure
[GitHub] [flink] godfreyhe commented on a change in pull request #14451: [FLINK-20704][table-planner] Some rel data type does not implement th…
godfreyhe commented on a change in pull request #14451: URL: https://github.com/apache/flink/pull/14451#discussion_r549537552 ## File path: flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/schema/MapRelDataType.scala ## @@ -45,5 +45,4 @@ class MapRelDataType( override def hashCode(): Int = { Objects.hashCode(keyType, valueType) } - Review comment: related issue: https://issues.apache.org/jira/browse/FLINK-20785
[GitHub] [flink] flinkbot edited a comment on pull request #14509: [FLINK-20654][checkpointing] Decline checkpoints until restored channel state is consumed
flinkbot edited a comment on pull request #14509: URL: https://github.com/apache/flink/pull/14509#issuecomment-751746750 ## CI report: * ad643bad232902ff5e2f5878dfd43776e2142ef1 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11414)
[GitHub] [flink] flinkbot edited a comment on pull request #14512: FLINK-20359 Added Owner Reference to Job Manager in native kubernetes
flinkbot edited a comment on pull request #14512: URL: https://github.com/apache/flink/pull/14512#issuecomment-751906693 ## CI report: * 86f6502cf9980ff8d7fd15fe058588eb9f5004cf Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11415)
[GitHub] [flink] flinkbot commented on pull request #14512: FLINK-20359 Added Owner Reference to Job Manager in native kubernetes
flinkbot commented on pull request #14512: URL: https://github.com/apache/flink/pull/14512#issuecomment-751906693 ## CI report: * 86f6502cf9980ff8d7fd15fe058588eb9f5004cf UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #14509: [FLINK-20654][checkpointing] Decline checkpoints until restored channel state is consumed
flinkbot edited a comment on pull request #14509: URL: https://github.com/apache/flink/pull/14509#issuecomment-751746750 ## CI report: * 89cc24da926665e094006c6a825429c1efb7697c Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11412) * ad643bad232902ff5e2f5878dfd43776e2142ef1 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11414)
[GitHub] [flink] flinkbot commented on pull request #14512: FLINK-20359 Added Owner Reference to Job Manager in native kubernetes
flinkbot commented on pull request #14512: URL: https://github.com/apache/flink/pull/14512#issuecomment-751903408 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 86f6502cf9980ff8d7fd15fe058588eb9f5004cf (Tue Dec 29 00:19:59 UTC 2020) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] blublinsky commented on pull request #14281: FLINK-20359 Added Owner Reference to Job Manager in native kubernetes setup
blublinsky commented on pull request #14281: URL: https://github.com/apache/flink/pull/14281#issuecomment-751903397 @wangyang0918 Redid the implementation based on your request. Had to move it to https://github.com/apache/flink/pull/14512.
[GitHub] [flink] blublinsky opened a new pull request #14512: FLINK-20359 Added Owner Reference to Job Manager in native kubernetes
blublinsky opened a new pull request #14512: URL: https://github.com/apache/flink/pull/14512 ## What is the purpose of the change A Flink deployment is often part of a larger application. As a result, synchronized management, i.e. cleaning up Flink resources when the main application is deleted, is important. In Kubernetes, a common approach to such cleanup is the use of owner references (https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/). This PR adds owner reference support to the Flink JobManager. ## Brief change log - Add configuration for the owner reference - Add Owner manager resource - Add Owner manager support to KubernetesJobManagerParameters - Update the JobManager factory to process owner references - Update the JobManager factory unit test ## Verifying this change This change added tests and can be verified as follows: Extended InitJobManagerDecoratorTest to validate owner reference support. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: yes - The serializers: no - The runtime per-record code paths (performance sensitive): no - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: yes - The S3 file system connector: no ## Documentation - Does this pull request introduce a new feature? (yes / no) - If yes, how is the feature documented? JavaDocs
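For background, the cascading deletion this PR relies on is driven entirely by the `ownerReferences` entry in a resource's metadata: when the owner object is deleted, Kubernetes garbage-collects the dependents. A hedged sketch of the structure such a decorator would attach, written as plain Python dicts mirroring the Kubernetes API shape; the field names follow the OwnerReference spec, while the resource names and UID are made up for illustration:

```python
def build_owner_reference(api_version, kind, name, uid,
                          controller=True, block_owner_deletion=True):
    """Build an ownerReferences entry; Kubernetes uses it for cascading
    deletion of dependents when the owner object is deleted."""
    return {
        "apiVersion": api_version,
        "kind": kind,
        "name": name,
        "uid": uid,
        "controller": controller,
        "blockOwnerDeletion": block_owner_deletion,
    }

# Attach the owning application to the JobManager deployment metadata
# (illustrative names, not what the PR's decorator literally produces).
jm_metadata = {
    "name": "flink-jobmanager",
    "namespace": "default",
    "ownerReferences": [
        build_owner_reference("apps/v1", "Deployment",
                              "my-main-application",
                              "5c3f5f3e-0000-0000-0000-000000000000"),
    ],
}
print(jm_metadata["ownerReferences"][0]["kind"])  # Deployment
```

Deleting `my-main-application` would then remove the JobManager deployment automatically, which is exactly the synchronized cleanup the PR description motivates.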
[GitHub] [flink] flinkbot edited a comment on pull request #14511: [FLINK-20792][build] Allow shorthand calls to spotless
flinkbot edited a comment on pull request #14511: URL: https://github.com/apache/flink/pull/14511#issuecomment-751833305 ## CI report: * c041a8acda3fb68859a8ecc8a2901dfcb3ce7c2d Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11407)
[GitHub] [flink] flinkbot edited a comment on pull request #14509: [FLINK-20654][checkpointing] Decline checkpoints until restored channel state is consumed
flinkbot edited a comment on pull request #14509: URL: https://github.com/apache/flink/pull/14509#issuecomment-751746750 ## CI report: * 692de7c951bd21f3841560cc351c3aae3f33719c Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11408) * 89cc24da926665e094006c6a825429c1efb7697c Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11412) * ad643bad232902ff5e2f5878dfd43776e2142ef1 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11414)
[GitHub] [flink] flinkbot edited a comment on pull request #14509: [FLINK-20654][checkpointing] Decline checkpoints until restored channel state is consumed
flinkbot edited a comment on pull request #14509: URL: https://github.com/apache/flink/pull/14509#issuecomment-751746750 ## CI report: * 692de7c951bd21f3841560cc351c3aae3f33719c Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11408) * 89cc24da926665e094006c6a825429c1efb7697c Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11412) * ad643bad232902ff5e2f5878dfd43776e2142ef1 UNKNOWN
[GitHub] [flink] sv3ndk closed pull request #14494: [hotfix][docs] uses lambda in Learn Flink example instead of FilterFunction
sv3ndk closed pull request #14494: URL: https://github.com/apache/flink/pull/14494