[jira] [Created] (FLINK-14361) Decorrelate subquery fails with multiple correlates in a project
Jingsong Lee created FLINK-14361:
------------------------------------
Summary: Decorrelate subquery fails with multiple correlates in a project
Key: FLINK-14361
URL: https://issues.apache.org/jira/browse/FLINK-14361
Project: Flink
Issue Type: Bug
Components: Table SQL / Planner
Reporter: Jingsong Lee
Fix For: 1.10.0

Running this SQL on the Blink planner:
{code:java}
SELECT (SELECT SUM(l.c) FROM l WHERE l.a = r.a AND l.b = r.b) FROM r
{code}
throws an exception: unexpected correlate variable $cor. But:
{code:java}
SELECT (SELECT SUM(l.c) FROM l WHERE l.a = r.a) FROM r
{code}
works; a single correlate in the project is handled correctly.
[GitHub] [flink] WeiZhong94 commented on issue #9865: [FLINK-14212][python] Support no-argument Python UDFs.
WeiZhong94 commented on issue #9865: [FLINK-14212][python] Support no-argument Python UDFs. URL: https://github.com/apache/flink/pull/9865#issuecomment-540416014

@dianfu Thanks for your comments! I have addressed them in the latest commit; please take a look.
[jira] [Created] (FLINK-14360) Flink on YARN should support obtaining delegation tokens for multiple HDFS namespaces
Shen Yinjie created FLINK-14360:
------------------------------------
Summary: Flink on YARN should support obtaining delegation tokens for multiple HDFS namespaces
Key: FLINK-14360
URL: https://issues.apache.org/jira/browse/FLINK-14360
Project: Flink
Issue Type: Improvement
Reporter: Shen Yinjie

When deploying Flink on YARN with multiple HDFS clusters or HDFS federation, Flink needs to obtain delegation tokens for all the namespaces before starting the ApplicationMaster.
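A minimal sketch of the idea, assuming Hadoop's FileSystem#addDelegationTokens API; the namespace URIs and the "yarn" renewer below are illustrative placeholders, not part of the actual proposal:

{code:java}
import java.net.URI;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;

public class MultiNamespaceTokenFetcher {

    /** Collects one delegation token per namespace into a single Credentials object. */
    public static Credentials obtainTokens(Configuration conf) throws Exception {
        // Illustrative URIs; a real implementation would derive these from the
        // configured nameservices instead of hard-coding them.
        List<String> namespaces = Arrays.asList("hdfs://ns1", "hdfs://ns2");
        Credentials credentials = new Credentials();
        for (String ns : namespaces) {
            FileSystem fs = FileSystem.get(URI.create(ns), conf);
            // Fetches a delegation token for this namespace into the shared container.
            fs.addDelegationTokens("yarn", credentials);
        }
        // The merged credentials would then be attached to the AM container launch context.
        return credentials;
    }
}
{code}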
[GitHub] [flink] WeiZhong94 commented on a change in pull request #9865: [FLINK-14212][python] Support no-argument Python UDFs.
WeiZhong94 commented on a change in pull request #9865: [FLINK-14212][python] Support no-argument Python UDFs. URL: https://github.com/apache/flink/pull/9865#discussion_r54648

## File path: flink-python/pyflink/table/tests/test_udf.py
## @@ -204,6 +204,26 @@ def eval(self, col):
         self.t_env.register_function(
             "non-callable-udf", udf(Plus(), DataTypes.BIGINT(), DataTypes.BIGINT()))

+    def test_no_argument_deterministic_udf(self):
+        @udf(input_types=[], result_type=DataTypes.BIGINT())
+        def one():
+            return 1
+
+        self.t_env.register_function("one", one)
+        self.t_env.register_function("add", add)
+
+        table_sink = source_sink_utils.TestAppendSink(
+            ['a', 'b'], [DataTypes.BIGINT(), DataTypes.BIGINT()])
+        self.t_env.register_table_sink("Results", table_sink)
+
+        t = self.t_env.from_elements([(1, 2), (2, 5), (3, 1)], ['a', 'b'])
+        t.select("one(), add(a, b)") \

Review comment: My initial thought was to use the "add" function to test whether the no-argument UDF can work together with normal UDFs. It seems such worries were unnecessary, so I have removed it in the latest commit.
[GitHub] [flink] flinkbot edited a comment on issue #9851: [FLINK-14341]Fix flink-python builds failure: no such option: --prefix
flinkbot edited a comment on issue #9851: [FLINK-14341]Fix flink-python builds failure: no such option: --prefix URL: https://github.com/apache/flink/pull/9851#issuecomment-539365871

## CI report:

* 1007e8b66d42b613ceafb86457913381a4e27b0f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/130858251)
* 1c7b9fe1c90079cea937afb3ac91ab2cfc79f7d1 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131064427)
* 6136ed7f701fe313f08af37e1299897f98d9d932 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131120569)
* df5f53e00dbda06159b59abe5fb3ca1e4fdf2acc : UNKNOWN
[GitHub] [flink] flinkbot edited a comment on issue #9818: [FLINK-14289][runtime] Remove Optional fields from RecordWriter relevant classes
flinkbot edited a comment on issue #9818: [FLINK-14289][runtime] Remove Optional fields from RecordWriter relevant classes URL: https://github.com/apache/flink/pull/9818#issuecomment-536416616

## CI report:

* 519051016f02b8257eb1e3d5834f36c315816266 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129635061)
* d7d830c823872df74a0d3013aa2557f6058939cf : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131260214)
[GitHub] [flink] flinkbot edited a comment on issue #9542: [FLINK-13873][metrics] Change the column family as tags for influxdb …
flinkbot edited a comment on issue #9542: [FLINK-13873][metrics] Change the column family as tags for influxdb … URL: https://github.com/apache/flink/pull/9542#issuecomment-525276537

## CI report:

* e8636926351f3d406962dcadba275e20e49aff39 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/124736151)
* d23ea97e8419bbacea0698b3ba82a459d940cf38 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/128606376)
* 05977cd49eb306d768668be4e8cb31034343df02 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128606947)
* 057d453e5e00656bcfcb87d1d69172f614b2d11f : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/128681895)
* 5001042b5fc5202da14c06e2d21faf1427f50a66 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128683546)
* 3a1e39e889daf0fca82c54982911ded35df6cb77 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128692102)
* 2745a3cb02e601a1a26360e9d6b3f0af5428c66a : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129027621)
* e108aa24e739d6f48f63efc560c3fc049b509860 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131172797)
* 24a270d468dea677379713d5cf402ea453d9f222 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131246042)
* acf1c2b9add8c3b903a8485ed41c9f0b18d97729 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131260202)
[GitHub] [flink] dianfu commented on issue #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner
dianfu commented on issue #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner. URL: https://github.com/apache/flink/pull/9874#issuecomment-540397643

@flinkbot attention @godfreyhe @twalthr
[GitHub] [flink] dianfu commented on a change in pull request #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner
dianfu commented on a change in pull request #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner. URL: https://github.com/apache/flink/pull/9874#discussion_r40839

## File path: flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/runtime/stream/table/TableEnvironmentITCase.scala
## @@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.stream.table
+
+import java.io.File
+
+import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
+import org.apache.flink.table.api.scala.StreamTableEnvironment
+import org.junit.Assert.assertEquals
+import org.junit.Test
+import org.apache.flink.api.scala._
+import org.apache.flink.table.api.internal.TableEnvironmentImpl
+import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment, Types}
+import org.apache.flink.table.api.scala._
+import org.apache.flink.table.descriptors.{FileSystem, OldCsv, Schema}
+import org.apache.flink.table.planner.StreamPlanner
+import org.apache.flink.table.runtime.utils.CommonTestData
+
+class TableEnvironmentITCase {
+
+  @Test
+  def testMergeParametersInStreamTableEnvironment(): Unit = {
+    val env = StreamExecutionEnvironment.getExecutionEnvironment
+    val tEnv = StreamTableEnvironment.create(env)
+
+    val t = env.fromCollection(Seq(1, 2, 3)).toTable(tEnv, 'a)
+
+    tEnv.getConfig.getConfiguration.setString("testConf", "1")
+
+    assertEquals(null, env.getConfig.getGlobalJobParameters.toMap.get("testConf"))
+
+    t.select('a).toAppendStream[Int]
+
+    assertEquals("1", env.getConfig.getGlobalJobParameters.toMap.get("testConf"))

Review comment: The current tests verify that the parameters will be merged into the ExecutionConfig when toAppendStream is called. I think there is no need to test this. What about adding a real ITCase, i.e. adding a RichMapFunction which reads the GlobalJobParameters?
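A minimal sketch of the kind of helper such an ITCase could use, assuming the Java DataStream API; the class name and the "testConf" key are illustrative:

```java
import org.apache.flink.api.common.functions.RichMapFunction;

// Illustrative helper for the suggested ITCase: it reads the merged parameter at
// runtime, so the test exercises the GlobalJobParameters actually shipped to
// operators rather than only inspecting the ExecutionConfig on the client side.
public class ReadGlobalParamMapper extends RichMapFunction<Integer, String> {

    @Override
    public String map(Integer value) {
        // Returns the value of the (illustrative) "testConf" key that the
        // TableConfig merge is expected to have propagated.
        return getRuntimeContext()
            .getExecutionConfig()
            .getGlobalJobParameters()
            .toMap()
            .get("testConf");
    }
}
```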
[GitHub] [flink] dianfu commented on a change in pull request #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner
dianfu commented on a change in pull request #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner. URL: https://github.com/apache/flink/pull/9874#discussion_r36683

## File path: flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/runtime/stream/table/TableEnvironmentITCase.scala
## @@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.stream.table
+
+import java.io.File
+
+import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
+import org.apache.flink.table.api.scala.StreamTableEnvironment
+import org.junit.Assert.assertEquals
+import org.junit.Test
+import org.apache.flink.api.scala._
+import org.apache.flink.table.api.internal.TableEnvironmentImpl
+import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment, Types}
+import org.apache.flink.table.api.scala._
+import org.apache.flink.table.descriptors.{FileSystem, OldCsv, Schema}
+import org.apache.flink.table.planner.StreamPlanner
+import org.apache.flink.table.runtime.utils.CommonTestData
+
+class TableEnvironmentITCase {
+
+  @Test
+  def testMergeParametersInStreamTableEnvironment(): Unit = {
+    val env = StreamExecutionEnvironment.getExecutionEnvironment
+    val tEnv = StreamTableEnvironment.create(env)
+
+    val t = env.fromCollection(Seq(1, 2, 3)).toTable(tEnv, 'a)
+
+    tEnv.getConfig.getConfiguration.setString("testConf", "1")
+
+    assertEquals(null, env.getConfig.getGlobalJobParameters.toMap.get("testConf"))
+
+    t.select('a).toAppendStream[Int]
+
+    assertEquals("1", env.getConfig.getGlobalJobParameters.toMap.get("testConf"))
+  }
+
+  @Test
+  def testMergeParametersInUnifiedTableEnvironment(): Unit = {
+    val tEnv = TableEnvironment.create(
+      EnvironmentSettings.newInstance().inStreamingMode().useOldPlanner().build())
+
+    val csvTable = CommonTestData.getCsvTableSource
+
+    val tmpFile = File.createTempFile("flink-table-environment-test", ".tmp")
+    tmpFile.deleteOnExit()
+    tmpFile.delete()

Review comment: Why is it deleted?
[GitHub] [flink] dianfu commented on a change in pull request #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner
dianfu commented on a change in pull request #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner. URL: https://github.com/apache/flink/pull/9874#discussion_r21140

## File path: flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/runtime/stream/table/TableEnvironmentITCase.scala
## @@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.stream.table
+
+import java.io.File
+
+import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
+import org.apache.flink.table.api.scala.StreamTableEnvironment
+import org.junit.Assert.assertEquals
+import org.junit.Test
+import org.apache.flink.api.scala._
+import org.apache.flink.table.api.internal.TableEnvironmentImpl
+import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment, Types}
+import org.apache.flink.table.api.scala._
+import org.apache.flink.table.descriptors.{FileSystem, OldCsv, Schema}
+import org.apache.flink.table.planner.StreamPlanner
+import org.apache.flink.table.runtime.utils.CommonTestData
+
+class TableEnvironmentITCase {
+
+  @Test
+  def testMergeParametersInStreamTableEnvironment(): Unit = {
+    val env = StreamExecutionEnvironment.getExecutionEnvironment
+    val tEnv = StreamTableEnvironment.create(env)
+
+    val t = env.fromCollection(Seq(1, 2, 3)).toTable(tEnv, 'a)
+
+    tEnv.getConfig.getConfiguration.setString("testConf", "1")
+
+    assertEquals(null, env.getConfig.getGlobalJobParameters.toMap.get("testConf"))
+
+    t.select('a).toAppendStream[Int]
+
+    assertEquals("1", env.getConfig.getGlobalJobParameters.toMap.get("testConf"))
+  }
+
+  @Test
+  def testMergeParametersInUnifiedTableEnvironment(): Unit = {
+    val tEnv = TableEnvironment.create(
+      EnvironmentSettings.newInstance().inStreamingMode().useOldPlanner().build())
+
+    val csvTable = CommonTestData.getCsvTableSource
+
+    val tmpFile = File.createTempFile("flink-table-environment-test", ".tmp")
+    tmpFile.deleteOnExit()
+    tmpFile.delete()
+    val path = tmpFile.toURI.toString
+    println(path)

Review comment: Remove the println.
[GitHub] [flink] dianfu commented on a change in pull request #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner
dianfu commented on a change in pull request #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner. URL: https://github.com/apache/flink/pull/9874#discussion_r41153

## File path: flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/util/GlobalJobParametersMerger.scala
## @@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.util
+
+import org.apache.flink.api.common.ExecutionConfig
+import org.apache.flink.configuration.Configuration
+import org.apache.flink.table.api.TableConfig
+
+import _root_.scala.collection.JavaConversions._
+
+/**
+  * Utilities for merging the table config parameters and global job parameters.
+  */
+object GlobalJobParametersMerger {
+
+  /**
+    * Merge global job parameters and table config parameters,
+    * and set the merged result to GlobalJobParameters

Review comment: Add a period at the end of the line.
[GitHub] [flink] dianfu commented on a change in pull request #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner
dianfu commented on a change in pull request #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner. URL: https://github.com/apache/flink/pull/9874#discussion_r43624

## File path: flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/runtime/stream/table/TableEnvironmentITCase.scala
## @@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.stream.table
+
+import java.io.File
+
+import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
+import org.apache.flink.table.api.scala.StreamTableEnvironment
+import org.junit.Assert.assertEquals
+import org.junit.Test
+import org.apache.flink.api.scala._
+import org.apache.flink.table.api.internal.TableEnvironmentImpl
+import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment, Types}

Review comment: `Types` is not used.
[GitHub] [flink] dianfu commented on a change in pull request #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner
dianfu commented on a change in pull request #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner. URL: https://github.com/apache/flink/pull/9874#discussion_r21552

## File path: flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/runtime/stream/table/TableEnvironmentITCase.scala
## @@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.stream.table
+
+import java.io.File
+
+import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
+import org.apache.flink.table.api.scala.StreamTableEnvironment
+import org.junit.Assert.assertEquals
+import org.junit.Test
+import org.apache.flink.api.scala._
+import org.apache.flink.table.api.internal.TableEnvironmentImpl
+import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment, Types}
+import org.apache.flink.table.api.scala._
+import org.apache.flink.table.descriptors.{FileSystem, OldCsv, Schema}
+import org.apache.flink.table.planner.StreamPlanner
+import org.apache.flink.table.runtime.utils.CommonTestData
+
+class TableEnvironmentITCase {
+
+  @Test
+  def testMergeParametersInStreamTableEnvironment(): Unit = {
+    val env = StreamExecutionEnvironment.getExecutionEnvironment
+    val tEnv = StreamTableEnvironment.create(env)
+
+    val t = env.fromCollection(Seq(1, 2, 3)).toTable(tEnv, 'a)
+
+    tEnv.getConfig.getConfiguration.setString("testConf", "1")
+
+    assertEquals(null, env.getConfig.getGlobalJobParameters.toMap.get("testConf"))
+
+    t.select('a).toAppendStream[Int]
+
+    assertEquals("1", env.getConfig.getGlobalJobParameters.toMap.get("testConf"))
+  }
+
+  @Test
+  def testMergeParametersInUnifiedTableEnvironment(): Unit = {
+    val tEnv = TableEnvironment.create(
+      EnvironmentSettings.newInstance().inStreamingMode().useOldPlanner().build())
+
+    val csvTable = CommonTestData.getCsvTableSource
+
+    val tmpFile = File.createTempFile("flink-table-environment-test", ".tmp")
+    tmpFile.deleteOnExit()
+    tmpFile.delete()
+    val path = tmpFile.toURI.toString
+    println(path)
+
+    tEnv.connect(new FileSystem().path(path))
+      .withFormat(new OldCsv().field("id", "INT"))
+      .withSchema(new Schema().field("id", "INT"))
+      .inAppendMode()
+      .registerTableSink("sink")
+
+    tEnv.fromTableSource(csvTable).select('id).insertInto("sink")
+
+    val env = tEnv.asInstanceOf[TableEnvironmentImpl].getPlanner
+      .asInstanceOf[StreamPlanner].getExecutionEnvironment
+
+    env.setParallelism(1)

Review comment: It's not necessary to call setParallelism.
[GitHub] [flink] dianfu commented on a change in pull request #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner
dianfu commented on a change in pull request #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner. URL: https://github.com/apache/flink/pull/9874#discussion_r43082

## File path: flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/runtime/batch/table/TableEnvironmentITCase.scala
## @@ -203,6 +203,22 @@ class TableEnvironmentITCase(
     val expected = List("1,1,Hi", "2,2,Hello", "3,2,Hello world")
     assertEquals(expected.sorted, MemoryTableSourceSinkUtil.tableDataStrings.sorted)
   }
+
+  @Test
+  def testMergeParameters(): Unit = {
+    val env = ExecutionEnvironment.getExecutionEnvironment
+    val tEnv = BatchTableEnvironment.create(env)
+
+    val t = CollectionDataSets.getSmall3TupleDataSet(env).toTable(tEnv).as('a, 'b, 'c)
+
+    tEnv.getConfig.getConfiguration.setString("testConf", "1")
+

Review comment: Remove this kind of empty line?
[GitHub] [flink] dianfu commented on a change in pull request #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner
dianfu commented on a change in pull request #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner. URL: https://github.com/apache/flink/pull/9874#discussion_r20071

## File path: flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/util/GlobalJobParametersMerger.scala
## @@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.util
+
+import org.apache.flink.api.common.ExecutionConfig
+import org.apache.flink.configuration.Configuration
+import org.apache.flink.table.api.TableConfig
+
+import _root_.scala.collection.JavaConversions._
+
+/**
+  * Utilities for merging the table config parameters and global job parameters.
+  */
+object GlobalJobParametersMerger {
+
+  /**
+    * Merge global job parameters and table config parameters,
+    * and set the merged result to GlobalJobParameters
+    */
+  def mergeParameters(executionConfig: ExecutionConfig, tableConfig: TableConfig): Unit = {
+
+if (executionConfig != null) {

Review comment: Fix the indentation.
[GitHub] [flink] TisonKun removed a comment on issue #9832: [FLINK-11843] Bind lifespan of Dispatcher to leader session
TisonKun removed a comment on issue #9832: [FLINK-11843] Bind lifespan of Dispatcher to leader session URL: https://github.com/apache/flink/pull/9832#issuecomment-540323951

The problem above is that, abstractly, the actions in `(grant|revoke)Leadership` are asynchronous. `(grant|revoke)Leadership` are synchronized inside the LeaderElectionService, but we later drop the synchronization by triggering an asynchronous operation. Given that a component cannot serve before it confirms leadership, it might be reasonable to keep synchronization for the operations in `(grant|revoke)Leadership`.
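A hedged sketch of the pattern being described, with illustrative names rather than Flink's actual classes: the election service's lock covers the callback itself, but not the asynchronous continuation in which the component actually starts serving and confirms leadership.

```java
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

// Illustrative names only; this is not Flink's actual implementation.
class LeaderContender {

    private final Executor executor = Executors.newSingleThreadExecutor();

    // Called by the election service while holding its internal lock.
    void grantLeadership(UUID leaderSessionId) {
        // The lock protects only this method body...
        CompletableFuture.runAsync(() -> {
            // ...not this block: the component starts serving asynchronously,
            // so a revokeLeadership arriving in between can interleave with it.
            startServices(leaderSessionId);
            confirmLeadership(leaderSessionId);
        }, executor);
    }

    void startServices(UUID leaderSessionId) { /* start RPC endpoints, etc. */ }

    void confirmLeadership(UUID leaderSessionId) { /* publish the confirmed session id */ }
}
```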
[GitHub] [flink] flinkbot edited a comment on issue #9818: [FLINK-14289][runtime] Remove Optional fields from RecordWriter relevant classes
flinkbot edited a comment on issue #9818: [FLINK-14289][runtime] Remove Optional fields from RecordWriter relevant classes URL: https://github.com/apache/flink/pull/9818#issuecomment-536416616

## CI report:

* 519051016f02b8257eb1e3d5834f36c315816266 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129635061)
* d7d830c823872df74a0d3013aa2557f6058939cf : UNKNOWN
[GitHub] [flink] flinkbot edited a comment on issue #9802: [FLINK-13361][documention] Add documentation for JDBC connector for Table API & SQL
flinkbot edited a comment on issue #9802: [FLINK-13361][documention] Add documentation for JDBC connector for Table API & SQL URL: https://github.com/apache/flink/pull/9802#issuecomment-536264046

## CI report:

* 0587de749d0d7a8c8dcc3dadeb53ae9599255abd : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129581826)
* 6ff3a62cea9cd913e45541f3d870b3d39e83dcb5 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131106234)
* 47f49d86148fc9f4226c853ac2a7c87103c638c6 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131135276)
* aaabbc1cc6583431c9f0c2f080a9fbc42f2f4985 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131139726)
* 27fae5fd56f6a70c93b37c854f3f3b937c62d073 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131255534)
[GitHub] [flink] flinkbot edited a comment on issue #9542: [FLINK-13873][metrics] Change the column family as tags for influxdb …
flinkbot edited a comment on issue #9542: [FLINK-13873][metrics] Change the column family as tags for influxdb … URL: https://github.com/apache/flink/pull/9542#issuecomment-525276537

## CI report:

* e8636926351f3d406962dcadba275e20e49aff39 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/124736151)
* d23ea97e8419bbacea0698b3ba82a459d940cf38 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/128606376)
* 05977cd49eb306d768668be4e8cb31034343df02 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128606947)
* 057d453e5e00656bcfcb87d1d69172f614b2d11f : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/128681895)
* 5001042b5fc5202da14c06e2d21faf1427f50a66 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128683546)
* 3a1e39e889daf0fca82c54982911ded35df6cb77 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128692102)
* 2745a3cb02e601a1a26360e9d6b3f0af5428c66a : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129027621)
* e108aa24e739d6f48f63efc560c3fc049b509860 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131172797)
* 24a270d468dea677379713d5cf402ea453d9f222 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131246042)
* acf1c2b9add8c3b903a8485ed41c9f0b18d97729 : UNKNOWN
[jira] [Resolved] (FLINK-13361) Add documentation for JDBC connector for Table API & SQL
[ https://issues.apache.org/jira/browse/FLINK-13361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jark Wu resolved FLINK-13361.
-----------------------------
Resolution: Fixed

1.10.0: d1d7853c45773ed3ec8b7a577993b080a45f1d77
1.9.2: f09ff5eb6d1e01ea77e87c6b8ba9d5752d492444

> Add documentation for JDBC connector for Table API & SQL
>
> Key: FLINK-13361
> URL: https://issues.apache.org/jira/browse/FLINK-13361
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / JDBC, Documentation
> Reporter: Jark Wu
> Assignee: Jingsong Lee
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.10.0, 1.9.2
>
> Time Spent: 20m
> Remaining Estimate: 0h
>
> Add documentation for the JDBC connector for Table API & SQL:
> - “Connect to External Systems”: Add DDL for JDBC in the “Table Connector” section. JDBC supports batch-source & lookup & sink.
[GitHub] [flink] wuchong closed pull request #9802: [FLINK-13361][documention] Add documentation for JDBC connector for Table API & SQL
wuchong closed pull request #9802: [FLINK-13361][documention] Add documentation for JDBC connector for Table API & SQL URL: https://github.com/apache/flink/pull/9802
[GitHub] [flink] wuchong commented on a change in pull request #9802: [FLINK-13361][documention] Add documentation for JDBC connector for Table API & SQL
wuchong commented on a change in pull request #9802: [FLINK-13361][documention] Add documentation for JDBC connector for Table API & SQL URL: https://github.com/apache/flink/pull/9802#discussion_r38245

## File path: docs/dev/table/connect.md
## @@ -1076,6 +1077,7 @@ CREATE TABLE MyUserTable (

 {% top %}

+<<< HEAD

Review comment: remove
[GitHub] [flink] zhuzhurk commented on a change in pull request #9872: [FLINK-14291][runtime, tests] Add test coverage to DefaultScheduler
zhuzhurk commented on a change in pull request #9872: [FLINK-14291][runtime,tests] Add test coverage to DefaultScheduler URL: https://github.com/apache/flink/pull/9872#discussion_r39025

## File path: flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/DefaultSchedulerTest.java
## @@ -252,18 +249,27 @@ public void failJobIfNotEnoughResources() throws Exception {
 			findThrowableWithMessage(
 				failureCause,
 				"Could not allocate the required slot within slot request timeout.").isPresent());
+		assertThat(jobStatus, is(equalTo(JobStatus.FAILED)));
 	}

-	private void drainAllAvailableSlots() {
-		final int numberOfAvailableSlots = slotProvider.getNumberOfAvailableSlots();
-		for (int i = 0; i < numberOfAvailableSlots; i++) {
-			slotProvider.allocateSlot(
-				new SlotRequestId(),
-				new ScheduledUnit(new JobVertexID(), null, null),
-				SlotProfile.noRequirements(),
-				true,
-				Time.milliseconds(TIMEOUT_MS));
-		}
+	@Test
+	public void skipDeploymentIfVertexVersionOutdated() {
+		final JobGraph jobGraph = singleNonParallelJobVertexJobGraph();
+
+		final DefaultScheduler scheduler = createSchedulerAndStartScheduling(jobGraph);
+
+		final List<ExecutionVertexID> initiallyScheduledVertices = testExecutionVertexOperations.getDeployedVertices();
+
+		final ArchivedExecutionVertex onlyExecutionVertex = Iterables.getOnlyElement(scheduler.requestJob().getAllExecutionVertices());
+		final ExecutionAttemptID attemptId = onlyExecutionVertex.getCurrentExecutionAttempt().getAttemptId();
+		scheduler.updateTaskExecutionState(new TaskExecutionState(jobGraph.getJobID(), attemptId, ExecutionState.FAILED));
+
+		testExecutionSlotAllocator.disableAutoCompletePendingRequests();
+		taskRestartExecutor.triggerScheduledTasks();
+		executionVertexVersioner.recordModification(new ExecutionVertexID(getOnlyJobVertex(jobGraph).getID(), 0));
+		testExecutionSlotAllocator.completePendingRequests();
+
+		assertThat(initiallyScheduledVertices, is(equalTo(testExecutionVertexOperations.getDeployedVertices())));

Review comment: One more thing we may need to verify is that concurrent failovers should not result in more failovers. We can do it by limiting the max restart attempts or checking the final attempt number of a vertex.
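A sketch of the second option, reusing the helpers already visible in this test; the expected attempt number is an assumption about the scenario (exactly one restart after the concurrent failure signals):

```java
// Hypothetical assertion for the suggested check: if two overlapping failure
// signals are handled as a single failover, the vertex should have been
// restarted exactly once, i.e. its attempt number advanced from 0 to 1.
final ArchivedExecutionVertex vertex =
	Iterables.getOnlyElement(scheduler.requestJob().getAllExecutionVertices());
assertThat(vertex.getCurrentExecutionAttempt().getAttemptNumber(), is(equalTo(1)));
```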
[GitHub] [flink] wuchong commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect
wuchong commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect URL: https://github.com/apache/flink/pull/9866#discussion_r37630

## File path: flink-connectors/flink-hbase/src/main/java/org/apache/flink/table/descriptors/HBase.java
## @@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.descriptors;
+
+import org.apache.flink.configuration.MemorySize;
+
+import java.util.Map;
+
+import static org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_VERSION;
+import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TABLE_NAME;
+import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TYPE_VALUE_HBASE;
+import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_INTERVAL;
+import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_ROWS;
+import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_SIZE;
+import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_NODE_PARENT;
+import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_QUORUM;
+
+/**
+ * Connector descriptor for Apache HBase.
+ */
+public class HBase extends ConnectorDescriptor {

Review comment: `PublicEvolving` is used to indicate that this is public, but still evolving, API. The subclass has its own additional methods compared to the parent class, so it is still worth marking it. Take the `Csv` descriptor as an example.
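For illustration, a minimal compilable sketch of the suggested annotation; the "hbase" type string and the trivial body are stand-ins for the real descriptor, not the PR's code:

```java
import java.util.Map;

import org.apache.flink.annotation.PublicEvolving;
import org.apache.flink.table.descriptors.ConnectorDescriptor;
import org.apache.flink.table.descriptors.DescriptorProperties;

/** Connector descriptor for Apache HBase. */
@PublicEvolving // the subclass adds its own public methods, so it carries its own stability annotation
public class HBase extends ConnectorDescriptor {

	private final DescriptorProperties properties = new DescriptorProperties();

	public HBase() {
		// "hbase" stands in for CONNECTOR_TYPE_VALUE_HBASE in the real code.
		super("hbase", 1, true);
	}

	@Override
	protected Map<String, String> toConnectorProperties() {
		return properties.asMap();
	}
}
```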
[GitHub] [flink] flinkbot edited a comment on issue #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect
flinkbot edited a comment on issue #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect URL: https://github.com/apache/flink/pull/9866#issuecomment-539968272

## CI report:

* 372923b25d5fa8376fc40b18e2bf024efef23ed3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131126146)
* c4ff26413585cec6efc71732e1bf241f69b76c26 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131251071)
* 66c42babf0280998ef474633adfb1c837b8d500a : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131257145)
[GitHub] [flink] flinkbot edited a comment on issue #9875: [hotfix][docs] Fix the incorrect scala checkstyle configure file path
flinkbot edited a comment on issue #9875: [hotfix][docs] Fix the incorrect scala checkstyle configure file path URL: https://github.com/apache/flink/pull/9875#issuecomment-540311608

## CI report:

* 3ab8ff37e57d7f9a4e3bc2421f5f978a66022589 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131247414)
[GitHub] [flink] flinkbot edited a comment on issue #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect
flinkbot edited a comment on issue #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect URL: https://github.com/apache/flink/pull/9866#issuecomment-539968272

## CI report:

* 372923b25d5fa8376fc40b18e2bf024efef23ed3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131126146)
* c4ff26413585cec6efc71732e1bf241f69b76c26 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131251071)
* 66c42babf0280998ef474633adfb1c837b8d500a : UNKNOWN
[GitHub] [flink] flinkbot edited a comment on issue #9802: [FLINK-13361][documention] Add documentation for JDBC connector for Table API & SQL
flinkbot edited a comment on issue #9802: [FLINK-13361][documention] Add documentation for JDBC connector for Table API & SQL URL: https://github.com/apache/flink/pull/9802#issuecomment-536264046

## CI report:

* 0587de749d0d7a8c8dcc3dadeb53ae9599255abd : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129581826)
* 6ff3a62cea9cd913e45541f3d870b3d39e83dcb5 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131106234)
* 47f49d86148fc9f4226c853ac2a7c87103c638c6 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131135276)
* aaabbc1cc6583431c9f0c2f080a9fbc42f2f4985 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131139726)
* 27fae5fd56f6a70c93b37c854f3f3b937c62d073 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131255534)
[GitHub] [flink] KurtYoung commented on issue #8468: [FLINK-12399][table][table-planner] Fix FilterableTableSource does not change after applyPredicate
KurtYoung commented on issue #8468: [FLINK-12399][table][table-planner] Fix FilterableTableSource does not change after applyPredicate URL: https://github.com/apache/flink/pull/8468#issuecomment-540358588

Sorry for the delay due to Flink Forward; I will review this ASAP.
[GitHub] [flink] flinkbot edited a comment on issue #9802: [FLINK-13361][documention] Add documentation for JDBC connector for Table API & SQL
flinkbot edited a comment on issue #9802: [FLINK-13361][documention] Add documentation for JDBC connector for Table API & SQL URL: https://github.com/apache/flink/pull/9802#issuecomment-536264046

## CI report:

* 0587de749d0d7a8c8dcc3dadeb53ae9599255abd : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129581826)
* 6ff3a62cea9cd913e45541f3d870b3d39e83dcb5 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131106234)
* 47f49d86148fc9f4226c853ac2a7c87103c638c6 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131135276)
* aaabbc1cc6583431c9f0c2f080a9fbc42f2f4985 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131139726)
* 27fae5fd56f6a70c93b37c854f3f3b937c62d073 : UNKNOWN
[GitHub] [flink] flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add ARM installation steps for flink e2e container tests
flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add ARM installation steps for flink e2e container tests URL: https://github.com/apache/flink/pull/9782#issuecomment-535826739

## CI report:

* d48b95539070679639d5e8c4e640b9a710d7 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129403938)
* bc7ff380b3c3deb9751c0a596c8fef46c3b48ef3 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129413892)
* 58fe983f436f82e015d7c3635708d60235b9f078 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131251050)
[GitHub] [flink] JingsongLi commented on a change in pull request #9802: [FLINK-13361][documention] Add documentation for JDBC connector for Table API & SQL
JingsongLi commented on a change in pull request #9802: [FLINK-13361][documention] Add documentation for JDBC connector for Table API & SQL URL: https://github.com/apache/flink/pull/9802#discussion_r26702

## File path: docs/dev/table/connect.md
## @@ -1075,6 +1075,88 @@ CREATE TABLE MyUserTable (

 {% top %}

+### JDBC Connector
+
+Source: Batch
+Sink: Batch
+Sink: Streaming Append Mode
+Sink: Streaming Upsert Mode
+Temporal Join: Sync Mode
+
+The JDBC connector allows for reading from an JDBC client.
+The JDBC connector allows for writing into an JDBC client.
+
+The connector can operate in [upsert mode](#update-modes) for exchanging UPSERT/DELETE messages with the external system using a [key defined by the query](./streaming/dynamic_tables.html#table-to-stream-conversion).
+
+For append-only queries, the connector can also operate in [append mode](#update-modes) for exchanging only INSERT messages with the external system.
+
+To use this connector, add the following dependency to your project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-jdbc{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+And must also specify JDBC library, for example, if want to use Mysql library, the following dependency to your project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>mysql</groupId>
+  <artifactId>mysql-connector-java</artifactId>
+  <version>8.0.17</version>
+</dependency>
+{% endhighlight %}
+
+**Library support:** Now, we only support mysql, derby, postgres.

Review comment: LGTM
[GitHub] [flink] liupc commented on issue #9851: [FLINK-14341]Fix flink-python builds failure: no such option: --prefix
liupc commented on issue #9851: [FLINK-14341]Fix flink-python builds failure: no such option: --prefix URL: https://github.com/apache/flink/pull/9851#issuecomment-540352478

@dianfu I will update the readme and check it again later, thanks!
[GitHub] [flink] openinx commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect
openinx commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect URL: https://github.com/apache/flink/pull/9866#discussion_r25165 ## File path: flink-connectors/flink-hbase/src/main/java/org/apache/flink/table/descriptors/HBase.java ## @@ -0,0 +1,120 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.descriptors; + +import org.apache.flink.configuration.MemorySize; + +import java.util.Map; + +import static org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_VERSION; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TABLE_NAME; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TYPE_VALUE_HBASE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_INTERVAL; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_ROWS; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_SIZE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_NODE_PARENT; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_QUORUM; + +/** + * Connector descriptor for Apache HBase. + */ +public class HBase extends ConnectorDescriptor { + private DescriptorProperties properties = new DescriptorProperties(); + + public HBase() { + super(CONNECTOR_TYPE_VALUE_HBASE, 1, true); + } + + /** +* Set the Apache HBase version to be used. Required. +* +* @param version HBase version. E.g., "1.4.3". +*/ + public HBase version(String version) { + properties.putString(CONNECTOR_VERSION, version); + return this; + } + + /** +* Set the HBase table name, Required. +* +* @param tableName Name of HBase table. E.g., "testNamespace:testTable", "testDefaultTable" +*/ + public HBase tableName(String tableName) { + properties.putString(CONNECTOR_TABLE_NAME, tableName); + return this; + } + + /** +* Set the zookeeper quorum address to connect the HBase cluster. Required. +* +* @param zookeeperQuorum zookeeper quorum address to connect the HBase cluster. E.g., "localhost:2181,localhost:2182,localhost:2183". +*/ + public HBase zookeeperQuorum(String zookeeperQuorum) { + properties.putString(CONNECTOR_ZK_QUORUM, zookeeperQuorum); + return this; + } + + /** +* Set the zookeeper node parent path of HBase cluster. Optional. Review comment: OK, will add this in Javadoc. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [flink] openinx commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect
openinx commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect URL: https://github.com/apache/flink/pull/9866#discussion_r25108 ## File path: flink-connectors/flink-hbase/src/main/java/org/apache/flink/table/descriptors/HBase.java ## @@ -0,0 +1,120 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.descriptors; + +import org.apache.flink.configuration.MemorySize; + +import java.util.Map; + +import static org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_VERSION; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TABLE_NAME; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TYPE_VALUE_HBASE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_INTERVAL; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_ROWS; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_SIZE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_NODE_PARENT; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_QUORUM; + +/** + * Connector descriptor for Apache HBase. + */ +public class HBase extends ConnectorDescriptor { Review comment: Its parent class ConnectorDescriptor has been marked as PublicEvolving, I think we don't need to mark the HBase class any more ? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] openinx commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect
openinx commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect URL: https://github.com/apache/flink/pull/9866#discussion_r25670 ## File path: flink-connectors/flink-hbase/src/main/java/org/apache/flink/table/descriptors/HBase.java ## @@ -0,0 +1,120 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.descriptors; + +import org.apache.flink.configuration.MemorySize; + +import java.util.Map; + +import static org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_VERSION; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TABLE_NAME; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TYPE_VALUE_HBASE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_INTERVAL; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_ROWS; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_SIZE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_NODE_PARENT; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_QUORUM; + +/** + * Connector descriptor for Apache HBase. + */ +public class HBase extends ConnectorDescriptor { + private DescriptorProperties properties = new DescriptorProperties(); + + public HBase() { + super(CONNECTOR_TYPE_VALUE_HBASE, 1, true); + } + + /** +* Set the Apache HBase version to be used. Required. +* +* @param version HBase version. E.g., "1.4.3". +*/ + public HBase version(String version) { + properties.putString(CONNECTOR_VERSION, version); + return this; + } + + /** +* Set the HBase table name, Required. +* +* @param tableName Name of HBase table. E.g., "testNamespace:testTable", "testDefaultTable" +*/ + public HBase tableName(String tableName) { + properties.putString(CONNECTOR_TABLE_NAME, tableName); + return this; + } + + /** +* Set the zookeeper quorum address to connect the HBase cluster. Required. +* +* @param zookeeperQuorum zookeeper quorum address to connect the HBase cluster. E.g., "localhost:2181,localhost:2182,localhost:2183". +*/ + public HBase zookeeperQuorum(String zookeeperQuorum) { + properties.putString(CONNECTOR_ZK_QUORUM, zookeeperQuorum); + return this; + } + + /** +* Set the zookeeper node parent path of HBase cluster. Optional. +* +* @param zookeeperNodeParent zookeeper node path of hbase cluster. E.g, "/hbase/example-root-znode". 
+*/ + public HBase zookeeperNodeParent(String zookeeperNodeParent) { + properties.putString(CONNECTOR_ZK_NODE_PARENT, zookeeperNodeParent); + return this; + } + + /** +* Set threshold when to flush buffered request based on the memory byte size of rows currently added . Default to 2mb. Optional. +* +* @param maxSize threshold (Byte size) to flush a buffered request. E.g, "2097152", "2mb", "4kb". Review comment: OK, sounds good. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] openinx commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect
openinx commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect URL: https://github.com/apache/flink/pull/9866#discussion_r17232 ## File path: flink-connectors/flink-hbase/src/main/java/org/apache/flink/table/descriptors/HBase.java ## @@ -0,0 +1,120 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.descriptors; + +import org.apache.flink.configuration.MemorySize; + +import java.util.Map; + +import static org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_VERSION; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TABLE_NAME; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TYPE_VALUE_HBASE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_INTERVAL; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_ROWS; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_SIZE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_NODE_PARENT; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_QUORUM; + +/** + * Connector descriptor for Apache HBase. + */ +public class HBase extends ConnectorDescriptor { + private DescriptorProperties properties = new DescriptorProperties(); + + public HBase() { + super(CONNECTOR_TYPE_VALUE_HBASE, 1, true); + } + + /** +* Set the Apache HBase version to be used. Required. +* +* @param version HBase version. E.g., "1.4.3". +*/ + public HBase version(String version) { + properties.putString(CONNECTOR_VERSION, version); + return this; + } + + /** +* Set the HBase table name, Required. +* +* @param tableName Name of HBase table. E.g., "testNamespace:testTable", "testDefaultTable" +*/ + public HBase tableName(String tableName) { + properties.putString(CONNECTOR_TABLE_NAME, tableName); + return this; + } + + /** +* Set the zookeeper quorum address to connect the HBase cluster. Required. +* +* @param zookeeperQuorum zookeeper quorum address to connect the HBase cluster. E.g., "localhost:2181,localhost:2182,localhost:2183". +*/ + public HBase zookeeperQuorum(String zookeeperQuorum) { + properties.putString(CONNECTOR_ZK_QUORUM, zookeeperQuorum); + return this; + } + + /** +* Set the zookeeper node parent path of HBase cluster. Optional. +* +* @param zookeeperNodeParent zookeeper node path of hbase cluster. E.g, "/hbase/example-root-znode". 
+*/ + public HBase zookeeperNodeParent(String zookeeperNodeParent) { + properties.putString(CONNECTOR_ZK_NODE_PARENT, zookeeperNodeParent); + return this; + } + + /** +* Set threshold when to flush buffered request based on the memory byte size of rows currently added . Default to 2mb. Optional. Review comment: Fine, will do. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] openinx commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect
openinx commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect URL: https://github.com/apache/flink/pull/9866#discussion_r25783 ## File path: flink-connectors/flink-hbase/src/main/java/org/apache/flink/table/descriptors/HBase.java ## @@ -0,0 +1,118 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.descriptors; + +import java.util.Map; + +import static org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_VERSION; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TABLE_NAME; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TYPE_VALUE_HBASE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_INTERVAL; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_ROWS; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_SIZE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_NODE_PARENT; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_QUORUM; + +/** + * Connector descriptor for Apache HBase. + */ +public class HBase extends ConnectorDescriptor { + private DescriptorProperties properties = new DescriptorProperties(); + + public HBase() { + super(CONNECTOR_TYPE_VALUE_HBASE, 1, true); + } + + /** +* Set the Apache HBase version to be used. Optional. +* +* @param version HBase version. E.g., "1.4.3". +*/ + public HBase version(String version) { + properties.putString(CONNECTOR_VERSION, version); + return this; + } + + /** +* Set the HBase table name, Required. +* +* @param tableName Name of HBase table. E.g., "testNamespace:testTable", "testDefaultTable" +*/ + public HBase tableName(String tableName) { + properties.putString(CONNECTOR_TABLE_NAME, tableName); + return this; + } + + /** +* Set the zookeeper quorum address to connect the HBase cluster. Required. +* +* @param zookeeperQuorum zookeeper quorum address to connect the HBase cluster. E.g., "localhost:2181,localhost:2182,localhost:2183". +*/ + public HBase zookeeperQuorum(String zookeeperQuorum) { + properties.putString(CONNECTOR_ZK_QUORUM, zookeeperQuorum); + return this; + } + + /** +* Set the zookeeper node parent path of HBase cluster. Optional. +* +* @param zookeeperNodeParent zookeeper node path of hbase cluster. E.g, "/hbase/example-root-znode". 
+*/ + public HBase zookeeperNodeParent(String zookeeperNodeParent) { + properties.putString(CONNECTOR_ZK_NODE_PARENT, zookeeperNodeParent); + return this; + } + + /** +* Set threshold when to flush buffered request based on the memory byte size of rows currently added . Default to 2mb. Optional. +* +* @param writeBufferFlushMaxSize threshold (Byte size) to flush a buffered request. E.g, 2097152 (2MB). +*/ + public HBase writeBufferFlushMaxSize(long writeBufferFlushMaxSize) { + properties.putLong(CONNECTOR_WRITE_BUFFER_FLUSH_MAX_SIZE, writeBufferFlushMaxSize); + return this; + } + + /** +* Set threshold when to flush buffered request based on the number of rows currently added. +* Defaults to not set, i.e. won't flush based on the number of buffered rows. Optional. +* +* @param writeBufferFlushMaxRows number of added rows when begin the request flushing. +*/ + public HBase writeBufferFlushMaxRows(long writeBufferFlushMaxRows) { + properties.putLong(CONNECTOR_WRITE_BUFFER_FLUSH_MAX_ROWS, writeBufferFlushMaxRows); + return this; + } + + /** +* Set a flush interval flushing buffered requesting if
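For readers following this review thread, the following is a minimal usage sketch of the descriptor under review, assuming the fluent API shown in the quoted diff lands as proposed; the table name, schema fields, and the surrounding TableEnvironment calls are illustrative assumptions, not part of the PR.

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.descriptors.HBase;
import org.apache.flink.table.descriptors.Schema;

public class HBaseConnectExample {
	public static void main(String[] args) {
		TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.newInstance().build());

		tEnv.connect(
				new HBase()
					.version("1.4.3")                         // HBase client version
					.tableName("testNamespace:testTable")     // namespace-qualified table name
					.zookeeperQuorum("localhost:2181")        // ZK quorum of the HBase cluster
					.zookeeperNodeParent("/hbase")            // optional, defaults to "/hbase"
					.writeBufferFlushMaxSize(2 * 1024 * 1024) // flush buffered writes at 2 MB...
					.writeBufferFlushMaxRows(1000))           // ...or after 1000 buffered rows
			.withSchema(new Schema()
				.field("rowkey", "STRING")
				.field("cf", "ROW<q1 BIGINT>"))
			.registerTableSource("hbaseTable");
	}
}
{code}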
[GitHub] [flink] flinkbot edited a comment on issue #9875: [hotfix][docs] Fix the incorrect scala checkstyle configure file path
flinkbot edited a comment on issue #9875: [hotfix][docs] Fix the incorrect scala checkstyle configure file path URL: https://github.com/apache/flink/pull/9875#issuecomment-540311608 ## CI report: * 3ab8ff37e57d7f9a4e3bc2421f5f978a66022589 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131247414) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9864: [FLINK-14254][table] Introduce FileSystemOutputFormat for batch
flinkbot edited a comment on issue #9864: [FLINK-14254][table] Introduce FileSystemOutputFormat for batch URL: https://github.com/apache/flink/pull/9864#issuecomment-539910284 ## CI report: * e66114b1aa73a82b4c6bcf5c3a16baedb598f8c3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131098600) * b7887760a3c3d28ca88eb31800ebd61084a520fc : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131249622) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL
flinkbot edited a comment on issue #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL URL: https://github.com/apache/flink/pull/9799#issuecomment-536259382 ## CI report: * be81d3cce668cce8d87bb76da9dc74b182d0a681 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/129580458) * 035f7bdb410f56801af3a1acddfff07483523082 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129581129) * 53f82b69b355a35fac4595759772d21f6136e3e0 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131064416) * 04b90f602c1474b2ef4b20d98239a072042463d9 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/131098544) * a37a872615fa44cb90f29c8675ae187470177a96 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131106199) * e95b683b74045a3630cd90a9c432849c858b8020 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131135245) * ada8ddfbbbc2bf579879b467cd4cb389259843ae : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131139719) * b4633119add361466867352212240b3a615f26fd : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/131251062) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] openinx commented on issue #9875: [hotfix][docs] Fix the incorrect scala checkstyle configure file path
openinx commented on issue #9875: [hotfix][docs] Fix the incorrect scala checkstyle configure file path URL: https://github.com/apache/flink/pull/9875#issuecomment-540344558 @flinkbot run travis This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] zhuzhurk commented on a change in pull request #9872: [FLINK-14291][runtime, tests] Add test coverage to DefaultScheduler
zhuzhurk commented on a change in pull request #9872: [FLINK-14291][runtime,tests] Add test coverage to DefaultScheduler URL: https://github.com/apache/flink/pull/9872#discussion_r17197 ## File path: flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/TestExecutionSlotAllocator.java ## @@ -0,0 +1,105 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.flink.runtime.scheduler; + +import org.apache.flink.runtime.jobmaster.LogicalSlot; +import org.apache.flink.runtime.jobmaster.TestingLogicalSlotBuilder; +import org.apache.flink.runtime.scheduler.strategy.ExecutionVertexID; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.TimeoutException; + +/** + * Test {@link ExecutionSlotAllocator} implementation. + */ +public class TestExecutionSlotAllocator implements ExecutionSlotAllocator { + + private final Map<ExecutionVertexID, SlotExecutionVertexAssignment> pendingRequests = new HashMap<>(); + + private boolean autoCompletePendingRequests = true; + + @Override + public Collection<SlotExecutionVertexAssignment> allocateSlotsFor(final Collection<ExecutionVertexSchedulingRequirements> schedulingRequirementsCollection) { + final List<SlotExecutionVertexAssignment> slotVertexAssignments = createSlotVertexAssignments(schedulingRequirementsCollection); + registerPendingRequests(slotVertexAssignments); + maybeCompletePendingRequests(); + return slotVertexAssignments; + } + + private void registerPendingRequests(final List<SlotExecutionVertexAssignment> slotVertexAssignments) { + for (SlotExecutionVertexAssignment slotVertexAssignment : slotVertexAssignments) { + pendingRequests.put(slotVertexAssignment.getExecutionVertexId(), slotVertexAssignment); + } + } + + private List<SlotExecutionVertexAssignment> createSlotVertexAssignments( + final Collection<ExecutionVertexSchedulingRequirements> schedulingRequirementsCollection) { + + final List<SlotExecutionVertexAssignment> result = new ArrayList<>(); + for (ExecutionVertexSchedulingRequirements schedulingRequirements : schedulingRequirementsCollection) { + final ExecutionVertexID executionVertexId = schedulingRequirements.getExecutionVertexId(); + final CompletableFuture<LogicalSlot> logicalSlotFuture = new CompletableFuture<>(); + result.add(new SlotExecutionVertexAssignment(executionVertexId, logicalSlotFuture)); + } + return result; + } + + private void maybeCompletePendingRequests() { + if (autoCompletePendingRequests) { + completePendingRequests(); + } + } + + public void completePendingRequests() { + pendingRequests.forEach(((executionVertexId, slotExecutionVertexAssignment) -> + slotExecutionVertexAssignment + .getLogicalSlotFuture() + .complete(new TestingLogicalSlotBuilder().createTestingLogicalSlot()))); + } + + public void timeoutPendingRequests() { + pendingRequests.forEach(((executionVertexId, slotExecutionVertexAssignment) -> + slotExecutionVertexAssignment + .getLogicalSlotFuture() + .completeExceptionally(new TimeoutException()))); + } + + public void enableAutoCompletePendingRequests() { + autoCompletePendingRequests = true; + } + + public void disableAutoCompletePendingRequests() { + autoCompletePendingRequests = false; + } + + @Override + public void cancel(final ExecutionVertexID executionVertexId) { + } Review comment: I think we should cancel the corresponding pending request here. Although it would only take effect when autoCompletePendingRequests == false. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
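A minimal sketch of the change the reviewer is suggesting, assuming the fields and types quoted above; it illustrates the requested behavior for the quoted empty `cancel` method and is not the code that was eventually merged.

{code:java}
@Override
public void cancel(final ExecutionVertexID executionVertexId) {
	// Remove and cancel the corresponding pending request. As the reviewer notes,
	// this only matters when autoCompletePendingRequests == false, since otherwise
	// the request's future has already been completed.
	final SlotExecutionVertexAssignment pendingRequest = pendingRequests.remove(executionVertexId);
	if (pendingRequest != null) {
		pendingRequest.getLogicalSlotFuture().cancel(false);
	}
}
{code}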
[GitHub] [flink] zhuzhurk commented on a change in pull request #9872: [FLINK-14291][runtime, tests] Add test coverage to DefaultScheduler
zhuzhurk commented on a change in pull request #9872: [FLINK-14291][runtime,tests] Add test coverage to DefaultScheduler URL: https://github.com/apache/flink/pull/9872#discussion_r04919 ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/DefaultExecutionSlotAllocatorFactory.java ## @@ -0,0 +1,48 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.flink.runtime.scheduler; + +import org.apache.flink.api.common.time.Time; +import org.apache.flink.runtime.jobmaster.slotpool.SlotProvider; + +import static org.apache.flink.util.Preconditions.checkNotNull; + +/** + * Factory for {@link DefaultExecutionSlotAllocator}. + */ +public class DefaultExecutionSlotAllocatorFactory implements ExecutionSlotAllocatorFactory { Review comment: How about putting the factory implementation together with DefaultExecutionSlotAllocator, as we usually do? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] zhuzhurk commented on a change in pull request #9872: [FLINK-14291][runtime, tests] Add test coverage to DefaultScheduler
zhuzhurk commented on a change in pull request #9872: [FLINK-14291][runtime,tests] Add test coverage to DefaultScheduler URL: https://github.com/apache/flink/pull/9872#discussion_r04549 ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/ExecutionSlotAllocatorFactory.java ## @@ -0,0 +1,29 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.flink.runtime.scheduler; + +/** + * Interface for {@link ExecutionSlotAllocator} factories. + */ +public interface ExecutionSlotAllocatorFactory { Review comment: How about putting the factory together with ExecutionSlotAllocator, as we usually do? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
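One way to read the reviewer's suggestion, sketched under the assumption that the factory becomes a nested interface of `ExecutionSlotAllocator` rather than a separate top-level type; the `createInstance` signature is a guess based on the `SlotProvider` and `Time` imports visible in `DefaultExecutionSlotAllocatorFactory` above, not the signature that was merged.

{code:java}
package org.apache.flink.runtime.scheduler;

import org.apache.flink.api.common.time.Time;
import org.apache.flink.runtime.jobmaster.slotpool.SlotProvider;
import org.apache.flink.runtime.scheduler.strategy.ExecutionVertexID;

import java.util.Collection;

/**
 * Component responsible for assigning slots to executions.
 */
public interface ExecutionSlotAllocator {

	Collection<SlotExecutionVertexAssignment> allocateSlotsFor(
		Collection<ExecutionVertexSchedulingRequirements> schedulingRequirements);

	void cancel(ExecutionVertexID executionVertexId);

	/**
	 * Factory colocated with the interface it creates.
	 */
	interface Factory {
		ExecutionSlotAllocator createInstance(SlotProvider slotProvider, Time allocationTimeout);
	}
}
{code}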
[GitHub] [flink] zhuzhurk commented on a change in pull request #9872: [FLINK-14291][runtime, tests] Add test coverage to DefaultScheduler
zhuzhurk commented on a change in pull request #9872: [FLINK-14291][runtime,tests] Add test coverage to DefaultScheduler URL: https://github.com/apache/flink/pull/9872#discussion_r23027 ## File path: flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/DefaultSchedulerTest.java ## @@ -252,18 +249,27 @@ public void failJobIfNotEnoughResources() throws Exception { findThrowableWithMessage( failureCause, "Could not allocate the required slot within slot request timeout.").isPresent()); + assertThat(jobStatus, is(equalTo(JobStatus.FAILED))); } - private void drainAllAvailableSlots() { - final int numberOfAvailableSlots = slotProvider.getNumberOfAvailableSlots(); - for (int i = 0; i < numberOfAvailableSlots; i++) { - slotProvider.allocateSlot( - new SlotRequestId(), - new ScheduledUnit(new JobVertexID(), null, null), - SlotProfile.noRequirements(), - true, - Time.milliseconds(TIMEOUT_MS)); - } + @Test + public void skipDeploymentIfVertexVersionOutdated() { + final JobGraph jobGraph = singleNonParallelJobVertexJobGraph(); + + final DefaultScheduler scheduler = createSchedulerAndStartScheduling(jobGraph); + + final List initiallyScheduledVertices = testExecutionVertexOperations.getDeployedVertices(); + + final ArchivedExecutionVertex onlyExecutionVertex = Iterables.getOnlyElement(scheduler.requestJob().getAllExecutionVertices()); + final ExecutionAttemptID attemptId = onlyExecutionVertex.getCurrentExecutionAttempt().getAttemptId(); + scheduler.updateTaskExecutionState(new TaskExecutionState(jobGraph.getJobID(), attemptId, ExecutionState.FAILED)); + + testExecutionSlotAllocator.disableAutoCompletePendingRequests(); + taskRestartExecutor.triggerScheduledTasks(); + executionVertexVersioner.recordModification(new ExecutionVertexID(getOnlyJobVertex(jobGraph).getID(), 0)); Review comment: To verify that concurrent failovers are working fine, I think it's better to let the DefaultScheduler do the `recordModification` in `restartTasksWithDelay` rather than doing it directly in the test. This may require a task failure to be triggered. The failure should affect task X and happen when task X is in a certain stage, including (waiting for assigning resource), (resource is assigned but waiting for other tasks to finish resource assignment to do `deployAll`). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-14356) Support some special RowDeserializationSchema and RowSerializationSchema
[ https://issues.apache.org/jira/browse/FLINK-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948183#comment-16948183 ] Jark Wu commented on FLINK-14356: - Thanks [~hackergin], I think this is a reasonable requirement. Currently, the csv/json formats are not good at handling a single-value format. How about introducing a new format called "single-field", with dedicated {{SingleFieldSerializationSchema}} and {{SingleFieldDeserializationSchema}} classes that support (de)serializing a row with a single field of varbinary, string, or other types? cc [~twalthr] > Support some special RowDeserializationSchema and RowSerializationSchema > - > > Key: FLINK-14356 > URL: https://issues.apache.org/jira/browse/FLINK-14356 > Project: Flink > Issue Type: Improvement > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table > SQL / API >Reporter: jinfeng >Priority: Major > > I want to use Flink SQL to write Kafka messages directly to HDFS, with no > serialization or deserialization of the messages in between: the bytes of a > message should directly become the first field of the Row. However, the current > RowSerializationSchema does not support converting bytes to VARBINARY. Can we > add some special RowSerializationSchema and RowDeserializationSchema? -- This message was sent by Atlassian Jira (v8.3.4#803005)
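To make the proposal above concrete, here is a hypothetical sketch of the kind of {{SingleFieldDeserializationSchema}} being discussed: it performs no real parsing and simply wraps each raw message as the single field of a Row. Only the class name comes from the comment above; the constructor and field handling are assumptions about how such a format could look.

{code:java}
import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.types.Row;

import java.io.IOException;

/** Hypothetical schema deserializing each message into a single-field Row. */
public class SingleFieldDeserializationSchema implements DeserializationSchema<Row> {

	private static final long serialVersionUID = 1L;

	private final RowTypeInfo typeInfo;

	public SingleFieldDeserializationSchema(TypeInformation<?> fieldType) {
		// The produced rows have exactly one field of the given type,
		// e.g. byte[] for a VARBINARY column.
		this.typeInfo = new RowTypeInfo(fieldType);
	}

	@Override
	public Row deserialize(byte[] message) throws IOException {
		// No parsing: the raw bytes become the single field of the Row.
		return Row.of((Object) message);
	}

	@Override
	public boolean isEndOfStream(Row nextElement) {
		return false;
	}

	@Override
	public TypeInformation<Row> getProducedType() {
		return typeInfo;
	}
}
{code}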
[GitHub] [flink] flinkbot edited a comment on issue #9875: [hotfix][docs] Fix the incorrect scala checkstyle configure file path
flinkbot edited a comment on issue #9875: [hotfix][docs] Fix the incorrect scala checkstyle configure file path URL: https://github.com/apache/flink/pull/9875#issuecomment-540311608 ## CI report: * 3ab8ff37e57d7f9a4e3bc2421f5f978a66022589 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131247414) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect
flinkbot edited a comment on issue #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect URL: https://github.com/apache/flink/pull/9866#issuecomment-539968272 ## CI report: * 372923b25d5fa8376fc40b18e2bf024efef23ed3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131126146) * c4ff26413585cec6efc71732e1bf241f69b76c26 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131251071) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL
flinkbot edited a comment on issue #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL URL: https://github.com/apache/flink/pull/9799#issuecomment-536259382 ## CI report: * be81d3cce668cce8d87bb76da9dc74b182d0a681 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/129580458) * 035f7bdb410f56801af3a1acddfff07483523082 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129581129) * 53f82b69b355a35fac4595759772d21f6136e3e0 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131064416) * 04b90f602c1474b2ef4b20d98239a072042463d9 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/131098544) * a37a872615fa44cb90f29c8675ae187470177a96 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131106199) * e95b683b74045a3630cd90a9c432849c858b8020 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131135245) * ada8ddfbbbc2bf579879b467cd4cb389259843ae : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131139719) * b4633119add361466867352212240b3a615f26fd : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131251062) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add ARM installation steps for flink e2e container tests
flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add ARM installation steps for flink e2e container tests URL: https://github.com/apache/flink/pull/9782#issuecomment-535826739 ## CI report: * d48b95539070679639d5e8c4e640b9a710d7 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129403938) * bc7ff380b3c3deb9751c0a596c8fef46c3b48ef3 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129413892) * 58fe983f436f82e015d7c3635708d60235b9f078 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131251050) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Resolved] (FLINK-13360) Add documentation for HBase connector for Table API & SQL
[ https://issues.apache.org/jira/browse/FLINK-13360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu resolved FLINK-13360. - Resolution: Fixed 1.10.0: 592fba617655d26fbbafdde3f03d267022cc3a94 1.9.2: 42027a4d9572d329d64f684d7e393ace7b6bd799 > Add documentation for HBase connector for Table API & SQL > - > > Key: FLINK-13360 > URL: https://issues.apache.org/jira/browse/FLINK-13360 > Project: Flink > Issue Type: Sub-task > Components: Connectors / HBase, Documentation >Reporter: Jark Wu >Assignee: Jingsong Lee >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0, 1.9.2 > > Time Spent: 20m > Remaining Estimate: 0h > > Add documentation for the HBase connector for Table API & SQL > - "Connect to External Systems": Add DDL for HBase in the "Table Connector" > section. HBase supports batch-source & lookup & sink. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] wuchong commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect
wuchong commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect URL: https://github.com/apache/flink/pull/9866#discussion_r15438 ## File path: flink-connectors/flink-hbase/src/main/java/org/apache/flink/table/descriptors/HBase.java ## @@ -0,0 +1,120 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.descriptors; + +import org.apache.flink.configuration.MemorySize; + +import java.util.Map; + +import static org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_VERSION; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TABLE_NAME; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TYPE_VALUE_HBASE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_INTERVAL; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_ROWS; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_SIZE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_NODE_PARENT; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_QUORUM; + +/** + * Connector descriptor for Apache HBase. + */ +public class HBase extends ConnectorDescriptor { + private DescriptorProperties properties = new DescriptorProperties(); + + public HBase() { + super(CONNECTOR_TYPE_VALUE_HBASE, 1, true); + } + + /** +* Set the Apache HBase version to be used. Required. +* +* @param version HBase version. E.g., "1.4.3". +*/ + public HBase version(String version) { + properties.putString(CONNECTOR_VERSION, version); + return this; + } + + /** +* Set the HBase table name, Required. +* +* @param tableName Name of HBase table. E.g., "testNamespace:testTable", "testDefaultTable" +*/ + public HBase tableName(String tableName) { + properties.putString(CONNECTOR_TABLE_NAME, tableName); + return this; + } + + /** +* Set the zookeeper quorum address to connect the HBase cluster. Required. +* +* @param zookeeperQuorum zookeeper quorum address to connect the HBase cluster. E.g., "localhost:2181,localhost:2182,localhost:2183". +*/ + public HBase zookeeperQuorum(String zookeeperQuorum) { + properties.putString(CONNECTOR_ZK_QUORUM, zookeeperQuorum); + return this; + } + + /** +* Set the zookeeper node parent path of HBase cluster. Optional. Review comment: The default value is "/hbase". This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [flink] wuchong commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect
wuchong commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect URL: https://github.com/apache/flink/pull/9866#discussion_r15200 ## File path: flink-connectors/flink-hbase/src/main/java/org/apache/flink/table/descriptors/HBase.java ## @@ -0,0 +1,120 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.descriptors; + +import org.apache.flink.configuration.MemorySize; + +import java.util.Map; + +import static org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_VERSION; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TABLE_NAME; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TYPE_VALUE_HBASE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_INTERVAL; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_ROWS; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_SIZE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_NODE_PARENT; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_QUORUM; + +/** + * Connector descriptor for Apache HBase. + */ +public class HBase extends ConnectorDescriptor { Review comment: Please add a `@PublicEvolve` annotation above this. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
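For reference, the annotation being requested is `@PublicEvolving` (the comment's "@PublicEvolve" appears to be a typo); applied to the class from the quoted diff, the change would look like this:

{code:java}
import org.apache.flink.annotation.PublicEvolving;

/**
 * Connector descriptor for Apache HBase.
 */
@PublicEvolving
public class HBase extends ConnectorDescriptor {
	// ... methods as in the diff above ...
}
{code}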
[GitHub] [flink] wuchong commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect
wuchong commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect URL: https://github.com/apache/flink/pull/9866#discussion_r15080 ## File path: flink-connectors/flink-hbase/src/main/java/org/apache/flink/table/descriptors/HBase.java ## @@ -0,0 +1,118 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.descriptors; + +import java.util.Map; + +import static org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_VERSION; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TABLE_NAME; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TYPE_VALUE_HBASE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_INTERVAL; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_ROWS; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_SIZE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_NODE_PARENT; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_QUORUM; + +/** + * Connector descriptor for Apache HBase. + */ +public class HBase extends ConnectorDescriptor { + private DescriptorProperties properties = new DescriptorProperties(); + + public HBase() { + super(CONNECTOR_TYPE_VALUE_HBASE, 1, true); + } + + /** +* Set the Apache HBase version to be used. Optional. +* +* @param version HBase version. E.g., "1.4.3". +*/ + public HBase version(String version) { + properties.putString(CONNECTOR_VERSION, version); + return this; + } + + /** +* Set the HBase table name, Required. +* +* @param tableName Name of HBase table. E.g., "testNamespace:testTable", "testDefaultTable" +*/ + public HBase tableName(String tableName) { + properties.putString(CONNECTOR_TABLE_NAME, tableName); + return this; + } + + /** +* Set the zookeeper quorum address to connect the HBase cluster. Required. +* +* @param zookeeperQuorum zookeeper quorum address to connect the HBase cluster. E.g., "localhost:2181,localhost:2182,localhost:2183". +*/ + public HBase zookeeperQuorum(String zookeeperQuorum) { + properties.putString(CONNECTOR_ZK_QUORUM, zookeeperQuorum); + return this; + } + + /** +* Set the zookeeper node parent path of HBase cluster. Optional. +* +* @param zookeeperNodeParent zookeeper node path of hbase cluster. E.g, "/hbase/example-root-znode". 
+*/ + public HBase zookeeperNodeParent(String zookeeperNodeParent) { + properties.putString(CONNECTOR_ZK_NODE_PARENT, zookeeperNodeParent); + return this; + } + + /** +* Set threshold when to flush buffered request based on the memory byte size of rows currently added . Default to 2mb. Optional. +* +* @param writeBufferFlushMaxSize threshold (Byte size) to flush a buffered request. E.g, 2097152 (2MB). +*/ + public HBase writeBufferFlushMaxSize(long writeBufferFlushMaxSize) { + properties.putLong(CONNECTOR_WRITE_BUFFER_FLUSH_MAX_SIZE, writeBufferFlushMaxSize); + return this; + } + + /** +* Set threshold when to flush buffered request based on the number of rows currently added. +* Defaults to not set, i.e. won't flush based on the number of buffered rows. Optional. +* +* @param writeBufferFlushMaxRows number of added rows when begin the request flushing. +*/ + public HBase writeBufferFlushMaxRows(long writeBufferFlushMaxRows) { + properties.putLong(CONNECTOR_WRITE_BUFFER_FLUSH_MAX_ROWS, writeBufferFlushMaxRows); + return this; + } + + /** +* Set a flush interval flushing buffered requesting if
[GitHub] [flink] wuchong closed pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL
wuchong closed pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL URL: https://github.com/apache/flink/pull/9799 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-14359) Create a module called flink-sql-connector-hbase to shade HBase
[ https://issues.apache.org/jira/browse/FLINK-14359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948174#comment-16948174 ] Jark Wu commented on FLINK-14359: - Thanks [~openinx] for taking this. I assigned this issue to you. > Create a module called flink-sql-connector-hbase to shade HBase > --- > > Key: FLINK-14359 > URL: https://issues.apache.org/jira/browse/FLINK-14359 > Project: Flink > Issue Type: Bug > Components: Connectors / HBase >Reporter: Jingsong Lee >Assignee: Zheng Hu >Priority: Major > Fix For: 1.10.0 > > > We need to do the same thing for HBase as we did for kafka and elasticsearch. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-14359) Create a module called flink-sql-connector-hbase to shade HBase
[ https://issues.apache.org/jira/browse/FLINK-14359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-14359: Issue Type: New Feature (was: Bug) > Create a module called flink-sql-connector-hbase to shade HBase > --- > > Key: FLINK-14359 > URL: https://issues.apache.org/jira/browse/FLINK-14359 > Project: Flink > Issue Type: New Feature > Components: Connectors / HBase >Reporter: Jingsong Lee >Assignee: Zheng Hu >Priority: Major > Fix For: 1.10.0 > > > We need to do the same thing for HBase as we did for kafka and elasticsearch. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on issue #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner.
flinkbot edited a comment on issue #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner. URL: https://github.com/apache/flink/pull/9874#issuecomment-540302986 ## CI report: * f4fed52c78d33ef7ec5df2fe2faa26b0690e9c9b : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131246067) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Assigned] (FLINK-14359) Create a module called flink-sql-connector-hbase to shade HBase
[ https://issues.apache.org/jira/browse/FLINK-14359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu reassigned FLINK-14359: --- Assignee: Zheng Hu > Create a module called flink-sql-connector-hbase to shade HBase > --- > > Key: FLINK-14359 > URL: https://issues.apache.org/jira/browse/FLINK-14359 > Project: Flink > Issue Type: Bug > Components: Connectors / HBase >Reporter: Jingsong Lee >Assignee: Zheng Hu >Priority: Major > Fix For: 1.10.0 > > > We need to do the same thing for HBase as we did for kafka and elasticsearch. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on issue #9864: [FLINK-14254][table] Introduce FileSystemOutputFormat for batch
flinkbot edited a comment on issue #9864: [FLINK-14254][table] Introduce FileSystemOutputFormat for batch URL: https://github.com/apache/flink/pull/9864#issuecomment-539910284 ## CI report: * e66114b1aa73a82b4c6bcf5c3a16baedb598f8c3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131098600) * b7887760a3c3d28ca88eb31800ebd61084a520fc : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131249622) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect
flinkbot edited a comment on issue #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect URL: https://github.com/apache/flink/pull/9866#issuecomment-539968272 ## CI report: * 372923b25d5fa8376fc40b18e2bf024efef23ed3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131126146) * c4ff26413585cec6efc71732e1bf241f69b76c26 : UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL
flinkbot edited a comment on issue #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL URL: https://github.com/apache/flink/pull/9799#issuecomment-536259382 ## CI report: * be81d3cce668cce8d87bb76da9dc74b182d0a681 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/129580458) * 035f7bdb410f56801af3a1acddfff07483523082 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129581129) * 53f82b69b355a35fac4595759772d21f6136e3e0 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131064416) * 04b90f602c1474b2ef4b20d98239a072042463d9 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/131098544) * a37a872615fa44cb90f29c8675ae187470177a96 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131106199) * e95b683b74045a3630cd90a9c432849c858b8020 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131135245) * ada8ddfbbbc2bf579879b467cd4cb389259843ae : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131139719) * b4633119add361466867352212240b3a615f26fd : UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9542: [FLINK-13873][metrics] Change the column family as tags for influxdb …
flinkbot edited a comment on issue #9542: [FLINK-13873][metrics] Change the column family as tags for influxdb … URL: https://github.com/apache/flink/pull/9542#issuecomment-525276537 ## CI report: * e8636926351f3d406962dcadba275e20e49aff39 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/124736151) * d23ea97e8419bbacea0698b3ba82a459d940cf38 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/128606376) * 05977cd49eb306d768668be4e8cb31034343df02 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128606947) * 057d453e5e00656bcfcb87d1d69172f614b2d11f : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/128681895) * 5001042b5fc5202da14c06e2d21faf1427f50a66 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128683546) * 3a1e39e889daf0fca82c54982911ded35df6cb77 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128692102) * 2745a3cb02e601a1a26360e9d6b3f0af5428c66a : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129027621) * e108aa24e739d6f48f63efc560c3fc049b509860 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131172797) * 24a270d468dea677379713d5cf402ea453d9f222 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131246042) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add ARM installation steps for flink e2e container tests
flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add ARM installation steps for flink e2e container tests URL: https://github.com/apache/flink/pull/9782#issuecomment-535826739 ## CI report: * d48b95539070679639d5e8c4e640b9a710d7 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129403938) * bc7ff380b3c3deb9751c0a596c8fef46c3b48ef3 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129413892) * 58fe983f436f82e015d7c3635708d60235b9f078 : UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-14359) Create a module called flink-sql-connector-hbase to shade HBase
[ https://issues.apache.org/jira/browse/FLINK-14359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948166#comment-16948166 ] Jingsong Lee commented on FLINK-14359: -- [~openinx] Good~ feel free to take this ticket. > Create a module called flink-sql-connector-hbase to shade HBase > --- > > Key: FLINK-14359 > URL: https://issues.apache.org/jira/browse/FLINK-14359 > Project: Flink > Issue Type: Bug > Components: Connectors / HBase >Reporter: Jingsong Lee >Priority: Major > Fix For: 1.10.0 > > > We need to do the same thing for HBase as we did for Kafka and Elasticsearch. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-14359) Create a module called flink-sql-connector-hbase to shade HBase
[ https://issues.apache.org/jira/browse/FLINK-14359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948164#comment-16948164 ] Zheng Hu commented on FLINK-14359: -- Also FYI [~jark] > Create a module called flink-sql-connector-hbase to shade HBase > --- > > Key: FLINK-14359 > URL: https://issues.apache.org/jira/browse/FLINK-14359 > Project: Flink > Issue Type: Bug > Components: Connectors / HBase >Reporter: Jingsong Lee >Priority: Major > Fix For: 1.10.0 > > > We need to do the same thing for HBase as we did for Kafka and Elasticsearch. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-14359) Create a module called flink-sql-connector-hbase to shade HBase
[ https://issues.apache.org/jira/browse/FLINK-14359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948163#comment-16948163 ] Zheng Hu commented on FLINK-14359: -- [~lzljs3620320] I'm interested in this task. Mind letting me handle it? Thanks. > Create a module called flink-sql-connector-hbase to shade HBase > --- > > Key: FLINK-14359 > URL: https://issues.apache.org/jira/browse/FLINK-14359 > Project: Flink > Issue Type: Bug > Components: Connectors / HBase >Reporter: Jingsong Lee >Priority: Major > Fix For: 1.10.0 > > > We need to do the same thing for HBase as we did for Kafka and Elasticsearch. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] TisonKun commented on issue #9832: [FLINK-11843] Bind lifespan of Dispatcher to leader session
TisonKun commented on issue #9832: [FLINK-11843] Bind lifespan of Dispatcher to leader session URL: https://github.com/apache/flink/pull/9832#issuecomment-540323951 The problem above is, abstractly, that the actions in `(grant|revoke)Leadership` are asynchronous. `(grant|revoke)Leadership` are synchronized inside the LeaderElectionService, but we later drop that synchronization by triggering an asynchronous operation. Given that a component cannot serve before it confirms leadership, it might be reasonable to keep the operations in `(grant|revoke)Leadership` synchronized. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] TisonKun commented on issue #9832: [FLINK-11843] Bind lifespan of Dispatcher to leader session
TisonKun commented on issue #9832: [FLINK-11843] Bind lifespan of Dispatcher to leader session URL: https://github.com/apache/flink/pull/9832#issuecomment-540322366 Thanks for opening this pull request. I have one concern about the synchronization between `Dispatcher` and `JobGraphStore`. Since the close-async operation and the modify-job-graph-store operation are both queued in the main thread, a previously queued modification is not cancelled even after the `Dispatcher` has been revoked leadership. Say the `DispatcherRunner` lost leadership and was then re-granted it without another leader appearing in between (e.g., after a ZK connection loss); it is possible that the previous `Dispatcher` ran a modification before it was terminated. If you also consider this a valid problem, then we can go ahead with a solution. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
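To make the race described in these two comments concrete, here is a minimal, self-contained sketch with hypothetical classes (not Flink's actual `Dispatcher`/`JobGraphStore` code): a store modification that is already queued on the single-threaded main thread still runs after leadership has been revoked, because revocation only updates state and does not cancel queued work.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class QueuedModificationRace {

    public static void main(String[] args) throws InterruptedException {
        // The "main thread" of the component: a single-threaded executor.
        ExecutorService mainThread = Executors.newSingleThreadExecutor();
        AtomicBoolean isLeader = new AtomicBoolean(true);

        // A long-running action occupies the main thread ...
        mainThread.execute(QueuedModificationRace::longRunningAction);

        // ... while a job-graph-store modification is queued behind it.
        mainThread.execute(() ->
                System.out.println("modifying job graph store, isLeader=" + isLeader.get()));

        // Leadership is revoked before the queued modification runs;
        // nothing cancels it, so it executes anyway.
        isLeader.set(false);

        mainThread.shutdown();
        mainThread.awaitTermination(1, TimeUnit.SECONDS);
        // Prints: modifying job graph store, isLeader=false
    }

    private static void longRunningAction() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Fencing each queued action with the leader session ID under which it was created, and dropping it when the session no longer matches, would make the stale modification a no-op.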
[GitHub] [flink] JingsongLi commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect
JingsongLi commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect URL: https://github.com/apache/flink/pull/9866#discussion_r14855 ## File path: flink-connectors/flink-hbase/src/main/java/org/apache/flink/table/descriptors/HBase.java ## @@ -0,0 +1,120 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.descriptors; + +import org.apache.flink.configuration.MemorySize; + +import java.util.Map; + +import static org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_VERSION; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TABLE_NAME; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TYPE_VALUE_HBASE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_INTERVAL; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_ROWS; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_SIZE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_NODE_PARENT; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_QUORUM; + +/** + * Connector descriptor for Apache HBase. + */ +public class HBase extends ConnectorDescriptor { + private DescriptorProperties properties = new DescriptorProperties(); + + public HBase() { + super(CONNECTOR_TYPE_VALUE_HBASE, 1, true); + } + + /** +* Set the Apache HBase version to be used. Required. +* +* @param version HBase version. E.g., "1.4.3". +*/ + public HBase version(String version) { + properties.putString(CONNECTOR_VERSION, version); + return this; + } + + /** +* Set the HBase table name, Required. +* +* @param tableName Name of HBase table. E.g., "testNamespace:testTable", "testDefaultTable" +*/ + public HBase tableName(String tableName) { + properties.putString(CONNECTOR_TABLE_NAME, tableName); + return this; + } + + /** +* Set the zookeeper quorum address to connect the HBase cluster. Required. +* +* @param zookeeperQuorum zookeeper quorum address to connect the HBase cluster. E.g., "localhost:2181,localhost:2182,localhost:2183". +*/ + public HBase zookeeperQuorum(String zookeeperQuorum) { + properties.putString(CONNECTOR_ZK_QUORUM, zookeeperQuorum); + return this; + } + + /** +* Set the zookeeper node parent path of HBase cluster. Optional. +* +* @param zookeeperNodeParent zookeeper node path of hbase cluster. E.g, "/hbase/example-root-znode". 
+*/ + public HBase zookeeperNodeParent(String zookeeperNodeParent) { + properties.putString(CONNECTOR_ZK_NODE_PARENT, zookeeperNodeParent); + return this; + } + + /** +* Set threshold when to flush buffered request based on the memory byte size of rows currently added . Default to 2mb. Optional. +* +* @param maxSize threshold (Byte size) to flush a buffered request. E.g, "2097152", "2mb", "4kb". Review comment: `(Byte size)` is a little misleading. Just say `the maximum size` and add `(using the syntax of {@link MemorySize})`, like `Elasticsearch.bulkFlushMaxSize`? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
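For context on the API being reviewed, a sketch of how the descriptor would be used with `TableEnvironment#connect`, based only on the builder methods visible in the diff above (the schema and format calls that would normally follow are omitted because they are outside this PR's diff):

```java
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.descriptors.HBase;

public class HBaseDescriptorUsage {

    public static void connectHBase(TableEnvironment tableEnv) {
        tableEnv.connect(
            new HBase()
                .version("1.4.3")
                .tableName("testNamespace:testTable")
                .zookeeperQuorum("localhost:2181,localhost:2182,localhost:2183")
                .zookeeperNodeParent("/hbase/example-root-znode")
                // threshold in bytes, i.e. the 2 MB default discussed above
                .writeBufferFlushMaxSize(2 * 1024 * 1024)
                .writeBufferFlushMaxRows(1000));
        // Schema registration would follow here; omitted in this sketch.
    }
}
```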
[GitHub] [flink] JingsongLi commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect
JingsongLi commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect URL: https://github.com/apache/flink/pull/9866#discussion_r14364 ## File path: flink-connectors/flink-hbase/src/main/java/org/apache/flink/table/descriptors/HBase.java ## @@ -0,0 +1,120 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.descriptors; + +import org.apache.flink.configuration.MemorySize; + +import java.util.Map; + +import static org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_VERSION; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TABLE_NAME; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TYPE_VALUE_HBASE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_INTERVAL; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_ROWS; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_SIZE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_NODE_PARENT; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_QUORUM; + +/** + * Connector descriptor for Apache HBase. + */ +public class HBase extends ConnectorDescriptor { + private DescriptorProperties properties = new DescriptorProperties(); + + public HBase() { + super(CONNECTOR_TYPE_VALUE_HBASE, 1, true); + } + + /** +* Set the Apache HBase version to be used. Required. +* +* @param version HBase version. E.g., "1.4.3". +*/ + public HBase version(String version) { + properties.putString(CONNECTOR_VERSION, version); + return this; + } + + /** +* Set the HBase table name, Required. +* +* @param tableName Name of HBase table. E.g., "testNamespace:testTable", "testDefaultTable" +*/ + public HBase tableName(String tableName) { + properties.putString(CONNECTOR_TABLE_NAME, tableName); + return this; + } + + /** +* Set the zookeeper quorum address to connect the HBase cluster. Required. +* +* @param zookeeperQuorum zookeeper quorum address to connect the HBase cluster. E.g., "localhost:2181,localhost:2182,localhost:2183". +*/ + public HBase zookeeperQuorum(String zookeeperQuorum) { + properties.putString(CONNECTOR_ZK_QUORUM, zookeeperQuorum); + return this; + } + + /** +* Set the zookeeper node parent path of HBase cluster. Optional. +* +* @param zookeeperNodeParent zookeeper node path of hbase cluster. E.g, "/hbase/example-root-znode". 
+*/ + public HBase zookeeperNodeParent(String zookeeperNodeParent) { + properties.putString(CONNECTOR_ZK_NODE_PARENT, zookeeperNodeParent); + return this; + } + + /** +* Set threshold when to flush buffered request based on the memory byte size of rows currently added . Default to 2mb. Optional. Review comment: In `added .`, remove the space before the period. Also, this line seems too long; wrap it at `Default to`? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] JingsongLi commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect
JingsongLi commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect URL: https://github.com/apache/flink/pull/9866#discussion_r15283 ## File path: flink-connectors/flink-hbase/src/main/java/org/apache/flink/table/descriptors/HBase.java ## @@ -0,0 +1,118 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.descriptors; + +import java.util.Map; + +import static org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_VERSION; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TABLE_NAME; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TYPE_VALUE_HBASE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_INTERVAL; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_ROWS; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_SIZE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_NODE_PARENT; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_QUORUM; + +/** + * Connector descriptor for Apache HBase. + */ +public class HBase extends ConnectorDescriptor { + private DescriptorProperties properties = new DescriptorProperties(); + + public HBase() { + super(CONNECTOR_TYPE_VALUE_HBASE, 1, true); + } + + /** +* Set the Apache HBase version to be used. Optional. +* +* @param version HBase version. E.g., "1.4.3". +*/ + public HBase version(String version) { + properties.putString(CONNECTOR_VERSION, version); + return this; + } + + /** +* Set the HBase table name, Required. +* +* @param tableName Name of HBase table. E.g., "testNamespace:testTable", "testDefaultTable" +*/ + public HBase tableName(String tableName) { + properties.putString(CONNECTOR_TABLE_NAME, tableName); + return this; + } + + /** +* Set the zookeeper quorum address to connect the HBase cluster. Required. +* +* @param zookeeperQuorum zookeeper quorum address to connect the HBase cluster. E.g., "localhost:2181,localhost:2182,localhost:2183". +*/ + public HBase zookeeperQuorum(String zookeeperQuorum) { + properties.putString(CONNECTOR_ZK_QUORUM, zookeeperQuorum); + return this; + } + + /** +* Set the zookeeper node parent path of HBase cluster. Optional. +* +* @param zookeeperNodeParent zookeeper node path of hbase cluster. E.g, "/hbase/example-root-znode". 
+*/ + public HBase zookeeperNodeParent(String zookeeperNodeParent) { + properties.putString(CONNECTOR_ZK_NODE_PARENT, zookeeperNodeParent); + return this; + } + + /** +* Set threshold when to flush buffered request based on the memory byte size of rows currently added . Default to 2mb. Optional. +* +* @param writeBufferFlushMaxSize threshold (Byte size) to flush a buffered request. E.g, 2097152 (2MB). +*/ + public HBase writeBufferFlushMaxSize(long writeBufferFlushMaxSize) { + properties.putLong(CONNECTOR_WRITE_BUFFER_FLUSH_MAX_SIZE, writeBufferFlushMaxSize); + return this; + } + + /** +* Set threshold when to flush buffered request based on the number of rows currently added. +* Defaults to not set, i.e. won't flush based on the number of buffered rows. Optional. +* +* @param writeBufferFlushMaxRows number of added rows when begin the request flushing. +*/ + public HBase writeBufferFlushMaxRows(long writeBufferFlushMaxRows) { + properties.putLong(CONNECTOR_WRITE_BUFFER_FLUSH_MAX_ROWS, writeBufferFlushMaxRows); + return this; + } + + /** +* Set a flush interval flushing buffered requesting
[GitHub] [flink] flinkbot edited a comment on issue #9875: [hotfix][docs] Fix the incorrect scala checkstyle configure file path
flinkbot edited a comment on issue #9875: [hotfix][docs] Fix the incorrect scala checkstyle configure file path URL: https://github.com/apache/flink/pull/9875#issuecomment-540311608 ## CI report: * 3ab8ff37e57d7f9a4e3bc2421f5f978a66022589 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131247414) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9864: [FLINK-14254][table] Introduce FileSystemOutputFormat for batch
flinkbot edited a comment on issue #9864: [FLINK-14254][table] Introduce FileSystemOutputFormat for batch URL: https://github.com/apache/flink/pull/9864#issuecomment-539910284 ## CI report: * e66114b1aa73a82b4c6bcf5c3a16baedb598f8c3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131098600) * b7887760a3c3d28ca88eb31800ebd61084a520fc : UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] wuchong commented on a change in pull request #9802: [FLINK-13361][documention] Add documentation for JDBC connector for Table API & SQL
wuchong commented on a change in pull request #9802: [FLINK-13361][documention] Add documentation for JDBC connector for Table API & SQL URL: https://github.com/apache/flink/pull/9802#discussion_r13932 ## File path: docs/dev/table/connect.md ## @@ -1075,6 +1076,143 @@ CREATE TABLE MyUserTable ( {% top %} +### JDBC Connector + +Source: Batch +Sink: Batch +Sink: Streaming Append Mode +Sink: Streaming Upsert Mode +Temporal Join: Sync Mode + +The JDBC connector allows for reading from an JDBC client. +The JDBC connector allows for writing into an JDBC client. + +The connector can operate in [upsert mode](#update-modes) for exchanging UPSERT/DELETE messages with the external system using a [key defined by the query](./streaming/dynamic_tables.html#table-to-stream-conversion). + +For append-only queries, the connector can also operate in [append mode](#update-modes) for exchanging only INSERT messages with the external system. + +Need specify JDBC library, for example, if want to use Mysql library, the following dependency to your project: + +{% highlight xml %} + +mysql +mysql-connector-java +8.0.17 + +{% endhighlight %} + +**Library support:** Now, we only support mysql, derby, postgres. + +The connector can be defined as follows: + + + +{% highlight yaml %} +connector: + type: jdbc + url: "jdbc:mysql://localhost:3306/flink-test" # required: JDBC DB url + table: "jdbc_table_name"# required: jdbc table name + driver: "com.mysql.jdbc.Driver" # optional: the class name of the JDBC driver to use to connect to this URL. + # If not set, it will automatically be derived from the URL. + + username: "name"# optional: jdbc user name and password + password: "password" + + read: # scan options, optional, used when reading from table +partition: # These options must all be specified if any of them is specified. In addition, partition.num must be specified. They + # describe how to partition the table when reading in parallel from multiple tasks. partition.column must be a numeric, + # date, or timestamp column from the table in question. Notice that lowerBound and upperBound are just used to decide + # the partition stride, not for filtering the rows in table. So all rows in the table will be partitioned and returned. + # This option applies only to reading. + column: "column_name" # optional, name of the column used for partitioning the input. + num: 50 # optional, the largest value of the last partition. Review comment: the number of partitions? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
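To illustrate the `read.partition` options discussed in this thread, here is a rough sketch of how per-task scan ranges are typically derived from `lower-bound`, `upper-bound`, and `num` (the usual partition-stride idea; an illustration, not Flink's actual implementation):

```java
public class PartitionStrideSketch {

    public static void main(String[] args) {
        long lowerBound = 500;   // read.partition.lower-bound
        long upperBound = 1000;  // read.partition.upper-bound
        int numPartitions = 5;   // read.partition.num

        // The bounds only decide the stride; the first and last partitions
        // are left open-ended so that all rows are still returned.
        long stride = Math.max(1, (upperBound - lowerBound + 1) / numPartitions);

        for (int i = 0; i < numPartitions; i++) {
            long start = lowerBound + i * stride;
            StringBuilder where = new StringBuilder();
            if (i > 0) {
                where.append("column_name >= ").append(start);
            }
            if (i < numPartitions - 1) {
                if (i > 0) {
                    where.append(" AND ");
                }
                where.append("column_name < ").append(start + stride);
            }
            System.out.printf("partition %d: WHERE %s%n",
                    i, where.length() == 0 ? "TRUE" : where);
        }
    }
}
```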
[GitHub] [flink] wuchong commented on a change in pull request #9802: [FLINK-13361][documention] Add documentation for JDBC connector for Table API & SQL
wuchong commented on a change in pull request #9802: [FLINK-13361][documention] Add documentation for JDBC connector for Table API & SQL URL: https://github.com/apache/flink/pull/9802#discussion_r14037 ## File path: docs/dev/table/connect.md ## @@ -1075,6 +1076,143 @@ CREATE TABLE MyUserTable ( {% top %} +### JDBC Connector + +Source: Batch +Sink: Batch +Sink: Streaming Append Mode +Sink: Streaming Upsert Mode +Temporal Join: Sync Mode + +The JDBC connector allows for reading from an JDBC client. +The JDBC connector allows for writing into an JDBC client. + +The connector can operate in [upsert mode](#update-modes) for exchanging UPSERT/DELETE messages with the external system using a [key defined by the query](./streaming/dynamic_tables.html#table-to-stream-conversion). + +For append-only queries, the connector can also operate in [append mode](#update-modes) for exchanging only INSERT messages with the external system. + +Need specify JDBC library, for example, if want to use Mysql library, the following dependency to your project: + +{% highlight xml %} + +mysql +mysql-connector-java +8.0.17 + +{% endhighlight %} + +**Library support:** Now, we only support mysql, derby, postgres. + +The connector can be defined as follows: + + + +{% highlight yaml %} +connector: + type: jdbc + url: "jdbc:mysql://localhost:3306/flink-test" # required: JDBC DB url + table: "jdbc_table_name"# required: jdbc table name + driver: "com.mysql.jdbc.Driver" # optional: the class name of the JDBC driver to use to connect to this URL. + # If not set, it will automatically be derived from the URL. + + username: "name"# optional: jdbc user name and password + password: "password" + + read: # scan options, optional, used when reading from table +partition: # These options must all be specified if any of them is specified. In addition, partition.num must be specified. They + # describe how to partition the table when reading in parallel from multiple tasks. partition.column must be a numeric, + # date, or timestamp column from the table in question. Notice that lowerBound and upperBound are just used to decide + # the partition stride, not for filtering the rows in table. So all rows in the table will be partitioned and returned. + # This option applies only to reading. + column: "column_name" # optional, name of the column used for partitioning the input. + num: 50 # optional, the largest value of the last partition. + lower-bound: 500 # optional, the smallest value of the first partition. + upper-bound: 1000 # optional, the largest value of the last partition. +fetch-size: 100 # optional, Gives the reader a hint as to the number of rows that should be fetched +# from the database when reading per round trip. If the value specified is zero, then +# the hint is ignored. The default value is zero. + + lookup: # lookup options, optional, used in temporary join +cache: + max-rows: 5000 # optional, max number of rows of lookup cache, over this value, the oldest rows will + # be eliminated. "cache.max-rows" and "cache.ttl" options must all be specified if any + # of them is specified. Cache is not enabled as default. + ttl: "10s" # optional, the max time to live for each rows in lookup cache, over this time, the oldest rows + # will be expired. "cache.max-rows" and "cache.ttl" options must all be specified if any of + # them is specified. Cache is not enabled as default. 
+max-retries: 3 # optional, max retry times if lookup database failed + + write: # sink options, optional, used when writing into table + flush: +max-rows: 5000 # optional, flush max size (includes all append, upsert and delete records), + # over this number of records, will flush data. The default value is "5000". +interval: "2s" # optional, flush interval mills, over this time, asynchronous threads will flush data. + # The default value is "0s", which means no asynchronous flush thread will be scheduled. + max-retries: 3 # optional, max retry times if writing records to database failed. +{% endhighlight %} + + + +{% highlight sql %} +CREATE TABLE MyUserTable ( + ... +) WITH ( + 'connector.type' = 'jdbc', -- required: specify this table type is jdbc + + 'connector.url' = 'jdbc:mysql://localhost:3306/flink-test', -- required: JDBC DB url + + 'connector.table' = 'jdbc_table_name', -- required: jdbc table name + + 'connector.driver' = 'com.mysql.jdbc.Driver', -- optional: the class name of the JDBC driver to use to connect to this URL. +
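The `lookup.cache` options quoted above combine a row-count bound with a TTL: the oldest entries are evicted once `cache.max-rows` is exceeded, and any entry older than `cache.ttl` is treated as missing. A minimal sketch of those semantics using an insertion-ordered map (an illustration under those assumptions, not Flink's actual JDBC lookup code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LookupCacheSketch<K, V> {

    private static final class Timestamped<T> {
        final T value;
        final long createdAtMillis = System.currentTimeMillis();

        Timestamped(T value) {
            this.value = value;
        }
    }

    private final long maxRows;
    private final long ttlMillis;
    private final LinkedHashMap<K, Timestamped<V>> cache;

    public LookupCacheSketch(long maxRows, long ttlMillis) {
        this.maxRows = maxRows;
        this.ttlMillis = ttlMillis;
        this.cache = new LinkedHashMap<K, Timestamped<V>>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Timestamped<V>> eldest) {
                // Evict the oldest row once cache.max-rows is exceeded.
                return size() > LookupCacheSketch.this.maxRows;
            }
        };
    }

    /** Returns null on a miss or expired entry; the caller then queries the database. */
    public V get(K key) {
        Timestamped<V> entry = cache.get(key);
        if (entry == null) {
            return null;
        }
        if (System.currentTimeMillis() - entry.createdAtMillis > ttlMillis) {
            cache.remove(key); // expired per cache.ttl
            return null;
        }
        return entry.value;
    }

    public void put(K key, V value) {
        cache.put(key, new Timestamped<>(value));
    }
}
```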
[GitHub] [flink] wuchong commented on a change in pull request #9802: [FLINK-13361][documention] Add documentation for JDBC connector for Table API & SQL
wuchong commented on a change in pull request #9802: [FLINK-13361][documention] Add documentation for JDBC connector for Table API & SQL URL: https://github.com/apache/flink/pull/9802#discussion_r13168 ## File path: docs/dev/table/connect.md ## @@ -1075,6 +1075,88 @@ CREATE TABLE MyUserTable ( {% top %} +### JDBC Connector + +Source: Batch +Sink: Batch +Sink: Streaming Append Mode +Sink: Streaming Upsert Mode +Temporal Join: Sync Mode + +The JDBC connector allows for reading from an JDBC client. +The JDBC connector allows for writing into an JDBC client. + +The connector can operate in [upsert mode](#update-modes) for exchanging UPSERT/DELETE messages with the external system using a [key defined by the query](./streaming/dynamic_tables.html#table-to-stream-conversion). + +For append-only queries, the connector can also operate in [append mode](#update-modes) for exchanging only INSERT messages with the external system. + +To use this connector, add the following dependency to your project: + +{% highlight xml %} + + org.apache.flink + flink-connector-jdbc{{ site.scala_version_suffix }} + {{site.version }} + +{% endhighlight %} + +And must also specify JDBC library, for example, if want to use Mysql library, the following dependency to your project: + +{% highlight xml %} + +mysql +mysql-connector-java +8.0.17 + +{% endhighlight %} + +**Library support:** Now, we only support mysql, derby, postgres. Review comment: I agree, so we can omit the version column. How about ``` Name | Group Id | Artifact Id | JAR | MySQL Driver | mysql | mysql-connector-java | [Download](http://central.maven.org/maven2/mysql/mysql-connector-java/) | PostgreSQL Driver | org.postgresql | postgresql | [Download](https://jdbc.postgresql.org/download.html) | Derby Driver | ... ``` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-14339) The checkpoint ID count wrong on restore savepoint log
[ https://issues.apache.org/jira/browse/FLINK-14339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948152#comment-16948152 ] king's uncle commented on FLINK-14339: -- Yes, but we want to see the current checkpoint ID, not the next checkpoint ID. This log can cause misunderstanding. > The checkpoint ID count wrong on restore savepoint log > -- > > Key: FLINK-14339 > URL: https://issues.apache.org/jira/browse/FLINK-14339 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.8.0 >Reporter: king's uncle >Priority: Minor > > I saw the below log when I tested Flink restore from the savepoint. > {code:java} > [flink-akka.actor.default-dispatcher-2] INFO > org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore - > Recovering checkpoints from ZooKeeper. > [flink-akka.actor.default-dispatcher-2] INFO > org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore - Found > 0 checkpoints in ZooKeeper. > [flink-akka.actor.default-dispatcher-2] INFO > org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore - > Trying to fetch 0 checkpoints from storage. > [flink-akka.actor.default-dispatcher-2] INFO > org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Starting job > from savepoint > /nfsdata/ecs/flink-savepoints/flink-savepoint-test//201910080158/savepoint-00-003c9b080832 > (allowing non restored state) > [flink-akka.actor.default-dispatcher-2] INFO > org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Reset the > checkpoint ID of job to 12285. > [flink-akka.actor.default-dispatcher-2] INFO > org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore - > Recovering checkpoints from ZooKeeper. > [flink-akka.actor.default-dispatcher-2] INFO > org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore - Found > 1 checkpoints in ZooKeeper. > [flink-akka.actor.default-dispatcher-2] INFO > org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore - > Trying to fetch 1 checkpoints from storage. > [flink-akka.actor.default-dispatcher-2] INFO > org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore - > Trying to retrieve checkpoint 12284. > [flink-akka.actor.default-dispatcher-2] INFO > org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Restoring job > from latest valid checkpoint: Checkpoint > 12284 @ 0 for . > {code} > You can find the final restore checkpoint ID is 12284, but we can see the log > print "Reset the checkpoint ID of job to > 12285". So, I checked the source code. > {code:java} > // Reset the checkpoint ID counter > long nextCheckpointId = savepoint.getCheckpointID() + 1; > checkpointIdCounter.setCount(nextCheckpointId); > LOG.info("Reset the checkpoint ID of job {} to {}.", job, nextCheckpointId); > {code} > I think it should print the current checkpoint ID instead of the next checkpoint ID. > {code:java} > LOG.info("Reset the checkpoint ID of job {} to {}.", job, > savepoint.getCheckpointID()); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] JingsongLi commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL
JingsongLi commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL URL: https://github.com/apache/flink/pull/9799#discussion_r13600 ## File path: docs/dev/table/connect.md ## @@ -1115,12 +1106,15 @@ connector: znode.parent: "/test"# required: the root dir in Zookeeper for HBase cluster write.buffer-flush: -max-size: 1048576# optional: Write option, sets when to flush a buffered request - # based on the memory size of rows currently added. -max-rows: 1 # optional: Write option, sets when to flush buffered - # request based on the number of rows currently added. -interval: 1 # optional: Write option, sets a flush interval flushing buffered - # requesting if the interval passes, in milliseconds. +max-size: 1048576# optional: writing option, determines how many size in memory of buffered + # rows to insert per round trip. This can help performance on writing to JDBC + # database. The default value is "2mb". +max-rows: 1 # optional: writing option, determines how many rows to insert per round trip. + #This can help performance on writing to JDBC database. No default value, + # i.e. the default flushing is not depends on the number of buffered rows. +interval: 1 # optional: writing option, sets a flush interval flushing buffered requesting Review comment: Sorry, forgot this, Updated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
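The three `write.buffer-flush` triggers described in the quoted section (a byte-size bound, a row-count bound, and a periodic interval) can be sketched as follows; this illustrates the semantics only and is not the connector's actual sink code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Timer;
import java.util.TimerTask;

public class BufferedFlushSketch {

    private final long maxSizeBytes; // write.buffer-flush.max-size
    private final long maxRows;      // write.buffer-flush.max-rows
    private final List<byte[]> buffer = new ArrayList<>();
    private long bufferedBytes;

    public BufferedFlushSketch(long maxSizeBytes, long maxRows, long intervalMillis) {
        this.maxSizeBytes = maxSizeBytes;
        this.maxRows = maxRows;
        if (intervalMillis > 0) {
            // write.buffer-flush.interval: time-based trigger.
            new Timer(true).scheduleAtFixedRate(new TimerTask() {
                @Override
                public void run() {
                    flush();
                }
            }, intervalMillis, intervalMillis);
        }
    }

    public synchronized void add(byte[] row) {
        buffer.add(row);
        bufferedBytes += row.length;
        // Size- and row-count-based triggers.
        if (buffer.size() >= maxRows || bufferedBytes >= maxSizeBytes) {
            flush();
        }
    }

    public synchronized void flush() {
        if (!buffer.isEmpty()) {
            System.out.printf("flushing %d rows (%d bytes)%n", buffer.size(), bufferedBytes);
            buffer.clear();
            bufferedBytes = 0;
        }
    }
}
```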
[jira] [Created] (FLINK-14359) Create a module called flink-sql-connector-hbase to shade HBase
Jingsong Lee created FLINK-14359: Summary: Create a module called flink-sql-connector-hbase to shade HBase Key: FLINK-14359 URL: https://issues.apache.org/jira/browse/FLINK-14359 Project: Flink Issue Type: Bug Components: Connectors / HBase Reporter: Jingsong Lee Fix For: 1.10.0 We need to do the same thing for HBase as we did for Kafka and Elasticsearch. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] JingsongLi commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL
JingsongLi commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL URL: https://github.com/apache/flink/pull/9799#discussion_r13084 ## File path: docs/dev/table/connect.md ## @@ -49,6 +49,7 @@ The following tables list all available connectors and formats. Their mutual com | Apache Kafka | 0.10| `flink-connector-kafka-0.10` | [Download](http://central.maven.org/maven2/org/apache/flink/flink-sql-connector-kafka-0.10{{site.scala_version_suffix}}/{{site.version}}/flink-sql-connector-kafka-0.10{{site.scala_version_suffix}}-{{site.version}}.jar) | | Apache Kafka | 0.11| `flink-connector-kafka-0.11` | [Download](http://central.maven.org/maven2/org/apache/flink/flink-sql-connector-kafka-0.11{{site.scala_version_suffix}}/{{site.version}}/flink-sql-connector-kafka-0.11{{site.scala_version_suffix}}-{{site.version}}.jar) | | Apache Kafka | 0.11+ (`universal`) | `flink-connector-kafka` | [Download](http://central.maven.org/maven2/org/apache/flink/flink-sql-connector-kafka{{site.scala_version_suffix}}/{{site.version}}/flink-sql-connector-kafka{{site.scala_version_suffix}}-{{site.version}}.jar) | +| HBase | 1.4.3 | `flink-hbase`| [Download](http://central.maven.org/maven2/org/apache/flink/flink-hbase{{site.scala_version_suffix}}/{{site.version}}/flink-hbase{{site.scala_version_suffix}}-{{site.version}}.jar) | Review comment: Yeah, you are right. The more appropriate way is to use a shaded HBase. I created a JIRA to track this: https://issues.apache.org/jira/browse/FLINK-14359 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] dianfu commented on a change in pull request #9865: [FLINK-14212][python] Support no-argument Python UDFs.
dianfu commented on a change in pull request #9865: [FLINK-14212][python] Support no-argument Python UDFs. URL: https://github.com/apache/flink/pull/9865#discussion_r07912 ## File path: flink-python/pyflink/table/tests/test_udf.py ## @@ -204,6 +204,26 @@ def eval(self, col): self.t_env.register_function( "non-callable-udf", udf(Plus(), DataTypes.BIGINT(), DataTypes.BIGINT())) +def test_no_argument_deterministic_udf(self): +@udf(input_types=[], result_type=DataTypes.BIGINT()) +def one(): +return 1 + +self.t_env.register_function( Review comment: one line is enough This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] dianfu commented on a change in pull request #9865: [FLINK-14212][python] Support no-argument Python UDFs.
dianfu commented on a change in pull request #9865: [FLINK-14212][python] Support no-argument Python UDFs. URL: https://github.com/apache/flink/pull/9865#discussion_r07873 ## File path: flink-python/pyflink/table/tests/test_udf.py ## @@ -204,6 +204,26 @@ def eval(self, col): self.t_env.register_function( "non-callable-udf", udf(Plus(), DataTypes.BIGINT(), DataTypes.BIGINT())) +def test_no_argument_deterministic_udf(self): Review comment: rename to test_udf_without_arguments and add tests for both deterministic and non-deterministic udfs? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] dianfu commented on a change in pull request #9865: [FLINK-14212][python] Support no-argument Python UDFs.
dianfu commented on a change in pull request #9865: [FLINK-14212][python] Support no-argument Python UDFs. URL: https://github.com/apache/flink/pull/9865#discussion_r08079 ## File path: flink-python/pyflink/table/tests/test_udf.py ## @@ -204,6 +204,26 @@ def eval(self, col): self.t_env.register_function( "non-callable-udf", udf(Plus(), DataTypes.BIGINT(), DataTypes.BIGINT())) +def test_no_argument_deterministic_udf(self): +@udf(input_types=[], result_type=DataTypes.BIGINT()) +def one(): +return 1 + +self.t_env.register_function( +"one", one) +self.t_env.register_function("add", add) + +table_sink = source_sink_utils.TestAppendSink(['a', 'b'], + [DataTypes.BIGINT(), DataTypes.BIGINT()]) +self.t_env.register_table_sink("Results", table_sink) + +t = self.t_env.from_elements([(1, 2), (2, 5), (3, 1)], ['a', 'b']) +t.select("one(), add(a, b)") \ Review comment: Why test "add" in this test case? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-13567) Avro Confluent Schema Registry nightly end-to-end test failed on Travis
[ https://issues.apache.org/jira/browse/FLINK-13567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948149#comment-16948149 ] Yu Li commented on FLINK-13567: --- Another instance: https://api.travis-ci.org/v3/job/595592185/log.txt It seems to fail consistently recently. > Avro Confluent Schema Registry nightly end-to-end test failed on Travis > --- > > Key: FLINK-13567 > URL: https://issues.apache.org/jira/browse/FLINK-13567 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka, Tests >Affects Versions: 1.8.2, 1.9.0, 1.10.0 >Reporter: Till Rohrmann >Priority: Critical > Labels: test-stability > Fix For: 1.10.0 > > Attachments: patch.diff > > > The {{Avro Confluent Schema Registry nightly end-to-end test}} failed on > Travis with > {code} > [FAIL] 'Avro Confluent Schema Registry nightly end-to-end test' failed after > 2 minutes and 11 seconds! Test exited with exit code 1 > No taskexecutor daemon (pid: 29044) is running anymore on > travis-job-b0823aec-c4ec-4d4b-8b59-e9f968de9501. > No standalonesession daemon to stop on host > travis-job-b0823aec-c4ec-4d4b-8b59-e9f968de9501. > rm: cannot remove > '/home/travis/build/apache/flink/flink-dist/target/flink-1.10-SNAPSHOT-bin/flink-1.10-SNAPSHOT/plugins': > No such file or directory > {code} > https://api.travis-ci.org/v3/job/567273939/log.txt -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] TisonKun commented on a change in pull request #9832: [FLINK-11843] Bind lifespan of Dispatcher to leader session
TisonKun commented on a change in pull request #9832: [FLINK-11843] Bind lifespan of Dispatcher to leader session URL: https://github.com/apache/flink/pull/9832#discussion_r12163 ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/runner/DefaultDispatcherRunner.java ## @@ -0,0 +1,235 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + *http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.runtime.dispatcher.runner; + +import org.apache.flink.runtime.clusterframework.ApplicationStatus; +import org.apache.flink.runtime.concurrent.FutureUtils; +import org.apache.flink.runtime.dispatcher.DispatcherGateway; +import org.apache.flink.runtime.leaderelection.LeaderContender; +import org.apache.flink.runtime.leaderelection.LeaderElectionService; +import org.apache.flink.runtime.rpc.FatalErrorHandler; +import org.apache.flink.util.FlinkException; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.Arrays; +import java.util.UUID; +import java.util.concurrent.CompletableFuture; + +/** + * Runner for the {@link org.apache.flink.runtime.dispatcher.Dispatcher} which is responsible for the + * leader election. 
+ */ +public class DefaultDispatcherRunner implements DispatcherRunner, LeaderContender { + + private static final Logger LOG = LoggerFactory.getLogger(DefaultDispatcherRunner.class); + + private final Object lock = new Object(); + + private final LeaderElectionService leaderElectionService; + + private final FatalErrorHandler fatalErrorHandler; + + private final DispatcherLeaderProcessFactory dispatcherLeaderProcessFactory; + + private final CompletableFuture terminationFuture; + + private final CompletableFuture shutDownFuture; + + private boolean isRunning; + + private DispatcherLeaderProcess dispatcherLeaderProcess; + + private CompletableFuture previousDispatcherLeaderProcessTerminationFuture; + + private CompletableFuture dispatcherGatewayFuture; + + DefaultDispatcherRunner( + LeaderElectionService leaderElectionService, + FatalErrorHandler fatalErrorHandler, + DispatcherLeaderProcessFactory dispatcherLeaderProcessFactory) throws Exception { + this.leaderElectionService = leaderElectionService; + this.fatalErrorHandler = fatalErrorHandler; + this.dispatcherLeaderProcessFactory = dispatcherLeaderProcessFactory; + this.terminationFuture = new CompletableFuture<>(); + this.shutDownFuture = new CompletableFuture<>(); + + this.isRunning = true; + this.dispatcherLeaderProcess = StoppedDispatcherLeaderProcess.INSTANCE; + this.previousDispatcherLeaderProcessTerminationFuture = CompletableFuture.completedFuture(null); + this.dispatcherGatewayFuture = new CompletableFuture<>(); + + startDispatcherRunner(leaderElectionService); + } + + private void startDispatcherRunner(LeaderElectionService leaderElectionService) throws Exception { + LOG.info("Starting {}.", getClass().getName()); + + leaderElectionService.start(this); + } + + @Override + public CompletableFuture getDispatcherGateway() { + synchronized (lock) { + return dispatcherGatewayFuture; + } + } + + @Override + public CompletableFuture getShutDownFuture() { + return shutDownFuture; + } + + @Override + public CompletableFuture closeAsync() { + synchronized (lock) { + if (!isRunning) { + return terminationFuture; + } else { + isRunning = false; + } + } + + stopDispatcherLeaderProcess(); + final CompletableFuture servicesTerminationFuture = stopServices(); + + FutureUtils.forward( + FutureUtils.completeAll( + Arrays.asList( + previ
[GitHub] [flink] openinx commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect
openinx commented on a change in pull request #9866: [FLINK-14349][hbase] Create a Connector Descriptor for HBase so that user can connect HBase by TableEnvironment#connect URL: https://github.com/apache/flink/pull/9866#discussion_r12118 ## File path: flink-connectors/flink-hbase/src/main/java/org/apache/flink/table/descriptors/HBase.java ## @@ -0,0 +1,118 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.descriptors; + +import java.util.Map; + +import static org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_VERSION; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TABLE_NAME; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_TYPE_VALUE_HBASE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_INTERVAL; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_ROWS; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_WRITE_BUFFER_FLUSH_MAX_SIZE; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_NODE_PARENT; +import static org.apache.flink.table.descriptors.HBaseValidator.CONNECTOR_ZK_QUORUM; + +/** + * Connector descriptor for Apache HBase. + */ +public class HBase extends ConnectorDescriptor { + private DescriptorProperties properties = new DescriptorProperties(); + + public HBase() { + super(CONNECTOR_TYPE_VALUE_HBASE, 1, true); + } + + /** +* Set the Apache HBase version to be used. Optional. +* +* @param version HBase version. E.g., "1.4.3". +*/ + public HBase version(String version) { + properties.putString(CONNECTOR_VERSION, version); + return this; + } + + /** +* Set the HBase table name, Required. +* +* @param tableName Name of HBase table. E.g., "testNamespace:testTable", "testDefaultTable" +*/ + public HBase tableName(String tableName) { + properties.putString(CONNECTOR_TABLE_NAME, tableName); + return this; + } + + /** +* Set the zookeeper quorum address to connect the HBase cluster. Required. +* +* @param zookeeperQuorum zookeeper quorum address to connect the HBase cluster. E.g., "localhost:2181,localhost:2182,localhost:2183". +*/ + public HBase zookeeperQuorum(String zookeeperQuorum) { + properties.putString(CONNECTOR_ZK_QUORUM, zookeeperQuorum); + return this; + } + + /** +* Set the zookeeper node parent path of HBase cluster. Optional. +* +* @param zookeeperNodeParent zookeeper node path of hbase cluster. E.g, "/hbase/example-root-znode". 
+*/ + public HBase zookeeperNodeParent(String zookeeperNodeParent) { + properties.putString(CONNECTOR_ZK_NODE_PARENT, zookeeperNodeParent); + return this; + } + + /** +* Set threshold when to flush buffered request based on the memory byte size of rows currently added . Default to 2mb. Optional. +* +* @param writeBufferFlushMaxSize threshold (Byte size) to flush a buffered request. E.g, 2097152 (2MB). +*/ + public HBase writeBufferFlushMaxSize(long writeBufferFlushMaxSize) { + properties.putLong(CONNECTOR_WRITE_BUFFER_FLUSH_MAX_SIZE, writeBufferFlushMaxSize); + return this; + } + + /** +* Set threshold when to flush buffered request based on the number of rows currently added. +* Defaults to not set, i.e. won't flush based on the number of buffered rows. Optional. +* +* @param writeBufferFlushMaxRows number of added rows when begin the request flushing. +*/ + public HBase writeBufferFlushMaxRows(long writeBufferFlushMaxRows) { + properties.putLong(CONNECTOR_WRITE_BUFFER_FLUSH_MAX_ROWS, writeBufferFlushMaxRows); + return this; + } + + /** +* Set a flush interval flushing buffered requesting if
[GitHub] [flink] wuchong commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL
wuchong commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL URL: https://github.com/apache/flink/pull/9799#discussion_r6 ## File path: docs/dev/table/connect.md ## @@ -49,6 +49,7 @@ The following tables list all available connectors and formats. Their mutual com | Apache Kafka | 0.10| `flink-connector-kafka-0.10` | [Download](http://central.maven.org/maven2/org/apache/flink/flink-sql-connector-kafka-0.10{{site.scala_version_suffix}}/{{site.version}}/flink-sql-connector-kafka-0.10{{site.scala_version_suffix}}-{{site.version}}.jar) | | Apache Kafka | 0.11| `flink-connector-kafka-0.11` | [Download](http://central.maven.org/maven2/org/apache/flink/flink-sql-connector-kafka-0.11{{site.scala_version_suffix}}/{{site.version}}/flink-sql-connector-kafka-0.11{{site.scala_version_suffix}}-{{site.version}}.jar) | | Apache Kafka | 0.11+ (`universal`) | `flink-connector-kafka` | [Download](http://central.maven.org/maven2/org/apache/flink/flink-sql-connector-kafka{{site.scala_version_suffix}}/{{site.version}}/flink-sql-connector-kafka{{site.scala_version_suffix}}-{{site.version}}.jar) | +| HBase | 1.4.3 | `flink-hbase`| [Download](http://central.maven.org/maven2/org/apache/flink/flink-hbase{{site.scala_version_suffix}}/{{site.version}}/flink-hbase{{site.scala_version_suffix}}-{{site.version}}.jar) | Review comment: I'm not sure about this. From my understanding, the downloaded jar should contain **shaded** HBase dependencies. So, we may need a module called `flink-sql-connector-hbase` to shade HBase. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] wuchong commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL
wuchong commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL URL: https://github.com/apache/flink/pull/9799#discussion_r10277 ## File path: docs/dev/table/connect.md ## @@ -1115,12 +1106,15 @@ connector: znode.parent: "/test"# required: the root dir in Zookeeper for HBase cluster write.buffer-flush: -max-size: 1048576# optional: Write option, sets when to flush a buffered request - # based on the memory size of rows currently added. -max-rows: 1 # optional: Write option, sets when to flush buffered - # request based on the number of rows currently added. -interval: 1 # optional: Write option, sets a flush interval flushing buffered - # requesting if the interval passes, in milliseconds. +max-size: 1048576# optional: writing option, determines how many size in memory of buffered + # rows to insert per round trip. This can help performance on writing to JDBC + # database. The default value is "2mb". +max-rows: 1 # optional: writing option, determines how many rows to insert per round trip. + #This can help performance on writing to JDBC database. No default value, + # i.e. the default flushing is not depends on the number of buffered rows. +interval: 1 # optional: writing option, sets a flush interval flushing buffered requesting Review comment: Please also update the values in YAML. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParam
flinkbot edited a comment on issue #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner. URL: https://github.com/apache/flink/pull/9874#issuecomment-540302986 ## CI report: * f4fed52c78d33ef7ec5df2fe2faa26b0690e9c9b : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131246067) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot commented on issue #9875: [hotfix][docs] Fix the incorrect scala checkstyle configure file path
flinkbot commented on issue #9875: [hotfix][docs] Fix the incorrect scala checkstyle configure file path URL: https://github.com/apache/flink/pull/9875#issuecomment-540311608 ## CI report: * 3ab8ff37e57d7f9a4e3bc2421f5f978a66022589 : UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9848: [FLINK-14332] [flink-metrics-signalfx] add flink-metrics-signalfx
flinkbot edited a comment on issue #9848: [FLINK-14332] [flink-metrics-signalfx] add flink-metrics-signalfx URL: https://github.com/apache/flink/pull/9848#issuecomment-538885706
## CI report:
* dd22d245973b8caf25f8f26a392a1ca95f863736 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/13062)
* 6ae59be715e9454885ea94df650b61a5adc9d9f1 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/131049194)
* d0c22910828121d32e0a847ff33bd705fb9aae35 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131050758)
* 538f0528ff44a648a878e7c9eb5bac62b005c39c : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/131190746)
* 62f76f23a0dae45d3efa9a773d37155f16565df8 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131224128)
* 3f43b51e9e9c8d3121786043901a4cbe3e4822a3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131246054)
Bot commands
The @flinkbot bot supports the following commands:
- `@flinkbot run travis` re-run the last Travis build
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9542: [FLINK-13873][metrics] Change the column family as tags for influxdb …
flinkbot edited a comment on issue #9542: [FLINK-13873][metrics] Change the column family as tags for influxdb … URL: https://github.com/apache/flink/pull/9542#issuecomment-525276537
## CI report:
* e8636926351f3d406962dcadba275e20e49aff39 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/124736151)
* d23ea97e8419bbacea0698b3ba82a459d940cf38 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/128606376)
* 05977cd49eb306d768668be4e8cb31034343df02 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128606947)
* 057d453e5e00656bcfcb87d1d69172f614b2d11f : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/128681895)
* 5001042b5fc5202da14c06e2d21faf1427f50a66 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128683546)
* 3a1e39e889daf0fca82c54982911ded35df6cb77 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128692102)
* 2745a3cb02e601a1a26360e9d6b3f0af5428c66a : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129027621)
* e108aa24e739d6f48f63efc560c3fc049b509860 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131172797)
* 24a270d468dea677379713d5cf402ea453d9f222 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131246042)
Bot commands
The @flinkbot bot supports the following commands:
- `@flinkbot run travis` re-run the last Travis build
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9529: [FLINK-12979] Allow setting an empty line delimiter at the end of a message using CsvRowSerializationSchema
flinkbot edited a comment on issue #9529: [FLINK-12979] Allow setting an empty line delimiter at the end of a message using CsvRowSerializationSchema URL: https://github.com/apache/flink/pull/9529#issuecomment-524607772
## CI report:
* 88ae31ca30e15c2af5fb6d396673bf07eedfc5c3 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/124485998)
* 2d725881480d282828efd4ea0a0f2615cf3fe82d : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/131241271)
* 71824e4e066fc151208a9a3eb96e656c65daa298 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131242664)
Bot commands
The @flinkbot bot supports the following commands:
- `@flinkbot run travis` re-run the last Travis build
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
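For context on what FLINK-12979 enables, a minimal sketch under the assumption that, after this change, `CsvRowSerializationSchema.Builder#setLineDelimiter` accepts the empty string (previously only delimiters such as "\n" or "\r\n" were accepted):
{code:java}
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.formats.csv.CsvRowSerializationSchema;
import org.apache.flink.types.Row;

TypeInformation<Row> rowType = Types.ROW(Types.INT, Types.STRING);

// An empty line delimiter means no trailing newline is appended per record,
// which is useful when each serialized record is already one message in a queue.
CsvRowSerializationSchema schema = new CsvRowSerializationSchema.Builder(rowType)
    .setLineDelimiter("")
    .build();

byte[] bytes = schema.serialize(Row.of(1, "hello")); // "1,hello" without a newline
{code}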
[GitHub] [flink] flinkbot commented on issue #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters)
flinkbot commented on issue #9874: [FLINK-14240][table] Merge table config parameters(TableConfig#getConfiguration) into global job parameters(ExecutionConfig#getGlobalJobParameters) when running with the legacy planner. URL: https://github.com/apache/flink/pull/9874#issuecomment-540302986
## CI report:
* f4fed52c78d33ef7ec5df2fe2faa26b0690e9c9b : UNKNOWN
Bot commands
The @flinkbot bot supports the following commands:
- `@flinkbot run travis` re-run the last Travis build
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
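For readers unfamiliar with FLINK-14240, a hedged sketch of why the merge matters: once `TableConfig#getConfiguration` is merged into `ExecutionConfig#getGlobalJobParameters` (legacy planner only), a rich user function can see table config entries through the usual global-job-parameters lookup. The key name below is purely hypothetical:
{code:java}
import java.util.Map;
import org.apache.flink.api.common.functions.RichMapFunction;

public class ReadMergedParams extends RichMapFunction<String, String> {
    @Override
    public String map(String value) {
        // Global job parameters; after FLINK-14240 these would also include
        // entries coming from TableConfig#getConfiguration (legacy planner).
        Map<String, String> params = getRuntimeContext()
                .getExecutionConfig()
                .getGlobalJobParameters()
                .toMap();
        String opt = params.get("some.table.option"); // hypothetical key
        return opt == null ? value : value + "," + opt;
    }
}
{code}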
[jira] [Commented] (FLINK-14352) Dependencies section in Connect page of Table is broken
[ https://issues.apache.org/jira/browse/FLINK-14352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948132#comment-16948132 ] Jingsong Lee commented on FLINK-14352:
--
[~karmagyz] Ah, you are right: an unstable version has no formal URL to download... I'll close this.
> Dependencies section in Connect page of Table is broken
> --------------------------------------------------------
>
>                 Key: FLINK-14352
>                 URL: https://issues.apache.org/jira/browse/FLINK-14352
>             Project: Flink
>          Issue Type: Bug
>          Components: Documentation, Table SQL / Ecosystem
>            Reporter: Jingsong Lee
>            Priority: Major
>             Fix For: 1.10.0
>
> In [https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connect.html] the Dependencies section does not show the dependencies table in master; it works well in 1.9.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-14352) Dependencies section in Connect page of Table is broken
[ https://issues.apache.org/jira/browse/FLINK-14352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingsong Lee closed FLINK-14352.
Resolution: Invalid
> Dependencies section in Connect page of Table is broken
> --------------------------------------------------------
>
>                 Key: FLINK-14352
>                 URL: https://issues.apache.org/jira/browse/FLINK-14352
>             Project: Flink
>          Issue Type: Bug
>          Components: Documentation, Table SQL / Ecosystem
>            Reporter: Jingsong Lee
>            Priority: Major
>             Fix For: 1.10.0
>
> In [https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connect.html] the Dependencies section does not show the dependencies table in master; it works well in 1.9.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot commented on issue #9875: [hotfix][docs] Fix the incorrect Scala checkstyle configuration file path
flinkbot commented on issue #9875: [hotfix][docs] Fix the incorrect Scala checkstyle configuration file path URL: https://github.com/apache/flink/pull/9875#issuecomment-540302059
Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.
## Automated Checks
Last check on commit 612291a674556e2da87e8eab02adee41239bc486 (Thu Oct 10 02:25:30 UTC 2019)
**Warnings:**
* Documentation files were touched, but no `.zh.md` files: Update Chinese documentation or file Jira ticket.
Mention the bot in a comment to re-run the automated checks.
## Review Progress
* ❓ 1. The [description] looks good.
* ❓ 2. There is [consensus] that the contribution should go into Flink.
* ❓ 3. Needs [attention] from.
* ❓ 4. The change fits into the overall [architecture].
* ❓ 5. Overall code [quality] is good.
Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
Bot commands
The @flinkbot bot supports the following commands:
- `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until `architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services