[GitHub] [flink] 1u0 opened a new pull request #9564: [FLINK-12481][FLINK-12482][FLINK-12958] Streaming runtime: integrate mailbox for timer triggers, checkpoints and AsyncWaitOperator
1u0 opened a new pull request #9564: [FLINK-12481][FLINK-12482][FLINK-12958] Streaming runtime: integrate mailbox for timer triggers, checkpoints and AsyncWaitOperator URL: https://github.com/apache/flink/pull/9564 ## What is the purpose of the change This PR contains changes for * FLINK-12481: to migrate timer triggers to mailbox execution model; * FLINK-12482: to migrate checkpoint triggers to mailbox execution model; * FLINK-12958: modifies `AsyncWaitOperator` to be compatible with mailbox execution model. ## Verifying this change This change is already covered by existing tests. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / **no**) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**) - The serializers: (yes / **no** / don't know) - The runtime per-record code paths (performance sensitive): (**yes** / no / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know) - The S3 file system connector: (yes / **no** / don't know) ## Documentation - Does this pull request introduce a new feature? (yes / **no**) - If yes, how is the feature documented? (**not applicable** / docs / JavaDocs / not documented) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
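For context on the mailbox execution model referenced in the PR above, here is a minimal sketch. The names (`MailboxSketch`, `submitMail`, `runMailboxLoop`) are illustrative assumptions, not Flink's actual `MailboxExecutor` API; the point is that timer and checkpoint actions are enqueued as "mails" and executed by the task's single main thread, instead of running on separate threads that synchronize on a shared checkpoint lock.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Minimal mailbox sketch: actions submitted from other threads run on the task's main thread. */
public class MailboxSketch {
    private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();

    /** Called from timer or checkpoint threads: enqueue the action instead of taking a lock. */
    public void submitMail(Runnable action) {
        mailbox.add(action);
    }

    /** Main task loop: drain the mailbox between record-processing steps. */
    public void runMailboxLoop(Runnable processOneRecord) {
        while (!Thread.currentThread().isInterrupted()) {
            Runnable mail;
            while ((mail = mailbox.poll()) != null) {
                mail.run();             // e.g. fire a timer or perform a checkpoint action
            }
            processOneRecord.run();     // regular record processing
        }
    }
}
```

Under such a model, work such as emitting results from `AsyncWaitOperator` can presumably be scheduled as mails as well, which appears to be the general direction of the changes listed above.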
[GitHub] [flink] flinkbot edited a comment on issue #9354: [FLINK-13568][sql-parser] DDL create table doesn't allow STRING data …
flinkbot edited a comment on issue #9354: [FLINK-13568][sql-parser] DDL create table doesn't allow STRING data … URL: https://github.com/apache/flink/pull/9354#issuecomment-518082256 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 390c0c1d31cbd3ada69f6ec375dec2b788754f85 (Fri Aug 30 06:53:23 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9502: [FLINK-13814][hive] HiveTableSink should strip quotes from partition …
flinkbot edited a comment on issue #9502: [FLINK-13814][hive] HiveTableSink should strip quotes from partition … URL: https://github.com/apache/flink/pull/9502#issuecomment-523778174 ## CI report: * 22278924cd49e4fab6eabb9bf4f6ec8310894808 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/124170593) * 5ad14b3392b439f2b63b9fb079bf5546c874607d : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/124325231) * 2215b05905a52ac58a828ce5b817481b0b132d8d : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125031058) * 69012717a4b8820f9ebeb97287124aa336bdf137 : UNKNOWN This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9494: [FLINK-12847] [Connectors / Kinesis] update flink-connector-kinesis to use Apache 2.0 licensed code
flinkbot edited a comment on issue #9494: [FLINK-12847] [Connectors / Kinesis] update flink-connector-kinesis to use Apache 2.0 licensed code URL: https://github.com/apache/flink/pull/9494#issuecomment-523030190 ## CI report: * 394dfe73e75cf9db8dcbbb96ead65ba5130e3e0c : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/123898225) * cb912bb2b7358c5c01ed702b64ddd2f4c9b0a0ca : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123909324) * a622fc55f05b38912b83ab7b6424a0b2d6ac2375 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124174020) * 7bf48d8eac338f9494db333229307ae0cc45c695 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/124760490) * 8e70b4f2c96109f83f6aa42a11ba42afe1b3924f : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/124765153) * 5af3517a366a5efc85358dfabed7c1bbcafd560e : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125025233) * f047437e3886dd5e4f945ee5fb946f373ea60382 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/125185847) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9354: [FLINK-13568][sql-parser] DDL create table doesn't allow STRING data …
flinkbot edited a comment on issue #9354: [FLINK-13568][sql-parser] DDL create table doesn't allow STRING data … URL: https://github.com/apache/flink/pull/9354#issuecomment-518083679 ## CI report: * 754c52de984cb476ae0442c6704219b64c68441e : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121903039) * af75fff40f4e9e57bd09403741ff1a7c63285941 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/122105364) * 8eddadbdb9543c7a42cdba7c1ebe938934671e28 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/122210862) * 81f5ee77a0e7bb83ce0a2b2447e45a6c364d69ea : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/13853) * 4a41ecb487a5dadaf868814b676c51a04670b76e : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/122242467) * b7ef3d2f42edabc376e859f0bdc963f1f535c812 : UNKNOWN This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9502: [FLINK-13814][hive] HiveTableSink should strip quotes from partition …
flinkbot edited a comment on issue #9502: [FLINK-13814][hive] HiveTableSink should strip quotes from partition … URL: https://github.com/apache/flink/pull/9502#issuecomment-523777539 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 69012717a4b8820f9ebeb97287124aa336bdf137 (Fri Aug 30 06:41:09 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] lirui-apache commented on issue #9502: [FLINK-13814][hive] HiveTableSink should strip quotes from partition …
lirui-apache commented on issue #9502: [FLINK-13814][hive] HiveTableSink should strip quotes from partition … URL: https://github.com/apache/flink/pull/9502#issuecomment-526478917 @bowenli86 I can't reproduce the error you mentioned. Rebased to trigger Travis again. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Closed] (FLINK-13248) Enhance mailbox executor with yield-to-downstream functionality
[ https://issues.apache.org/jira/browse/FLINK-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Piotr Nowojski closed FLINK-13248. -- Assignee: Arvid Heise Resolution: Fixed. Merged as 8805406a0ba6de6d5b190aefc7c7707a51a660da..ccc7eb431477059b32fb924104c17af953620c74 to master > Enhance mailbox executor with yield-to-downstream functionality > --- > > Key: FLINK-13248 > URL: https://issues.apache.org/jira/browse/FLINK-13248 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Task > Reporter: Stefan Richter > Assignee: Arvid Heise > Priority: Major > Labels: pull-request-available > Fix For: 1.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Created] (FLINK-13905) Separate checkpoint triggering into stages
Biao Liu created FLINK-13905: Summary: Separate checkpoint triggering into stages Key: FLINK-13905 URL: https://issues.apache.org/jira/browse/FLINK-13905 Project: Flink Issue Type: Sub-task Components: Runtime / Coordination Reporter: Biao Liu Fix For: 1.10.0 Currently {{CheckpointCoordinator#triggerCheckpoint}} includes some heavy IO operations. We plan to separate the triggering into different stages, so that the IO operations are executed in IO threads while the on-memory operations are not. This is a preparation for making all on-memory operations of {{CheckpointCoordinator}} single-threaded (in the main thread). Note that we cannot move the on-memory triggering operations into the main thread directly yet, because some operations still contend on a heavy, coordinator-wide lock. -- This message was sent by Atlassian Jira (v8.3.2#803003)
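A minimal sketch of the staging described above, assuming a single-threaded main executor plus an IO thread pool (the class and method names below are hypothetical, not the actual {{CheckpointCoordinator}} code): cheap in-memory checks stay on the main executor, heavy IO moves to the IO executor, and the stages are chained with {{CompletableFuture}}.

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

// Sketch only: split checkpoint triggering into on-memory and IO stages.
class CheckpointTriggerSketch {
    private final Executor mainThreadExecutor; // single-threaded, on-memory work only
    private final Executor ioExecutor;         // thread pool for blocking IO

    CheckpointTriggerSketch(Executor mainThreadExecutor, Executor ioExecutor) {
        this.mainThreadExecutor = mainThreadExecutor;
        this.ioExecutor = ioExecutor;
    }

    CompletableFuture<Long> triggerCheckpoint() {
        return CompletableFuture
            // stage 1: cheap in-memory preconditions, on the main thread
            .supplyAsync(this::checkAndAssignCheckpointId, mainThreadExecutor)
            // stage 2: heavy IO, e.g. initializing the checkpoint storage location, on an IO thread
            .thenApplyAsync(this::initializeStorageLocation, ioExecutor)
            // stage 3: back on the main thread to register the pending checkpoint and send triggers
            .thenApplyAsync(this::registerPendingCheckpointAndSendMessages, mainThreadExecutor);
    }

    private long checkAndAssignCheckpointId() { return 1L; }                              // placeholder
    private long initializeStorageLocation(long checkpointId) { return checkpointId; }    // placeholder
    private long registerPendingCheckpointAndSendMessages(long checkpointId) { return checkpointId; }
}
{code}

With this kind of split, only the middle stage blocks, so coordinator state would only ever be touched from the main executor.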
[GitHub] [flink] pnowojski merged pull request #9383: [FLINK-13248] [runtime] Adding processing of downstream messages for blocking operators
pnowojski merged pull request #9383: [FLINK-13248] [runtime] Adding processing of downstream messages for blocking operators URL: https://github.com/apache/flink/pull/9383 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9354: [FLINK-13568][sql-parser] DDL create table doesn't allow STRING data …
flinkbot edited a comment on issue #9354: [FLINK-13568][sql-parser] DDL create table doesn't allow STRING data … URL: https://github.com/apache/flink/pull/9354#issuecomment-518082256 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit b7ef3d2f42edabc376e859f0bdc963f1f535c812 (Fri Aug 30 06:34:03 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Assigned] (FLINK-13896) Scala 2.11 maven compile should target Java 1.8
[ https://issues.apache.org/jira/browse/FLINK-13896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timo Walther reassigned FLINK-13896: Assignee: Terry Wang > Scala 2.11 maven compile should target Java 1.8 > --- > > Key: FLINK-13896 > URL: https://issues.apache.org/jira/browse/FLINK-13896 > Project: Flink > Issue Type: Bug > Components: Build System > Affects Versions: 1.9.0 > Reporter: Terry Wang > Assignee: Terry Wang > Priority: Major > > When setting up a TableEnvironment in Scala as follows: > > {code:java} > // we can reproduce this problem by putting the following code in > // org.apache.flink.table.api.scala.internal.StreamTableEnvironmentImplTest > @Test > def testCreateEnvironment(): Unit = { > val settings = > EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build(); > val tEnv = TableEnvironment.create(settings); > } > {code} > Then mvn test fails with an error message like: > > error: Static methods in interface require -target:JVM-1.8 > > We can fix this bug by adding > > -target:jvm-1.8 > > to the scala-maven-plugin config. > -- This message was sent by Atlassian Jira (v8.3.2#803003)
[GitHub] [flink] danny0405 commented on issue #9354: [FLINK-13568][sql-parser] DDL create table doesn't allow STRING data …
danny0405 commented on issue #9354: [FLINK-13568][sql-parser] DDL create table doesn't allow STRING data … URL: https://github.com/apache/flink/pull/9354#issuecomment-526476994 @twalthr I have updated the PR based on your review comments; please review it if you have time. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9354: [FLINK-13568][sql-parser] DDL create table doesn't allow STRING data …
flinkbot edited a comment on issue #9354: [FLINK-13568][sql-parser] DDL create table doesn't allow STRING data … URL: https://github.com/apache/flink/pull/9354#issuecomment-518082256 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit b7ef3d2f42edabc376e859f0bdc963f1f535c812 (Fri Aug 30 06:32:00 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9494: [FLINK-12847] [Connectors / Kinesis] update flink-connector-kinesis to use Apache 2.0 licensed code
flinkbot edited a comment on issue #9494: [FLINK-12847] [Connectors / Kinesis] update flink-connector-kinesis to use Apache 2.0 licensed code URL: https://github.com/apache/flink/pull/9494#issuecomment-523030190 ## CI report: * 394dfe73e75cf9db8dcbbb96ead65ba5130e3e0c : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/123898225) * cb912bb2b7358c5c01ed702b64ddd2f4c9b0a0ca : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123909324) * a622fc55f05b38912b83ab7b6424a0b2d6ac2375 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124174020) * 7bf48d8eac338f9494db333229307ae0cc45c695 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/124760490) * 8e70b4f2c96109f83f6aa42a11ba42afe1b3924f : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/124765153) * 5af3517a366a5efc85358dfabed7c1bbcafd560e : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125025233) * f047437e3886dd5e4f945ee5fb946f373ea60382 : UNKNOWN This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9494: [FLINK-12847] [Connectors / Kinesis] update flink-connector-kinesis to use Apache 2.0 licensed code
flinkbot edited a comment on issue #9494: [FLINK-12847] [Connectors / Kinesis] update flink-connector-kinesis to use Apache 2.0 licensed code URL: https://github.com/apache/flink/pull/9494#issuecomment-523025067 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit f047437e3886dd5e4f945ee5fb946f373ea60382 (Fri Aug 30 06:29:59 UTC 2019) **Warnings:** * **3 pom.xml files were touched**: Check for build and licensing issues. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9354: [FLINK-13568][sql-parser] DDL create table doesn't allow STRING data …
flinkbot edited a comment on issue #9354: [FLINK-13568][sql-parser] DDL create table doesn't allow STRING data … URL: https://github.com/apache/flink/pull/9354#issuecomment-518082256 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 4a41ecb487a5dadaf868814b676c51a04670b76e (Fri Aug 30 06:27:57 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] danny0405 commented on a change in pull request #9354: [FLINK-13568][sql-parser] DDL create table doesn't allow STRING data …
danny0405 commented on a change in pull request #9354: [FLINK-13568][sql-parser] DDL create table doesn't allow STRING data … URL: https://github.com/apache/flink/pull/9354#discussion_r319370977 ## File path: flink-table/flink-table-planner/src/test/java/org/apache/flink/table/sqlexec/SqlToOperationConverterTest.java ## @@ -163,6 +169,229 @@ public void testSqlInsertWithStaticPartition() { assertEquals(expectedStaticPartitions, sinkModifyOperation.getStaticPartitions()); } + @Test(expected = AssertionError.class) // TODO: tweak the tests when FLINK-13604 is fixed. + public void testCreateTableWithFullDataTypes() { + final String sql = "create table t1(\n" + + " f0 CHAR,\n" + + " f1 CHAR NOT NULL,\n" + + " f2 CHAR NULL,\n" + + " f3 CHAR(33),\n" + + " f4 VARCHAR,\n" + + " f5 VARCHAR(33),\n" + + " f6 STRING,\n" + + " f7 BOOLEAN,\n" + + // " f13 DECIMAL,\n" + + // " f14 DEC,\n" + + // " f15 NUMERIC,\n" + + // " f16 DECIMAL(10),\n" + + // " f17 DEC(10),\n" + + // " f18 NUMERIC(10),\n" + + // " f19 DECIMAL(10, 3),\n" + + // " f20 DEC(10, 3),\n" + + // " f21 NUMERIC(10, 3),\n" + + " f22 TINYINT,\n" + + " f23 SMALLINT,\n" + + " f24 INTEGER,\n" + + " f25 INT,\n" + + " f26 BIGINT,\n" + + " f27 FLOAT,\n" + + " f28 DOUBLE,\n" + + " f29 DOUBLE PRECISION,\n" + + " f30 DATE,\n" + + " f31 TIME,\n" + + " f32 TIME WITHOUT TIME ZONE,\n" + + " f33 TIME(3),\n" + + " f34 TIME(3) WITHOUT TIME ZONE,\n" + + " f35 TIMESTAMP,\n" + + " f36 TIMESTAMP WITHOUT TIME ZONE,\n" + + " f37 TIMESTAMP(3),\n" + + " f38 TIMESTAMP(3) WITHOUT TIME ZONE,\n" + + " f42 ARRAY,\n" + + " f43 INT ARRAY,\n" + + " f44 INT NOT NULL ARRAY,\n" + + " f45 INT ARRAY NOT NULL,\n" + + " f46 MULTISET,\n" + + " f47 INT MULTISET,\n" + + " f48 INT NOT NULL MULTISET,\n" + + " f49 INT MULTISET NOT NULL,\n" + + " f50 MAP,\n" + + " f51 ROW,\n" + + " f52 ROW(f0 INT NOT NULL, f1 BOOLEAN),\n" + + " f53 ROW<`f0` INT>,\n" + + " f54 ROW(`f0` INT),\n" + + " f55 ROW<>,\n" + + " f56 ROW(),\n" + + " f57 ROW)"; + final FlinkPlannerImpl planner = getPlannerBySqlDialect(SqlDialect.DEFAULT); + SqlNode node = planner.parse(sql); + assert node instanceof SqlCreateTable; + Operation operation = SqlToOperationConverter.convert(planner, node); + assert operation instanceof CreateTableOperation; + TableSchema schema = ((CreateTableOperation) operation).getCatalogTable().getSchema(); + assertArrayEquals(new DataType[] { + // expect to be CHAR(1) + DataTypes.STRING(), + // expect to be CHAR(1) NOT NULL + DataTypes.STRING(), + // expect to be CHAR(1) + DataTypes.STRING(), + // expect to be CHAR(33) + DataTypes.STRING(), + DataTypes.STRING(), + // expect to be VARCHAR(33) + DataTypes.STRING(), + DataTypes.STRING(), + DataTypes.BOOLEAN(), + // DataTypes.DECIMAL(10, 0), + // DataTypes.DECIMAL(10, 0), + // DataTypes.DECIMAL(10, 0), + // DataTypes.DECIMAL(10, 0), + // DataTypes.DECIMAL(10, 0), + // DataTypes.DECIMAL(10, 0), + // DataTypes.DECIMAL(10, 3), + // DataTypes.DECIMAL(10, 3), + // DataTypes.DECIMAL(10, 3), + DataTypes.TINYINT(), + DataTypes.SMALLINT(), + DataTypes.INT(), + DataTypes.INT(), + DataTypes.B
[GitHub] [flink] danny0405 commented on a change in pull request #9354: [FLINK-13568][sql-parser] DDL create table doesn't allow STRING data …
danny0405 commented on a change in pull request #9354: [FLINK-13568][sql-parser] DDL create table doesn't allow STRING data … URL: https://github.com/apache/flink/pull/9354#discussion_r319356738 ## File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/sqlexec/SqlToOperationConverterTest.java ## @@ -157,6 +158,143 @@ public void testSqlInsertWithStaticPartition() { assertEquals(expectedStaticPartitions, sinkModifyOperation.getStaticPartitions()); } + @Test // TODO: tweak the tests when FLINK-13604 is fixed. + public void testCreateTableWithFullDataTypes() { + final String sql = "create table t1(\n" + + " f0 CHAR,\n" + + " f1 CHAR NOT NULL,\n" + + " f2 CHAR NULL,\n" + + " f3 CHAR(33),\n" + + " f4 VARCHAR,\n" + + " f5 VARCHAR(33),\n" + + " f6 STRING,\n" + + " f7 BOOLEAN,\n" + + " f8 BINARY,\n" + + " f9 BINARY(33),\n" + + " f10 VARBINARY,\n" + + " f11 VARBINARY(33),\n" + + " f12 BYTES,\n" + + " f13 DECIMAL,\n" + + " f14 DEC,\n" + + " f15 NUMERIC,\n" + + " f16 DECIMAL(10),\n" + + " f17 DEC(10),\n" + + " f18 NUMERIC(10),\n" + + " f19 DECIMAL(10, 3),\n" + + " f20 DEC(10, 3),\n" + + " f21 NUMERIC(10, 3),\n" + + " f22 TINYINT,\n" + + " f23 SMALLINT,\n" + + " f24 INTEGER,\n" + + " f25 INT,\n" + + " f26 BIGINT,\n" + + " f27 FLOAT,\n" + + " f28 DOUBLE,\n" + + " f29 DOUBLE PRECISION,\n" + + " f30 DATE,\n" + + " f31 TIME,\n" + + " f32 TIME WITHOUT TIME ZONE,\n" + + " f33 TIME(3),\n" + + " f34 TIME(3) WITHOUT TIME ZONE,\n" + + " f35 TIMESTAMP,\n" + + " f36 TIMESTAMP WITHOUT TIME ZONE,\n" + + " f37 TIMESTAMP(3),\n" + + " f38 TIMESTAMP(3) WITHOUT TIME ZONE,\n" + + " f39 TIMESTAMP WITH LOCAL TIME ZONE,\n" + + " f40 TIMESTAMP(3) WITH LOCAL TIME ZONE,\n" + + " f41 ARRAY,\n" + + " f42 ARRAY,\n" + + " f43 INT ARRAY,\n" + + " f44 INT NOT NULL ARRAY,\n" + + " f45 INT ARRAY NOT NULL,\n" + + " f46 MULTISET,\n" + + " f47 INT MULTISET,\n" + + " f48 INT NOT NULL MULTISET,\n" + + " f49 INT MULTISET NOT NULL,\n" + + " f50 MAP,\n" + + " f51 ROW,\n" + + " f52 ROW(f0 INT NOT NULL, f1 BOOLEAN),\n" + + " f53 ROW<`f0` INT>,\n" + + " f54 ROW(`f0` INT),\n" + + " f55 ROW<>,\n" + + " f56 ROW(),\n" + + " f57 ROW)"; + final FlinkPlannerImpl planner = getPlannerBySqlDialect(SqlDialect.DEFAULT); + SqlNode node = planner.parse(sql); + assert node instanceof SqlCreateTable; + Operation operation = SqlToOperationConverter.convert(planner, node); + assert operation instanceof CreateTableOperation; + TableSchema schema = ((CreateTableOperation) operation).getCatalogTable().getSchema(); + assertArrayEquals(new DataType[] {DataTypes.CHAR(1), Review comment: Thanks, i agree. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Created] (FLINK-13904) Avoid competition between different rounds of checkpoint triggering
Biao Liu created FLINK-13904: Summary: Avoid competition between different rounds of checkpoint triggering Key: FLINK-13904 URL: https://issues.apache.org/jira/browse/FLINK-13904 Project: Flink Issue Type: Sub-task Components: Runtime / Coordination Reporter: Biao Liu Fix For: 1.10.0 As part of the {{CheckpointCoordinator}} refactoring, I'd like to simplify the concurrent triggering logic: the different rounds of checkpoint triggering would be processed sequentially. The final target is getting rid of the timer thread and {{triggerLock}}. Note that we can't avoid all competition between triggerings for now; there is still competition between normal checkpoint triggering and savepoint triggering. We could avoid it by executing all triggering in the main thread, but that cannot be achieved until all blocking operations are handled well in IO threads. -- This message was sent by Atlassian Jira (v8.3.2#803003)
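As a hedged sketch of how the rounds could be made sequential without a {{triggerLock}} (hypothetical names, not the actual implementation): trigger requests are queued, and the next round starts only after the previous one completes, with all bookkeeping owned by a single main-thread executor.

{code:java}
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.function.Supplier;

// Sketch only: process checkpoint/savepoint trigger requests one at a time, without a trigger lock.
class SequentialTriggerSketch {
    private final Executor mainThreadExecutor; // single thread that owns all trigger state
    private final Queue<Supplier<CompletableFuture<Void>>> pendingRequests = new ArrayDeque<>();
    private boolean triggering;

    SequentialTriggerSketch(Executor mainThreadExecutor) {
        this.mainThreadExecutor = mainThreadExecutor;
    }

    /** Called from the main thread: enqueue a checkpoint or savepoint trigger request. */
    void submit(Supplier<CompletableFuture<Void>> triggerRequest) {
        pendingRequests.add(triggerRequest);
        maybeStartNext();
    }

    private void maybeStartNext() {
        if (triggering || pendingRequests.isEmpty()) {
            return;
        }
        triggering = true;
        pendingRequests.poll().get()
            // hop back to the main thread before touching the trigger state again
            .whenCompleteAsync((ignored, error) -> {
                triggering = false;
                maybeStartNext(); // the next round starts only after this one has finished
            }, mainThreadExecutor);
    }
}
{code}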
[GitHub] [flink] flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#issuecomment-522655319 ## CI report: * 64391df18f5c2e3c25268ad52c724abcaa4211b5 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/123755053) * 042b4f58d39980820d1c9dc1d779536ece0834c9 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125046886) * c4645a6663767d1dfcd242ea5ceb38ef838a5580 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/125183439) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9562: [FLINK-13898] Migrate restart stratey config constants to ConfigOptions
flinkbot edited a comment on issue #9562: [FLINK-13898] Migrate restart stratey config constants to ConfigOptions URL: https://github.com/apache/flink/pull/9562#issuecomment-526204724 ## CI report: * e90b46f3cfde29e5f5858b01e6ebdd4ba203393f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125089916) * 7927124fb0bf9449bd8010944abd54d4c2311196 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125107987) * 2a981df9820faff24e637e737dba4a6cda0a0764 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125122513) * 4d5aa4e6bbd588fdaa1cab4b25eb6e1ed3eabf8e : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/125182685) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9552: [FLINK-13884] Set default failure rate restart strategy delay to 0s
flinkbot edited a comment on issue #9552: [FLINK-13884] Set default failure rate restart strategy delay to 0s URL: https://github.com/apache/flink/pull/9552#issuecomment-525754671 ## CI report: * b78e598a216202595f6fd4748ca0860af3468668 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124918117) * 1d10c2983f80da0d65517a9695cd473876c70ab5 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125092286) * 2c935a81114634de8dbc1c87e329e3b1123572c3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125107962) * a67e92eb76efc48663bb4aa3af213b645d757bab : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125122488) * 95d0bf236135a5c0367d934ff7d41fd594c46f65 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/125182666) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#issuecomment-522655319 ## CI report: * 64391df18f5c2e3c25268ad52c724abcaa4211b5 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/123755053) * 042b4f58d39980820d1c9dc1d779536ece0834c9 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125046886) * c4645a6663767d1dfcd242ea5ceb38ef838a5580 : UNKNOWN This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#issuecomment-522652393 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit c4645a6663767d1dfcd242ea5ceb38ef838a5580 (Fri Aug 30 05:50:21 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] JingsongLi commented on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
JingsongLi commented on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#issuecomment-526466020 Hi @dawidwys, thanks for your review. Comments addressed. Sorry for the force push; I tried to fix the checkstyle issue in c3a0c9b. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9562: [FLINK-13898] Migrate restart stratey config constants to ConfigOptions
flinkbot edited a comment on issue #9562: [FLINK-13898] Migrate restart stratey config constants to ConfigOptions URL: https://github.com/apache/flink/pull/9562#issuecomment-526204724 ## CI report: * e90b46f3cfde29e5f5858b01e6ebdd4ba203393f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125089916) * 7927124fb0bf9449bd8010944abd54d4c2311196 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125107987) * 2a981df9820faff24e637e737dba4a6cda0a0764 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125122513) * 4d5aa4e6bbd588fdaa1cab4b25eb6e1ed3eabf8e : UNKNOWN This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9552: [FLINK-13884] Set default failure rate restart strategy delay to 0s
flinkbot edited a comment on issue #9552: [FLINK-13884] Set default failure rate restart strategy delay to 0s URL: https://github.com/apache/flink/pull/9552#issuecomment-525754671 ## CI report: * b78e598a216202595f6fd4748ca0860af3468668 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124918117) * 1d10c2983f80da0d65517a9695cd473876c70ab5 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125092286) * 2c935a81114634de8dbc1c87e329e3b1123572c3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125107962) * a67e92eb76efc48663bb4aa3af213b645d757bab : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125122488) * 95d0bf236135a5c0367d934ff7d41fd594c46f65 : UNKNOWN This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#issuecomment-522652393 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit c4645a6663767d1dfcd242ea5ceb38ef838a5580 (Fri Aug 30 05:44:15 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9562: [FLINK-13898] Migrate restart stratey config constants to ConfigOptions
flinkbot edited a comment on issue #9562: [FLINK-13898] Migrate restart stratey config constants to ConfigOptions URL: https://github.com/apache/flink/pull/9562#issuecomment-526200719 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 4d5aa4e6bbd588fdaa1cab4b25eb6e1ed3eabf8e (Fri Aug 30 05:38:09 UTC 2019) ✅ no warnings Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9555: [FLINK-13868][REST] Job vertex add taskmanager id in rest api
flinkbot edited a comment on issue #9555: [FLINK-13868][REST] Job vertex add taskmanager id in rest api URL: https://github.com/apache/flink/pull/9555#issuecomment-526075111 ## CI report: * 64f02281954e69bcfbe858eb40b8f1846cfbe195 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125035687) * 5d3d03dffbaf00112a31663caaba3558886cd373 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125055694) * fc91d03d717971e53abc68ba8a93a1e456fc6608 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125176326) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#issuecomment-522652393 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 042b4f58d39980820d1c9dc1d779536ece0834c9 (Fri Aug 30 05:14:46 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] JingsongLi commented on a change in pull request #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
JingsongLi commented on a change in pull request #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#discussion_r319358397 ## File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/CustomizedConvertRule.java ## @@ -0,0 +1,361 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.planner.expressions.converter; + +import org.apache.flink.table.expressions.CallExpression; +import org.apache.flink.table.expressions.Expression; +import org.apache.flink.table.expressions.ExpressionUtils; +import org.apache.flink.table.expressions.TableReferenceExpression; +import org.apache.flink.table.expressions.TypeLiteralExpression; +import org.apache.flink.table.expressions.ValueLiteralExpression; +import org.apache.flink.table.expressions.utils.ApiExpressionUtils; +import org.apache.flink.table.functions.BuiltInFunctionDefinitions; +import org.apache.flink.table.functions.FunctionDefinition; +import org.apache.flink.table.operations.QueryOperation; +import org.apache.flink.table.planner.calcite.FlinkRelBuilder; +import org.apache.flink.table.planner.functions.InternalFunctionDefinitions; +import org.apache.flink.table.planner.functions.sql.FlinkSqlOperatorTable; +import org.apache.flink.table.planner.functions.sql.SqlThrowExceptionFunction; +import org.apache.flink.table.types.DataType; +import org.apache.flink.table.types.logical.ArrayType; +import org.apache.flink.table.types.logical.LogicalType; +import org.apache.flink.table.types.logical.RowType; +import org.apache.flink.util.Preconditions; + +import com.google.common.collect.ImmutableList; +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rex.RexNode; +import org.apache.calcite.rex.RexSubQuery; +import org.apache.calcite.sql.fun.SqlTrimFunction; + +import java.util.Arrays; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Optional; + +import static org.apache.calcite.sql.type.SqlTypeName.VARCHAR; +import static org.apache.flink.table.planner.calcite.FlinkTypeFactory.toLogicalType; +import static org.apache.flink.table.planner.expressions.converter.ExpressionConverter.extractValue; +import static org.apache.flink.table.runtime.types.LogicalTypeDataTypeConverter.fromDataTypeToLogicalType; +import static org.apache.flink.table.runtime.typeutils.TypeCheckUtils.isCharacterString; +import static org.apache.flink.table.runtime.typeutils.TypeCheckUtils.isTemporal; +import static org.apache.flink.table.runtime.typeutils.TypeCheckUtils.isTimeInterval; + +/** + * Customized {@link CallExpressionConvertRule}, Functions conversion here all require special logic, 
+ * and there may be some special rules, such as needing get the literal values of inputs, such as + * converting to combinations of functions, to convert to RexNode of calcite. + */ +public class CustomizedConvertRule implements CallExpressionConvertRule { + + private static final Map DEFINITION_RULE_MAP = new HashMap<>(); + static { + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.CAST, CustomizedConvertRule::convertCast); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.REINTERPRET_CAST, CustomizedConvertRule::convertReinterpretCast); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.IN, CustomizedConvertRule::convertIn); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.GET, CustomizedConvertRule::convertGet); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.TRIM, CustomizedConvertRule::convertTrim); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.AS, CustomizedConvertRule::convertAs); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.IS_NULL, CustomizedConvertRule::convertIsNull); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.BETWEEN, CustomizedConvertRule::convertBetween); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.NOT_BETWEEN, CustomizedConvertRule::convertNotBetween); +
[GitHub] [flink] flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#issuecomment-522652393 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 042b4f58d39980820d1c9dc1d779536ece0834c9 (Fri Aug 30 04:09:37 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] JingsongLi commented on a change in pull request #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
JingsongLi commented on a change in pull request #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#discussion_r319350369 ## File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/CustomizedConvertRule.java ## @@ -0,0 +1,361 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.planner.expressions.converter; + +import org.apache.flink.table.expressions.CallExpression; +import org.apache.flink.table.expressions.Expression; +import org.apache.flink.table.expressions.ExpressionUtils; +import org.apache.flink.table.expressions.TableReferenceExpression; +import org.apache.flink.table.expressions.TypeLiteralExpression; +import org.apache.flink.table.expressions.ValueLiteralExpression; +import org.apache.flink.table.expressions.utils.ApiExpressionUtils; +import org.apache.flink.table.functions.BuiltInFunctionDefinitions; +import org.apache.flink.table.functions.FunctionDefinition; +import org.apache.flink.table.operations.QueryOperation; +import org.apache.flink.table.planner.calcite.FlinkRelBuilder; +import org.apache.flink.table.planner.functions.InternalFunctionDefinitions; +import org.apache.flink.table.planner.functions.sql.FlinkSqlOperatorTable; +import org.apache.flink.table.planner.functions.sql.SqlThrowExceptionFunction; +import org.apache.flink.table.types.DataType; +import org.apache.flink.table.types.logical.ArrayType; +import org.apache.flink.table.types.logical.LogicalType; +import org.apache.flink.table.types.logical.RowType; +import org.apache.flink.util.Preconditions; + +import com.google.common.collect.ImmutableList; +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rex.RexNode; +import org.apache.calcite.rex.RexSubQuery; +import org.apache.calcite.sql.fun.SqlTrimFunction; + +import java.util.Arrays; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Optional; + +import static org.apache.calcite.sql.type.SqlTypeName.VARCHAR; +import static org.apache.flink.table.planner.calcite.FlinkTypeFactory.toLogicalType; +import static org.apache.flink.table.planner.expressions.converter.ExpressionConverter.extractValue; +import static org.apache.flink.table.runtime.types.LogicalTypeDataTypeConverter.fromDataTypeToLogicalType; +import static org.apache.flink.table.runtime.typeutils.TypeCheckUtils.isCharacterString; +import static org.apache.flink.table.runtime.typeutils.TypeCheckUtils.isTemporal; +import static org.apache.flink.table.runtime.typeutils.TypeCheckUtils.isTimeInterval; + +/** + * Customized {@link CallExpressionConvertRule}, Functions conversion here all require special logic, 
+ * and there may be some special rules, such as needing get the literal values of inputs, such as + * converting to combinations of functions, to convert to RexNode of calcite. + */ +public class CustomizedConvertRule implements CallExpressionConvertRule { + + private static final Map DEFINITION_RULE_MAP = new HashMap<>(); + static { + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.CAST, CustomizedConvertRule::convertCast); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.REINTERPRET_CAST, CustomizedConvertRule::convertReinterpretCast); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.IN, CustomizedConvertRule::convertIn); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.GET, CustomizedConvertRule::convertGet); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.TRIM, CustomizedConvertRule::convertTrim); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.AS, CustomizedConvertRule::convertAs); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.IS_NULL, CustomizedConvertRule::convertIsNull); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.BETWEEN, CustomizedConvertRule::convertBetween); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.NOT_BETWEEN, CustomizedConvertRule::convertNotBetween); +
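The DEFINITION_RULE_MAP in the quoted file dispatches each built-in function to a dedicated conversion method through method references. Below is a small, self-contained sketch of that dispatch pattern, with plain strings standing in for FunctionDefinition and RexNode; all names here are illustrative stand-ins, not Flink's actual classes:

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

public class DispatchSketch {

    // Stand-in for the FunctionDefinition -> conversion-method map in the quoted file.
    private static final Map<String, Function<String, String>> RULES = new HashMap<>();
    static {
        RULES.put("CAST", DispatchSketch::convertCast);
        RULES.put("TRIM", DispatchSketch::convertTrim);
    }

    // Look the function up; unknown functions fall through so another rule can handle them.
    static Optional<String> convert(String function, String argument) {
        return Optional.ofNullable(RULES.get(function)).map(rule -> rule.apply(argument));
    }

    private static String convertCast(String arg) { return "CAST(" + arg + ")"; }
    private static String convertTrim(String arg) { return "TRIM(" + arg + ")"; }

    public static void main(String[] args) {
        System.out.println(convert("CAST", "f0")); // Optional[CAST(f0)]
        System.out.println(convert("FOO", "f0"));  // Optional.empty
    }
}
{code}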
[GitHub] [flink] flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#issuecomment-522652393 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 042b4f58d39980820d1c9dc1d779536ece0834c9 (Fri Aug 30 04:07:35 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] JingsongLi commented on a change in pull request #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
JingsongLi commented on a change in pull request #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#discussion_r319350136 ## File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/DefinedConvertRule.java ## @@ -0,0 +1,158 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.planner.expressions.converter; + +import org.apache.flink.table.expressions.CallExpression; +import org.apache.flink.table.functions.BuiltInFunctionDefinitions; +import org.apache.flink.table.functions.FunctionDefinition; +import org.apache.flink.table.planner.functions.sql.FlinkSqlOperatorTable; + +import org.apache.calcite.rex.RexNode; +import org.apache.calcite.sql.SqlOperator; + +import java.util.HashMap; +import java.util.Map; +import java.util.Optional; + +/** + * Defined function {@link CallExpressionConvertRule}, it included conversions are one-to-one with + * calcite SqlOperator. + */ +public class DefinedConvertRule implements CallExpressionConvertRule { + + private static final Map DEFINITION_OPERATOR_MAP = new HashMap<>(); + static { + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.SHA1, FlinkSqlOperatorTable.SHA1); Review comment: Yeah, I will introduce a new Rule and remove these after I introduce `standardSql`. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] JingsongLi commented on a change in pull request #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
JingsongLi commented on a change in pull request #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#discussion_r319349920 ## File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/CustomizedConvertRule.java ## @@ -0,0 +1,361 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.planner.expressions.converter; + +import org.apache.flink.table.expressions.CallExpression; +import org.apache.flink.table.expressions.Expression; +import org.apache.flink.table.expressions.ExpressionUtils; +import org.apache.flink.table.expressions.TableReferenceExpression; +import org.apache.flink.table.expressions.TypeLiteralExpression; +import org.apache.flink.table.expressions.ValueLiteralExpression; +import org.apache.flink.table.expressions.utils.ApiExpressionUtils; +import org.apache.flink.table.functions.BuiltInFunctionDefinitions; +import org.apache.flink.table.functions.FunctionDefinition; +import org.apache.flink.table.operations.QueryOperation; +import org.apache.flink.table.planner.calcite.FlinkRelBuilder; +import org.apache.flink.table.planner.functions.InternalFunctionDefinitions; +import org.apache.flink.table.planner.functions.sql.FlinkSqlOperatorTable; +import org.apache.flink.table.planner.functions.sql.SqlThrowExceptionFunction; +import org.apache.flink.table.types.DataType; +import org.apache.flink.table.types.logical.ArrayType; +import org.apache.flink.table.types.logical.LogicalType; +import org.apache.flink.table.types.logical.RowType; +import org.apache.flink.util.Preconditions; + +import com.google.common.collect.ImmutableList; +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rex.RexNode; +import org.apache.calcite.rex.RexSubQuery; +import org.apache.calcite.sql.fun.SqlTrimFunction; + +import java.util.Arrays; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Optional; + +import static org.apache.calcite.sql.type.SqlTypeName.VARCHAR; +import static org.apache.flink.table.planner.calcite.FlinkTypeFactory.toLogicalType; +import static org.apache.flink.table.planner.expressions.converter.ExpressionConverter.extractValue; +import static org.apache.flink.table.runtime.types.LogicalTypeDataTypeConverter.fromDataTypeToLogicalType; +import static org.apache.flink.table.runtime.typeutils.TypeCheckUtils.isCharacterString; +import static org.apache.flink.table.runtime.typeutils.TypeCheckUtils.isTemporal; +import static org.apache.flink.table.runtime.typeutils.TypeCheckUtils.isTimeInterval; + +/** + * Customized {@link CallExpressionConvertRule}, Functions conversion here all require special logic, 
+ * and there may be some special rules, such as needing get the literal values of inputs, such as + * converting to combinations of functions, to convert to RexNode of calcite. + */ +public class CustomizedConvertRule implements CallExpressionConvertRule { + + private static final Map DEFINITION_RULE_MAP = new HashMap<>(); + static { + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.CAST, CustomizedConvertRule::convertCast); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.REINTERPRET_CAST, CustomizedConvertRule::convertReinterpretCast); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.IN, CustomizedConvertRule::convertIn); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.GET, CustomizedConvertRule::convertGet); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.TRIM, CustomizedConvertRule::convertTrim); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.AS, CustomizedConvertRule::convertAs); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.IS_NULL, CustomizedConvertRule::convertIsNull); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.BETWEEN, CustomizedConvertRule::convertBetween); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.NOT_BETWEEN, CustomizedConvertRule::convertNotBetween); +
[GitHub] [flink] flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#issuecomment-522652393 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 042b4f58d39980820d1c9dc1d779536ece0834c9 (Fri Aug 30 04:05:33 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] JingsongLi commented on a change in pull request #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
JingsongLi commented on a change in pull request #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#discussion_r319349711 ## File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/CustomizedConvertRule.java ## @@ -0,0 +1,361 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.planner.expressions.converter; + +import org.apache.flink.table.expressions.CallExpression; +import org.apache.flink.table.expressions.Expression; +import org.apache.flink.table.expressions.ExpressionUtils; +import org.apache.flink.table.expressions.TableReferenceExpression; +import org.apache.flink.table.expressions.TypeLiteralExpression; +import org.apache.flink.table.expressions.ValueLiteralExpression; +import org.apache.flink.table.expressions.utils.ApiExpressionUtils; +import org.apache.flink.table.functions.BuiltInFunctionDefinitions; +import org.apache.flink.table.functions.FunctionDefinition; +import org.apache.flink.table.operations.QueryOperation; +import org.apache.flink.table.planner.calcite.FlinkRelBuilder; +import org.apache.flink.table.planner.functions.InternalFunctionDefinitions; +import org.apache.flink.table.planner.functions.sql.FlinkSqlOperatorTable; +import org.apache.flink.table.planner.functions.sql.SqlThrowExceptionFunction; +import org.apache.flink.table.types.DataType; +import org.apache.flink.table.types.logical.ArrayType; +import org.apache.flink.table.types.logical.LogicalType; +import org.apache.flink.table.types.logical.RowType; +import org.apache.flink.util.Preconditions; + +import com.google.common.collect.ImmutableList; +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rex.RexNode; +import org.apache.calcite.rex.RexSubQuery; +import org.apache.calcite.sql.fun.SqlTrimFunction; + +import java.util.Arrays; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Optional; + +import static org.apache.calcite.sql.type.SqlTypeName.VARCHAR; +import static org.apache.flink.table.planner.calcite.FlinkTypeFactory.toLogicalType; +import static org.apache.flink.table.planner.expressions.converter.ExpressionConverter.extractValue; +import static org.apache.flink.table.runtime.types.LogicalTypeDataTypeConverter.fromDataTypeToLogicalType; +import static org.apache.flink.table.runtime.typeutils.TypeCheckUtils.isCharacterString; +import static org.apache.flink.table.runtime.typeutils.TypeCheckUtils.isTemporal; +import static org.apache.flink.table.runtime.typeutils.TypeCheckUtils.isTimeInterval; + +/** + * Customized {@link CallExpressionConvertRule}, Functions conversion here all require special logic, 
+ * and there may be some special rules, such as needing get the literal values of inputs, such as + * converting to combinations of functions, to convert to RexNode of calcite. + */ +public class CustomizedConvertRule implements CallExpressionConvertRule { + + private static final Map DEFINITION_RULE_MAP = new HashMap<>(); + static { + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.CAST, CustomizedConvertRule::convertCast); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.REINTERPRET_CAST, CustomizedConvertRule::convertReinterpretCast); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.IN, CustomizedConvertRule::convertIn); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.GET, CustomizedConvertRule::convertGet); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.TRIM, CustomizedConvertRule::convertTrim); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.AS, CustomizedConvertRule::convertAs); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.IS_NULL, CustomizedConvertRule::convertIsNull); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.BETWEEN, CustomizedConvertRule::convertBetween); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.NOT_BETWEEN, CustomizedConvertRule::convertNotBetween); +
[GitHub] [flink] flinkbot edited a comment on issue #9555: [FLINK-13868][REST] Job vertex add taskmanager id in rest api
flinkbot edited a comment on issue #9555: [FLINK-13868][REST] Job vertex add taskmanager id in rest api URL: https://github.com/apache/flink/pull/9555#issuecomment-526075111 ## CI report: * 64f02281954e69bcfbe858eb40b8f1846cfbe195 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125035687) * 5d3d03dffbaf00112a31663caaba3558886cd373 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125055694) * fc91d03d717971e53abc68ba8a93a1e456fc6608 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/125176326) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9555: [FLINK-13868][REST] Job vertex add taskmanager id in rest api
flinkbot edited a comment on issue #9555: [FLINK-13868][REST] Job vertex add taskmanager id in rest api URL: https://github.com/apache/flink/pull/9555#issuecomment-526075111 ## CI report: * 64f02281954e69bcfbe858eb40b8f1846cfbe195 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125035687) * 5d3d03dffbaf00112a31663caaba3558886cd373 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125055694) * fc91d03d717971e53abc68ba8a93a1e456fc6608 : UNKNOWN This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-13813) metrics is different between overview ui and metric response
[ https://issues.apache.org/jira/browse/FLINK-13813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919160#comment-16919160 ] lidesheng commented on FLINK-13813: --- In my experiment, the value displayed on the UI (5,000,000) is correct. If rounding affected the metric value, it would not be a number ending in 3 (4084803). I checked some source code and found that the UI value comes from the operator's IOMetrics, while the metric comes from another metric group. If that is the reason, why is the value in the metric group incorrect? > metrics is different between overview ui and metric response > > > Key: FLINK-13813 > URL: https://issues.apache.org/jira/browse/FLINK-13813 > Project: Flink > Issue Type: Bug > Environment: Flink 1.8.1 with hdfs >Reporter: lidesheng >Priority: Major > Attachments: metrics.png, overview.png > > > After a flink task is over, I get metrics by http request. The record count > in the response is different from the job overview. > get http://x.x.x.x:8081/jobs/57969f3978edf3115354fab1a72fd0c8 returns: > { "jid": "57969f3978edf3115354fab1a72fd0c8", "name": > "3349_cjrw_1566177018152", "isStoppable": false, "state": "FINISHED", ... > "vertices": [ \{ "id": "d1cdde18b91ef6ce7c6a1cfdfa9e968d", "name": "CHAIN > DataSource (3349_cjrw_1566177018152/SOURCE/0) -> Map > (3349_cjrw_1566177018152/AUDIT/0)", "parallelism": 1, "status": "FINISHED", > ... "metrics": { "read-bytes": 0, "read-bytes-complete": true, "write-bytes": > 555941888, "write-bytes-complete": true, "read-records": 0, > "read-records-complete": true, "write-records": 500, > "write-records-complete": true } ... } ... ] }} > But the metric from > http://x.x.x.x:8081/jobs/57969f3978edf3115354fab1a72fd0c8/vertices/d1cdde18b91ef6ce7c6a1cfdfa9e968d/metrics?get=0.numRecordsOut > returns > [\{"id":"0.numRecordsOut","value":"4084803"}] > The overview record count is different from the task metrics; please see the > appendix. -- This message was sent by Atlassian Jira (v8.3.2#803003)
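A minimal sketch of how the two values above can be fetched and compared, assuming only that the JobManager REST endpoint is reachable; the host, job id and vertex id are the ones quoted in the report, and the class name is illustrative, not part of Flink:

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CompareJobMetrics {
    public static void main(String[] args) throws Exception {
        String base = "http://x.x.x.x:8081"; // JobManager REST address from the report
        String job = "57969f3978edf3115354fab1a72fd0c8";
        String vertex = "d1cdde18b91ef6ce7c6a1cfdfa9e968d";

        HttpClient client = HttpClient.newHttpClient();

        // 1) Job overview: contains the per-vertex "metrics" block (write-records etc.).
        HttpRequest overview = HttpRequest.newBuilder(URI.create(base + "/jobs/" + job)).GET().build();
        System.out.println(client.send(overview, HttpResponse.BodyHandlers.ofString()).body());

        // 2) Metric store: the numRecordsOut counter that showed a different value.
        HttpRequest metric = HttpRequest.newBuilder(URI.create(
                base + "/jobs/" + job + "/vertices/" + vertex + "/metrics?get=0.numRecordsOut"))
                .GET().build();
        System.out.println(client.send(metric, HttpResponse.BodyHandlers.ofString()).body());
    }
}
{code}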
[GitHub] [flink] flinkbot edited a comment on issue #9555: [FLINK-13868][REST] Job vertex add taskmanager id in rest api
flinkbot edited a comment on issue #9555: [FLINK-13868][REST] Job vertex add taskmanager id in rest api URL: https://github.com/apache/flink/pull/9555#issuecomment-526074152 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit fc91d03d717971e53abc68ba8a93a1e456fc6608 (Fri Aug 30 03:38:09 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application
flinkbot edited a comment on issue #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application URL: https://github.com/apache/flink/pull/9336#issuecomment-517608148 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 3981f27cdd12ae38fa03041779ed159891c102fa (Fri Aug 30 03:26:56 UTC 2019) ✅no warnings Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] wzhero1 commented on issue #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application
wzhero1 commented on issue #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application URL: https://github.com/apache/flink/pull/9336#issuecomment-526441235 Thanks for the supplementary test of the YARN priority, @tillrohrmann. I think it looks good to merge. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-13848) Support “scheduleAtFixedRate/scheduleAtFixedDelay” in RpcEndpoint#MainThreadExecutor
[ https://issues.apache.org/jira/browse/FLINK-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919138#comment-16919138 ] Biao Liu commented on FLINK-13848: -- [~till.rohrmann] Sorry, I didn't describe it clearly. In this refactoring, {{triggerCheckpoint}} would be separated into several stages, for example A -> B -> C. A is "start triggering", running in the main thread. B does some heavy initialization, running in the IO threads. C sends the trigger message to the tasks, running in the main thread. The workflow should be: # First A is triggered and executed in the main thread. A launches B into the IO threads and returns immediately. # B is executed in an IO thread. When it finishes, a callback triggers C (for example through {{CompletableFuture#thenRunAsync}} with the main thread executor). # C is executed in the main thread. Only after C completes is the next A scheduled, after a delay. The sequence is A -> B -> C -> A ... so there is no competition. That's what we really want. But with the {{scheduleAtFixedDelay}} way, what we schedule is just A: the next A is scheduled as soon as the prior A finishes, so the next A -> B -> C round can start while the previous round's B and C are still in flight. So I think we should abandon this ticket, or convert it to a normal issue rather than a subtask of this refactoring. What do you think? > Support “scheduleAtFixedRate/scheduleAtFixedDelay” in > RpcEndpoint#MainThreadExecutor > > > Key: FLINK-13848 > URL: https://issues.apache.org/jira/browse/FLINK-13848 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Coordination >Reporter: Biao Liu >Priority: Major > Fix For: 1.10.0 > > > Currently the methods “scheduleAtFixedRate/scheduleAtFixedDelay" of > {{RpcEndpoint#MainThreadExecutor}} are not implemented, because there was no > requirement for them before. > Now we are planning to implement these methods to support periodic checkpoint > triggering. -- This message was sent by Atlassian Jira (v8.3.2#803003)
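A minimal, self-contained sketch of the A -> B -> C ordering described above, using plain JDK executors as stand-ins for Flink's main-thread executor and IO thread pool; the class and method names are illustrative assumptions, not Flink's actual API:

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TriggerLoopSketch {

    // Stand-ins for Flink's main-thread executor and IO thread pool.
    private static final ScheduledExecutorService MAIN_THREAD = Executors.newSingleThreadScheduledExecutor();
    private static final ScheduledExecutorService IO_THREADS = Executors.newScheduledThreadPool(4);

    public static void main(String[] args) {
        triggerOnce(3);
    }

    private static void triggerOnce(int remainingTriggers) {
        if (remainingTriggers == 0) {
            MAIN_THREAD.shutdown();
            IO_THREADS.shutdown();
            return;
        }
        // A: "start triggering" runs in the main thread and returns immediately.
        MAIN_THREAD.execute(() ->
                CompletableFuture
                        // B: heavy initialization, offloaded to the IO threads.
                        .supplyAsync(() -> "checkpoint-" + remainingTriggers, IO_THREADS)
                        // C: send the trigger message, back in the main thread.
                        .thenAcceptAsync(id -> System.out.println("triggering " + id), MAIN_THREAD)
                        // Only after C completes is the next A scheduled, after a delay.
                        .thenRun(() -> MAIN_THREAD.schedule(
                                () -> triggerOnce(remainingTriggers - 1), 100, TimeUnit.MILLISECONDS)));
    }
}
{code}

With scheduleAtFixedDelay, by contrast, only A would gate the next round, so B and C of the previous round could still be running when the next A fires; chaining the reschedule off C, as sketched here, avoids that overlap.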
[jira] [Closed] (FLINK-13859) JSONDeserializationSchema spell error
[ https://issues.apache.org/jira/browse/FLINK-13859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] limbo closed FLINK-13859. - Resolution: Duplicate > JSONDeserializationSchema spell error > - > > Key: FLINK-13859 > URL: https://issues.apache.org/jira/browse/FLINK-13859 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.9.0 >Reporter: limbo >Priority: Minor > Labels: pull-request-available > Attachments: image-2019-08-26-20-14-20-075.png > > Time Spent: 20m > Remaining Estimate: 0h > > In kafka page the JsonDeserializationSchema would be JSONDeserializationSchema > !image-2019-08-26-20-14-20-075.png! > > [https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/kafka.html] -- This message was sent by Atlassian Jira (v8.3.2#803003)
[GitHub] [flink] flinkbot edited a comment on issue #9540: [FLINK-13859][docs] JSONDeserializationSchema spell error
flinkbot edited a comment on issue #9540: [FLINK-13859][docs] JSONDeserializationSchema spell error URL: https://github.com/apache/flink/pull/9540#issuecomment-525126046 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 5cb9691c978214d5f868c0e6a43f782b05b29ff6 (Fri Aug 30 02:48:18 UTC 2019) **Warnings:** * **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-13859).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#issuecomment-522652393 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 042b4f58d39980820d1c9dc1d779536ece0834c9 (Fri Aug 30 02:47:17 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] taizilongxu closed pull request #9540: [FLINK-13859][docs] JSONDeserializationSchema spell error
taizilongxu closed pull request #9540: [FLINK-13859][docs] JSONDeserializationSchema spell error URL: https://github.com/apache/flink/pull/9540 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] taizilongxu commented on issue #9540: [FLINK-13859][docs] JSONDeserializationSchema spell error
taizilongxu commented on issue #9540: [FLINK-13859][docs] JSONDeserializationSchema spell error URL: https://github.com/apache/flink/pull/9540#issuecomment-526434469 @1u0 thanks for the review This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] JingsongLi commented on a change in pull request #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
JingsongLi commented on a change in pull request #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#discussion_r319339615 ## File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/CustomizedConvertRule.java ## @@ -0,0 +1,361 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.planner.expressions.converter; + +import org.apache.flink.table.expressions.CallExpression; +import org.apache.flink.table.expressions.Expression; +import org.apache.flink.table.expressions.ExpressionUtils; +import org.apache.flink.table.expressions.TableReferenceExpression; +import org.apache.flink.table.expressions.TypeLiteralExpression; +import org.apache.flink.table.expressions.ValueLiteralExpression; +import org.apache.flink.table.expressions.utils.ApiExpressionUtils; +import org.apache.flink.table.functions.BuiltInFunctionDefinitions; +import org.apache.flink.table.functions.FunctionDefinition; +import org.apache.flink.table.operations.QueryOperation; +import org.apache.flink.table.planner.calcite.FlinkRelBuilder; +import org.apache.flink.table.planner.functions.InternalFunctionDefinitions; +import org.apache.flink.table.planner.functions.sql.FlinkSqlOperatorTable; +import org.apache.flink.table.planner.functions.sql.SqlThrowExceptionFunction; +import org.apache.flink.table.types.DataType; +import org.apache.flink.table.types.logical.ArrayType; +import org.apache.flink.table.types.logical.LogicalType; +import org.apache.flink.table.types.logical.RowType; +import org.apache.flink.util.Preconditions; + +import com.google.common.collect.ImmutableList; +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rex.RexNode; +import org.apache.calcite.rex.RexSubQuery; +import org.apache.calcite.sql.fun.SqlTrimFunction; + +import java.util.Arrays; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Optional; + +import static org.apache.calcite.sql.type.SqlTypeName.VARCHAR; +import static org.apache.flink.table.planner.calcite.FlinkTypeFactory.toLogicalType; +import static org.apache.flink.table.planner.expressions.converter.ExpressionConverter.extractValue; +import static org.apache.flink.table.runtime.types.LogicalTypeDataTypeConverter.fromDataTypeToLogicalType; +import static org.apache.flink.table.runtime.typeutils.TypeCheckUtils.isCharacterString; +import static org.apache.flink.table.runtime.typeutils.TypeCheckUtils.isTemporal; +import static org.apache.flink.table.runtime.typeutils.TypeCheckUtils.isTimeInterval; + +/** + * Customized {@link CallExpressionConvertRule}, Functions conversion here all require special logic, 
+ * and there may be some special rules, such as needing get the literal values of inputs, such as + * converting to combinations of functions, to convert to RexNode of calcite. + */ +public class CustomizedConvertRule implements CallExpressionConvertRule { + + private static final Map DEFINITION_RULE_MAP = new HashMap<>(); + static { + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.CAST, CustomizedConvertRule::convertCast); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.REINTERPRET_CAST, CustomizedConvertRule::convertReinterpretCast); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.IN, CustomizedConvertRule::convertIn); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.GET, CustomizedConvertRule::convertGet); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.TRIM, CustomizedConvertRule::convertTrim); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.AS, CustomizedConvertRule::convertAs); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.IS_NULL, CustomizedConvertRule::convertIsNull); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.BETWEEN, CustomizedConvertRule::convertBetween); + DEFINITION_RULE_MAP.put(BuiltInFunctionDefinitions.NOT_BETWEEN, CustomizedConvertRule::convertNotBetween); +
[jira] [Comment Edited] (FLINK-13896) Scala 2.11 maven compile should target Java 1.8
[ https://issues.apache.org/jira/browse/FLINK-13896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918598#comment-16918598 ] Terry Wang edited comment on FLINK-13896 at 8/30/19 2:43 AM: - Hi [~twalthr] Sorry for putting the description in the environment section; I just corrected it. I'd like to fix this one, please feel free to assign this ticket to me. was (Author: terry1897): Hi [~twalthr] Sorry to put the description on the environment section, just correct it. I'd like to fix this one, plz feel free to assign this ticket for me. > Scala 2.11 maven compile should target Java 1.8 > --- > > Key: FLINK-13896 > URL: https://issues.apache.org/jira/browse/FLINK-13896 > Project: Flink > Issue Type: Bug > Components: Build System >Affects Versions: 1.9.0 >Reporter: Terry Wang >Priority: Major > > When setting up a TableEnvironment in Scala as follows: > > {code:java} > // we can reproduce this problem by putting the following code in > // org.apache.flink.table.api.scala.internal.StreamTableEnvironmentImplTest > @Test > def testCreateEnvironment(): Unit = { > val settings = > EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build(); > val tEnv = TableEnvironment.create(settings); > } > {code} > Then mvn test would fail with an error message like: > > error: Static methods in interface require -target:JVM-1.8 > > We can fix this bug by adding: > > -target:jvm-1.8 > > to the scala-maven-plugin config. > -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Issue Comment Deleted] (FLINK-13872) Translate Operations Playground to Chinese
[ https://issues.apache.org/jira/browse/FLINK-13872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lord i Will updated FLINK-13872: Comment: was deleted (was: [~fhueske], [~jark] My team has three men, and all of us want to cooperate on different parts of this translation. How should we manage our commits, merge and the final pull requests? Need some points~~) > Translate Operations Playground to Chinese > -- > > Key: FLINK-13872 > URL: https://issues.apache.org/jira/browse/FLINK-13872 > Project: Flink > Issue Type: Task > Components: chinese-translation, Documentation >Affects Versions: 1.9.0 >Reporter: Fabian Hueske >Assignee: Lord i Will >Priority: Major > > The [Operations > Playground|https://ci.apache.org/projects/flink/flink-docs-release-1.9/getting-started/docker-playgrounds/flink_operations_playground.html] > is a quick and convenient way to learn about Flink's operational features > (job submission, failure recovery, job updates, scaling, metrics). > We should translate it to Chinese as well. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (FLINK-13899) Add SQL DDL for Elasticsearch 5.X version
[ https://issues.apache.org/jira/browse/FLINK-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] limbo updated FLINK-13899: -- Fix Version/s: (was: 1.9.1) > Add SQL DDL for Elasticsearch 5.X version > - > > Key: FLINK-13899 > URL: https://issues.apache.org/jira/browse/FLINK-13899 > Project: Flink > Issue Type: Improvement > Components: Connectors / ElasticSearch >Affects Versions: 1.9.0 >Reporter: limbo >Priority: Major > > Hi, I need Elasticsearch 5.X verison DDL to connect our old version > Elasticsearch, can I conrtribute to this feature? -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (FLINK-13899) Add SQL DDL for Elasticsearch 5.X version
[ https://issues.apache.org/jira/browse/FLINK-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] limbo updated FLINK-13899: -- Description: Hi, I need Elasticsearch 5.X verison DDL to connect our old version Elasticsearch, can I conrtribute to this feature? (was: Hi, I need Elasticsearch 5.X verison DDL to connect our old version Elasticsearch) > Add SQL DDL for Elasticsearch 5.X version > - > > Key: FLINK-13899 > URL: https://issues.apache.org/jira/browse/FLINK-13899 > Project: Flink > Issue Type: Improvement > Components: Connectors / ElasticSearch >Affects Versions: 1.9.0 >Reporter: limbo >Priority: Major > Fix For: 1.9.1 > > > Hi, I need Elasticsearch 5.X verison DDL to connect our old version > Elasticsearch, can I conrtribute to this feature? -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (FLINK-13903) Support Hive version 2.3.6
[ https://issues.apache.org/jira/browse/FLINK-13903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuefu Zhang updated FLINK-13903: Description: Hive 2.3.6 is released a few days ago. We can trivially support this version as well, as we have already provided support for previous 2.3.x releases. (was: This is to support all 1.2 (1.2.0, 1.2.1, 1.2.2) and 2.3 (2.3.0-5) versions.) > Support Hive version 2.3.6 > -- > > Key: FLINK-13903 > URL: https://issues.apache.org/jira/browse/FLINK-13903 > Project: Flink > Issue Type: Improvement > Components: Connectors / Hive >Reporter: Xuefu Zhang >Assignee: Xuefu Zhang >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0 > > > Hive 2.3.6 is released a few days ago. We can trivially support this version > as well, as we have already provided support for previous 2.3.x releases. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Created] (FLINK-13903) Support Hive version 2.3.6
Xuefu Zhang created FLINK-13903: --- Summary: Support Hive version 2.3.6 Key: FLINK-13903 URL: https://issues.apache.org/jira/browse/FLINK-13903 Project: Flink Issue Type: Improvement Components: Connectors / Hive Reporter: Xuefu Zhang Assignee: Xuefu Zhang Fix For: 1.10.0 This is to support all 1.2 (1.2.0, 1.2.1, 1.2.2) and 2.3 (2.3.0-5) versions. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[GitHub] [flink] flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#issuecomment-522652393 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 042b4f58d39980820d1c9dc1d779536ece0834c9 (Fri Aug 30 02:35:04 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] JingsongLi commented on a change in pull request #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
JingsongLi commented on a change in pull request #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#discussion_r319337893 ## File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/CallExpressionConvertRule.java ## @@ -0,0 +1,62 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.planner.expressions.converter; + +import org.apache.flink.table.expressions.CallExpression; +import org.apache.flink.table.expressions.Expression; +import org.apache.flink.table.planner.calcite.FlinkTypeFactory; + +import org.apache.calcite.rex.RexNode; +import org.apache.calcite.tools.RelBuilder; + +import java.util.List; +import java.util.Optional; +import java.util.stream.Collectors; + +/** + * Rule to convert {@link CallExpression}. + */ +public interface CallExpressionConvertRule { + + /** +* Convert call expression with context to RexNode. +* +* @return Success return RexNode of {@link Optional#of}, Fail return {@link Optional#empty()}. +*/ + Optional convert(CallExpression call, ConvertContext context); + + /** +* Context of {@link CallExpressionConvertRule}. +*/ + interface ConvertContext { + + /** +* Convert expression to RexNode, used by children conversion. +*/ + RexNode toRexNode(Expression expr); + + default List toRexNodes(List expr) { Review comment: Right, it should be a static helper method. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
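For context on the "static helper method" suggestion above, here is a self-contained sketch of the shape being discussed, with stand-in types (Expr, Rex) instead of Flink's Expression and RexNode; none of the names below are Flink's actual API:

{code:java}
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

/** Stand-in for Expression. */
interface Expr {}

/** Stand-in for RexNode. */
interface Rex {}

/** Stand-in for CallExpressionConvertRule, with the batch conversion as a static helper. */
interface ConvertRule {

    /** Returns the converted node, or Optional.empty() if this rule does not apply. */
    Optional<Rex> convert(Expr call, Context context);

    /** Stand-in for ConvertContext: converts a single child expression. */
    interface Context {
        Rex toRexNode(Expr expr);
    }

    /** The list conversion written as a static helper instead of a default instance method. */
    static List<Rex> toRexNodes(Context context, List<Expr> exprs) {
        return exprs.stream().map(context::toRexNode).collect(Collectors.toList());
    }
}
{code}

Keeping the batch conversion out of the context interface keeps the context minimal and makes the helper reusable by any rule that needs to convert all children at once.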
[GitHub] [flink] flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#issuecomment-522652393 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 042b4f58d39980820d1c9dc1d779536ece0834c9 (Fri Aug 30 02:33:02 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] JingsongLi commented on a change in pull request #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
JingsongLi commented on a change in pull request #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#discussion_r319337619 ## File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/DefinedConvertRule.java ## @@ -0,0 +1,158 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.planner.expressions.converter; + +import org.apache.flink.table.expressions.CallExpression; +import org.apache.flink.table.functions.BuiltInFunctionDefinitions; +import org.apache.flink.table.functions.FunctionDefinition; +import org.apache.flink.table.planner.functions.sql.FlinkSqlOperatorTable; + +import org.apache.calcite.rex.RexNode; +import org.apache.calcite.sql.SqlOperator; + +import java.util.HashMap; +import java.util.Map; +import java.util.Optional; + +/** + * Defined function {@link CallExpressionConvertRule}, it included conversions are one-to-one with + * calcite SqlOperator. 
+ */ +public class DefinedConvertRule implements CallExpressionConvertRule { + + private static final Map DEFINITION_OPERATOR_MAP = new HashMap<>(); + static { + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.SHA1, FlinkSqlOperatorTable.SHA1); + + // logic functions + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.AND, FlinkSqlOperatorTable.AND); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.OR, FlinkSqlOperatorTable.OR); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.NOT, FlinkSqlOperatorTable.NOT); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.IF, FlinkSqlOperatorTable.CASE); + + // comparison functions + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.EQUALS, FlinkSqlOperatorTable.EQUALS); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.GREATER_THAN, FlinkSqlOperatorTable.GREATER_THAN); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.GREATER_THAN_OR_EQUAL, FlinkSqlOperatorTable.GREATER_THAN_OR_EQUAL); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.LESS_THAN, FlinkSqlOperatorTable.LESS_THAN); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.LESS_THAN_OR_EQUAL, FlinkSqlOperatorTable.LESS_THAN_OR_EQUAL); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.NOT_EQUALS, FlinkSqlOperatorTable.NOT_EQUALS); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.IS_NULL, FlinkSqlOperatorTable.IS_NULL); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.IS_NOT_NULL, FlinkSqlOperatorTable.IS_NOT_NULL); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.IS_TRUE, FlinkSqlOperatorTable.IS_TRUE); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.IS_FALSE, FlinkSqlOperatorTable.IS_FALSE); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.IS_NOT_TRUE, FlinkSqlOperatorTable.IS_NOT_TRUE); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.IS_NOT_FALSE, FlinkSqlOperatorTable.IS_NOT_FALSE); + + // string functions + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.CHAR_LENGTH, FlinkSqlOperatorTable.CHAR_LENGTH); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.INIT_CAP, FlinkSqlOperatorTable.INITCAP); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.LIKE, FlinkSqlOperatorTable.LIKE); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.LOWER, FlinkSqlOperatorTable.LOWER); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.SIMILAR, FlinkSqlOperatorTable.SIMILAR_TO); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.SUBSTRING, FlinkSqlOperatorTable.SUBSTRING); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefinitions.UPPER, FlinkSqlOperatorTable.UPPER); + DEFINITION_OPERATOR_MAP.put(BuiltInFunctionDefin
[GitHub] [flink] flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
flinkbot edited a comment on issue #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#issuecomment-522652393 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 042b4f58d39980820d1c9dc1d779536ece0834c9 (Fri Aug 30 02:28:57 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Created] (FLINK-13902) Can not use index to convert FieldReferenceExpression to RexNode
Jingsong Lee created FLINK-13902: Summary: Can not use index to convert FieldReferenceExpression to RexNode Key: FLINK-13902 URL: https://issues.apache.org/jira/browse/FLINK-13902 Project: Flink Issue Type: Bug Components: Table SQL / Planner Reporter: Jingsong Lee Currently, we cannot use inputCount+inputIndex+FieldIndex to construct a Calcite rex input reference. See QueryOperationConverter.SingleRelVisitor.visit(AggregateQueryOperation). Calcite will shuffle the output order of the groupings (see RelBuilder.aggregate, which uses an ImmutableBitSet to store the groupings), so the output field order changes as well. As a result, the output field order of AggregateOperationFactory differs from Calcite's output order. -- This message was sent by Atlassian Jira (v8.3.2#803003)
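A small illustration of the ordering problem described above, using java.util.BitSet in place of Calcite's ImmutableBitSet (the class name and field indexes are made up for the example): a bit set only remembers which fields are grouping keys, so iteration always yields ascending field indexes and the requested grouping order is lost.

```java
import java.util.BitSet;
import java.util.List;

public class GroupingOrderExample {
    public static void main(String[] args) {
        // Suppose the query groups by field 2 first, then by field 0.
        List<Integer> requestedGroupingOrder = List.of(2, 0);

        // A bit set only records *which* indexes are grouping keys, not the order
        // in which they were requested.
        BitSet groupings = new BitSet();
        requestedGroupingOrder.forEach(groupings::set);

        // Iteration is always in ascending index order: 0, then 2.
        groupings.stream().forEach(i -> System.out.println("output grouping field: " + i));

        // So an index computed from the input schema (inputCount + inputIndex + fieldIndex)
        // can point at the wrong output field once the grouping order has been shuffled.
    }
}
```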
[GitHub] [flink] JingsongLi commented on a change in pull request #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink
JingsongLi commented on a change in pull request #9485: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter(RexNodeConverter) in blink URL: https://github.com/apache/flink/pull/9485#discussion_r319337008 ## File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/ExpressionConverter.java ## @@ -0,0 +1,322 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.planner.expressions.converter; + +import org.apache.flink.table.api.TableException; +import org.apache.flink.table.dataformat.Decimal; +import org.apache.flink.table.expressions.CallExpression; +import org.apache.flink.table.expressions.Expression; +import org.apache.flink.table.expressions.ExpressionVisitor; +import org.apache.flink.table.expressions.FieldReferenceExpression; +import org.apache.flink.table.expressions.LocalReferenceExpression; +import org.apache.flink.table.expressions.TimeIntervalUnit; +import org.apache.flink.table.expressions.TimePointUnit; +import org.apache.flink.table.expressions.TypeLiteralExpression; +import org.apache.flink.table.expressions.ValueLiteralExpression; +import org.apache.flink.table.planner.calcite.FlinkContext; +import org.apache.flink.table.planner.calcite.FlinkRelBuilder; +import org.apache.flink.table.planner.calcite.FlinkTypeFactory; +import org.apache.flink.table.planner.calcite.RexFieldVariable; +import org.apache.flink.table.planner.expressions.RexNodeExpression; +import org.apache.flink.table.types.logical.DecimalType; +import org.apache.flink.table.types.logical.LogicalType; + +import org.apache.calcite.avatica.util.DateTimeUtils; +import org.apache.calcite.avatica.util.TimeUnit; +import org.apache.calcite.avatica.util.TimeUnitRange; +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rex.RexBuilder; +import org.apache.calcite.rex.RexNode; +import org.apache.calcite.sql.SqlIntervalQualifier; +import org.apache.calcite.sql.parser.SqlParserPos; +import org.apache.calcite.sql.type.SqlTypeName; +import org.apache.calcite.tools.RelBuilder; +import org.apache.calcite.util.DateString; +import org.apache.calcite.util.TimeString; +import org.apache.calcite.util.TimestampString; +import org.apache.calcite.util.TimestampWithTimeZoneString; + +import java.math.BigDecimal; +import java.util.Arrays; +import java.util.Calendar; +import java.util.Date; +import java.util.List; +import java.util.Optional; +import java.util.TimeZone; + +import static org.apache.flink.table.runtime.types.LogicalTypeDataTypeConverter.fromDataTypeToLogicalType; + +/** + * Visitor that converts an {@link Expression} into a corresponding {@link RexNode}.
+ */ +public class ExpressionConverter implements ExpressionVisitor<RexNode> { + + private static final List<CallExpressionConvertRule> FUNCTION_CONVERT_CHAIN = Arrays.asList( + new ScalarFunctionConvertRule(), + new OverConvertRule(), + new DefinedConvertRule(), + new CustomizedConvertRule() + ); + + private final RelBuilder relBuilder; + private final FlinkTypeFactory typeFactory; + + public ExpressionConverter(RelBuilder relBuilder) { + this.relBuilder = relBuilder; + this.typeFactory = (FlinkTypeFactory) relBuilder.getRexBuilder().getTypeFactory(); + } + + @Override + public RexNode visit(CallExpression call) { + for (CallExpressionConvertRule rule : FUNCTION_CONVERT_CHAIN) { + Optional<RexNode> converted = rule.convert(call, newFunctionContext()); + if (converted.isPresent()) { + return converted.get(); + } + } + throw new RuntimeException("Unknown call expression: " + call); + } + + @Override + public RexNode visit(ValueLiteralExpression valueLiteral) { + LogicalType type = fromDataTypeToLogicalType(valueLiteral.getOutputDataType()); + RexBuilder rexBuilder = relBuilder.getRexBuilder(); + FlinkTypeFactory typeFactory = (FlinkTypeFactory) relBuilder.getTypeFactory(); + if (valueLiteral.isNull()) { +
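To make the control flow in visit(CallExpression) above easier to follow, here is a compact, self-contained sketch of the visitor-plus-rule-chain dispatch: the expression accepts the visitor, the visitor tries each rule in order, and the first non-empty result wins. All types are simplified placeholders, not the actual Flink or Calcite classes.

```java
import java.util.List;
import java.util.Optional;

public class ExpressionConverterSketch {

    interface Rex {}                                   // placeholder for RexNode

    interface Expr {
        <R> R accept(Visitor<R> visitor);
    }

    interface Visitor<R> {
        R visitCall(Call call);
    }

    static final class Call implements Expr {
        final String functionName;
        Call(String functionName) { this.functionName = functionName; }
        @Override
        public <R> R accept(Visitor<R> visitor) { return visitor.visitCall(this); }
    }

    @FunctionalInterface
    interface ConvertRule {
        Optional<Rex> convert(Call call);
    }

    static final class Converter implements Visitor<Rex> {
        private final List<ConvertRule> chain;
        Converter(List<ConvertRule> chain) { this.chain = chain; }

        @Override
        public Rex visitCall(Call call) {
            // Chain of responsibility: the first rule that produces a result wins.
            for (ConvertRule rule : chain) {
                Optional<Rex> converted = rule.convert(call);
                if (converted.isPresent()) {
                    return converted.get();
                }
            }
            throw new RuntimeException("Unknown call expression: " + call.functionName);
        }
    }

    public static void main(String[] args) {
        ConvertRule upperRule = call -> "UPPER".equals(call.functionName)
                ? Optional.of(new Rex() { @Override public String toString() { return "UPPER()"; } })
                : Optional.empty();

        Rex result = new Call("UPPER").accept(new Converter(List.of(upperRule)));
        System.out.println(result);                    // prints UPPER()
    }
}
```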
[jira] [Commented] (FLINK-13872) Translate Operations Playground to Chinese
[ https://issues.apache.org/jira/browse/FLINK-13872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919127#comment-16919127 ] Lord i Will commented on FLINK-13872: - [~fhueske], [~jark] My team has three members, and all of us would like to cooperate on different parts of this translation. How should we manage our commits, merges, and the final pull requests? We would appreciate some pointers. > Translate Operations Playground to Chinese > -- > > Key: FLINK-13872 > URL: https://issues.apache.org/jira/browse/FLINK-13872 > Project: Flink > Issue Type: Task > Components: chinese-translation, Documentation >Affects Versions: 1.9.0 >Reporter: Fabian Hueske > Assignee: Lord i Will >Priority: Major > > The [Operations > Playground|https://ci.apache.org/projects/flink/flink-docs-release-1.9/getting-started/docker-playgrounds/flink_operations_playground.html] > is a quick and convenient way to learn about Flink's operational features > (job submission, failure recovery, job updates, scaling, metrics). > We should translate it to Chinese as well. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Resolved] (FLINK-13901) Documentation links check errors in release-1.9
[ https://issues.apache.org/jira/browse/FLINK-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu resolved FLINK-13901. - Resolution: Fixed Fixed in 1.9.1: c175cc424458fd2425cd8e8463e4f6cb7bd228f5 > Documentation links check errors in release-1.9 > > > Key: FLINK-13901 > URL: https://issues.apache.org/jira/browse/FLINK-13901 > Project: Flink > Issue Type: Bug > Components: Documentation >Affects Versions: 1.9.0 >Reporter: Jark Wu >Priority: Major > Fix For: 1.9.1 > > > [2019-08-29 16:04:44] ERROR `/zh/dev/table/config.html' not found. > [2019-08-29 16:04:47] ERROR `/zh/dev/table/catalog.html' not found. > http://localhost:4000/zh/dev/table/config.html: > Remote file does not exist -- broken link!!! > Here is an instance: https://travis-ci.org/apache/flink/jobs/578322338 -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Commented] (FLINK-13582) Improve the implementation of LISTAGG in Blink planner to remove delimiter from state
[ https://issues.apache.org/jira/browse/FLINK-13582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919123#comment-16919123 ] Jingsong Lee commented on FLINK-13582: -- Does modifying the state format affect compatibility? If so, I think this should be settled before the 1.10 release. > Improve the implementation of LISTAGG in Blink planner to remove delimiter > from state > -- > > Key: FLINK-13582 > URL: https://issues.apache.org/jira/browse/FLINK-13582 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.10.0 >Reporter: Jing Zhang >Assignee: Jark Wu >Priority: Minor > > The implementation of LISTAGG saves the delimiter as a part of state, which is not > necessary, because the delimiter is a constant character. -- This message was sent by Atlassian Jira (v8.3.2#803003)
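To illustrate the proposed improvement, the sketch below keeps only the concatenated string in the accumulator and receives the constant delimiter as a call argument, so nothing delimiter-related has to be checkpointed. This is an illustrative assumption about the approach, not the Blink planner's actual LISTAGG implementation.

```java
public class ListAggSketch {

    /** The only data that would need to live in keyed state. */
    public static class Acc {
        String concatenated;
    }

    /** The delimiter is passed on every call instead of being stored in the accumulator. */
    public static void accumulate(Acc acc, String value, String delimiter) {
        if (value == null) {
            return;
        }
        acc.concatenated = (acc.concatenated == null)
                ? value
                : acc.concatenated + delimiter + value;
    }

    public static String getValue(Acc acc) {
        return acc.concatenated;
    }

    public static void main(String[] args) {
        Acc acc = new Acc();
        accumulate(acc, "a", ",");
        accumulate(acc, "b", ",");
        System.out.println(getValue(acc));  // prints a,b
    }
}
```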
[jira] [Assigned] (FLINK-13901) Documentation links check errors in release-1.9
[ https://issues.apache.org/jira/browse/FLINK-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu reassigned FLINK-13901: --- Assignee: Jark Wu > Documentation links check errors in release-1.9 > > > Key: FLINK-13901 > URL: https://issues.apache.org/jira/browse/FLINK-13901 > Project: Flink > Issue Type: Bug > Components: Documentation >Affects Versions: 1.9.0 >Reporter: Jark Wu >Assignee: Jark Wu >Priority: Major > Fix For: 1.9.1 > > > [2019-08-29 16:04:44] ERROR `/zh/dev/table/config.html' not found. > [2019-08-29 16:04:47] ERROR `/zh/dev/table/catalog.html' not found. > http://localhost:4000/zh/dev/table/config.html: > Remote file does not exist -- broken link!!! > Here is an instance: https://travis-ci.org/apache/flink/jobs/578322338 -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (FLINK-13901) Documentation links check errors in release-1.9
[ https://issues.apache.org/jira/browse/FLINK-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-13901: Description: [2019-08-29 16:04:44] ERROR `/zh/dev/table/config.html' not found. [2019-08-29 16:04:47] ERROR `/zh/dev/table/catalog.html' not found. http://localhost:4000/zh/dev/table/config.html: Remote file does not exist -- broken link!!! Here is an instance: https://travis-ci.org/apache/flink/jobs/578322338 was: [2019-08-29 16:04:44] ERROR `/zh/dev/table/config.html' not found. [2019-08-29 16:04:47] ERROR `/zh/dev/table/catalog.html' not found. http://localhost:4000/zh/dev/table/config.html: Remote file does not exist -- broken link!!! > Documentation links check errors in release-1.9 > > > Key: FLINK-13901 > URL: https://issues.apache.org/jira/browse/FLINK-13901 > Project: Flink > Issue Type: Bug > Components: Documentation >Affects Versions: 1.9.0 >Reporter: Jark Wu >Priority: Major > Fix For: 1.9.1 > > > [2019-08-29 16:04:44] ERROR `/zh/dev/table/config.html' not found. > [2019-08-29 16:04:47] ERROR `/zh/dev/table/catalog.html' not found. > http://localhost:4000/zh/dev/table/config.html: > Remote file does not exist -- broken link!!! > Here is an instance: https://travis-ci.org/apache/flink/jobs/578322338 -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Created] (FLINK-13901) Documentation links check errors in release-1.9
Jark Wu created FLINK-13901: --- Summary: Documentation links check errors in release-1.9 Key: FLINK-13901 URL: https://issues.apache.org/jira/browse/FLINK-13901 Project: Flink Issue Type: Bug Components: Documentation Affects Versions: 1.9.0 Reporter: Jark Wu Fix For: 1.9.1 [2019-08-29 16:04:44] ERROR `/zh/dev/table/config.html' not found. [2019-08-29 16:04:47] ERROR `/zh/dev/table/catalog.html' not found. http://localhost:4000/zh/dev/table/config.html: Remote file does not exist -- broken link!!! -- This message was sent by Atlassian Jira (v8.3.2#803003)
[GitHub] [flink] flinkbot edited a comment on issue #8727: [FLINK-12828][sql-client] Support -f option with a sql script file as…
flinkbot edited a comment on issue #8727: [FLINK-12828][sql-client] Support -f option with a sql script file as… URL: https://github.com/apache/flink/pull/8727#issuecomment-501633487 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 94b5fa9694c53cd89229def77690088e396da8e5 (Fri Aug 30 02:09:39 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] bowenli86 commented on a change in pull request #8727: [FLINK-12828][sql-client] Support -f option with a sql script file as…
bowenli86 commented on a change in pull request #8727: [FLINK-12828][sql-client] Support -f option with a sql script file as… URL: https://github.com/apache/flink/pull/8727#discussion_r316403702 ## File path: flink-table/flink-sql-client/src/test/resources/one-DML-statement.sql ## @@ -0,0 +1 @@ +INSERT INTO MyTable SELECT * FROM MyOtherTable; Review comment: add a license header? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Comment Edited] (FLINK-12164) JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout is unstable
[ https://issues.apache.org/jira/browse/FLINK-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919110#comment-16919110 ] Jark Wu edited comment on FLINK-12164 at 8/30/19 2:07 AM: -- Another instance in release-1.9: https://api.travis-ci.org/v3/job/578428491/log.txt was (Author: jark): Another instance: https://api.travis-ci.org/v3/job/578428491/log.txt > JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout is unstable > > > Key: FLINK-12164 > URL: https://issues.apache.org/jira/browse/FLINK-12164 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Reporter: Aljoscha Krettek >Assignee: Biao Liu >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > {code} > 07:28:23.957 [ERROR] Tests run: 24, Failures: 0, Errors: 1, Skipped: 0, Time > elapsed: 8.968 s <<< FAILURE! - in > org.apache.flink.runtime.jobmaster.JobMasterTest > 07:28:23.957 [ERROR] > testJobFailureWhenTaskExecutorHeartbeatTimeout(org.apache.flink.runtime.jobmaster.JobMasterTest) > Time elapsed: 0.177 s <<< ERROR! > java.util.concurrent.ExecutionException: java.lang.Exception: Unknown > TaskManager 69a7c8c18a36069ff90a1eae8ec41066 > at > org.apache.flink.runtime.jobmaster.JobMasterTest.registerSlotsAtJobMaster(JobMasterTest.java:1746) > at > org.apache.flink.runtime.jobmaster.JobMasterTest.runJobFailureWhenTaskExecutorTerminatesTest(JobMasterTest.java:1670) > at > org.apache.flink.runtime.jobmaster.JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout(JobMasterTest.java:1630) > Caused by: java.lang.Exception: Unknown TaskManager > 69a7c8c18a36069ff90a1eae8ec41066 > {code} -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (FLINK-12164) JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout is unstable
[ https://issues.apache.org/jira/browse/FLINK-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-12164: Affects Version/s: 1.9.0 > JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout is unstable > > > Key: FLINK-12164 > URL: https://issues.apache.org/jira/browse/FLINK-12164 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.9.0 >Reporter: Aljoscha Krettek >Assignee: Biao Liu >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > {code} > 07:28:23.957 [ERROR] Tests run: 24, Failures: 0, Errors: 1, Skipped: 0, Time > elapsed: 8.968 s <<< FAILURE! - in > org.apache.flink.runtime.jobmaster.JobMasterTest > 07:28:23.957 [ERROR] > testJobFailureWhenTaskExecutorHeartbeatTimeout(org.apache.flink.runtime.jobmaster.JobMasterTest) > Time elapsed: 0.177 s <<< ERROR! > java.util.concurrent.ExecutionException: java.lang.Exception: Unknown > TaskManager 69a7c8c18a36069ff90a1eae8ec41066 > at > org.apache.flink.runtime.jobmaster.JobMasterTest.registerSlotsAtJobMaster(JobMasterTest.java:1746) > at > org.apache.flink.runtime.jobmaster.JobMasterTest.runJobFailureWhenTaskExecutorTerminatesTest(JobMasterTest.java:1670) > at > org.apache.flink.runtime.jobmaster.JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout(JobMasterTest.java:1630) > Caused by: java.lang.Exception: Unknown TaskManager > 69a7c8c18a36069ff90a1eae8ec41066 > {code} -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Commented] (FLINK-12164) JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout is unstable
[ https://issues.apache.org/jira/browse/FLINK-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919110#comment-16919110 ] Jark Wu commented on FLINK-12164: - Another instance: https://api.travis-ci.org/v3/job/578428491/log.txt > JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout is unstable > > > Key: FLINK-12164 > URL: https://issues.apache.org/jira/browse/FLINK-12164 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Reporter: Aljoscha Krettek >Assignee: Biao Liu >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > {code} > 07:28:23.957 [ERROR] Tests run: 24, Failures: 0, Errors: 1, Skipped: 0, Time > elapsed: 8.968 s <<< FAILURE! - in > org.apache.flink.runtime.jobmaster.JobMasterTest > 07:28:23.957 [ERROR] > testJobFailureWhenTaskExecutorHeartbeatTimeout(org.apache.flink.runtime.jobmaster.JobMasterTest) > Time elapsed: 0.177 s <<< ERROR! > java.util.concurrent.ExecutionException: java.lang.Exception: Unknown > TaskManager 69a7c8c18a36069ff90a1eae8ec41066 > at > org.apache.flink.runtime.jobmaster.JobMasterTest.registerSlotsAtJobMaster(JobMasterTest.java:1746) > at > org.apache.flink.runtime.jobmaster.JobMasterTest.runJobFailureWhenTaskExecutorTerminatesTest(JobMasterTest.java:1670) > at > org.apache.flink.runtime.jobmaster.JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout(JobMasterTest.java:1630) > Caused by: java.lang.Exception: Unknown TaskManager > 69a7c8c18a36069ff90a1eae8ec41066 > {code} -- This message was sent by Atlassian Jira (v8.3.2#803003)
[GitHub] [flink] flinkbot edited a comment on issue #9494: [FLINK-12847] [Connectors / Kinesis] update flink-connector-kinesis to use Apache 2.0 licensed code
flinkbot edited a comment on issue #9494: [FLINK-12847] [Connectors / Kinesis] update flink-connector-kinesis to use Apache 2.0 licensed code URL: https://github.com/apache/flink/pull/9494#issuecomment-523025067 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 5af3517a366a5efc85358dfabed7c1bbcafd560e (Fri Aug 30 02:02:32 UTC 2019) **Warnings:** * **3 pom.xml files were touched**: Check for build and licensing issues. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9501: [FLINK-12697] [State Backends] Support on-disk state storage for spill-able heap backend
flinkbot edited a comment on issue #9501: [FLINK-12697] [State Backends] Support on-disk state storage for spill-able heap backend URL: https://github.com/apache/flink/pull/9501#issuecomment-523767672 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit df676fa3d969772b8995538adc147bbb86c36189 (Fri Aug 30 02:01:30 UTC 2019) **Warnings:** * **1 pom.xml files were touched**: Check for build and licensing issues. * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] bowenli86 commented on issue #9494: [FLINK-12847] [Connectors / Kinesis] update flink-connector-kinesis to use Apache 2.0 licensed code
bowenli86 commented on issue #9494: [FLINK-12847] [Connectors / Kinesis] update flink-connector-kinesis to use Apache 2.0 licensed code URL: https://github.com/apache/flink/pull/9494#issuecomment-526425843 @dyanarose can you rebase onto the latest master? Otherwise, LGTM. @zentol @tweise if you don't have other concerns, I'll merge it once the rebase is done and the build passes. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] carp84 commented on a change in pull request #9501: [FLINK-12697] [State Backends] Support on-disk state storage for spill-able heap backend
carp84 commented on a change in pull request #9501: [FLINK-12697] [State Backends] Support on-disk state storage for spill-able heap backend URL: https://github.com/apache/flink/pull/9501#discussion_r319333262 ## File path: flink-core/src/main/java/org/apache/flink/core/memory/UnsafeHelp.java ## @@ -0,0 +1,328 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.core.memory; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import sun.misc.Unsafe; + +import javax.annotation.Nonnull; + +import java.lang.reflect.Field; +import java.lang.reflect.Method; +import java.nio.ByteBuffer; +import java.nio.ByteOrder; +import java.security.AccessController; +import java.security.PrivilegedAction; + +/** + * Helper for {@code sun.misc.Unsafe} based memory access. + */ +public class UnsafeHelp { Review comment: Yes, it mainly refers to HBase's [UnsafeAccess](https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAccess.java); I will add a comment to mention the source. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
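For readers unfamiliar with such helpers: classes like HBase's UnsafeAccess obtain the Unsafe singleton reflectively, because Unsafe.getUnsafe() rejects callers outside the JDK. The sketch below shows that common pattern; it is illustrative only and is not the PR's UnsafeHelp class.

```java
import java.lang.reflect.Field;

import sun.misc.Unsafe;

public final class UnsafeSketch {

    // Obtain the Unsafe singleton via the private "theUnsafe" field.
    static final Unsafe UNSAFE = loadUnsafe();

    private static Unsafe loadUnsafe() {
        try {
            Field field = Unsafe.class.getDeclaredField("theUnsafe");
            field.setAccessible(true);
            return (Unsafe) field.get(null);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("sun.misc.Unsafe is not available", e);
        }
    }

    public static void main(String[] args) {
        // Allocate 8 bytes off-heap, write and read a long, then free the memory.
        long address = UNSAFE.allocateMemory(8);
        UNSAFE.putLong(address, 42L);
        System.out.println(UNSAFE.getLong(address));  // prints 42
        UNSAFE.freeMemory(address);
    }
}
```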
[GitHub] [flink] flinkbot edited a comment on issue #9502: [FLINK-13814][hive] HiveTableSink should strip quotes from partition …
flinkbot edited a comment on issue #9502: [FLINK-13814][hive] HiveTableSink should strip quotes from partition … URL: https://github.com/apache/flink/pull/9502#issuecomment-523777539 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 2215b05905a52ac58a828ce5b817481b0b132d8d (Thu Aug 29 23:42:03 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] bowenli86 commented on issue #9502: [FLINK-13814][hive] HiveTableSink should strip quotes from partition …
bowenli86 commented on issue #9502: [FLINK-13814][hive] HiveTableSink should strip quotes from partition … URL: https://github.com/apache/flink/pull/9502#issuecomment-526400173 The following test is failing on my machine: ``` [INFO] Results: [INFO] [ERROR] Failures: [ERROR] TableEnvHiveConnectorTest.testStaticPartition:203->verifyHiveQueryResult:253 expected:<[2 1'1 1.1, 1 1'1 1.1]> but was:<[2 '1''1' 1.1, 1 '1''1' 1.1]> [INFO] [ERROR] Tests run: 235, Failures: 1, Errors: 0, Skipped: 0 ``` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
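The expected-vs-actual output above suggests what the fix has to do: a partition value written as '1''1' in the DDL should end up stored as 1'1, i.e. the surrounding single quotes are stripped and doubled quotes inside are collapsed. A hedged sketch of that normalization (an assumption for illustration, not the actual HiveTableSink change):

```java
public class PartitionValueSketch {

    /** Strip surrounding single quotes and collapse escaped quotes ('' -> '). */
    static String stripQuotes(String raw) {
        if (raw.length() >= 2 && raw.startsWith("'") && raw.endsWith("'")) {
            return raw.substring(1, raw.length() - 1).replace("''", "'");
        }
        return raw;
    }

    public static void main(String[] args) {
        System.out.println(stripQuotes("'1''1'"));  // prints 1'1
        System.out.println(stripQuotes("1.1"));     // unquoted values pass through unchanged
    }
}
```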
[jira] [Assigned] (FLINK-13877) Support Hive version 2.1.0 and 2.1.1
[ https://issues.apache.org/jira/browse/FLINK-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bowen Li reassigned FLINK-13877: Assignee: Xuefu Zhang > Support Hive version 2.1.0 and 2.1.1 > > > Key: FLINK-13877 > URL: https://issues.apache.org/jira/browse/FLINK-13877 > Project: Flink > Issue Type: Improvement > Components: Connectors / Hive >Reporter: Xuefu Zhang >Assignee: Xuefu Zhang >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > This is to support Hive 2.1 versions (2.1.0 and 2.1.1). -- This message was sent by Atlassian Jira (v8.3.2#803003)
[GitHub] [flink] flinkbot edited a comment on issue #9547: [FLINK-13877][hive] Support Hive version 2.1.0 and 2.1.1
flinkbot edited a comment on issue #9547: [FLINK-13877][hive] Support Hive version 2.1.0 and 2.1.1 URL: https://github.com/apache/flink/pull/9547#issuecomment-525512048 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit b4db37b1c3f0b858183732495d06390a3023ae3c (Thu Aug 29 23:30:52 UTC 2019) **Warnings:** * **1 pom.xml files were touched**: Check for build and licensing issues. * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Closed] (FLINK-13877) Support Hive version 2.1.0 and 2.1.1
[ https://issues.apache.org/jira/browse/FLINK-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bowen Li closed FLINK-13877. Resolution: Fixed merged in master: a40b31bf22efe1bee9d0def1730eaa25ef1c0c52 > Support Hive version 2.1.0 and 2.1.1 > > > Key: FLINK-13877 > URL: https://issues.apache.org/jira/browse/FLINK-13877 > Project: Flink > Issue Type: Improvement > Components: Connectors / Hive >Reporter: Xuefu Zhang >Assignee: Xuefu Zhang >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > This is to support Hive 2.1 versions (2.1.0 and 2.1.1). -- This message was sent by Atlassian Jira (v8.3.2#803003)
[GitHub] [flink] asfgit closed pull request #9547: [FLINK-13877][hive] Support Hive version 2.1.0 and 2.1.1
asfgit closed pull request #9547: [FLINK-13877][hive] Support Hive version 2.1.0 and 2.1.1 URL: https://github.com/apache/flink/pull/9547 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9502: [FLINK-13814][hive] HiveTableSink should strip quotes from partition …
flinkbot edited a comment on issue #9502: [FLINK-13814][hive] HiveTableSink should strip quotes from partition … URL: https://github.com/apache/flink/pull/9502#issuecomment-523777539 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 2215b05905a52ac58a828ce5b817481b0b132d8d (Thu Aug 29 23:01:22 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] bowenli86 commented on issue #9502: [FLINK-13814][hive] HiveTableSink should strip quotes from partition …
bowenli86 commented on issue #9502: [FLINK-13814][hive] HiveTableSink should strip quotes from partition … URL: https://github.com/apache/flink/pull/9502#issuecomment-526392413 LGTM, merging This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #8121: [FLINK-10437]Some of keys under withDeprecatedKeys aren't marked as @…
flinkbot edited a comment on issue #8121: [FLINK-10437]Some of keys under withDeprecatedKeys aren't marked as @… URL: https://github.com/apache/flink/pull/8121#issuecomment-526352237 ## CI report: * 4b5cad1779513318ef2aeb8dce5fecf212571ac3 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125145275) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9383: [FLINK-13248] [runtime] Adding processing of downstream messages for blocking operators
flinkbot edited a comment on issue #9383: [FLINK-13248] [runtime] Adding processing of downstream messages for blocking operators URL: https://github.com/apache/flink/pull/9383#issuecomment-519130955 ## CI report: * 5d8448c4813f5b362f98f898998f1278f062d807 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/122292142) * 4d628935e8899d6019566bfc93b5c688bc1835ec : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/122941321) * d7c0bd5edc65110910d79ca7c7bf2139672f8c02 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123229382) * b7a19fe5d83ee271e7560f90fbf07a7703937273 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123235786) * 7650b3b19b05ed6a121566d7c19d5e7bc71489fa : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123332630) * 2493723ebd2c307f47bbdfcf154a31ab97cda312 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123335647) * f3f0fe6d16ef3bba35d06a797196f94f372701ff : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123336279) * c6ee15104ee678c239367670773723920e34c26d : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123371348) * d0e4fbf25a8ff9982171ed982868b51ad851aaf0 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/123472764) * 741386a495a5657bb654dcd0168f2d42873445e7 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/123476701) * 05e27c097851c65bd9a405b4aae376e2ef6c2b50 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123645059) * 8fba78f22bc6c0d042cb1dde270c02af08d98bbd : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/123696981) * 681ac331e4c0b547e1d410b448bc34ff651dbc6a : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/123698096) * e18739460fbcf7c59be3c9121fc26fc279e0353a : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123703814) * 670affbc0ae2883e93bb2f6ca3c1300fb78f26c5 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123726809) * 50a91cbc74c645576432d25d40e7a42190ac28e8 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123758034) * 2fba64d8e9fa4939c5a6c3fb3d758d55ca344b6c : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123842040) * 0507aa67d2f7183c3a7e4556fbf7732414647ac7 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/123855539) * 9a293ae331986d8fae16f619158f1e59572cd1e9 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/123856922) * 2bb1bc290a3fc8e4dc843be781063515e86509d8 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/123859379) * 9ff3bc6cdeaff18473a075076ae4931bbdae7173 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124474961) * bfc5c6d643c502ec3a7ba181d4a80286af08ca50 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124487989) * 0b54f92f49a56077b36856c4eda400938f2cda75 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124488476) * 97b0b55c896ce68a8540758a38321ddbfac139c5 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124489022) * fae34dfb176d977e452dd5a3e7af8838341e565f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124489295) * f312b58169cdd3ce32d603c5ef410b98064c084c : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/124534465) * 4125a241c713c467c47cb49aacfea5b99bbe4b47 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124544764) * 8308ec556edf17ce57d9ff377f646f6f1bbd47fc : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124548686) * ffe70dd34f21b84eeb6f448ab1611e5a6a0ecaa3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/124550076) * b925c47fa923e0f90fd0a4ee0205ae29fe32662e : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124554252) * 7815bc0047f41fd2d5dc3dcbe4adbd7be514e0da : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124561559) * 5821301ec74ba680e5e530fe1f698ed5c12f4b11 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/124578295) * 9e6aa380dc9d286316c372981df091dea77b28df : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/124700113) * 355736bc75b13d3e9caeea1fb03c510019985bf8 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124765237) * 85a77bc0ffdd7a4e06e1a183950ec075b21f030f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124872507) * e311962f6b0177494709cd66770bb4ef6ef11d3a : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/124889848) * 3d2d4be4b8b4430247a41a8e27e4305b82c99345 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/124929657) * 5701af5109106c4fc5c062e5f0501201429d61bf : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124974019) * 6e4b34a3ab74d394faa1b0df639c5232f64cedd3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125053981) * 8e4f5b91e2b25edbf8ad5aac412f0c9791df7a0f : FAILURE [Build](https://travis-ci.c
[GitHub] [flink] flinkbot edited a comment on issue #8121: [FLINK-10437]Some of keys under withDeprecatedKeys aren't marked as @…
flinkbot edited a comment on issue #8121: [FLINK-10437]Some of keys under withDeprecatedKeys aren't marked as @… URL: https://github.com/apache/flink/pull/8121#issuecomment-526352237 ## CI report: * 4b5cad1779513318ef2aeb8dce5fecf212571ac3 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/125145275) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9205: [FLINK-13240] Wrong check argument error messages and comments in QueryableStateConfiguration
flinkbot edited a comment on issue #9205: [FLINK-13240] Wrong check argument error messages and comments in QueryableStateConfiguration URL: https://github.com/apache/flink/pull/9205#issuecomment-514118154 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 574df76362382a105bf9c13329c070d2e5fa92aa (Thu Aug 29 20:50:59 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Closed] (FLINK-13240) Wrong check argument error messages and comments in QueryableStateConfiguration
[ https://issues.apache.org/jira/browse/FLINK-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler closed FLINK-13240. Fix Version/s: 1.10.0 Resolution: Fixed master: 21c981443b08a5cec7167c347d82cdaeecee37cc
> Wrong check argument error messages and comments in QueryableStateConfiguration
> ---
>
> Key: FLINK-13240
> URL: https://issues.apache.org/jira/browse/FLINK-13240
> Project: Flink
> Issue Type: Improvement
> Components: Runtime / Queryable State
> Reporter: vinoyang
> Assignee: vinoyang
> Priority: Minor
> Labels: pull-request-available
> Fix For: 1.10.0
>
> Time Spent: 20m
> Remaining Estimate: 0h
>
> The relevant code snippet is:
> {code:java}
> checkArgument(numProxyThreads >= 0, "queryable state number of server threads must be zero or larger");
> checkArgument(numPQueryThreads >= 0, "queryable state number of query threads must be zero or larger");
> checkArgument(numServerThreads >= 0, "queryable state number of server threads must be zero or larger");
> checkArgument(numSQueryThreads >= 0, "queryable state number of query threads must be zero or larger");
> {code}
-- This message was sent by Atlassian Jira (v8.3.2#803003)
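The bug in the quoted snippet is that the proxy-related checks reuse the server/query wording, so the failure messages do not describe the parameter they actually guard. Below is a minimal sketch of what the corrected checks could look like; the message wording chosen in the fix commit is not reproduced here, so treat the corrected strings as an assumption.

```java
// Hypothetical corrected checks for QueryableStateConfiguration;
// the message wording is an assumption, not the text of the actual fix.
import static org.apache.flink.util.Preconditions.checkArgument;

final class QueryableStateConfigChecks {
    static void validate(int numProxyThreads, int numPQueryThreads,
                         int numServerThreads, int numSQueryThreads) {
        // Proxy-side checks should mention the proxy, not the server.
        checkArgument(numProxyThreads >= 0,
                "queryable state number of proxy threads must be zero or larger");
        checkArgument(numPQueryThreads >= 0,
                "queryable state number of proxy query threads must be zero or larger");
        // Server-side checks keep the original wording.
        checkArgument(numServerThreads >= 0,
                "queryable state number of server threads must be zero or larger");
        checkArgument(numSQueryThreads >= 0,
                "queryable state number of server query threads must be zero or larger");
    }
}
```

The point is only that each message now names the parameter it validates, so a configuration error reports the thread pool that is actually misconfigured.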
[GitHub] [flink] zentol merged pull request #9205: [FLINK-13240] Wrong check argument error messages and comments in QueryableStateConfiguration
zentol merged pull request #9205: [FLINK-13240] Wrong check argument error messages and comments in QueryableStateConfiguration URL: https://github.com/apache/flink/pull/9205 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #8967: [FLINK-13059][Cassandra Connector] Release Semaphore correctly on Exception in send()
flinkbot edited a comment on issue #8967: [FLINK-13059][Cassandra Connector] Release Semaphore correctly on Exception in send() URL: https://github.com/apache/flink/pull/8967#issuecomment-511307628 ## CI report: * 7fe85952cde10780b93bf27d784b77f1cc381d11 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119431068) * 811a8032918c18225f88f120a58de7e80b625ab2 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119441267) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
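For context, the PR title describes a standard resource-safety pattern: a permit acquired before an asynchronous send must be released again if `send()` itself throws, otherwise the permit leaks and the sink can eventually stall waiting for permits that never come back. The following is a minimal, self-contained sketch of that pattern assuming a plain `Semaphore`-throttled sender; the class and method names are illustrative and not the actual Flink Cassandra connector code.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;

// Illustrative sketch only: release the permit if submitting the request throws.
public class ThrottledAsyncSender {

    private final Semaphore permits;

    public ThrottledAsyncSender(int maxConcurrentRequests) {
        this.permits = new Semaphore(maxConcurrentRequests);
    }

    public CompletableFuture<Void> send(Runnable request) throws InterruptedException {
        permits.acquire();
        final CompletableFuture<Void> future;
        try {
            future = CompletableFuture.runAsync(request);
        } catch (Throwable t) {
            // Without this release, a failed submission would leak the permit forever.
            permits.release();
            throw t;
        }
        // Normal path: the permit is returned when the request completes.
        future.whenComplete((ignored, error) -> permits.release());
        return future;
    }
}
```

The `try`/`catch` around the submission is the crux of the pattern: the release in the exceptional path mirrors the release in the completion callback, so every successful `acquire()` is matched by exactly one `release()`.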
[GitHub] [flink] flinkbot edited a comment on issue #9383: [FLINK-13248] [runtime] Adding processing of downstream messages for blocking operators
flinkbot edited a comment on issue #9383: [FLINK-13248] [runtime] Adding processing of downstream messages for blocking operators URL: https://github.com/apache/flink/pull/9383#issuecomment-519129010 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 8dca7860d8983eddb7542cb6a7e92d8298af3360 (Thu Aug 29 20:47:56 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! * **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-13248).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] zentol merged pull request #9178: Typo in `scala_api_quickstart.md`
zentol merged pull request #9178: Typo in `scala_api_quickstart.md` URL: https://github.com/apache/flink/pull/9178 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services