[GitHub] [flink] flinkbot commented on pull request #14638: [FLINK-20966][table-planner-blink] Rename Stream(/Batch)ExecIntermediateTableScan to Stream(/Batch)PhysicalIntermediateTableScan
flinkbot commented on pull request #14638: URL: https://github.com/apache/flink/pull/14638#issuecomment-759991130

## CI report:

* 99448538758f042069a6e9eb6e319726a54ca92c UNKNOWN

Bot commands: The @flinkbot bot supports the following commands:
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14637: [FLINK-20949][table-planner-blink] Separate the implementation of sink nodes
flinkbot edited a comment on pull request #14637: URL: https://github.com/apache/flink/pull/14637#issuecomment-759980085

## CI report:

* 8b2bf9a0a0d1c74ea4a6d1712dccb415dcd34a4d Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12028)
[GitHub] [flink] flinkbot edited a comment on pull request #14631: [FLINK-20885][canal][json] Deserialization exception when using 'canal-json.table.include' to filter out the binlog of the specified
flinkbot edited a comment on pull request #14631: URL: https://github.com/apache/flink/pull/14631#issuecomment-759388845

## CI report:

* e283ba5376620acf2206a9c9ff7a4cdc9ba9 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12018)
* 571a38f52edb662a2b5c8a157ef96a52ad5ddb68 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12027)
[GitHub] [flink] flinkbot edited a comment on pull request #14627: [FLINK-20946][python] Optimize Python ValueState Implementation In PyFlink
flinkbot edited a comment on pull request #14627: URL: https://github.com/apache/flink/pull/14627#issuecomment-759272918

## CI report:

* a0e6a39605a3d1a80f75f8764534dfb82d08c31f Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11971)
* ff00387c962b937a5dabaaeb9241acfdbc9e49ab UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #14621: [FLINK-20933][python] Config Python Operator Use Managed Memory In Python DataStream
flinkbot edited a comment on pull request #14621: URL: https://github.com/apache/flink/pull/14621#issuecomment-75873

## CI report:

* 1b468d46e941274186770f42412fc0725992f364 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11940)
* dab22fa1d5e18201415322cbd928767223258e16 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12026)
[GitHub] [flink] flinkbot edited a comment on pull request #14626: [FLINK-20948][table-planner-blink] Introduce StreamPhysicalDeduplicate, and make StreamExecDeduplicate only extended from ExecNode
flinkbot edited a comment on pull request #14626: URL: https://github.com/apache/flink/pull/14626#issuecomment-759272820

## CI report:

* 1387747f865a79d120ee7f5d6f24685c90e0076f Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11970)
* c5c5c0750d0bca932e319cb507e54897e0b2c4ec Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12023)
* 9bf89577783692484d25df4186459cc1aaeb6000 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #14614: [FLINK-20911][state-backend-rocksdb] Support configuration of RocksDB log level
flinkbot edited a comment on pull request #14614: URL: https://github.com/apache/flink/pull/14614#issuecomment-758512892

## CI report:

* 879417b9fbfe847c1a90de5caef7817edd10d69a Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11931)
* f3a5b0e4feb3bf16edecba4946fc0d160d7a4bce UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #14536: [FLINK-20812][Connector][Hbase] hbase in sql mode,can use 'properties.*' add Configuration parameter.
flinkbot edited a comment on pull request #14536: URL: https://github.com/apache/flink/pull/14536#issuecomment-752909965

## CI report:

* a4c6b2e3d222c1679ce19d21a7f108d63d8dc3fc UNKNOWN
* e8c1e77209e80aa39985342f01c3d8d566220d1a UNKNOWN
* ba7aceff1a94c93ce89ed15359c992f62ad83e93 UNKNOWN
* 3c42f8358ae07557917ce71eae8d092ed501b45d UNKNOWN
* 477cd8c0b5b31588f7bd0174e0d87393a6df19ca UNKNOWN
* 6558b9f806137c611bd648d3b9672c415a23b061 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12004)
* cc47bd3f4429803694fed9fb9853827b11ed124c UNKNOWN
* 3baca24234153ee39be0bc8ddd6a4e565e3d1eca Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12021)
* a5cb7af54aab9fcd01b83e348949fc3623d34d26 UNKNOWN
[GitHub] [flink] chaozwn commented on pull request #14536: [FLINK-20812][Connector][Hbase] hbase in sql mode,can use 'properties.*' add Configuration parameter.
chaozwn commented on pull request #14536: URL: https://github.com/apache/flink/pull/14536#issuecomment-759990275

> Thanks for the contribution @chaozwn , I rebased and squashed the pull request. Will merge it once build is passed.
>
> I found that the documentation is missed? Could you create an issue to add docs? And would be great if you can take it too.

OK, I will do it.
[GitHub] [flink] flinkbot edited a comment on pull request #14530: [FLINK-20348][kafka] Make "schema-registry.subject" optional for Kafka sink with avro-confluent format
flinkbot edited a comment on pull request #14530: URL: https://github.com/apache/flink/pull/14530#issuecomment-752828495

## CI report:

* Unknown: [CANCELED](TBD)
* eaa0445691254f544c231ec0d2d55519af277b33 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12025)
[GitHub] [flink] wuchong edited a comment on pull request #14536: [FLINK-20812][Connector][Hbase] hbase in sql mode,can use 'properties.*' add Configuration parameter.
wuchong edited a comment on pull request #14536: URL: https://github.com/apache/flink/pull/14536#issuecomment-759989135

Thanks for the contribution @chaozwn, I rebased and squashed the pull request. Will merge it once the build has passed.

I noticed that the documentation is missing. Could you create an issue to add docs? It would be great if you could take that on too.
[GitHub] [flink] wuchong commented on pull request #14536: [FLINK-20812][Connector][Hbase] hbase in sql mode,can use 'properties.*' add Configuration parameter.
wuchong commented on pull request #14536: URL: https://github.com/apache/flink/pull/14536#issuecomment-759989135

Thanks for the contribution @chaozwn, I rebased and squashed the pull request. Will merge it once the build has passed.

I noticed that the documentation is missing. Could you create an issue to add docs? It would be great if you could take it on too.
[GitHub] [flink] HuangXingBo commented on pull request #14627: [FLINK-20946][python] Optimize Python ValueState Implementation In PyFlink
HuangXingBo commented on pull request #14627: URL: https://github.com/apache/flink/pull/14627#issuecomment-759982957

@WeiZhong94 Thanks a lot for the suggestion. I have updated the PR at the latest commit.
[GitHub] [flink] chaozwn commented on a change in pull request #14536: [FLINK-20812][Connector][Hbase] hbase in sql mode,can use 'properties.*' add Configuration parameter.
chaozwn commented on a change in pull request #14536: URL: https://github.com/apache/flink/pull/14536#discussion_r557100620

## File path: flink-connectors/flink-connector-hbase-1.4/src/main/java/org/apache/flink/connector/hbase1/HBase1DynamicTableFactory.java

@@ -74,13 +72,13 @@ public DynamicTableSource createDynamicTableSource(Context context) {
        validatePrimaryKey(tableSchema);
        validateTableSourceOptions(tableOptions);

Review comment: validatePrimaryKey is necessary; the other can be removed.
[GitHub] [flink] chaozwn commented on a change in pull request #14536: [FLINK-20812][Connector][Hbase] hbase in sql mode,can use 'properties.*' add Configuration parameter.
chaozwn commented on a change in pull request #14536: URL: https://github.com/apache/flink/pull/14536#discussion_r557099873

## File path: flink-connectors/flink-connector-hbase-2.2/src/test/java/org/apache/flink/connector/hbase2/HBaseDynamicTableFactoryTest.java

@@ -181,6 +185,25 @@ public void testTableSinkFactory() {
        new DataType[] {DECIMAL(10, 3), TIMESTAMP(3), DATE(), TIME()},
        hbaseSchema.getQualifierDataTypes("f4"));

+    // verify hadoop Configuration
+    org.apache.hadoop.conf.Configuration expectedConfiguration =
+        HBaseConfigurationUtil.getHBaseConfiguration();
+    expectedConfiguration.set(HConstants.ZOOKEEPER_QUORUM, "localhost:2181");
+    expectedConfiguration.set(HConstants.ZOOKEEPER_ZNODE_PARENT, "/flink");
+    expectedConfiguration.set("hbase.security.authentication", "kerberos");
+    Map<String, String> expectedProperties =

Review comment: solved
[GitHub] [flink] chaozwn commented on a change in pull request #14536: [FLINK-20812][Connector][Hbase] hbase in sql mode,can use 'properties.*' add Configuration parameter.
chaozwn commented on a change in pull request #14536: URL: https://github.com/apache/flink/pull/14536#discussion_r557099740

## File path: flink-connectors/flink-connector-hbase-1.4/src/test/java/org/apache/flink/connector/hbase1/HBaseDynamicTableFactoryTest.java

@@ -181,6 +185,25 @@ public void testTableSinkFactory() {
        new DataType[] {DECIMAL(10, 3), TIMESTAMP(3), DATE(), TIME()},
        hbaseSchema.getQualifierDataTypes("f4"));

+    // verify hadoop Configuration
+    org.apache.hadoop.conf.Configuration expectedConfiguration =
+        HBaseConfigurationUtil.getHBaseConfiguration();
+    expectedConfiguration.set(HConstants.ZOOKEEPER_QUORUM, "localhost:2181");
+    expectedConfiguration.set(HConstants.ZOOKEEPER_ZNODE_PARENT, "/flink");
+    expectedConfiguration.set("hbase.security.authentication", "kerberos");
+    Map<String, String> expectedProperties =
+        Lists.newArrayList(expectedConfiguration.iterator()).stream()
+            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));

Review comment: solved
[GitHub] [flink] flinkbot commented on pull request #14638: [FLINK-20966][table-planner-blink] Rename Stream(/Batch)ExecIntermediateTableScan to Stream(/Batch)PhysicalIntermediateTableScan
flinkbot commented on pull request #14638: URL: https://github.com/apache/flink/pull/14638#issuecomment-759980402

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

## Automated Checks

Last check on commit 99448538758f042069a6e9eb6e319726a54ca92c (Thu Jan 14 07:23:49 UTC 2021)

**Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up to date!
* **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-20966).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work.

Mention the bot in a comment to re-run the automated checks.

## Review Progress

* ❓ 1. The [description] looks good.
* ❓ 2. There is [consensus] that the contribution should go into Flink.
* ❓ 3. Needs [attention] from.
* ❓ 4. The change fits into the overall [architecture].
* ❓ 5. Overall code [quality] is good.

Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands: The @flinkbot bot supports the following commands:
- `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until `architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] flinkbot commented on pull request #14637: [FLINK-20949][table-planner-blink] Separate the implementation of sink nodes
flinkbot commented on pull request #14637: URL: https://github.com/apache/flink/pull/14637#issuecomment-759980085

## CI report:

* 8b2bf9a0a0d1c74ea4a6d1712dccb415dcd34a4d UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #14631: [FLINK-20885][canal][json] Deserialization exception when using 'canal-json.table.include' to filter out the binlog of the specified
flinkbot edited a comment on pull request #14631: URL: https://github.com/apache/flink/pull/14631#issuecomment-759388845

## CI report:

* e283ba5376620acf2206a9c9ff7a4cdc9ba9 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12018)
* 571a38f52edb662a2b5c8a157ef96a52ad5ddb68 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #14621: [FLINK-20933][python] Config Python Operator Use Managed Memory In Python DataStream
flinkbot edited a comment on pull request #14621: URL: https://github.com/apache/flink/pull/14621#issuecomment-75873

## CI report:

* 1b468d46e941274186770f42412fc0725992f364 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11940)
* dab22fa1d5e18201415322cbd928767223258e16 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #14626: [FLINK-20948][table-planner-blink] Introduce StreamPhysicalDeduplicate, and make StreamExecDeduplicate only extended from ExecNode
flinkbot edited a comment on pull request #14626: URL: https://github.com/apache/flink/pull/14626#issuecomment-759272820

## CI report:

* 1387747f865a79d120ee7f5d6f24685c90e0076f Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11970)
* c5c5c0750d0bca932e319cb507e54897e0b2c4ec Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12023)
[GitHub] [flink] flinkbot edited a comment on pull request #14625: [FLINK-20941][table-planner-blink] Introduce StreamPhysicalMatch, and make StreamExecMatch only extended from ExecNode
flinkbot edited a comment on pull request #14625: URL: https://github.com/apache/flink/pull/14625#issuecomment-759222713

## CI report:

* fdf9c3005c53149eb7be9501c3c451c4e5cb77e5 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11963)
* af83d032e2bd478541c9fa548866ce39318f Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12022)
[GitHub] [flink] flinkbot edited a comment on pull request #14616: [FLINK-20879][table-api] Use MemorySize type instead of String type for memory ConfigOption in ExecutionConfigOptions
flinkbot edited a comment on pull request #14616: URL: https://github.com/apache/flink/pull/14616#issuecomment-758526377

## CI report:

* 448c026a402e045e050f405daf934a8a7c880c9d UNKNOWN
* e57906f184411f95085433096ec5cffb2ec7ed88 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12016)
* aac6ca6fa0792f2ecb4d982545fea6403115f24b UNKNOWN
[jira] [Updated] (FLINK-20966) Rename Stream(/Batch)ExecIntermediateTableScan to Stream(/Batch)PhysicalIntermediateTableScan
[ https://issues.apache.org/jira/browse/FLINK-20966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-20966:
-----------------------------------
    Labels: pull-request-available  (was: )

> Rename Stream(/Batch)ExecIntermediateTableScan to Stream(/Batch)PhysicalIntermediateTableScan
>
> Key: FLINK-20966
> URL: https://issues.apache.org/jira/browse/FLINK-20966
> Project: Flink
> Issue Type: Sub-task
> Components: Table SQL / Planner
> Reporter: godfrey he
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.13.0

--
This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on pull request #14530: [FLINK-20348][kafka] Make "schema-registry.subject" optional for Kafka sink with avro-confluent format
flinkbot edited a comment on pull request #14530: URL: https://github.com/apache/flink/pull/14530#issuecomment-752828495

## CI report:

* Unknown: [CANCELED](TBD)
* eaa0445691254f544c231ec0d2d55519af277b33 UNKNOWN
[GitHub] [flink] godfreyhe opened a new pull request #14638: [FLINK-20966][table-planner-blink] Rename Stream(/Batch)ExecIntermediateTableScan to Stream(/Batch)PhysicalIntermediateTableScan
godfreyhe opened a new pull request #14638: URL: https://github.com/apache/flink/pull/14638

## What is the purpose of the change

*Rename Stream(/Batch)ExecIntermediateTableScan to Stream(/Batch)PhysicalIntermediateTableScan*

## Brief change log

- *Rename StreamExecIntermediateTableScan to StreamPhysicalIntermediateTableScan*
- *Rename BatchExecIntermediateTableScan to BatchPhysicalIntermediateTableScan*

## Verifying this change

This change is a refactoring rework covered by existing tests.

## Does this pull request potentially affect one of the following parts:

- Dependencies (does it add or upgrade a dependency): (yes / **no**)
- The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**)
- The serializers: (yes / **no** / don't know)
- The runtime per-record code paths (performance sensitive): (yes / **no** / don't know)
- Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
- The S3 file system connector: (yes / **no** / don't know)

## Documentation

- Does this pull request introduce a new feature? (yes / **no**)
- If yes, how is the feature documented? (not applicable / docs / JavaDocs / **not documented**)
[jira] [Commented] (FLINK-20921) Fix Date/Time/Timestamp in Python DataStream
[ https://issues.apache.org/jira/browse/FLINK-20921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17264656#comment-17264656 ]

Dian Fu commented on FLINK-20921:
---------------------------------
Fixed in release-1.12 via a7deac4d769d3f5cee65c5be4375ea4aa40766ad

> Fix Date/Time/Timestamp in Python DataStream
>
> Key: FLINK-20921
> URL: https://issues.apache.org/jira/browse/FLINK-20921
> Project: Flink
> Issue Type: Bug
> Components: API / Python
> Affects Versions: 1.12.0, 1.13.0
> Reporter: Huang Xingbo
> Assignee: Huang Xingbo
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.13.0, 1.12.2
>
> Currently the Date/Time/Timestamp type doesn't work in Python DataStream.
[jira] [Closed] (FLINK-20921) Fix Date/Time/Timestamp in Python DataStream
[ https://issues.apache.org/jira/browse/FLINK-20921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dian Fu closed FLINK-20921.
---------------------------
    Resolution: Fixed

> Fix Date/Time/Timestamp in Python DataStream
>
> Key: FLINK-20921
> URL: https://issues.apache.org/jira/browse/FLINK-20921
> Project: Flink
> Issue Type: Bug
> Components: API / Python
> Affects Versions: 1.12.0, 1.13.0
> Reporter: Huang Xingbo
> Assignee: Huang Xingbo
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.13.0, 1.12.2
>
> Currently the Date/Time/Timestamp type doesn't work in Python DataStream.
[GitHub] [flink] dianfu merged pull request #14636: [FLINK-20921][python] Fixes the Date/Time/Timestamp type in Python DataStream API
dianfu merged pull request #14636: URL: https://github.com/apache/flink/pull/14636
[GitHub] [flink] zhuxiaoshang commented on pull request #14530: [FLINK-20348][kafka] Make "schema-registry.subject" optional for Kafka sink with avro-confluent format
zhuxiaoshang commented on pull request #14530: URL: https://github.com/apache/flink/pull/14530#issuecomment-759976341

@flinkbot run azure
[jira] [Updated] (FLINK-20966) Rename Stream(/Batch)ExecIntermediateTableScan to Stream(/Batch)PhysicalIntermediateTableScan
[ https://issues.apache.org/jira/browse/FLINK-20966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

godfrey he updated FLINK-20966:
-------------------------------
    Summary: Rename Stream(/Batch)ExecIntermediateTableScan to Stream(/Batch)PhysicalIntermediateTableScan  (was: Rename StreamExecIntermediateTableScan to StreamPhysicalIntermediateTableScan)

> Rename Stream(/Batch)ExecIntermediateTableScan to Stream(/Batch)PhysicalIntermediateTableScan
>
> Key: FLINK-20966
> URL: https://issues.apache.org/jira/browse/FLINK-20966
> Project: Flink
> Issue Type: Sub-task
> Components: Table SQL / Planner
> Reporter: godfrey he
> Priority: Major
> Fix For: 1.13.0
[GitHub] [flink] flinkbot commented on pull request #14637: [FLINK-20949][table-planner-blink] Separate the implementation of sink nodes
flinkbot commented on pull request #14637: URL: https://github.com/apache/flink/pull/14637#issuecomment-759975866

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

## Automated Checks

Last check on commit 8b2bf9a0a0d1c74ea4a6d1712dccb415dcd34a4d (Thu Jan 14 07:13:31 UTC 2021)

**Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.
[GitHub] [flink] xiaoHoly commented on pull request #14616: [FLINK-20879][table-api] Use MemorySize type instead of String type for memory ConfigOption in ExecutionConfigOptions
xiaoHoly commented on pull request #14616: URL: https://github.com/apache/flink/pull/14616#issuecomment-759975195 The conflict has been resolved @wuchong
[jira] [Created] (FLINK-20966) Rename StreamExecIntermediateTableScan to StreamPhysicalIntermediateTableScan
godfrey he created FLINK-20966: -- Summary: Rename StreamExecIntermediateTableScan to StreamPhysicalIntermediateTableScan Key: FLINK-20966 URL: https://issues.apache.org/jira/browse/FLINK-20966 Project: Flink Issue Type: Sub-task Components: Table SQL / Planner Reporter: godfrey he Fix For: 1.13.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20949) Separate the implementation of sink nodes
[ https://issues.apache.org/jira/browse/FLINK-20949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-20949: --- Labels: pull-request-available (was: ) > Separate the implementation of sink nodes > - > > Key: FLINK-20949 > URL: https://issues.apache.org/jira/browse/FLINK-20949 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Planner >Reporter: godfrey he >Assignee: godfrey he >Priority: Major > Labels: pull-request-available > Fix For: 1.13.0 > > > including StreamExecSink, BatchExecSink, StreamExecLegacySink, > BatchExecLegacySink
[GitHub] [flink] godfreyhe opened a new pull request #14637: [FLINK-20949][table-planner-blink] Separate the implementation of sink nodes
godfreyhe opened a new pull request #14637: URL: https://github.com/apache/flink/pull/14637 ## What is the purpose of the change *Separate the implementation of sink nodes, including StreamExecSink, BatchExecSink, StreamExecLegacySink, BatchExecLegacySink* ## Brief change log - *Introduce StreamPhysicalSink, and make StreamExecSink only extended from ExecNode* - *Introduce BatchPhysicalSink, and make BatchExecSink only extended from ExecNode* - *Introduce StreamPhysicalLegacySink, and make StreamExecLegacySink only extended from ExecNode* - *Introduce BatchPhysicalLegacySink, and make BatchExecLegacySink only extended from ExecNode* - *BatchCommonSubGraphBasedOptimizer should also consider Sink node* ## Verifying this change This change is a refactoring rework covered by existing tests. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / **no**) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**) - The serializers: (yes / **no** / don't know) - The runtime per-record code paths (performance sensitive): (yes / **no** / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / don't know) - The S3 file system connector: (yes / **no** / don't know) ## Documentation - Does this pull request introduce a new feature? (yes / **no**) - If yes, how is the feature documented? (not applicable / docs / JavaDocs / **not documented**)
[jira] [Updated] (FLINK-20949) Separate the implementation of sink nodes
[ https://issues.apache.org/jira/browse/FLINK-20949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] godfrey he updated FLINK-20949: --- Description: including StreamExecSink, BatchExecSink, StreamExecLegacySink, BatchExecLegacySink > Separate the implementation of sink nodes > - > > Key: FLINK-20949 > URL: https://issues.apache.org/jira/browse/FLINK-20949 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Planner >Reporter: godfrey he >Assignee: godfrey he >Priority: Major > Fix For: 1.13.0 > > > including StreamExecSink, BatchExecSink, StreamExecLegacySink, > BatchExecLegacySink
[GitHub] [flink] SteNicholas commented on pull request #14631: [FLINK-20885][canal][json] Deserialization exception when using 'canal-json.table.include' to filter out the binlog of the specified tabl
SteNicholas commented on pull request #14631: URL: https://github.com/apache/flink/pull/14631#issuecomment-759973593 @wuchong , sorry for my mistake of changing the behavior and message of the thrown exception. I have updated the `deserialize(...)` method.
[GitHub] [flink] flinkbot edited a comment on pull request #14626: [FLINK-20948][table-planner-blink] Introduce StreamPhysicalDeduplicate, and make StreamExecDeduplicate only extended from ExecNode
flinkbot edited a comment on pull request #14626: URL: https://github.com/apache/flink/pull/14626#issuecomment-759272820 ## CI report: * 1387747f865a79d120ee7f5d6f24685c90e0076f Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11970) * c5c5c0750d0bca932e319cb507e54897e0b2c4ec UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #14625: [FLINK-20941][table-planner-blink] Introduce StreamPhysicalMatch, and make StreamExecMatch only extended from ExecNode
flinkbot edited a comment on pull request #14625: URL: https://github.com/apache/flink/pull/14625#issuecomment-759222713 ## CI report: * fdf9c3005c53149eb7be9501c3c451c4e5cb77e5 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11963) * af83d032e2bd478541c9fa548866ce39318f UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #14536: [FLINK-20812][Connector][Hbase] hbase in sql mode,can use 'properties.*' add Configuration parameter.
flinkbot edited a comment on pull request #14536: URL: https://github.com/apache/flink/pull/14536#issuecomment-752909965 ## CI report: * a4c6b2e3d222c1679ce19d21a7f108d63d8dc3fc UNKNOWN * e8c1e77209e80aa39985342f01c3d8d566220d1a UNKNOWN * ba7aceff1a94c93ce89ed15359c992f62ad83e93 UNKNOWN * 3c42f8358ae07557917ce71eae8d092ed501b45d UNKNOWN * 477cd8c0b5b31588f7bd0174e0d87393a6df19ca UNKNOWN * 6558b9f806137c611bd648d3b9672c415a23b061 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12004) * cc47bd3f4429803694fed9fb9853827b11ed124c UNKNOWN * 3baca24234153ee39be0bc8ddd6a4e565e3d1eca Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12021)
[GitHub] [flink-docker] wangyang0918 commented on a change in pull request #54: [FLINK-20915][docker] Move docker entrypoint to distribution
wangyang0918 commented on a change in pull request #54: URL: https://github.com/apache/flink-docker/pull/54#discussion_r557077935 ## File path: docker-entrypoint.sh ## @@ -41,60 +33,6 @@ drop_privs_cmd() { fi } -copy_plugins_if_required() { - if [ -z "$ENABLE_BUILT_IN_PLUGINS" ]; then -return 0 - fi - - echo "Enabling required built-in plugins" - for target_plugin in $(echo "$ENABLE_BUILT_IN_PLUGINS" | tr ';' ' '); do -echo "Linking ${target_plugin} to plugin directory" -plugin_name=${target_plugin%.jar} - -mkdir -p "${FLINK_HOME}/plugins/${plugin_name}" -if [ ! -e "${FLINK_HOME}/opt/${target_plugin}" ]; then - echo "Plugin ${target_plugin} does not exist. Exiting." - exit 1 -else - ln -fs "${FLINK_HOME}/opt/${target_plugin}" "${FLINK_HOME}/plugins/${plugin_name}" - echo "Successfully enabled ${target_plugin}" -fi - done -} - -set_config_option() { - local option=$1 - local value=$2 - - # escape periods for usage in regular expressions - local escaped_option=$(echo ${option} | sed -e "s/\./\\\./g") - - # either override an existing entry, or append a new one - if grep -E "^${escaped_option}:.*" "${CONF_FILE}" > /dev/null; then -sed -i -e "s/${escaped_option}:.*/$option: $value/g" "${CONF_FILE}" - else -echo "${option}: ${value}" >> "${CONF_FILE}" - fi -} - -set_common_options() { -set_config_option jobmanager.rpc.address ${JOB_MANAGER_RPC_ADDRESS} -set_config_option blob.server.port 6124 -set_config_option query.server.port 6125 -} - -prepare_job_manager_start() { -echo "Starting Job Manager" -copy_plugins_if_required - -set_common_options - -if [ -n "${FLINK_PROPERTIES}" ]; then -echo "${FLINK_PROPERTIES}" >> "${CONF_FILE}" -fi -envsubst < "${CONF_FILE}" > "${CONF_FILE}.tmp" && mv "${CONF_FILE}.tmp" "${CONF_FILE}" -} - Review comment: I believe setting the env(`export DISABLE_JEMALLOC=true`) is more convenient to use and could be easily integrated in all modes(docker, standalone, native-k8s). I agree that this could be done as a follow-up. 
[GitHub] [flink] xiaoHoly commented on pull request #14616: [FLINK-20879][table-api] Use MemorySize type instead of String type for memory ConfigOption in ExecutionConfigOptions
xiaoHoly commented on pull request #14616: URL: https://github.com/apache/flink/pull/14616#issuecomment-759966071 ok, a piece of cake
[GitHub] [flink] wuchong commented on pull request #14631: [FLINK-20885][canal][json] Deserialization exception when using 'canal-json.table.include' to filter out the binlog of the specified table
wuchong commented on pull request #14631: URL: https://github.com/apache/flink/pull/14631#issuecomment-759965317 There are failed tests.
[jira] [Commented] (FLINK-19158) Revisit java e2e download timeouts
[ https://issues.apache.org/jira/browse/FLINK-19158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17264640#comment-17264640 ] Huang Xingbo commented on FLINK-19158: -- failed instance happened again [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12012=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529] > Revisit java e2e download timeouts > -- > > Key: FLINK-19158 > URL: https://issues.apache.org/jira/browse/FLINK-19158 > Project: Flink > Issue Type: Improvement > Components: Build System >Affects Versions: 1.12.0 >Reporter: Robert Metzger >Assignee: Robert Metzger >Priority: Major > Labels: pull-request-available, test-stability > Fix For: 1.12.0 > > > Consider this failed test case > {code} > Test testHBase(org.apache.flink.tests.util.hbase.SQLClientHBaseITCase) is > running. > > 09:38:38,719 [main] INFO > org.apache.flink.tests.util.cache.PersistingDownloadCache[] - Downloading > https://archive.apache.org/dist/hbase/1.4.3/hbase-1.4.3-bin.tar.gz. > 09:40:38,732 [main] ERROR > org.apache.flink.tests.util.hbase.SQLClientHBaseITCase [] - > > Test testHBase(org.apache.flink.tests.util.hbase.SQLClientHBaseITCase) failed > with: > java.io.IOException: Process ([wget, -q, -P, > /home/vsts/work/1/e2e_cache/downloads/1598516010, > https://archive.apache.org/dist/hbase/1.4.3/hbase-1.4.3-bin.tar.gz]) exceeded > timeout (12) or number of retries (3). 
> at > org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlockingWithRetry(AutoClosableProcess.java:148) > at > org.apache.flink.tests.util.cache.AbstractDownloadCache.getOrDownload(AbstractDownloadCache.java:127) > at > org.apache.flink.tests.util.cache.PersistingDownloadCache.getOrDownload(PersistingDownloadCache.java:36) > at > org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.setupHBaseDist(LocalStandaloneHBaseResource.java:76) > at > org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.before(LocalStandaloneHBaseResource.java:70) > at > org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:46) > {code} > It seems that the download has not been retried. The download might be stuck? > I would propose to set a timeout per try and increase the total time from 2 > to 5 minutes. > This example is from: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6267=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529
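The fix proposed in FLINK-19158 above is to bound each download attempt by its own timeout instead of only a global one, so a stuck download is killed and retried. A minimal sketch of that retry-with-per-try-timeout pattern, assuming nothing about Flink's actual `AutoClosableProcess` API (names and shapes here are illustrative only):

```java
import java.time.Duration;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicInteger;

public class RetryWithPerTryTimeout {

    // Runs `task` up to `maxAttempts` times, bounding EACH attempt by `perTryTimeout`
    // and cancelling a stuck attempt before retrying. Illustrative sketch, not
    // Flink's AutoClosableProcess implementation.
    static <T> T runWithRetry(Callable<T> task, int maxAttempts, Duration perTryTimeout)
            throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Exception last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                Future<T> future = pool.submit(task);
                try {
                    return future.get(perTryTimeout.toMillis(), TimeUnit.MILLISECONDS);
                } catch (TimeoutException | ExecutionException e) {
                    future.cancel(true); // interrupt the stuck attempt instead of waiting it out
                    last = e;
                }
            }
            throw last;
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // A task that fails twice before succeeding, like a flaky download.
        AtomicInteger calls = new AtomicInteger();
        Callable<String> flaky = () -> {
            if (calls.incrementAndGet() < 3) {
                throw new IllegalStateException("transient failure");
            }
            return "downloaded";
        };
        System.out.println(runWithRetry(flaky, 3, Duration.ofSeconds(1)));
        // → downloaded
    }
}
```

With this shape, a hung attempt consumes at most `perTryTimeout`, so the total budget is bounded by `maxAttempts * perTryTimeout` rather than one open-ended global clock.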
[GitHub] [flink] wuchong commented on pull request #14616: [FLINK-20879][table-api] Use MemorySize type instead of String type for memory ConfigOption in ExecutionConfigOptions
wuchong commented on pull request #14616: URL: https://github.com/apache/flink/pull/14616#issuecomment-759964282 @xiaoHoly , there is a conflict, could you rebase the branch again?
[GitHub] [flink] wuchong commented on a change in pull request #14536: [FLINK-20812][Connector][Hbase] hbase in sql mode,can use 'properties.*' add Configuration parameter.
wuchong commented on a change in pull request #14536: URL: https://github.com/apache/flink/pull/14536#discussion_r557074319 ## File path: flink-connectors/flink-connector-hbase-1.4/src/main/java/org/apache/flink/connector/hbase1/HBase1DynamicTableFactory.java ## @@ -74,13 +72,13 @@ public DynamicTableSource createDynamicTableSource(Context context) { validatePrimaryKey(tableSchema); validateTableSourceOptions(tableOptions); Review comment: We don't need this, this has been checked in `helper.validateExcept(PROPERTIES_PREFIX)`. ## File path: flink-connectors/flink-connector-hbase-2.2/src/test/java/org/apache/flink/connector/hbase2/HBaseDynamicTableFactoryTest.java ## @@ -181,6 +185,25 @@ public void testTableSinkFactory() { new DataType[] {DECIMAL(10, 3), TIMESTAMP(3), DATE(), TIME()}, hbaseSchema.getQualifierDataTypes("f4")); +// verify hadoop Configuration +org.apache.hadoop.conf.Configuration expectedConfiguration = +HBaseConfigurationUtil.getHBaseConfiguration(); +expectedConfiguration.set(HConstants.ZOOKEEPER_QUORUM, "localhost:2181"); +expectedConfiguration.set(HConstants.ZOOKEEPER_ZNODE_PARENT, "/flink"); +expectedConfiguration.set("hbase.security.authentication", "kerberos"); +Map expectedProperties = Review comment: ditto. 
## File path: flink-connectors/flink-connector-hbase-1.4/src/test/java/org/apache/flink/connector/hbase1/HBaseDynamicTableFactoryTest.java ## @@ -181,6 +185,25 @@ public void testTableSinkFactory() { new DataType[] {DECIMAL(10, 3), TIMESTAMP(3), DATE(), TIME()}, hbaseSchema.getQualifierDataTypes("f4")); +// verify hadoop Configuration +org.apache.hadoop.conf.Configuration expectedConfiguration = +HBaseConfigurationUtil.getHBaseConfiguration(); +expectedConfiguration.set(HConstants.ZOOKEEPER_QUORUM, "localhost:2181"); +expectedConfiguration.set(HConstants.ZOOKEEPER_ZNODE_PARENT, "/flink"); +expectedConfiguration.set("hbase.security.authentication", "kerberos"); +Map expectedProperties = +Lists.newArrayList(expectedConfiguration.iterator()).stream() +.collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); Review comment: We can directly construct the expected Map.
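The review's point is that the test can build its expected `Map` directly instead of round-tripping through a hadoop `Configuration` with `Lists.newArrayList(...).stream().collect(Collectors.toMap(...))`. A self-contained sketch of both shapes using plain `java.util` types (the property keys below are illustrative, not the exact `HConstants` values):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.StreamSupport;

public class ExpectedMapSketch {

    // The pattern in the test under review: collect an Iterable of entries
    // (hadoop's Configuration is iterable over Map.Entry<String, String>)
    // into a Map. Plain java.util types stand in for the hadoop classes here.
    static Map<String, String> entriesToMap(Iterable<Map.Entry<String, String>> entries) {
        return StreamSupport.stream(entries.spliterator(), false)
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        // The review's suggestion: construct the expected Map directly, so the
        // assertion does not depend on round-tripping through a Configuration.
        Map<String, String> expected = new LinkedHashMap<>();
        expected.put("hbase.zookeeper.quorum", "localhost:2181");
        expected.put("zookeeper.znode.parent", "/flink");
        expected.put("hbase.security.authentication", "kerberos");

        Map<String, String> collected = entriesToMap(List.copyOf(expected.entrySet()));
        if (!collected.equals(expected)) {
            throw new AssertionError("round-trip changed the properties");
        }
        System.out.println("maps match");
    }
}
```

A directly constructed expected map keeps the test assertion independent of `Configuration`'s defaults and iteration order, which is presumably why the reviewer prefers it.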
[GitHub] [flink] flinkbot edited a comment on pull request #14631: [FLINK-20885][canal][json] Deserialization exception when using 'canal-json.table.include' to filter out the binlog of the specified
flinkbot edited a comment on pull request #14631: URL: https://github.com/apache/flink/pull/14631#issuecomment-759388845 ## CI report: * e283ba5376620acf2206a9c9ff7a4cdc9ba9 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12018)
[GitHub] [flink] flinkbot edited a comment on pull request #14536: [FLINK-20812][Connector][Hbase] hbase in sql mode,can use 'properties.*' add Configuration parameter.
flinkbot edited a comment on pull request #14536: URL: https://github.com/apache/flink/pull/14536#issuecomment-752909965 ## CI report: * a4c6b2e3d222c1679ce19d21a7f108d63d8dc3fc UNKNOWN * e8c1e77209e80aa39985342f01c3d8d566220d1a UNKNOWN * ba7aceff1a94c93ce89ed15359c992f62ad83e93 UNKNOWN * 3c42f8358ae07557917ce71eae8d092ed501b45d UNKNOWN * 477cd8c0b5b31588f7bd0174e0d87393a6df19ca UNKNOWN * 6558b9f806137c611bd648d3b9672c415a23b061 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12004) * cc47bd3f4429803694fed9fb9853827b11ed124c UNKNOWN * 3baca24234153ee39be0bc8ddd6a4e565e3d1eca UNKNOWN
[GitHub] [flink] wangyang0918 commented on a change in pull request #14630: [FLINK-20915][docker] Move docker entrypoint to distribution
wangyang0918 commented on a change in pull request #14630: URL: https://github.com/apache/flink/pull/14630#discussion_r557073151 ## File path: flink-dist/src/main/flink-bin/bin/docker-entrypoint.sh ## @@ -0,0 +1,176 @@ +#!/usr/bin/env bash + +### +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +### + +### +# This script is called by the Flink docker images when starting a process. +# It contains docker-specific features, and hard-codes a few options. +# +# Globals: +# FLINK_HOME - (required) path to the Flink home directory +# ENABLE_BUILT_IN_PLUGINS - semi-colon (;) separated list of plugins to enable, e.g., "flink-plugin1.jar;flink-plugin2.jar" +# FLINK_PROPERTIES- additional flink-conf.yaml entries as a multi-line string +# JOB_MANAGER_RPC_ADDRESS - RPC address of the job manager +# TASK_MANAGER_NUMBER_OF_TASK_SLOTS - number of slots for task executors +### + +COMMAND_STANDALONE="standalone-job" +# Deprecated, should be remove in Flink release 1.13 +COMMAND_NATIVE_KUBERNETES="native-k8s" +COMMAND_HISTORY_SERVER="history-server" + +args=("$@") +echo "${args[@]}" Review comment: We should not print any text in the pass-through mode. Otherwise, the PR for docker-library/official-images will fail since CI will run the this test[1]. [1]. 
https://github.com/docker-library/official-images/blob/master/test/tests/override-cmd/run.sh
[GitHub] [flink] zhangzhefang-github commented on pull request #11676: [FLINK-17011][tests] Introduce Builder to AbstractStreamOperatorTestHarness for testing
zhangzhefang-github commented on pull request #11676: URL: https://github.com/apache/flink/pull/11676#issuecomment-759959730 So far, these test classes still can't use ``` KeyedOneInputStreamOperatorTestHarness OneInputStreamOperatorTestHarness ``` and so on
[jira] [Commented] (FLINK-20924) Port StreamExecPythonOverAggregate and BatchExecPythonOverAggregate to Java
[ https://issues.apache.org/jira/browse/FLINK-20924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17264636#comment-17264636 ] godfrey he commented on FLINK-20924: cc [~hxbks2ks] > Port StreamExecPythonOverAggregate and BatchExecPythonOverAggregate to Java > --- > > Key: FLINK-20924 > URL: https://issues.apache.org/jira/browse/FLINK-20924 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Planner >Reporter: godfrey he >Priority: Major > Fix For: 1.13.0 > >
[jira] [Assigned] (FLINK-20949) Separate the implementation of sink nodes
[ https://issues.apache.org/jira/browse/FLINK-20949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] godfrey he reassigned FLINK-20949: -- Assignee: godfrey he > Separate the implementation of sink nodes > - > > Key: FLINK-20949 > URL: https://issues.apache.org/jira/browse/FLINK-20949 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Planner >Reporter: godfrey he >Assignee: godfrey he >Priority: Major > Fix For: 1.13.0 > >
[jira] [Closed] (FLINK-20883) Separate the implementation of BatchExecOverAggregate and StreamExecOverAggregate
[ https://issues.apache.org/jira/browse/FLINK-20883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] godfrey he closed FLINK-20883. -- Resolution: Fixed Fixed in 1.13.0: 284b58b1..0cf6b6df > Separate the implementation of BatchExecOverAggregate and > StreamExecOverAggregate > - > > Key: FLINK-20883 > URL: https://issues.apache.org/jira/browse/FLINK-20883 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Planner >Reporter: godfrey he >Assignee: godfrey he >Priority: Major > Labels: pull-request-available > Fix For: 1.13.0 > >
[GitHub] [flink] wangyang0918 commented on a change in pull request #14630: [FLINK-20915][docker] Move docker entrypoint to distribution
wangyang0918 commented on a change in pull request #14630: URL: https://github.com/apache/flink/pull/14630#discussion_r557070055 ## File path: flink-dist/src/main/flink-bin/bin/docker-entrypoint.sh ## @@ -0,0 +1,161 @@ +#!/usr/bin/env bash + +### +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +### + +COMMAND_STANDALONE="standalone-job" +# Deprecated, should be remove in Flink release 1.13 Review comment: In the master branch, the native K8s will not need the `native-k8s` command anymore. It will generate the command and arguments as follows. ``` - args: - bash - -c - $JAVA_HOME/bin/java -classpath $FLINK_CLASSPATH -Xmx1073741824 -Xms1073741824 -XX:MaxMetaspaceSize=268435456 -Dlog.file=/opt/flink/log/jobmanager.log -Dlogback.configurationFile=file:/opt/flink/conf/logback-console.xml -Dlog4j.configuration=file:/opt/flink/conf/log4j2.xml -Dlog4j.configurationFile=file:/opt/flink/conf/log4j2.xml org.apache.flink.kubernetes.entrypoint.KubernetesApplicationClusterEntrypoint ... command: - /docker-entrypoint.sh ``` We already have a ticket FLINK-20676 to track this.
[GitHub] [flink] godfreyhe closed pull request #14605: [FLINK-20883][table-planner-blink] Separate the implementation of BatchExecOverAggregate and StreamExecOverAggregate
godfreyhe closed pull request #14605: URL: https://github.com/apache/flink/pull/14605
[jira] [Updated] (FLINK-20650) Mark "native-k8s" as deprecated in docker-entrypoint.sh
[ https://issues.apache.org/jira/browse/FLINK-20650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Wang updated FLINK-20650: -- Description: When we were publishing the 1.12 image to Docker Hub, some Docker folks raised an issue about the {{docker-entrypoint.sh}}. They want the images to meet a certain standard, because they are the official ones. However, the proposed {{native-k8s}} command is more like an internal bridge. It is only used for the native Kubernetes integration. Another suggestion is removing the "bash -c" wrapper and generating it in the Flink codes. Refer here[1] for more information. Note: We mark the {{native-k8s}} as deprecated and export the environments for all pass-through mode commands; the Flink Kubernetes codes should be adjusted accordingly. [1]. [https://github.com/docker-library/official-images/pull/9249] was: When we are publishing the image 1.12 to docker hub, some docker guys raise up a issue for the {{docker-entrypoint.sh}}. They want the images to have a certain standard, because they are the official ones. However the proposed {{native-k8s}} command is more like an internal bridge. It is only used for native Kubernetes integration. Another suggestion is removing the "bash -c" wrapper and generate it in the flink codes. Refer here[1] for more information. Note: when we rename the {{native-k8s}} to {{generic}} in the flink-docker project, the flink Kubernetes codes should be adjusted accordingly. [1].
https://github.com/docker-library/official-images/pull/9249 > Mark "native-k8s" as deprecated in docker-entrypoint.sh > --- > > Key: FLINK-20650 > URL: https://issues.apache.org/jira/browse/FLINK-20650 > Project: Flink > Issue Type: Improvement > Components: Deployment / Kubernetes, flink-docker >Affects Versions: 1.12.0 >Reporter: Yang Wang >Assignee: Yang Wang >Priority: Blocker > Labels: pull-request-available > Fix For: 1.13.0, 1.12.1 > > > When we are publishing the image 1.12 to docker hub, some docker guys raise > up a issue for the {{docker-entrypoint.sh}}. They want the images to have a > certain standard, because they are the official ones. However the proposed > {{native-k8s}} command is more like an internal bridge. It is only used for > native Kubernetes integration. > > Another suggestion is removing the "bash -c" wrapper and generate it in the > flink codes. Refer here[1] for more information. > > Note: We mark the {{native-k8s}} as deprecated and export the environments > for all pass-through mode commands, the flink Kubernetes codes should be > adjusted accordingly. > > [1]. [https://github.com/docker-library/official-images/pull/9249]
[jira] [Updated] (FLINK-20676) Remove deprecated command "native-k8s" in docker-entrypoint.sh
[ https://issues.apache.org/jira/browse/FLINK-20676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Wang updated FLINK-20676: -- Description: In FLINK-20650, we have marked "native-k8s" as deprecated in docker-entrypoint.sh and set the environments for native modes for all pass-through commands. The deprecated command "native-k8s" should be removed in the next major release(1.13). (was: In FLINK-20650, we have introduced a new general command "run" and mark "native-k8s" as deprecated in docker-entrypoint.sh. The deprecated command "native-k8s" should be removed in the next major release(1.13).) > Remove deprecated command "native-k8s" in docker-entrypoint.sh > -- > > Key: FLINK-20676 > URL: https://issues.apache.org/jira/browse/FLINK-20676 > Project: Flink > Issue Type: Improvement > Components: flink-docker >Reporter: Yang Wang >Priority: Blocker > Fix For: 1.13.0 > > > In FLINK-20650, we have marked "native-k8s" as deprecated in > docker-entrypoint.sh and set the environments for native modes for all > pass-through commands. The deprecated command "native-k8s" should be removed > in the next major release(1.13). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] xiaoHoly commented on pull request #14616: [FLINK-20879][table-api] Use MemorySize type instead of String type for memory ConfigOption in ExecutionConfigOptions
xiaoHoly commented on pull request #14616: URL: https://github.com/apache/flink/pull/14616#issuecomment-759955407 Thanks for the review, @wuchong This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14605: [FLINK-20883][table-planner-blink] Separate the implementation of BatchExecOverAggregate and StreamExecOverAggregate
flinkbot edited a comment on pull request #14605: URL: https://github.com/apache/flink/pull/14605#issuecomment-757729308 ## CI report: * 1bbba19ff3ea721d502f433ba3e53936f3851799 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11981) * 3dd4ffb51cd59a27ac7bcefc74197e1f2e1bbdaa UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14536: [FLINK-20812][Connector][Hbase] hbase in sql mode,can use 'properties.*' add Configuration parameter.
flinkbot edited a comment on pull request #14536: URL: https://github.com/apache/flink/pull/14536#issuecomment-752909965 ## CI report: * a4c6b2e3d222c1679ce19d21a7f108d63d8dc3fc UNKNOWN * e8c1e77209e80aa39985342f01c3d8d566220d1a UNKNOWN * ba7aceff1a94c93ce89ed15359c992f62ad83e93 UNKNOWN * 3c42f8358ae07557917ce71eae8d092ed501b45d UNKNOWN * 477cd8c0b5b31588f7bd0174e0d87393a6df19ca UNKNOWN * 6558b9f806137c611bd648d3b9672c415a23b061 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12004) * cc47bd3f4429803694fed9fb9853827b11ed124c UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14631: [FLINK-20885][canal][json] Deserialization exception when using 'canal-json.table.include' to filter out the binlog of the specified
flinkbot edited a comment on pull request #14631: URL: https://github.com/apache/flink/pull/14631#issuecomment-759388845 ## CI report: * 6d9cb1f0126e86b640d2775f0a3b1bc20d2821e9 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11997) * e283ba5376620acf2206a9c9ff7a4cdc9ba9 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12018) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14630: [FLINK-20915][docker] Move docker entrypoint to distribution
flinkbot edited a comment on pull request #14630: URL: https://github.com/apache/flink/pull/14630#issuecomment-759372895 ## CI report: * 656628f8e564ef8ee29032d392f1485a8b0d9eea UNKNOWN * b3370da98fb5903795c396172842e33c2bbf7575 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12009) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14631: [FLINK-20885][canal][json] Deserialization exception when using 'canal-json.table.include' to filter out the binlog of the specified
flinkbot edited a comment on pull request #14631: URL: https://github.com/apache/flink/pull/14631#issuecomment-759388845 ## CI report: * 6d9cb1f0126e86b640d2775f0a3b1bc20d2821e9 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11997) * e283ba5376620acf2206a9c9ff7a4cdc9ba9 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14616: [FLINK-20879][table-api] Use MemorySize type instead of String type for memory ConfigOption in ExecutionConfigOptions
flinkbot edited a comment on pull request #14616: URL: https://github.com/apache/flink/pull/14616#issuecomment-758526377 ## CI report: * 448c026a402e045e050f405daf934a8a7c880c9d UNKNOWN * e57906f184411f95085433096ec5cffb2ec7ed88 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12016) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (FLINK-19446) canal-json has a situation that -U and +U are equal, when updating the null field to be non-null
[ https://issues.apache.org/jira/browse/FLINK-19446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu reassigned FLINK-19446: --- Assignee: Nicholas Jiang > canal-json has a situation that -U and +U are equal, when updating the null > field to be non-null > > > Key: FLINK-19446 > URL: https://issues.apache.org/jira/browse/FLINK-19446 > Project: Flink > Issue Type: Bug > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) >Affects Versions: 1.11.1 >Reporter: shizhengchao >Assignee: Nicholas Jiang >Priority: Major > Fix For: 1.13.0 > > > line 118 in CanalJsonDeserializationSchema#deserialize method: > {code:java} > GenericRowData after = (GenericRowData) data.getRow(i, fieldCount); > GenericRowData before = (GenericRowData) old.getRow(i, fieldCount); > for (int f = 0; f < fieldCount; f++) { > if (before.isNullAt(f)) { > // not null fields in "old" (before) means the fields are > changed > // null/empty fields in "old" (before) means the fields are not > changed > // so we just copy the not changed fields into before > before.setField(f, after.getField(f)); > } > } > before.setRowKind(RowKind.UPDATE_BEFORE); > after.setRowKind(RowKind.UPDATE_AFTER); > {code} > if a field is null before update,it will cause -U and +U to be equal -- This message was sent by Atlassian Jira (v8.3.4#803005)
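The copy loop quoted in FLINK-19446 can be reduced to a standalone sketch. This is an illustrative model only, not Flink's actual code: plain `Object[]` arrays stand in for Flink's `GenericRowData`, and the class/method names are invented. It shows why a field that was genuinely NULL before the update makes the -U and +U rows identical:

```java
import java.util.Arrays;

public class CanalOldFieldCopy {
    // Stand-in for the copy loop in CanalJsonDeserializationSchema#deserialize:
    // Canal's "old" entry only carries the *changed* fields, so every null slot
    // in `before` is treated as "unchanged" and back-filled from `after`.
    static Object[] buildBefore(Object[] old, Object[] after) {
        Object[] before = Arrays.copyOf(old, old.length);
        for (int f = 0; f < before.length; f++) {
            if (before[f] == null) {
                before[f] = after[f];
            }
        }
        return before;
    }

    public static void main(String[] args) {
        // A field that was genuinely NULL before the update is indistinguishable
        // from an unchanged field, so -U ends up identical to +U for that row.
        Object[] old = {null, "unchanged"};        // first field really was NULL
        Object[] after = {"now-set", "unchanged"};
        Object[] before = buildBefore(old, after);
        System.out.println(Arrays.equals(before, after)); // prints: true
    }
}
```

The ambiguity is inherent to the heuristic: with null doubling as the "not changed" marker, a true pre-update NULL cannot be told apart from an omitted field.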
[GitHub] [flink] SteNicholas commented on pull request #14631: [FLINK-20885][canal][json] Deserialization exception when using 'canal-json.table.include' to filter out the binlog of the specified tabl
SteNicholas commented on pull request #14631: URL: https://github.com/apache/flink/pull/14631#issuecomment-759932673 @wuchong , thanks for your detailed review. I have updated `deserialize(...)` to reuse the following two methods. Please help to review again. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-19446) canal-json has a situation that -U and +U are equal, when updating the null field to be non-null
[ https://issues.apache.org/jira/browse/FLINK-19446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17264602#comment-17264602 ] Nicholas Jiang commented on FLINK-19446: [~jark], if no one is available to work on this issue, I would like to take it. Could you please assign this to me? > canal-json has a situation that -U and +U are equal, when updating the null > field to be non-null > > > Key: FLINK-19446 > URL: https://issues.apache.org/jira/browse/FLINK-19446 > Project: Flink > Issue Type: Bug > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) >Affects Versions: 1.11.1 >Reporter: shizhengchao >Priority: Major > Fix For: 1.13.0 > > > line 118 in CanalJsonDeserializationSchema#deserialize method: > {code:java} > GenericRowData after = (GenericRowData) data.getRow(i, fieldCount); > GenericRowData before = (GenericRowData) old.getRow(i, fieldCount); > for (int f = 0; f < fieldCount; f++) { > if (before.isNullAt(f)) { > // not null fields in "old" (before) means the fields are > changed > // null/empty fields in "old" (before) means the fields are not > changed > // so we just copy the not changed fields into before > before.setField(f, after.getField(f)); > } > } > before.setRowKind(RowKind.UPDATE_BEFORE); > after.setRowKind(RowKind.UPDATE_AFTER); > {code} > if a field is null before the update, it will cause -U and +U to be equal -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on pull request #14616: [FLINK-20879][table-api] Use MemorySize type instead of String type for memory ConfigOption in ExecutionConfigOptions
flinkbot edited a comment on pull request #14616: URL: https://github.com/apache/flink/pull/14616#issuecomment-758526377 ## CI report: * 448c026a402e045e050f405daf934a8a7c880c9d UNKNOWN * 3ab1ce460fa37cc33355462cfefa7aed970bd092 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12013) * e57906f184411f95085433096ec5cffb2ec7ed88 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12016) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] wuchong commented on a change in pull request #14631: [FLINK-20885][canal][json] Deserialization exception when using 'canal-json.table.include' to filter out the binlog of the specif
wuchong commented on a change in pull request #14631: URL: https://github.com/apache/flink/pull/14631#discussion_r557038157 ## File path: flink-formats/flink-json/src/main/java/org/apache/flink/formats/json/JsonRowDataDeserializationSchema.java ## @@ -113,6 +113,36 @@ public RowData deserialize(@Nullable byte[] message) throws IOException { } } +public JsonNode deserializeToJsonNode(@Nullable byte[] message) throws IOException { Review comment: Could you update `deserialize(..)` to reuse these two method? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-20940) The LOCALTIME/LOCALTIMSTAMP functions should use session time zone
[ https://issues.apache.org/jira/browse/FLINK-20940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu closed FLINK-20940. --- Resolution: Fixed Fixed in master: d093611b5dfab95fe62e4f861879762ca2e43437 > The LOCALTIME/LOCALTIMSTAMP functions should use session time zone > --- > > Key: FLINK-20940 > URL: https://issues.apache.org/jira/browse/FLINK-20940 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / API, Table SQL / Runtime >Affects Versions: 1.12.0, 1.13.0 >Reporter: Leonard Xu >Assignee: Leonard Xu >Priority: Major > Labels: pull-request-available > Fix For: 1.13.0 > > > LOCALTIME, type TIME(0) NOT NULL: > #session timezone: UTC -> 08:52:52 > #session timezone: UTC+8 -> 08:52:52 > wall clock: UTC+8: 2020-12-29 08:52:52 > > LOCALTIMESTAMP, type TIMESTAMP(0) NOT NULL: > #session timezone: UTC -> 2020-12-29T08:52:52 > #session timezone: UTC+8 -> 2020-12-29T08:52:52 > wall clock: UTC+8: 2020-12-29 08:52:52 -- This message was sent by Atlassian Jira (v8.3.4#803005)
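The semantics fixed by FLINK-20940 can be sketched with `java.time`. This is a minimal illustrative model, not Flink's implementation: the helper names `localTimestamp`/`localTime` are invented for this example. The point is that LOCALTIMESTAMP/LOCALTIME should be evaluated as the current wall-clock time in the configured session time zone, so the same instant yields different values under different session zones:

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.time.ZoneId;

public class SessionLocalTime {
    // Illustrative helper: LOCALTIMESTAMP as the current wall-clock time
    // in the session time zone rather than in a fixed zone.
    static LocalDateTime localTimestamp(Instant now, ZoneId sessionZone) {
        return LocalDateTime.ofInstant(now, sessionZone);
    }

    // LOCALTIME is just the time part of LOCALTIMESTAMP.
    static LocalTime localTime(Instant now, ZoneId sessionZone) {
        return localTimestamp(now, sessionZone).toLocalTime();
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2020-12-29T00:52:52Z"); // fixed instant for reproducibility
        // Same instant, different session zones -> different local values.
        System.out.println(localTimestamp(now, ZoneId.of("UTC")));   // prints: 2020-12-29T00:52:52
        System.out.println(localTimestamp(now, ZoneId.of("UTC+8"))); // prints: 2020-12-29T08:52:52
    }
}
```

Under the buggy behavior described in the issue, both session zones produced the same value, which is what the before/after table above records.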
[GitHub] [flink] wuchong merged pull request #14620: [FLINK-20940][table-planner] Use session time zone in LOCALTIME/LOCALTIMSTAMP functions
wuchong merged pull request #14620: URL: https://github.com/apache/flink/pull/14620 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14634: [FLINK-20913][hive]Improve HiveConf creation
flinkbot edited a comment on pull request #14634: URL: https://github.com/apache/flink/pull/14634#issuecomment-759553978 ## CI report: * 166b840d80a5b340ce0903b3255387cdfec3faf6 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12002) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] wuchong commented on a change in pull request #14536: [FLINK-20812][Connector][Hbase] hbase in sql mode,can use 'properties.*' add Configuration parameter.
wuchong commented on a change in pull request #14536: URL: https://github.com/apache/flink/pull/14536#discussion_r557022428 ## File path: flink-connectors/flink-connector-hbase-base/src/main/java/org/apache/flink/connector/hbase/options/HBaseOptions.java ## @@ -0,0 +1,263 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.connector.hbase.options; + +import org.apache.flink.annotation.Internal; +import org.apache.flink.configuration.ConfigOption; +import org.apache.flink.configuration.ConfigOptions; +import org.apache.flink.configuration.MemorySize; +import org.apache.flink.configuration.ReadableConfig; +import org.apache.flink.connector.hbase.util.HBaseConfigurationUtil; +import org.apache.flink.connector.hbase.util.HBaseTableSchema; +import org.apache.flink.table.api.TableSchema; +import org.apache.flink.table.api.ValidationException; +import org.apache.flink.table.descriptors.DescriptorProperties; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.HConstants; + +import java.io.Serializable; +import java.time.Duration; +import java.util.Map; +import java.util.Optional; +import java.util.Properties; + +import static org.apache.flink.table.descriptors.AbstractHBaseValidator.CONNECTOR_PROPERTIES; +import static org.apache.flink.table.descriptors.AbstractHBaseValidator.CONNECTOR_ZK_NODE_PARENT; +import static org.apache.flink.table.descriptors.AbstractHBaseValidator.CONNECTOR_ZK_QUORUM; +import static org.apache.flink.table.factories.FactoryUtil.SINK_PARALLELISM; + +/** Common Options for HBase. 
*/ +@Internal +public class HBaseOptions implements Serializable { + +public static final ConfigOption<String> TABLE_NAME = +ConfigOptions.key("table-name") +.stringType() +.noDefaultValue() +.withDescription("The name of HBase table to connect."); + +public static final ConfigOption<String> ZOOKEEPER_QUORUM = +ConfigOptions.key("zookeeper.quorum") +.stringType() +.noDefaultValue() +.withDescription("The HBase Zookeeper quorum."); + +public static final ConfigOption<String> ZOOKEEPER_ZNODE_PARENT = +ConfigOptions.key("zookeeper.znode.parent") +.stringType() +.defaultValue("/hbase") +.withDescription("The root dir in Zookeeper for HBase cluster."); + +public static final ConfigOption<String> NULL_STRING_LITERAL = +ConfigOptions.key("null-string-literal") +.stringType() +.defaultValue("null") +.withDescription( +"Representation for null values for string fields. HBase source and " ++ "sink encodes/decodes empty bytes as null values for all types except string type."); + +public static final ConfigOption<MemorySize> SINK_BUFFER_FLUSH_MAX_SIZE = +ConfigOptions.key("sink.buffer-flush.max-size") +.memoryType() +.defaultValue(MemorySize.parse("2mb")) +.withDescription( +"Writing option, maximum size in memory of buffered rows for each " ++ "writing request. This can improve performance for writing data to HBase database, " ++ "but may increase the latency. Can be set to '0' to disable it. "); + +public static final ConfigOption<Integer> SINK_BUFFER_FLUSH_MAX_ROWS = +ConfigOptions.key("sink.buffer-flush.max-rows") +.intType() +.defaultValue(1000) +.withDescription( +"Writing option, maximum number of rows to buffer for each writing request. " ++ "This can improve performance for writing data to HBase database, but may increase the latency. " ++ "Can be set to '0' to disable it."); + +public static final ConfigOption<Duration> SINK_BUFFER_FLUSH_INTERVAL = +ConfigOptions.key("sink.buffer-flush.interval") +
[GitHub] [flink] flinkbot edited a comment on pull request #14606: [Flink-20876][table-planner-blink] Separate the implementation of StreamExecTemporalJoin
flinkbot edited a comment on pull request #14606: URL: https://github.com/apache/flink/pull/14606#issuecomment-757754275 ## CI report: * 3776b52cfe3535dcc193b3a922a7d1d658126d66 UNKNOWN * 155b18c169e45a97cd52c5b43883d5cf6b79f038 UNKNOWN * 29868c9db791dc78af63512150e7f5c6a82950ea Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11996) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" shoule be enabled by default for unbounded mode, and disab
flinkbot edited a comment on pull request #14531: URL: https://github.com/apache/flink/pull/14531#issuecomment-752828536 ## CI report: * 638cd231fef2a3a5b866cf1d9e07884ead445c07 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12014) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-20417) Handle "Too old resource version" exception in Kubernetes watch more gracefully
[ https://issues.apache.org/jira/browse/FLINK-20417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Wang updated FLINK-20417: -- Fix Version/s: (was: 1.12.1) > Handle "Too old resource version" exception in Kubernetes watch more > gracefully > --- > > Key: FLINK-20417 > URL: https://issues.apache.org/jira/browse/FLINK-20417 > Project: Flink > Issue Type: Improvement > Components: Deployment / Kubernetes >Affects Versions: 1.12.0, 1.11.2 >Reporter: Yang Wang >Priority: Major > Fix For: 1.13.0 > > > Currently, when the watcher(pods watcher, configmap watcher) is closed with > exception, we will call {{WatchCallbackHandler#handleFatalError}}. And this > could cause JobManager terminating and then failover. > For most cases, this is correct. But not for "too old resource version" > exception. See more information here[1]. Usually this exception could happen > when the APIServer is restarted. And we just need to create a new watch and > continue to do the pods/configmap watching. This could help the Flink cluster > reducing the impact of K8s cluster restarting. > > The issue is inspired by this technical article[2]. Thanks the guys from > tencent for the debugging. Note this is a Chinese documentation. > > [1]. > [https://stackoverflow.com/questions/61409596/kubernetes-too-old-resource-version] > [2]. [https://cloud.tencent.com/developer/article/1731416] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] dixingxing0 edited a comment on pull request #14634: [FLINK-20913][hive]Improve HiveConf creation
dixingxing0 edited a comment on pull request #14634: URL: https://github.com/apache/flink/pull/14634#issuecomment-759896619 @flinkbot run travis This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14636: [FLINK-20921][python] Fixes the Date/Time/Timestamp type in Python DataStream API
flinkbot edited a comment on pull request #14636: URL: https://github.com/apache/flink/pull/14636#issuecomment-759899714 ## CI report: * 25bbb9709f9c0859c215e814659b2bedd951a526 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12015) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #14616: [FLINK-20879][table-api] Use MemorySize type instead of String type for memory ConfigOption in ExecutionConfigOptions
flinkbot edited a comment on pull request #14616: URL: https://github.com/apache/flink/pull/14616#issuecomment-758526377 ## CI report: * 448c026a402e045e050f405daf934a8a7c880c9d UNKNOWN * 3ab1ce460fa37cc33355462cfefa7aed970bd092 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12013) * e57906f184411f95085433096ec5cffb2ec7ed88 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] wenlong88 commented on pull request #14606: [Flink-20876][table-planner-blink] Separate the implementation of StreamExecTemporalJoin
wenlong88 commented on pull request #14606: URL: https://github.com/apache/flink/pull/14606#issuecomment-759905253 @flinkbot run azure This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Comment Edited] (FLINK-20951) IllegalArgumentException when reading Hive parquet table if condition not contain all partitioned fields
[ https://issues.apache.org/jira/browse/FLINK-20951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17264572#comment-17264572 ] YUJIANBO edited comment on FLINK-20951 at 1/14/21, 3:37 AM: [~lirui] [~jark] Thanks to the help of Jark Wu and Rui Li, the configuration of this parameter can work around the issue. But the official website said: “This feature is enabled *by default*. It may be disabled with the following configuration. table.exec.hive.fallback-mapred-reader=true” https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/connectors/hive/hive_read_write.html#vectorized-optimization-upon-read was (Author: yujianbo): [~lirui] [~jark] Thanks to the help of Jark Wu and Rui Li, the configuration of this parameter is solved. But the official website said: “This feature is enabled *by default*. It may be disabled with the following configuration. table.exec.hive.fallback-mapred-reader=true” https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/connectors/hive/hive_read_write.html#vectorized-optimization-upon-read > IllegalArgumentException when reading Hive parquet table if condition not > contain all partitioned fields > > > Key: FLINK-20951 > URL: https://issues.apache.org/jira/browse/FLINK-20951 > Project: Flink > Issue Type: Bug > Components: Connectors / Hive, Table SQL / Runtime >Affects Versions: 1.12.0 > Environment: flink 1.12.0release-12 > sql-cli >Reporter: YUJIANBO >Priority: Major > > The production hive table is partitioned by two fields: datekey and event > I have done this test with the Flink SQL CLI (in Spark SQL all is OK): > (1) First: > SELECT vid From table_A WHERE datekey = '20210112' AND event = 'XXX' AND vid > = 'aa';(OK) > SELECT vid From table_A WHERE datekey = '20210112' AND vid = 'aa'; > (Error) > (2) Second: > SELECT vid From table_B WHERE datekey = '20210112' AND event = 'YYY' AND vid > = 'bb';(OK) > SELECT vid From table_B WHERE datekey = '20210112' AND vid = 'bb'; > (Error) > The exception is: >
{code} > java.lang.RuntimeException: One or more fetchers have encountered exception > at > org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:199) > at > org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:154) > at > org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116) > at > org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:273) > at > org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:67) > at > org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:395) > at > org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:191) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:609) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:573) > at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:755) > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:570) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.RuntimeException: SplitFetcher thread 19 received > unexpected exception while polling the records > at > org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:146) > at > org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:101) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > ... 
1 more > Caused by: java.lang.IllegalArgumentException > at java.nio.Buffer.position(Buffer.java:244) > at > org.apache.flink.hive.shaded.parquet.io.api.Binary$ByteBufferBackedBinary.getBytes(Binary.java:424) > at > org.apache.flink.hive.shaded.formats.parquet.vector.reader.BytesColumnReader.readBatchFromDictionaryIds(BytesColumnReader.java:79) > at > org.apache.flink.hive.shaded.formats.parquet.vector.reader.BytesColumnReader.readBatchFromDictionaryIds(BytesColumnReader.java:33) > at >
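The workaround discussed in the comment above can be applied in a SQL CLI session. This is a sketch based on the option named in the linked documentation (setting it to true falls back to Hive's MapReduce record reader, i.e. disables the vectorized Parquet reader); table_A and the predicate are the reporter's own examples:

```sql
-- Fall back to the non-vectorized (MapRed) reader for Hive tables,
-- working around the IllegalArgumentException on partitioned Parquet reads.
SET table.exec.hive.fallback-mapred-reader=true;

-- The previously failing query from the report:
SELECT vid FROM table_A WHERE datekey = '20210112' AND vid = 'aa';
```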
[GitHub] [flink] godfreyhe commented on pull request #14605: [FLINK-20883][table-planner-blink] Separate the implementation of BatchExecOverAggregate and StreamExecOverAggregate
godfreyhe commented on pull request #14605: URL: https://github.com/apache/flink/pull/14605#issuecomment-759903000 Thanks for the review, I will fix the conflicts locally and merge the PR. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-20919) After the flink batch job is completed, the yarn application cannot be completed.
[ https://issues.apache.org/jira/browse/FLINK-20919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17264576#comment-17264576 ] Yang Wang commented on FLINK-20919: --- It is still a little strange. When you run a Flink per-job cluster in attached mode, the Flink client should wait for the execution result and then exit. But in your situation, it seems that the client exits directly without getting the result. I am not sure whether it is related to {{TableEnvironment}}, but I have run a batch word count (DataSet API) and it finished normally. > After the flink batch job is completed, the yarn application cannot be > completed. > - > > Key: FLINK-20919 > URL: https://issues.apache.org/jira/browse/FLINK-20919 > Project: Flink > Issue Type: Bug > Components: Deployment / YARN >Affects Versions: 1.11.2 >Reporter: Wu >Priority: Major > Attachments: flink-Overview.png, flink-completed.png, > flink-jobManager.png, jobmanager.log, kaLr8Coy.png > > > I submit a Flink batch job in yarn-cluster mode. After the batch job is > completed, the YARN application cannot be completed and still occupies a vcore. > How can the YARN application be closed automatically? > > {code:java} > // code placeholder > EnvironmentSettings settings = > EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build(); > TableEnvironment tableEnv = TableEnvironment.create(settings); > tableEnv.executeSql("create table file_table"); > tableEnv.executeSql("create table print_table"); > String sql = "select count(1) from file_table"; > Table table = tableEnv.sqlQuery(sql); > tableEnv.createTemporaryView("t", table); > tableEnv.from("t").executeInsert("print_table"); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] xiaoHoly commented on pull request #14616: [FLINK-20879][table-api] Use MemorySize type instead of String type for memory ConfigOption in ExecutionConfigOptions
xiaoHoly commented on pull request #14616: URL: https://github.com/apache/flink/pull/14616#issuecomment-759902228 > The following case failed: > > ``` > [ERROR] Tests run: 11, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 13.404 s <<< FAILURE! - in org.apache.flink.table.client.gateway.local.ExecutionContextTest > [ERROR] testConfiguration(org.apache.flink.table.client.gateway.local.ExecutionContextTest) Time elapsed: 0.018 s <<< FAILURE! > java.lang.AssertionError: expected:<128kb> but was:<128 kb> > ``` done
[jira] [Comment Edited] (FLINK-20951) IllegalArgumentException when reading Hive parquet table if condition not contain all partitioned fields
[ https://issues.apache.org/jira/browse/FLINK-20951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17264572#comment-17264572 ] YUJIANBO edited comment on FLINK-20951 at 1/14/21, 3:29 AM: [~lirui] [~jark] Thanks to the help of Jark Wu and Rui Li, the configuration of this parameter is solved. But the official website said: “This feature is enabled *by default*. It may be disabled with the following configuration. table.exec.hive.fallback-mapred-reader=true” https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/connectors/hive/hive_read_write.html#vectorized-optimization-upon-read was (Author: yujianbo): [~lirui] [~jark] Thanks to the help of Jark Wu and Rui Li, the configuration of this parameter is solved. But the official website said: “This feature is enabled *by default*. It may be disabled with the following configuration. table.exec.hive.fallback-mapred-reader=true”
[jira] [Comment Edited] (FLINK-20951) IllegalArgumentException when reading Hive parquet table if condition not contain all partitioned fields
[ https://issues.apache.org/jira/browse/FLINK-20951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17264572#comment-17264572 ] YUJIANBO edited comment on FLINK-20951 at 1/14/21, 3:28 AM: [~lirui] [~jark] Thanks to the help of Jark Wu and Rui Li, the configuration of this parameter is solved. But the official website said: “This feature is enabled *by default*. It may be disabled with the following configuration. table.exec.hive.fallback-mapred-reader=true” was (Author: yujianbo): [~lirui] [~jark] Thanks to the help of jark Wu and Rui Li, the configuration of this parameter is solved. But the official website said: “This feature is enabled *by default*. It may be disabled with the following configuration. table.exec.hive.fallback-mapred-reader=true”
[jira] [Comment Edited] (FLINK-20951) IllegalArgumentException when reading Hive parquet table if condition not contain all partitioned fields
[ https://issues.apache.org/jira/browse/FLINK-20951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17264572#comment-17264572 ] YUJIANBO edited comment on FLINK-20951 at 1/14/21, 3:28 AM: [~lirui] [~jark] Thanks to the help of jark Wu and Rui Li, the configuration of this parameter is solved. But the official website said: “This feature is enabled *by default*. It may be disabled with the following configuration. table.exec.hive.fallback-mapred-reader=true” was (Author: yujianbo): [~lirui][~jark] Thanks to the help of Jack and Rui Li, the configuration of this parameter is solved. But the official website said: “This feature is enabled *by default*. It may be disabled with the following configuration. table.exec.hive.fallback-mapred-reader=true”
[jira] [Commented] (FLINK-20951) IllegalArgumentException when reading Hive parquet table if condition not contain all partitioned fields
[ https://issues.apache.org/jira/browse/FLINK-20951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17264572#comment-17264572 ] YUJIANBO commented on FLINK-20951: -- [~lirui][~jark] Thanks to the help of Jack and Rui Li, the configuration of this parameter is solved. But the official website said: “This feature is enabled *by default*. It may be disabled with the following configuration. table.exec.hive.fallback-mapred-reader=true”
[GitHub] [flink] wuchong commented on pull request #14616: [FLINK-20879][table-api] Use MemorySize type instead of String type for memory ConfigOption in ExecutionConfigOptions
wuchong commented on pull request #14616: URL: https://github.com/apache/flink/pull/14616#issuecomment-759899885 The following case failed: ``` [ERROR] Tests run: 11, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 13.404 s <<< FAILURE! - in org.apache.flink.table.client.gateway.local.ExecutionContextTest [ERROR] testConfiguration(org.apache.flink.table.client.gateway.local.ExecutionContextTest) Time elapsed: 0.018 s <<< FAILURE! java.lang.AssertionError: expected:<128kb> but was:<128 kb> ```
[GitHub] [flink] flinkbot commented on pull request #14636: [FLINK-20921][python] Fixes the Date/Time/Timestamp type in Python DataStream API
flinkbot commented on pull request #14636: URL: https://github.com/apache/flink/pull/14636#issuecomment-759899714 ## CI report: * 25bbb9709f9c0859c215e814659b2bedd951a526 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #14616: [FLINK-20879][table-api] Use MemorySize type instead of String type for memory ConfigOption in ExecutionConfigOptions
flinkbot edited a comment on pull request #14616: URL: https://github.com/apache/flink/pull/14616#issuecomment-758526377 ## CI report: * 448c026a402e045e050f405daf934a8a7c880c9d UNKNOWN * 3ab1ce460fa37cc33355462cfefa7aed970bd092 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12013) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] wuchong commented on pull request #14633: [FLINK-20961][table-planner-blink] Fix NPE when no assigned timestamp defined
wuchong commented on pull request #14633: URL: https://github.com/apache/flink/pull/14633#issuecomment-759899025 Btw, `TimeAttributesITCase` only exists in the old planner; we can create one under the `org.apache.flink.table.planner.runtime.stream.table` package in the blink planner and copy the tests to it.
[GitHub] [flink] WeiZhong94 commented on pull request #14627: [FLINK-20946][python] Optimize Python ValueState Implementation In PyFlink
WeiZhong94 commented on pull request #14627: URL: https://github.com/apache/flink/pull/14627#issuecomment-759898742 @HuangXingBo Thanks for your PR! I think we can do better in this PR, e.g. improve the logic of SynchronousValueRuntimeState. Currently the update operation of SynchronousValueRuntimeState causes 2 network requests; we can optimize it down to 1.
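The round-trip reduction suggested above can be illustrated outside PyFlink. The sketch below is plain Python; `RemoteStateBackend` and `CachedValueState` are hypothetical stand-ins for the remote state server and the value-state wrapper, not actual PyFlink classes. It shows a write-back cache where `update()` is purely local and a single flush per bundle sends the latest value:

```python
class RemoteStateBackend:
    """Stand-in for the remote state server; counts network requests."""
    def __init__(self):
        self.store = {}
        self.requests = 0

    def get(self, key):
        self.requests += 1
        return self.store.get(key)

    def put(self, key, value):
        self.requests += 1
        self.store[key] = value


class CachedValueState:
    """Write-back cache: update() is local; flush() sends one request."""
    def __init__(self, backend, key):
        self._backend = backend
        self._key = key
        self._cached = None
        self._loaded = False
        self._dirty = False

    def value(self):
        # Only the first read goes to the backend; later reads hit the cache.
        if not self._loaded:
            self._cached = self._backend.get(self._key)
            self._loaded = True
        return self._cached

    def update(self, value):
        # No network request here, unlike an eager write-through update.
        self._cached = value
        self._loaded = True
        self._dirty = True

    def flush(self):
        # One write per bundle, and only if something changed.
        if self._dirty:
            self._backend.put(self._key, self._cached)
            self._dirty = False


backend = RemoteStateBackend()
state = CachedValueState(backend, "count")
state.update((state.value() or 0) + 1)  # one remote read
state.update((state.value() or 0) + 1)  # served from cache
state.flush()                           # one remote write
print(backend.requests)  # → 2 requests instead of 4
```

Without the cache, each `value()`/`update()` pair would cost a read plus a write; here two increments cost one read and one deferred write.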
[jira] [Closed] (FLINK-20965) BigDecimalTypeInfo can not be converted.
[ https://issues.apache.org/jira/browse/FLINK-20965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu closed FLINK-20965. --- Resolution: Not A Problem {{BigDecimalTypeInfo}} is an internal type, users shouldn't use it. > BigDecimalTypeInfo can not be converted. > > > Key: FLINK-20965 > URL: https://issues.apache.org/jira/browse/FLINK-20965 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.10.1 >Reporter: Wong Mulan >Priority: Major > Attachments: image-2021-01-14-10-56-07-949.png, > image-2021-01-14-10-59-03-656.png > > > LegacyTypeInfoDataTypeConverter#toDataType cannot correctly convert > BigDecimalTypeInfo; Types.BIG_DEC does not include BigDecimalTypeInfo. > !image-2021-01-14-10-56-07-949.png! > !image-2021-01-14-10-59-03-656.png!
[GitHub] [flink] flinkbot commented on pull request #14636: [FLINK-20921][python] Fixes the Date/Time/Timestamp type in Python DataStream API
flinkbot commented on pull request #14636: URL: https://github.com/apache/flink/pull/14636#issuecomment-759897006 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 25bbb9709f9c0859c215e814659b2bedd951a526 (Thu Jan 14 03:12:49 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink-web] wuchong commented on pull request #405: Add Apache Flink release 1.12.1
wuchong commented on pull request #405: URL: https://github.com/apache/flink-web/pull/405#issuecomment-759896621 LGTM. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org