[GitHub] [flink] flinkbot commented on issue #11360: [FLINK-16508][k8s] Name the ports exposed by the main Container in Pod
flinkbot commented on issue #11360: [FLINK-16508][k8s] Name the ports exposed by the main Container in Pod URL: https://github.com/apache/flink/pull/11360#issuecomment-596916787 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 0037e8816fb00f1a6d3f06a32b7acace3298e009 (Tue Mar 10 05:53:56 UTC 2020) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #11340: [Flink 14338] Upgrade Calcite version to 1.22 for Flink SQL
flinkbot edited a comment on issue #11340: [Flink 14338] Upgrade Calcite version to 1.22 for Flink SQL URL: https://github.com/apache/flink/pull/11340#issuecomment-596050968 ## CI report: * 1e95f02cc2803e695eb28597de8f7344362826fd UNKNOWN * f38af5e48e36464e94a14cb7b10e7fc740081618 UNKNOWN * 3b1addfba9d6d9c804a676097fe616e69ee6c303 Travis: [FAILURE](https://travis-ci.com/flink-ci/flink/builds/152408053) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6078) * 86a69a6c232098be9f883841f86b03e344b446e1 Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/152576384) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6108) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on issue #11328: [FLINK-16455][hive] Introduce flink-sql-connector-hive modules to provide hive uber jars
flinkbot edited a comment on issue #11328: [FLINK-16455][hive] Introduce flink-sql-connector-hive modules to provide hive uber jars URL: https://github.com/apache/flink/pull/11328#issuecomment-595621789 ## CI report: * f1d94b9792a597e9b935fe08001930a3eb27d789 UNKNOWN * b4a76d76d2c1e9722befabc03b2191d053c70fa8 Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/152563303) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6098)
[GitHub] [flink] flinkbot edited a comment on issue #11358: [FLINK-16516][python] Remove Python UDF Codegen Code
flinkbot edited a comment on issue #11358: [FLINK-16516][python] Remove Python UDF Codegen Code URL: https://github.com/apache/flink/pull/11358#issuecomment-596885127 ## CI report: * cf3f3aac4aa052f71bd4954fd55b48c84350edf6 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/152569306) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6104)
[GitHub] [flink] flinkbot commented on issue #11359: [FLINK-16095] [chinese-translation] Translate "Modules" page of "Tabl…
flinkbot commented on issue #11359: [FLINK-16095] [chinese-translation] Translate "Modules" page of "Tabl… URL: https://github.com/apache/flink/pull/11359#issuecomment-596916485 ## CI report: * 4d5019079937eb3a8d991d5e5e54a5570330f56b UNKNOWN
[jira] [Updated] (FLINK-16508) Name the ports exposed by the main Container in Pod
[ https://issues.apache.org/jira/browse/FLINK-16508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-16508: --- Labels: pull-request-available (was: ) > Name the ports exposed by the main Container in Pod > --- > > Key: FLINK-16508 > URL: https://issues.apache.org/jira/browse/FLINK-16508 > Project: Flink > Issue Type: Improvement > Components: Deployment / Kubernetes >Affects Versions: 1.10.0 >Reporter: Canbin Zheng >Assignee: Canbin Zheng >Priority: Minor > Labels: pull-request-available > Fix For: 1.10.1, 1.11.0 > > > Currently, we expose some ports via the main Container of the JobManager and > the TaskManager, but we forgot to name those ports, so people may be > confused because there is no description of the port usage. This ticket > proposes to explicitly name the ports in the Container to help people > understand the usage of those ports. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] zhengcanbin opened a new pull request #11360: [FLINK-16508][k8s] Name the ports exposed by the main Container in Pod
zhengcanbin opened a new pull request #11360: [FLINK-16508][k8s] Name the ports exposed by the main Container in Pod URL: https://github.com/apache/flink/pull/11360 ## What is the purpose of the change 1. Currently, we expose some ports via the main Container of the JobManager and the TaskManager, but we forgot to name those ports, so people may be confused because there is no description of the port usage. This PR proposes to explicitly name the ports in the Container to help people understand the usage of those ports. 2. Since a port name in the Container must be less than 15 characters, and we prefer to keep the port names consistent between the Container and the Service, this PR also makes a small change to the existing names of the corresponding ports in the Service. This is a minor change that rarely requires modification from the user's perspective. ## Verifying this change This change is already covered by existing tests. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / **no**) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**) - The serializers: (yes / **no** / don't know) - The runtime per-record code paths (performance sensitive): (yes / **no** / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / don't know) - The S3 file system connector: (yes / **no** / don't know) ## Documentation - Does this pull request introduce a new feature? (yes / **no**)
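For context, the 15-character limit the PR mentions is a Kubernetes constraint: a container port name must be an IANA service name (at most 15 lowercase alphanumeric characters and '-'). A pod spec with named ports looks like the sketch below; the port names are illustrative assumptions, not necessarily the names this PR chooses (6123, 6124, and 8081 are Flink's default JobManager RPC, blob server, and REST ports):

```yaml
# Illustrative sketch only: the port names here are assumptions for the
# example, not the exact names introduced by the PR.
apiVersion: v1
kind: Pod
metadata:
  name: flink-jobmanager
spec:
  containers:
    - name: flink-main-container
      image: flink:1.10.0
      ports:
        - name: rpc          # JobManager RPC port
          containerPort: 6123
        - name: blob-server  # 11 characters, within the 15-char limit
          containerPort: 6124
        - name: rest         # REST API / web UI
          containerPort: 8081
```

With names in place, a matching Service can reference the same names, which is why the PR also adjusts the Service port names for consistency.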
[jira] [Commented] (FLINK-10114) Support Orc for StreamingFileSink
[ https://issues.apache.org/jira/browse/FLINK-10114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055609#comment-17055609 ] Sivaprasanna Sethuraman commented on FLINK-10114: - [~gaoyunhaii] I believe relying on presto-orc is a bit of an overkill for the purpose we are looking to solve. I guess digging deeper into the lower-level APIs or implementations (PhysicalWriter/PhysicalFsWriter) or OutStream may shed some light on how we can stop worrying about Orc's WriterImpl and Path being used as the deciding field. We are also looking for an ORC-based writer on our side. We'll take a look. > Support Orc for StreamingFileSink > - > > Key: FLINK-10114 > URL: https://issues.apache.org/jira/browse/FLINK-10114 > Project: Flink > Issue Type: Sub-task > Components: Connectors / FileSystem >Reporter: zhangminglei >Assignee: vinoyang >Priority: Major >
[GitHub] [flink] flinkbot commented on issue #11359: [FLINK-16095] [chinese-translation] Translate "Modules" page of "Tabl…
flinkbot commented on issue #11359: [FLINK-16095] [chinese-translation] Translate "Modules" page of "Tabl… URL: https://github.com/apache/flink/pull/11359#issuecomment-596915123 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 4d5019079937eb3a8d991d5e5e54a5570330f56b (Tue Mar 10 05:47:14 UTC 2020) ✅ no warnings Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
[jira] [Updated] (FLINK-16095) Translate "Modules" page of "Table API & SQL" into Chinese
[ https://issues.apache.org/jira/browse/FLINK-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-16095: --- Labels: pull-request-available (was: ) > Translate "Modules" page of "Table API & SQL" into Chinese > -- > > Key: FLINK-16095 > URL: https://issues.apache.org/jira/browse/FLINK-16095 > Project: Flink > Issue Type: Sub-task > Components: chinese-translation, Documentation >Reporter: Jark Wu >Assignee: super.lee >Priority: Major > Labels: pull-request-available > > The page url is > https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/modules.html > The markdown file is located in {{flink/docs/dev/table/modules.zh.md}}
[GitHub] [flink] melotlee opened a new pull request #11359: [FLINK-16095] [chinese-translation] Translate "Modules" page of "Tabl…
melotlee opened a new pull request #11359: [FLINK-16095] [chinese-translation] Translate "Modules" page of "Tabl… URL: https://github.com/apache/flink/pull/11359 Translate "Modules" page of "Table API & SQL" into Chinese ## What is the purpose of the change *(This pull request translates the "Modules" page of "Table API & SQL" into Chinese)* ## Brief change log - *Translate "Modules" page of "Table API & SQL" into Chinese* ## Verifying this change ## Does this pull request potentially affect one of the following parts: ## Documentation
[GitHub] [flink] flinkbot edited a comment on issue #11340: [Flink 14338] Upgrade Calcite version to 1.22 for Flink SQL
flinkbot edited a comment on issue #11340: [Flink 14338] Upgrade Calcite version to 1.22 for Flink SQL URL: https://github.com/apache/flink/pull/11340#issuecomment-596050968 ## CI report: * 1e95f02cc2803e695eb28597de8f7344362826fd UNKNOWN * f38af5e48e36464e94a14cb7b10e7fc740081618 UNKNOWN * 3b1addfba9d6d9c804a676097fe616e69ee6c303 Travis: [FAILURE](https://travis-ci.com/flink-ci/flink/builds/152408053) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6078) * 86a69a6c232098be9f883841f86b03e344b446e1 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on issue #11351: [FLINK-16404][runtime] Solve the potential deadlock problem when reducing exclusive buffers to zero
flinkbot edited a comment on issue #11351: [FLINK-16404][runtime] Solve the potential deadlock problem when reducing exclusive buffers to zero URL: https://github.com/apache/flink/pull/11351#issuecomment-596351676 ## CI report: * 715889a35cfcc3aaf1b17f39dadaa86f755cc75d UNKNOWN * 75745bb56c70eac5bbb2e5300097bb6a8c7bb59d Travis: [CANCELED](https://travis-ci.com/flink-ci/flink/builds/152561417) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6096) * 548b22258f5e87fd53b45c7f4bb6de40bfd4e6d2 UNKNOWN * b797d2725d26d67674de8339e6d2714cf5ae98f3 Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/152575529) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6107)
[GitHub] [flink] JingsongLi commented on issue #11328: [FLINK-16455][hive] Introduce flink-sql-connector-hive modules to provide hive uber jars
JingsongLi commented on issue #11328: [FLINK-16455][hive] Introduce flink-sql-connector-hive modules to provide hive uber jars URL: https://github.com/apache/flink/pull/11328#issuecomment-596911964 @flinkbot run travis
[GitHub] [flink] flinkbot edited a comment on issue #11342: [FLINK-16483][python] Add Python building blocks to make sure the basic functionality of vectorized Python UDF could work
flinkbot edited a comment on issue #11342: [FLINK-16483][python] Add Python building blocks to make sure the basic functionality of vectorized Python UDF could work URL: https://github.com/apache/flink/pull/11342#issuecomment-596093049 ## CI report: * c506306f83eec4450719dfdfb2fe205a3fb69857 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/152569293) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6103)
[GitHub] [flink] flinkbot edited a comment on issue #11351: [FLINK-16404][runtime] Solve the potential deadlock problem when reducing exclusive buffers to zero
flinkbot edited a comment on issue #11351: [FLINK-16404][runtime] Solve the potential deadlock problem when reducing exclusive buffers to zero URL: https://github.com/apache/flink/pull/11351#issuecomment-596351676 ## CI report: * 715889a35cfcc3aaf1b17f39dadaa86f755cc75d UNKNOWN * 75745bb56c70eac5bbb2e5300097bb6a8c7bb59d Travis: [CANCELED](https://travis-ci.com/flink-ci/flink/builds/152561417) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6096) * 548b22258f5e87fd53b45c7f4bb6de40bfd4e6d2 UNKNOWN * b797d2725d26d67674de8339e6d2714cf5ae98f3 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on issue #10526: [FLINK-15090][build] Reverse the dependency from flink-streaming-java to flink-client
flinkbot edited a comment on issue #10526: [FLINK-15090][build] Reverse the dependency from flink-streaming-java to flink-client URL: https://github.com/apache/flink/pull/10526#issuecomment-564436777 ## CI report: * 4ce7cfd0e007e050facf3a61b0c5519519f4feee UNKNOWN * c6a292a50427fa62dae2ff5cf7aeacfcd7920d46 UNKNOWN * ae66b0b34506d4addd902307ea18dae6834622b1 UNKNOWN * 0e9b5dba5f79e7a86f616a50efc9ec2ae41b429e UNKNOWN * 46e1b3bb21ff4363ad8bccfb595eb2e63ecd76ff Travis: [CANCELED](https://travis-ci.com/flink-ci/flink/builds/152570377) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6105) * 8ced0c9d23a3878b055db4710088c4bcfb0ce907 UNKNOWN
[jira] [Closed] (FLINK-16167) update python_shell document execution
[ https://issues.apache.org/jira/browse/FLINK-16167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hequn Cheng closed FLINK-16167. --- Resolution: Resolved > update python_shell document execution > -- > > Key: FLINK-16167 > URL: https://issues.apache.org/jira/browse/FLINK-16167 > Project: Flink > Issue Type: Improvement > Components: Documentation >Reporter: yuwenbing >Assignee: yuwenbing >Priority: Major > Labels: pull-request-available > Fix For: 1.9.3, 1.10.1, 1.11.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Hi guys, > [https://ci.apache.org/projects/flink/flink-docs-master/ops/python_shell.html] > , the execution "bin/pyflink-shell.sh local" above the line "in the > root directory of your binary Flink directory. To run the Shell on a cluster, > please see the Setup section below." needs to be updated to > "pyflink-shell.sh local"
[jira] [Updated] (FLINK-16167) update python_shell document execution
[ https://issues.apache.org/jira/browse/FLINK-16167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hequn Cheng updated FLINK-16167: Fix Version/s: 1.9.3
[jira] [Comment Edited] (FLINK-16167) update python_shell document execution
[ https://issues.apache.org/jira/browse/FLINK-16167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055578#comment-17055578 ] Hequn Cheng edited comment on FLINK-16167 at 3/10/20, 5:12 AM: --- Resolved in 1.9.3 via 22d177ad5efa93d4c428dae076a02de7fe95d1e2 in 1.10.1 via 1046a474612df5479289701705a67e9ddafa41fd in 1.11.0 via 8f8e35815c917616f98c13d056f20fefe36098f3 was (Author: hequn8128): Resolved in 1.10.1 via 1046a474612df5479289701705a67e9ddafa41fd in 1.11.0 via 8f8e35815c917616f98c13d056f20fefe36098f3
[jira] [Commented] (FLINK-16167) update python_shell document execution
[ https://issues.apache.org/jira/browse/FLINK-16167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055578#comment-17055578 ] Hequn Cheng commented on FLINK-16167: - Resolved in 1.10.1 via 1046a474612df5479289701705a67e9ddafa41fd in 1.11.0 via 8f8e35815c917616f98c13d056f20fefe36098f3
[jira] [Updated] (FLINK-16167) update python_shell document execution
[ https://issues.apache.org/jira/browse/FLINK-16167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hequn Cheng updated FLINK-16167: Fix Version/s: 1.11.0 1.10.1
[jira] [Commented] (FLINK-16167) update python_shell document execution
[ https://issues.apache.org/jira/browse/FLINK-16167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055574#comment-17055574 ] Hequn Cheng commented on FLINK-16167: - [~yuwenbing] Thanks a lot for the contribution.
[GitHub] [flink] hequn8128 closed pull request #11142: [FLINK-16167][python][doc] update python_shell document execution
hequn8128 closed pull request #11142: [FLINK-16167][python][doc] update python_shell document execution URL: https://github.com/apache/flink/pull/11142
[GitHub] [flink] flinkbot edited a comment on issue #11358: [FLINK-16516][python] Remove Python UDF Codegen Code
flinkbot edited a comment on issue #11358: [FLINK-16516][python] Remove Python UDF Codegen Code URL: https://github.com/apache/flink/pull/11358#issuecomment-596885127 ## CI report: * cf3f3aac4aa052f71bd4954fd55b48c84350edf6 Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/152569306) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6104)
[GitHub] [flink] hequn8128 commented on issue #11142: [FLINK-16167][python][doc] update python_shell document execution
hequn8128 commented on issue #11142: [FLINK-16167][python][doc] update python_shell document execution URL: https://github.com/apache/flink/pull/11142#issuecomment-596899058 @jingwen-ywb Thanks a lot for the update. Looks good to me. Will do some minor changes during merge. Merging...
[GitHub] [flink] flinkbot edited a comment on issue #11356: [FLINK-15000] [web] WebUI metrics is very slow in large parallelism
flinkbot edited a comment on issue #11356: [FLINK-15000] [web] WebUI metrics is very slow in large parallelism URL: https://github.com/apache/flink/pull/11356#issuecomment-596856858 ## CI report: * 8917c0690a4d84ac1ca15a4ca4f68af921f8bdc1 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/152561426) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6097)
[jira] [Comment Edited] (FLINK-16018) Improve error reporting when submitting batch job (instead of AskTimeoutException)
[ https://issues.apache.org/jira/browse/FLINK-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055566#comment-17055566 ] Zili Chen edited comment on FLINK-16018 at 3/10/20, 4:24 AM: - A rough thought is that we respect the {{timeout}} parameter in {{Dispatcher#submitJob}}, keep a field that tracks submission progress, and complete the future on timeout with that field (stringified in {{JobSubmissionException}}). was (Author: tison): A general thought is we respect {{timeout}} parameter in {{Dispatcher#submitJob}}, having a field that helps determine the progress, and complete the future on Timeout with that field(stringified in {{JobSubmissionException}}). > Improve error reporting when submitting batch job (instead of > AskTimeoutException) > -- > > Key: FLINK-16018 > URL: https://issues.apache.org/jira/browse/FLINK-16018 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Affects Versions: 1.9.2, 1.10.0 >Reporter: Robert Metzger >Priority: Blocker > Fix For: 1.10.1, 1.11.0 > > > While debugging the {{Shaded Hadoop S3A end-to-end test (minio)}} pre-commit > test, I noticed that the JobSubmission is not producing very helpful error > messages. > Environment: > - A simple batch wordcount job > - an unavailable minio s3 filesystem service > What happens from a user's perspective: > - The job submission fails after 10 seconds with an AskTimeoutException: > {code} > 2020-02-07T11:38:27.1189393Z akka.pattern.AskTimeoutException: Ask timed out > on [Actor[akka://flink/user/dispatcher#-939201095]] after [1 ms]. Message > of type [org.apache.flink.runtime.rpc.messages.LocalFencedMessage]. A typical > reason for `AskTimeoutException` is that the recipient actor didn't send a > reply. 
> 2020-02-07T11:38:27.1189538Z at > akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635) > 2020-02-07T11:38:27.1189616Z at > akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635) > 2020-02-07T11:38:27.1189713Z at > akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:648) > 2020-02-07T11:38:27.1189789Z at > akka.actor.Scheduler$$anon$4.run(Scheduler.scala:205) > 2020-02-07T11:38:27.1189883Z at > scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601) > 2020-02-07T11:38:27.1189973Z at > scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109) > 2020-02-07T11:38:27.1190067Z at > scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599) > 2020-02-07T11:38:27.1190159Z at > akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328) > 2020-02-07T11:38:27.1190267Z at > akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279) > 2020-02-07T11:38:27.1190358Z at > akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283) > 2020-02-07T11:38:27.1190465Z at > akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235) > 2020-02-07T11:38:27.1190540Z at java.lang.Thread.run(Thread.java:748) > {code} > What a user would expect: > - An error message indicating why the job submission failed. -- This message was sent by Atlassian Jira (v8.3.4#803005)
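The fix sketched in the comment above — remembering how far submission progressed and reporting that stage when the ask times out — can be illustrated with a small, hedged Python sketch. All names here (such as `SubmissionTracker`) are invented for illustration; Flink's actual `Dispatcher` is Java code driven by Akka asks, not this API.

```python
import concurrent.futures
import threading
import time


class SubmissionTracker:
    """Records the last completed submission stage so that a timeout
    error can report how far the submission actually got."""

    def __init__(self):
        self.stage = "request received"

    def advance(self, stage):
        self.stage = stage


def submit_with_timeout(work, tracker, timeout):
    """Run `work` asynchronously; on timeout, raise an error that embeds
    the last stage recorded in `tracker` (the analogue of stringifying
    progress into a JobSubmissionException)."""
    future = concurrent.futures.Future()

    def run():
        try:
            work(tracker)
            future.set_result("submitted")
        except Exception as e:  # propagate real failures to the caller
            future.set_exception(e)

    threading.Thread(target=run, daemon=True).start()
    try:
        return future.result(timeout=timeout)
    except concurrent.futures.TimeoutError:
        raise TimeoutError(
            "Job submission timed out; last completed stage: " + tracker.stage)
```

With this shape, a stalled submission fails with a message naming the last completed stage instead of a bare ask timeout, which is the user-facing improvement the ticket asks for.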
[GitHub] [flink] flinkbot edited a comment on issue #10526: [FLINK-15090][build] Reverse the dependency from flink-streaming-java to flink-client
flinkbot edited a comment on issue #10526: [FLINK-15090][build] Reverse the dependency from flink-streaming-java to flink-client URL: https://github.com/apache/flink/pull/10526#issuecomment-564436777 ## CI report: * 4ce7cfd0e007e050facf3a61b0c5519519f4feee UNKNOWN * c6a292a50427fa62dae2ff5cf7aeacfcd7920d46 UNKNOWN * ae66b0b34506d4addd902307ea18dae6834622b1 UNKNOWN * 0e9b5dba5f79e7a86f616a50efc9ec2ae41b429e UNKNOWN * 46e1b3bb21ff4363ad8bccfb595eb2e63ecd76ff Travis: [CANCELED](https://travis-ci.com/flink-ci/flink/builds/152570377) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6105) * 8ced0c9d23a3878b055db4710088c4bcfb0ce907 UNKNOWN
[jira] [Commented] (FLINK-16018) Improve error reporting when submitting batch job (instead of AskTimeoutException)
[ https://issues.apache.org/jira/browse/FLINK-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055566#comment-17055566 ] Zili Chen commented on FLINK-16018: --- A general thought is that we respect the {{timeout}} parameter in {{Dispatcher#submitJob}}, keep a field that tracks submission progress, and complete the future on timeout with that field (stringified in {{JobSubmissionException}}). > Improve error reporting when submitting batch job (instead of > AskTimeoutException) > -- > > Key: FLINK-16018 > URL: https://issues.apache.org/jira/browse/FLINK-16018 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Affects Versions: 1.9.2, 1.10.0 >Reporter: Robert Metzger >Priority: Blocker > Fix For: 1.10.1, 1.11.0 > > > While debugging the {{Shaded Hadoop S3A end-to-end test (minio)}} pre-commit > test, I noticed that the JobSubmission is not producing very helpful error > messages. > Environment: > - A simple batch wordcount job > - an unavailable minio s3 filesystem service > What happens from a user's perspective: > - The job submission fails after 10 seconds with an AskTimeoutException: > {code} > 2020-02-07T11:38:27.1189393Z akka.pattern.AskTimeoutException: Ask timed out > on [Actor[akka://flink/user/dispatcher#-939201095]] after [1 ms]. Message > of type [org.apache.flink.runtime.rpc.messages.LocalFencedMessage]. A typical > reason for `AskTimeoutException` is that the recipient actor didn't send a > reply. 
> 2020-02-07T11:38:27.1189538Z at > akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635) > 2020-02-07T11:38:27.1189616Z at > akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635) > 2020-02-07T11:38:27.1189713Z at > akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:648) > 2020-02-07T11:38:27.1189789Z at > akka.actor.Scheduler$$anon$4.run(Scheduler.scala:205) > 2020-02-07T11:38:27.1189883Z at > scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601) > 2020-02-07T11:38:27.1189973Z at > scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109) > 2020-02-07T11:38:27.1190067Z at > scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599) > 2020-02-07T11:38:27.1190159Z at > akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328) > 2020-02-07T11:38:27.1190267Z at > akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279) > 2020-02-07T11:38:27.1190358Z at > akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283) > 2020-02-07T11:38:27.1190465Z at > akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235) > 2020-02-07T11:38:27.1190540Z at java.lang.Thread.run(Thread.java:748) > {code} > What a user would expect: > - An error message indicating why the job submission failed.
[GitHub] [flink] zenfenan commented on a change in pull request #11307: [FLINK-16371] [BulkWriter] Fix Hadoop Compression BulkWriter
zenfenan commented on a change in pull request #11307: [FLINK-16371] [BulkWriter] Fix Hadoop Compression BulkWriter URL: https://github.com/apache/flink/pull/11307#discussion_r390087580 ## File path: flink-formats/flink-compress/src/main/java/org/apache/flink/formats/compress/writers/HadoopCompressionBulkWriter.java ## @@ -19,48 +19,40 @@ package org.apache.flink.formats.compress.writers; import org.apache.flink.api.common.serialization.BulkWriter; -import org.apache.flink.core.fs.FSDataOutputStream; import org.apache.flink.formats.compress.extractor.Extractor; -import org.apache.hadoop.io.compress.CompressionCodec; import org.apache.hadoop.io.compress.CompressionOutputStream; import java.io.IOException; /** - * A {@link BulkWriter} implementation that compresses data using Hadoop codecs. + * A {@link BulkWriter} implementation that writes data that have been + * compressed using Hadoop {@link org.apache.hadoop.io.compress.CompressionCodec}. * * @param The type of element to write. */ public class HadoopCompressionBulkWriter implements BulkWriter { private Extractor extractor; - private FSDataOutputStream outputStream; - private CompressionOutputStream compressor; + private CompressionOutputStream out; Review comment: Done
[GitHub] [flink] zenfenan commented on a change in pull request #11307: [FLINK-16371] [BulkWriter] Fix Hadoop Compression BulkWriter
zenfenan commented on a change in pull request #11307: [FLINK-16371] [BulkWriter] Fix Hadoop Compression BulkWriter URL: https://github.com/apache/flink/pull/11307#discussion_r390087719 ## File path: flink-formats/flink-compress/src/main/java/org/apache/flink/formats/compress/writers/HadoopCompressionBulkWriter.java ## @@ -19,48 +19,40 @@ package org.apache.flink.formats.compress.writers; import org.apache.flink.api.common.serialization.BulkWriter; -import org.apache.flink.core.fs.FSDataOutputStream; import org.apache.flink.formats.compress.extractor.Extractor; -import org.apache.hadoop.io.compress.CompressionCodec; import org.apache.hadoop.io.compress.CompressionOutputStream; import java.io.IOException; /** - * A {@link BulkWriter} implementation that compresses data using Hadoop codecs. + * A {@link BulkWriter} implementation that writes data that have been + * compressed using Hadoop {@link org.apache.hadoop.io.compress.CompressionCodec}. * * @param The type of element to write. */ public class HadoopCompressionBulkWriter implements BulkWriter { private Extractor extractor; - private FSDataOutputStream outputStream; - private CompressionOutputStream compressor; + private CompressionOutputStream out; - public HadoopCompressionBulkWriter( - FSDataOutputStream outputStream, - Extractor extractor, - CompressionCodec compressionCodec) throws Exception { - this.outputStream = outputStream; + public HadoopCompressionBulkWriter(CompressionOutputStream out, Extractor extractor) { + this.out = out; Review comment: Done.
[GitHub] [flink] zenfenan commented on a change in pull request #11307: [FLINK-16371] [BulkWriter] Fix Hadoop Compression BulkWriter
zenfenan commented on a change in pull request #11307: [FLINK-16371] [BulkWriter] Fix Hadoop Compression BulkWriter URL: https://github.com/apache/flink/pull/11307#discussion_r390087527 ## File path: flink-formats/flink-compress/src/main/java/org/apache/flink/formats/compress/writers/HadoopCompressionBulkWriter.java ## @@ -19,48 +19,40 @@ package org.apache.flink.formats.compress.writers; import org.apache.flink.api.common.serialization.BulkWriter; -import org.apache.flink.core.fs.FSDataOutputStream; import org.apache.flink.formats.compress.extractor.Extractor; -import org.apache.hadoop.io.compress.CompressionCodec; import org.apache.hadoop.io.compress.CompressionOutputStream; import java.io.IOException; /** - * A {@link BulkWriter} implementation that compresses data using Hadoop codecs. + * A {@link BulkWriter} implementation that writes data that have been + * compressed using Hadoop {@link org.apache.hadoop.io.compress.CompressionCodec}. * * @param The type of element to write. */ public class HadoopCompressionBulkWriter implements BulkWriter { private Extractor extractor; Review comment: Done
[GitHub] [flink] zenfenan commented on a change in pull request #11307: [FLINK-16371] [BulkWriter] Fix Hadoop Compression BulkWriter
zenfenan commented on a change in pull request #11307: [FLINK-16371] [BulkWriter] Fix Hadoop Compression BulkWriter URL: https://github.com/apache/flink/pull/11307#discussion_r390087748 ## File path: flink-formats/flink-compress/src/main/java/org/apache/flink/formats/compress/CompressWriterFactory.java ## @@ -42,39 +47,57 @@ private Extractor extractor; Review comment: Done
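The design being settled in the review comments above is that the bulk writer receives an already-opened compressed output stream instead of building one from a codec itself. That shape can be mirrored in a short, hypothetical Python analogue (gzip stands in for a Hadoop `CompressionCodec`; this is a sketch of the pattern, not Flink's API):

```python
import gzip
import io


class CompressionBulkWriter:
    """Writes extracted bytes through an already-opened compressed stream.

    Mirrors the reviewed design: the caller constructs the compressed
    stream (here a gzip.GzipFile) and hands it in, so the writer only
    serializes elements and delegates the compression.
    """

    def __init__(self, out, extractor):
        self.out = out              # compressed stream, e.g. gzip.GzipFile
        self.extractor = extractor  # element -> bytes

    def add_element(self, element):
        self.out.write(self.extractor(element))

    def flush(self):
        self.out.flush()

    def finish(self):
        # Finalize the compressed stream (writes the gzip trailer);
        # the underlying sink stays open for the caller.
        self.out.close()
```

Usage: wrap a sink such as `io.BytesIO` in `gzip.GzipFile(fileobj=sink, mode="wb")`, pass it to the writer together with an element-to-bytes extractor, and call `finish()` once all elements are added.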
[GitHub] [flink] flinkbot edited a comment on issue #11357: [FLINK-16476] [table-planner-blink] Remove PowerMockito to avoid LinkageError in SelectivityEstimatorTest
flinkbot edited a comment on issue #11357: [FLINK-16476] [table-planner-blink] Remove PowerMockito to avoid LinkageError in SelectivityEstimatorTest URL: https://github.com/apache/flink/pull/11357#issuecomment-596871878 ## CI report: * 6fb60fcf06b43330aea1ea022423e0be3615b228 UNKNOWN * dbfd74200214880c119e046f04a9a89732f19f15 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/152565192) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6100)
[jira] [Commented] (FLINK-16489) Use flink on yarn,RM restart AM,but the flink job is not restart from the saved checkpoint.
[ https://issues.apache.org/jira/browse/FLINK-16489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055563#comment-17055563 ] liangji commented on FLINK-16489: - Thank you. After adding JM HA, this problem is solved. > Use flink on yarn,RM restart AM,but the flink job is not restart from the > saved checkpoint. > --- > > Key: FLINK-16489 > URL: https://issues.apache.org/jira/browse/FLINK-16489 > Project: Flink > Issue Type: Bug >Reporter: liangji >Priority: Major > Attachments: image-2020-03-09-18-06-59-710.png > > > 1. Environment > a. flink-1.9.0 > b. yarn version > Hadoop 2.6.0-cdh5.5.0 > Subversion [http://github.com/cloudera/hadoop] -r > fd21232cef7b8c1f536965897ce20f50b83ee7b2 > Compiled by jenkins on 2015-11-09T20:39Z > Compiled with protoc 2.5.0 > From source with checksum 98e07176d1787150a6a9c087627562c > This command was run using > /opt/cloudera/parcels/CDH-5.5.0-1.cdh5.5.0.p0.8/jars/hadoop-common-2.6.0-cdh5.5.0.jar > c. we enable flink checkpoint and use the default configuration for flink > checkpoint > 2. Problem repetition > a. Make AM run in node1; > b. Do NM decommission for node1 > 3. Problem > !image-2020-03-09-18-06-59-710.png! > We can see from the pic above that the last AM saved chk-1522 at 2020-03-04 14:12:48. > Then the second AM restarted with chk-1, but in the end we found the data was not > correct. So we restarted the application from chk-1522 manually with flink > cli -s, and then we confirmed the data was right. > Following the steps above, we find that the AM restarted, but the flink job did not restart from > the saved checkpoint. So is it normal, or are there some configurations that I have not configured?
[jira] [Closed] (FLINK-16489) Use flink on yarn,RM restart AM,but the flink job is not restart from the saved checkpoint.
[ https://issues.apache.org/jira/browse/FLINK-16489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] liangji closed FLINK-16489. --- Resolution: Fixed > Use flink on yarn,RM restart AM,but the flink job is not restart from the > saved checkpoint. > --- > > Key: FLINK-16489 > URL: https://issues.apache.org/jira/browse/FLINK-16489 > Project: Flink > Issue Type: Bug >Reporter: liangji >Priority: Major > Attachments: image-2020-03-09-18-06-59-710.png > > > 1. Environment > a. flink-1.9.0 > b. yarn version > Hadoop 2.6.0-cdh5.5.0 > Subversion [http://github.com/cloudera/hadoop] -r > fd21232cef7b8c1f536965897ce20f50b83ee7b2 > Compiled by jenkins on 2015-11-09T20:39Z > Compiled with protoc 2.5.0 > From source with checksum 98e07176d1787150a6a9c087627562c > This command was run using > /opt/cloudera/parcels/CDH-5.5.0-1.cdh5.5.0.p0.8/jars/hadoop-common-2.6.0-cdh5.5.0.jar > c. we enable flink checkpoint and use the default configuration for flink > checkpoint > 2. Problem repetition > a. Make AM run in node1; > b. Do NM decommission for node1 > 3. Problem > !image-2020-03-09-18-06-59-710.png! > We can see from the pic above that the last AM saved chk-1522 at 2020-03-04 14:12:48. > Then the second AM restarted with chk-1, but in the end we found the data was not > correct. So we restarted the application from chk-1522 manually with flink > cli -s, and then we confirmed the data was right. > Following the steps above, we find that the AM restarted, but the flink job did not restart from > the saved checkpoint. So is it normal, or are there some configurations that I have not configured?
[GitHub] [flink] flinkbot edited a comment on issue #10526: [FLINK-15090][build] Reverse the dependency from flink-streaming-java to flink-client
flinkbot edited a comment on issue #10526: [FLINK-15090][build] Reverse the dependency from flink-streaming-java to flink-client URL: https://github.com/apache/flink/pull/10526#issuecomment-564436777 ## CI report: * 4ce7cfd0e007e050facf3a61b0c5519519f4feee UNKNOWN * c6a292a50427fa62dae2ff5cf7aeacfcd7920d46 UNKNOWN * ae66b0b34506d4addd902307ea18dae6834622b1 UNKNOWN * 0e9b5dba5f79e7a86f616a50efc9ec2ae41b429e UNKNOWN * 349a92980417c26f063e0eebaf1a7760bbaaa419 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/152242766) * 46e1b3bb21ff4363ad8bccfb595eb2e63ecd76ff Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/152570377) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6105) * 8ced0c9d23a3878b055db4710088c4bcfb0ce907 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on issue #11358: [FLINK-16516][python] Remove Python UDF Codegen Code
flinkbot edited a comment on issue #11358: [FLINK-16516][python] Remove Python UDF Codegen Code URL: https://github.com/apache/flink/pull/11358#issuecomment-596885127 ## CI report: * cf3f3aac4aa052f71bd4954fd55b48c84350edf6 Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/152569306) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6104)
[GitHub] [flink] flinkbot edited a comment on issue #11342: [FLINK-16483][python] Add Python building blocks to make sure the basic functionality of vectorized Python UDF could work
flinkbot edited a comment on issue #11342: [FLINK-16483][python] Add Python building blocks to make sure the basic functionality of vectorized Python UDF could work URL: https://github.com/apache/flink/pull/11342#issuecomment-596093049 ## CI report: * 27f93cc605b980ae0ce93b1d5503031ae98f6cd9 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/152267927) * c506306f83eec4450719dfdfb2fe205a3fb69857 Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/152569293) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6103)
[GitHub] [flink] flinkbot edited a comment on issue #11142: [FLINK-16167][python][doc] update python_shell document execution
flinkbot edited a comment on issue #11142: [FLINK-16167][python][doc] update python_shell document execution URL: https://github.com/apache/flink/pull/11142#issuecomment-588249138 ## CI report: * 7ad5d6fcd65591e484cafd878fb1601403a070db Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/152564245) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6099)
[GitHub] [flink] flinkbot edited a comment on issue #10526: [FLINK-15090][build] Reverse the dependency from flink-streaming-java to flink-client
flinkbot edited a comment on issue #10526: [FLINK-15090][build] Reverse the dependency from flink-streaming-java to flink-client URL: https://github.com/apache/flink/pull/10526#issuecomment-564436777 ## CI report: * 4ce7cfd0e007e050facf3a61b0c5519519f4feee UNKNOWN * c6a292a50427fa62dae2ff5cf7aeacfcd7920d46 UNKNOWN * ae66b0b34506d4addd902307ea18dae6834622b1 UNKNOWN * 0e9b5dba5f79e7a86f616a50efc9ec2ae41b429e UNKNOWN * 349a92980417c26f063e0eebaf1a7760bbaaa419 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/152242766) * 46e1b3bb21ff4363ad8bccfb595eb2e63ecd76ff UNKNOWN
[GitHub] [flink] TisonKun commented on issue #10526: [FLINK-15090][build] Reverse the dependency from flink-streaming-java to flink-client
TisonKun commented on issue #10526: [FLINK-15090][build] Reverse the dependency from flink-streaming-java to flink-client URL: https://github.com/apache/flink/pull/10526#issuecomment-596887194 @aljoscha Thanks for your review! I've addressed your comments inline. As for moving the tests to `flink-tests`: I agree that throwing everything into a full-dependencies test module is bad practice. However, these tests don't belong in `flink-client`, because they are primarily tests for the streaming API; it just happens that we need to load services that live in `flink-client`. See also the ExecutionEnvironment tests in `flink-tests`. If we want to clean up dependencies for testing, then for long-term maintenance I'd prefer restructuring `flink-tests`, `flink-yarn-tests`, and `flink-fs-tests` under a parent module and separating these environment-related tests into a submodule.
[GitHub] [flink] KurtYoung commented on a change in pull request #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI
KurtYoung commented on a change in pull request #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI URL: https://github.com/apache/flink/pull/11334#discussion_r390079440 ## File path: flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliTableauResultViewTest.java ## @@ -156,19 +176,21 @@ public void testBatchResult() { view.displayBatchResults(); view.close(); - + // note: about Review comment: what's this?
[GitHub] [flink] KurtYoung commented on a change in pull request #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI
KurtYoung commented on a change in pull request #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI URL: https://github.com/apache/flink/pull/11334#discussion_r390076434 ## File path: flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliTableauResultView.java ## @@ -276,13 +277,13 @@ private void printSingleRow(int[] colWidths, String[] cols) { sb.append("|"); int idx = 0; for (String col : cols) { - byte[] colBytes = getUTF8Bytes(col); sb.append(" "); - if (colBytes.length <= colWidths[idx]) { - sb.append(StringUtils.repeat(' ', colWidths[idx] - colBytes.length)); + int colWidth = getStringWidth(col); + if (colWidth <= colWidths[idx]) { + sb.append(StringUtils.repeat(' ', colWidths[idx] - colWidth)); sb.append(col); } else { - sb.append(subMaxString(col, colWidths[idx])); + sb.append(getFixedString(col, colWidths[idx])); Review comment: `getFixedString` => `truncateString`?
[GitHub] [flink] KurtYoung commented on a change in pull request #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI
KurtYoung commented on a change in pull request #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI URL: https://github.com/apache/flink/pull/11334#discussion_r390074844 ## File path: flink-table/flink-sql-client/pom.xml ## @@ -109,6 +109,13 @@ under the License. 3.9.0 + + + com.ibm.icu Review comment: Should we also shade this jar into the `flink-sql-client` jar? If yes, please also update the license files.
[GitHub] [flink] KurtYoung commented on a change in pull request #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI
KurtYoung commented on a change in pull request #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI URL: https://github.com/apache/flink/pull/11334#discussion_r390079193 ## File path: flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliTableauResultView.java ## @@ -276,13 +277,13 @@ private void printSingleRow(int[] colWidths, String[] cols) { sb.append("|"); int idx = 0; for (String col : cols) { - byte[] colBytes = getUTF8Bytes(col); sb.append(" "); - if (colBytes.length <= colWidths[idx]) { - sb.append(StringUtils.repeat(' ', colWidths[idx] - colBytes.length)); + int colWidth = getStringWidth(col); + if (colWidth <= colWidths[idx]) { + sb.append(StringUtils.repeat(' ', colWidths[idx] - colWidth)); sb.append(col); } else { - sb.append(subMaxString(col, colWidths[idx])); + sb.append(getFixedString(col, colWidths[idx])); Review comment: and the second parameter can be replaced with `colWidths[idx] - COLUMN_TRUNCATED_FLAG_WIDTH` as `targetLength`
[GitHub] [flink] KurtYoung commented on a change in pull request #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI
KurtYoung commented on a change in pull request #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI URL: https://github.com/apache/flink/pull/11334#discussion_r390076202 ## File path: flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliTableauResultView.java ## @@ -276,13 +277,13 @@ private void printSingleRow(int[] colWidths, String[] cols) { sb.append("|"); int idx = 0; for (String col : cols) { - byte[] colBytes = getUTF8Bytes(col); sb.append(" "); - if (colBytes.length <= colWidths[idx]) { - sb.append(StringUtils.repeat(' ', colWidths[idx] - colBytes.length)); + int colWidth = getStringWidth(col); Review comment: `int displayWidth = getStringDisplayWidth(col)` would be clearer and more accurate
[GitHub] [flink] KurtYoung commented on a change in pull request #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI
KurtYoung commented on a change in pull request #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI URL: https://github.com/apache/flink/pull/11334#discussion_r390075494 ## File path: flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliTableauResultView.java ## @@ -60,9 +61,9 @@ private static final int NULL_COLUMN_WIDTH = CliStrings.NULL_COLUMN.length(); private static final int MAX_COLUMN_WIDTH = 30; private static final int DEFAULT_COLUMN_WIDTH = 20; + private static final int COLUMN_TRUNCATED_FLAG_WIDTH = 3; Review comment: use `COLUMN_TRUNCATED_FLAG.length()` instead?
[GitHub] [flink] KurtYoung commented on a change in pull request #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI
KurtYoung commented on a change in pull request #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI URL: https://github.com/apache/flink/pull/11334#discussion_r390079359 ## File path: flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliUtils.java ## @@ -111,4 +113,33 @@ public static void normalizeColumn(AttributedStringBuilder sb, String col, int m } return typesAsString; } + + public static int getStringWidth(String str) { Review comment: please add unit tests for both newly added methods.
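The review thread above is about measuring display width rather than UTF-8 byte length, so that full-width CJK characters (which occupy two terminal columns) do not shift the tableau borders. Below is a minimal, hypothetical sketch of such a helper together with the kind of unit check requested above; the class, method, and constant names are illustrative and this is not Flink's actual `getStringWidth` implementation:

```java
// Hypothetical sketch of a display-width helper; not Flink's actual code.
public final class DisplayWidthSketch {

    private DisplayWidthSketch() {}

    /**
     * Returns the number of terminal columns the string occupies,
     * counting full-width (e.g. CJK) characters as two columns.
     */
    public static int getStringDisplayWidth(String str) {
        int width = 0;
        for (int i = 0; i < str.length(); i++) {
            width += isFullWidth(str.charAt(i)) ? 2 : 1;
        }
        return width;
    }

    /**
     * Rough full-width check based on Unicode blocks. A complete
     * implementation would consult the Unicode East Asian Width data.
     */
    private static boolean isFullWidth(char c) {
        Character.UnicodeBlock block = Character.UnicodeBlock.of(c);
        return block == Character.UnicodeBlock.CJK_UNIFIED_IDEOGRAPHS
                || block == Character.UnicodeBlock.CJK_SYMBOLS_AND_PUNCTUATION
                || block == Character.UnicodeBlock.HIRAGANA
                || block == Character.UnicodeBlock.KATAKANA;
    }

    public static void main(String[] args) {
        // ASCII counts one column per char; Chinese counts two.
        System.out.println(getStringDisplayWidth("abc"));  // 3
        System.out.println(getStringDisplayWidth("中文")); // 4
    }
}
```

With a helper like this, the padding in `printSingleRow` becomes `colWidths[idx] - getStringDisplayWidth(col)` instead of subtracting the UTF-8 byte count, which is what caused the misalignment for Chinese strings.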
[GitHub] [flink] TisonKun commented on a change in pull request #10526: [FLINK-15090][build] Reverse the dependency from flink-streaming-java to flink-client
TisonKun commented on a change in pull request #10526: [FLINK-15090][build] Reverse the dependency from flink-streaming-java to flink-client URL: https://github.com/apache/flink/pull/10526#discussion_r390079194 ## File path: flink-clients/src/main/java/org/apache/flink/client/ClientUtils.java ## @@ -128,16 +128,23 @@ public static void executeProgram( LOG.info("Starting program (detached: {})", !configuration.getBoolean(DeploymentOptions.ATTACHED)); - ContextEnvironmentFactory factory = new ContextEnvironmentFactory( + ContextEnvironment benv = new ContextEnvironment( Review comment: Updated.
[GitHub] [flink] flinkbot edited a comment on issue #11357: [FLINK-16476] [table-planner-blink] Remove PowerMockito to avoid LinkageError in SelectivityEstimatorTest
flinkbot edited a comment on issue #11357: [FLINK-16476] [table-planner-blink] Remove PowerMockito to avoid LinkageError in SelectivityEstimatorTest URL: https://github.com/apache/flink/pull/11357#issuecomment-596871878 ## CI report: * 6fb60fcf06b43330aea1ea022423e0be3615b228 UNKNOWN * dbfd74200214880c119e046f04a9a89732f19f15 Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/152565192) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6100) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot commented on issue #11358: [FLINK-16516][python] Remove Python UDF Codegen Code
flinkbot commented on issue #11358: [FLINK-16516][python] Remove Python UDF Codegen Code URL: https://github.com/apache/flink/pull/11358#issuecomment-596885127 ## CI report: * cf3f3aac4aa052f71bd4954fd55b48c84350edf6 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on issue #11342: [FLINK-16483][python] Add Python building blocks to make sure the basic functionality of vectorized Python UDF could work
flinkbot edited a comment on issue #11342: [FLINK-16483][python] Add Python building blocks to make sure the basic functionality of vectorized Python UDF could work URL: https://github.com/apache/flink/pull/11342#issuecomment-596093049 ## CI report: * 27f93cc605b980ae0ce93b1d5503031ae98f6cd9 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/152267927) * c506306f83eec4450719dfdfb2fe205a3fb69857 UNKNOWN
[jira] [Updated] (FLINK-15131) Add Source API classes
[ https://issues.apache.org/jira/browse/FLINK-15131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiangjie Qin updated FLINK-15131: - Fix Version/s: 1.11.0 > Add Source API classes > -- > > Key: FLINK-15131 > URL: https://issues.apache.org/jira/browse/FLINK-15131 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Reporter: Jiangjie Qin >Assignee: Jiangjie Qin >Priority: Major > Labels: pull-request-available > Fix For: 1.11.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Add all the top tier classes defined in FLIP-27. > [https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface#FLIP-27:RefactorSourceInterface-Toplevelpublicinterfaces] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (FLINK-15131) Add Source API classes
[ https://issues.apache.org/jira/browse/FLINK-15131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiangjie Qin resolved FLINK-15131. -- Resolution: Fixed Merged to master. 5a81fa0766abe3ef6c0e6b8f9217e9adcd18095f
[GitHub] [flink] becketqin closed pull request #10486: [FLINK-15131][connector/source] Add the APIs for Source (FLIP-27).
becketqin closed pull request #10486: [FLINK-15131][connector/source] Add the APIs for Source (FLIP-27). URL: https://github.com/apache/flink/pull/10486
[GitHub] [flink] becketqin commented on issue #10486: [FLINK-15131][connector/source] Add the APIs for Source (FLIP-27).
becketqin commented on issue #10486: [FLINK-15131][connector/source] Add the APIs for Source (FLIP-27). URL: https://github.com/apache/flink/pull/10486#issuecomment-596881891 Thanks for the review. @StephanEwen @wuchong merged to master. 5a81fa0766abe3ef6c0e6b8f9217e9adcd18095f
[GitHub] [flink] flinkbot edited a comment on issue #11357: [FLINK-16476] [table-planner-blink] Remove PowerMockito to avoid LinkageError in SelectivityEstimatorTest
flinkbot edited a comment on issue #11357: [FLINK-16476] [table-planner-blink] Remove PowerMockito to avoid LinkageError in SelectivityEstimatorTest URL: https://github.com/apache/flink/pull/11357#issuecomment-596871878 ## CI report: * 6fb60fcf06b43330aea1ea022423e0be3615b228 UNKNOWN * dbfd74200214880c119e046f04a9a89732f19f15 Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/152565192) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6100)
[GitHub] [flink] flinkbot edited a comment on issue #11328: [FLINK-16455][hive] Introduce flink-sql-connector-hive modules to provide hive uber jars
flinkbot edited a comment on issue #11328: [FLINK-16455][hive] Introduce flink-sql-connector-hive modules to provide hive uber jars URL: https://github.com/apache/flink/pull/11328#issuecomment-595621789 ## CI report: * f1d94b9792a597e9b935fe08001930a3eb27d789 UNKNOWN * b4a76d76d2c1e9722befabc03b2191d053c70fa8 Travis: [FAILURE](https://travis-ci.com/flink-ci/flink/builds/152563303) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6098)
[GitHub] [flink] dianfu commented on issue #11342: [FLINK-16483][python] Add Python building blocks to make sure the basic functionality of vectorized Python UDF could work
dianfu commented on issue #11342: [FLINK-16483][python] Add Python building blocks to make sure the basic functionality of vectorized Python UDF could work URL: https://github.com/apache/flink/pull/11342#issuecomment-596881203 @hequn8128 Thanks a lot for the review. Have updated the PR.
[GitHub] [flink] flinkbot commented on issue #11358: [FLINK-16516][python] Remove Python UDF Codegen Code
flinkbot commented on issue #11358: [FLINK-16516][python] Remove Python UDF Codegen Code URL: https://github.com/apache/flink/pull/11358#issuecomment-596880791 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit cf3f3aac4aa052f71bd4954fd55b48c84350edf6 (Tue Mar 10 03:17:34 UTC 2020) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[jira] [Updated] (FLINK-16516) Remove Python UDF Codegen Code
[ https://issues.apache.org/jira/browse/FLINK-16516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-16516: --- Labels: pull-request-available (was: ) > Remove Python UDF Codegen Code > -- > > Key: FLINK-16516 > URL: https://issues.apache.org/jira/browse/FLINK-16516 > Project: Flink > Issue Type: Improvement > Components: API / Python >Reporter: Huang Xingbo >Assignee: Huang Xingbo >Priority: Major > Labels: pull-request-available > Fix For: 1.11.0 > > > Currently we use codegen to generate PythonScalarFunction and > PythonTableFunction, but it is unnecessary. We can directly create a static > PythonScalarFunction and PythonTableFunction.
[GitHub] [flink] HuangXingBo opened a new pull request #11358: [FLINK-16516][python] Remove Python UDF Codegen Code
HuangXingBo opened a new pull request #11358: [FLINK-16516][python] Remove Python UDF Codegen Code URL: https://github.com/apache/flink/pull/11358 ## What is the purpose of the change *This pull request replaces the Python UDF codegen code with static classes* ## Brief change log - *Add PythonScalarFunction.java and PythonTableFunction.java in flink-table-common* - *Remove PythonFunctionCodeGenerator.scala* ## Verifying this change This change can be verified as follows: - *This is not a new feature, so the existing tests are sufficient* ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable)
[jira] [Commented] (FLINK-16476) SelectivityEstimatorTest logs LinkageErrors
[ https://issues.apache.org/jira/browse/FLINK-16476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055547#comment-17055547 ] Jark Wu commented on FLINK-16476: - Good catch [~godfreyhe]! {{org.apache.flink.table.planner.plan.metadata.AggCallSelectivityEstimatorTest}} also uses it, if we want to remove {{PowerMock}}, we should also take this one into account. > SelectivityEstimatorTest logs LinkageErrors > --- > > Key: FLINK-16476 > URL: https://issues.apache.org/jira/browse/FLINK-16476 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.11.0 >Reporter: Robert Metzger >Assignee: godfrey he >Priority: Major > Labels: pull-request-available, test-stability > Time Spent: 10m > Remaining Estimate: 0h > > This is the test run > https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6038=logs=d47ab8d2-10c7-5d9e-8178-ef06a797a0d8=9a1abf5f-7cf4-58c3-bb2a-282a64aebb1f > Log output > {code} > 2020-03-07T00:35:20.1270791Z [INFO] Running > org.apache.flink.table.planner.plan.metadata.SelectivityEstimatorTest > 2020-03-07T00:35:21.6473057Z [INFO] Tests run: 3, Failures: 0, Errors: 0, > Skipped: 0, Time elapsed: 3.408 s - in > org.apache.flink.table.planner.plan.utils.FlinkRexUtilTest > 2020-03-07T00:35:21.6541713Z [INFO] Running > org.apache.flink.table.planner.plan.metadata.FlinkRelMdNonCumulativeCostTest > 2020-03-07T00:35:21.7294613Z [INFO] Tests run: 2, Failures: 0, Errors: 0, > Skipped: 0, Time elapsed: 0.073 s - in > org.apache.flink.table.planner.plan.metadata.FlinkRelMdNonCumulativeCostTest > 2020-03-07T00:35:21.7309958Z [INFO] Running > org.apache.flink.table.planner.plan.metadata.AggCallSelectivityEstimatorTest > 2020-03-07T00:35:23.7443246Z ScriptEngineManager providers.next(): > javax.script.ScriptEngineFactory: Provider > jdk.nashorn.api.scripting.NashornScriptEngineFactory not a subtype > 2020-03-07T00:35:23.8260013Z 2020-03-07 00:35:23,819 main ERROR Could not > reconfigure JMX 
java.lang.LinkageError: loader constraint violation: loader > (instance of > org/powermock/core/classloader/javassist/JavassistMockClassLoader) previously > initiated loading for a different type with name > "javax/management/MBeanServer" > 2020-03-07T00:35:23.8262329Z at java.lang.ClassLoader.defineClass1(Native > Method) > 2020-03-07T00:35:23.8263241Z at > java.lang.ClassLoader.defineClass(ClassLoader.java:757) > 2020-03-07T00:35:23.8264629Z at > org.powermock.core.classloader.javassist.JavassistMockClassLoader.loadUnmockedClass(JavassistMockClassLoader.java:90) > 2020-03-07T00:35:23.8266241Z at > org.powermock.core.classloader.MockClassLoader.loadClassByThisClassLoader(MockClassLoader.java:104) > 2020-03-07T00:35:23.8267808Z at > org.powermock.core.classloader.DeferSupportingClassLoader.loadClass1(DeferSupportingClassLoader.java:147) > 2020-03-07T00:35:23.8269485Z at > org.powermock.core.classloader.DeferSupportingClassLoader.loadClass(DeferSupportingClassLoader.java:98) > 2020-03-07T00:35:23.8270900Z at > java.lang.ClassLoader.loadClass(ClassLoader.java:352) > 2020-03-07T00:35:23.8272000Z at > org.apache.logging.log4j.core.jmx.Server.unregisterAllMatching(Server.java:337) > 2020-03-07T00:35:23.8273779Z at > org.apache.logging.log4j.core.jmx.Server.unregisterLoggerContext(Server.java:261) > 2020-03-07T00:35:23.8275087Z at > org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:165) > 2020-03-07T00:35:23.8276515Z at > org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:141) > 2020-03-07T00:35:23.8278036Z at > org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:590) > 2020-03-07T00:35:23.8279741Z at > org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:651) > 2020-03-07T00:35:23.8281190Z at > org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:668) > 2020-03-07T00:35:23.8282440Z at > 
org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:253) > 2020-03-07T00:35:23.8283717Z at > org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153) > 2020-03-07T00:35:23.8285186Z at > org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45) > 2020-03-07T00:35:23.8286575Z at > org.apache.logging.log4j.LogManager.getContext(LogManager.java:194) > 2020-03-07T00:35:23.8287933Z at > org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:138) > 2020-03-07T00:35:23.8289393Z at > org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:45) > 2020-03-07T00:35:23.8290816Z at >
[GitHub] [flink] godfreyhe commented on a change in pull request #11260: [FLINK-16344][table-planner-blink] Preserve nullability for nested types
godfreyhe commented on a change in pull request #11260: [FLINK-16344][table-planner-blink] Preserve nullability for nested types URL: https://github.com/apache/flink/pull/11260#discussion_r390072796 ## File path: flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/calcite/FlinkTypeFactory.scala ## @@ -293,6 +293,11 @@ class FlinkTypeFactory(typeSystem: RelDataTypeSystem) extends JavaTypeFactoryImp case it: TimeIndicatorRelDataType => new TimeIndicatorRelDataType(it.typeSystem, it.originalType, isNullable, it.isEventTime) + // for nested rows we keep the nullability property, + // top-level rows fall back to Calcite's default handling + case rt: RelRecordType if rt.getStructKind == StructKind.PEEK_FIELDS_NO_EXPAND => Review comment: Make sense to me. I also notice `outer joins` is mentioned in `RelDataTypeFactoryImpl#copyRecordType`'s comment
[jira] [Closed] (FLINK-16273) Solve "sun.misc.Unseafe or java.nio.DirectByteBuffer.(long, int) not available" problem for users
[ https://issues.apache.org/jira/browse/FLINK-16273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hequn Cheng closed FLINK-16273. --- Resolution: Fixed > Solve "sun.misc.Unseafe or java.nio.DirectByteBuffer.(long, int) not > available" problem for users > --- > > Key: FLINK-16273 > URL: https://issues.apache.org/jira/browse/FLINK-16273 > Project: Flink > Issue Type: Sub-task > Components: API / Python >Reporter: Hequn Cheng >Assignee: Dian Fu >Priority: Major > Labels: pull-request-available > Fix For: 1.11.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Currently, the JVM property "io.netty.tryReflectionSetAccessible=true" at > startup should be set for pandas udf users. We should add a document for this > or solve this automatically. BTW, some other discussion about it: ARROW-7223
[jira] [Commented] (FLINK-16273) Solve "sun.misc.Unseafe or java.nio.DirectByteBuffer.(long, int) not available" problem for users
[ https://issues.apache.org/jira/browse/FLINK-16273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055545#comment-17055545 ] Hequn Cheng commented on FLINK-16273: - Fixed in 1.11.0 via f21fa9675b15b8a9673fa529a1d368427f846161
[GitHub] [flink] hequn8128 merged pull request #11341: [FLINK-16273][python] Set io.netty.tryReflectionSetAccessible to true by default
hequn8128 merged pull request #11341: [FLINK-16273][python] Set io.netty.tryReflectionSetAccessible to true by default URL: https://github.com/apache/flink/pull/11341
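FLINK-16273 above is about ensuring `io.netty.tryReflectionSetAccessible=true` is in effect before Arrow/Netty classes load, so pandas UDF users do not have to pass the `-D` flag themselves. A minimal, hypothetical sketch of how a launcher could apply such a default (the class and method names are illustrative; this is not Flink's actual implementation of the fix):

```java
// Illustrative sketch only, not Flink's actual code: default a JVM
// property while still letting an explicit -D flag on the command
// line take precedence.
public final class NettyAccessibleDefault {

    private NettyAccessibleDefault() {}

    static final String KEY = "io.netty.tryReflectionSetAccessible";

    /**
     * Sets the property only if the user has not already configured it,
     * so an explicit -Dio.netty.tryReflectionSetAccessible=... wins.
     * Must run before the first Arrow/Netty class is loaded to matter.
     */
    public static void ensureDefault() {
        if (System.getProperty(KEY) == null) {
            System.setProperty(KEY, "true");
        }
    }

    public static void main(String[] args) {
        ensureDefault();
        System.out.println(KEY + "=" + System.getProperty(KEY));
    }
}
```

Equivalently, the property can be set manually at startup with `-Dio.netty.tryReflectionSetAccessible=true`, which is the workaround the issue description mentions.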
[jira] [Commented] (FLINK-16476) SelectivityEstimatorTest logs LinkageErrors
[ https://issues.apache.org/jira/browse/FLINK-16476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055544#comment-17055544 ] godfrey he commented on FLINK-16476: I created another issue (https://issues.apache.org/jira/browse/FLINK-16519) to report LinkageErrors in {{CheckpointCoordinatorFailureTest}}
[jira] [Assigned] (FLINK-16508) Name the ports exposed by the main Container in Pod
[ https://issues.apache.org/jira/browse/FLINK-16508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zili Chen reassigned FLINK-16508: - Assignee: Canbin Zheng > Name the ports exposed by the main Container in Pod > --- > > Key: FLINK-16508 > URL: https://issues.apache.org/jira/browse/FLINK-16508 > Project: Flink > Issue Type: Improvement > Components: Deployment / Kubernetes >Affects Versions: 1.10.0 >Reporter: Canbin Zheng >Assignee: Canbin Zheng >Priority: Minor > Fix For: 1.10.1, 1.11.0 > > > Currently, we expose some ports via the main Container of the JobManager and > the TaskManager, but we forget to name those ports so that people could be > confused because there is no description of the port usage. This ticket > proposes to explicitly name the ports in the Container to help people > understand the usage of those ports.
[GitHub] [flink] hequn8128 commented on issue #11341: [FLINK-16273][python] Set io.netty.tryReflectionSetAccessible to true by default
hequn8128 commented on issue #11341: [FLINK-16273][python] Set io.netty.tryReflectionSetAccessible to true by default URL: https://github.com/apache/flink/pull/11341#issuecomment-596878216 Merging...
[jira] [Created] (FLINK-16519) CheckpointCoordinatorFailureTest logs LinkageErrors
godfrey he created FLINK-16519: -- Summary: CheckpointCoordinatorFailureTest logs LinkageErrors Key: FLINK-16519 URL: https://issues.apache.org/jira/browse/FLINK-16519 Project: Flink Issue Type: Bug Components: Runtime / Checkpointing Reporter: godfrey he Fix For: 1.11.0 This issue is in https://travis-ci.org/apache/flink/jobs/660152153?utm_medium=notification_source=slack Log output {code:java} 2020-03-09 15:52:14,550 main ERROR Could not reconfigure JMX java.lang.LinkageError: loader constraint violation: loader (instance of org/powermock/core/classloader/javassist/JavassistMockClassLoader) previously initiated loading for a different type with name "javax/management/MBeanServer" at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:757) at org.powermock.core.classloader.javassist.JavassistMockClassLoader.loadUnmockedClass(JavassistMockClassLoader.java:90) at org.powermock.core.classloader.MockClassLoader.loadClassByThisClassLoader(MockClassLoader.java:104) at org.powermock.core.classloader.DeferSupportingClassLoader.loadClass1(DeferSupportingClassLoader.java:147) at org.powermock.core.classloader.DeferSupportingClassLoader.loadClass(DeferSupportingClassLoader.java:98) at java.lang.ClassLoader.loadClass(ClassLoader.java:352) at org.apache.logging.log4j.core.jmx.Server.unregisterAllMatching(Server.java:337) at org.apache.logging.log4j.core.jmx.Server.unregisterLoggerContext(Server.java:261) at org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:165) at org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:141) at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:590) at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:651) at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:668) at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:253) at 
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153) at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45) at org.apache.logging.log4j.LogManager.getContext(LogManager.java:194) at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:138) at org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:45) at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:48) at org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:30) at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:329) at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:349) at org.apache.flink.util.TestLogger.(TestLogger.java:36) at org.apache.flink.runtime.checkpoint.CheckpointCoordinatorFailureTest.(CheckpointCoordinatorFailureTest.java:55) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.createTestInstance(PowerMockJUnit44RunnerDelegateImpl.java:197) at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.createTest(PowerMockJUnit44RunnerDelegateImpl.java:182) at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:204) at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:160) at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:134) at 
org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34) at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44) at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:136) at org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:117) at org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:57) at
[jira] [Closed] (FLINK-16507) "update_branch_version" should also update the version in Stateful Function's Python SDK setup.py file
[ https://issues.apache.org/jira/browse/FLINK-16507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tzu-Li (Gordon) Tai closed FLINK-16507. --- Fix Version/s: statefun-1.1 Resolution: Fixed Merged to master via e2ffb95b6a3de36d5e78cbd29a5f439aaf82e4c7 > "update_branch_version" should also update the version in Stateful Function's > Python SDK setup.py file > -- > > Key: FLINK-16507 > URL: https://issues.apache.org/jira/browse/FLINK-16507 > Project: Flink > Issue Type: Task > Components: Stateful Functions >Affects Versions: statefun-1.1 >Reporter: Tzu-Li (Gordon) Tai >Assignee: Tzu-Li (Gordon) Tai >Priority: Blocker > Labels: pull-request-available > Fix For: statefun-1.1 > > Time Spent: 20m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on issue #11357: [FLINK-16476] [table-planner-blink] Remove PowerMockito to avoid LinkageError in SelectivityEstimatorTest
flinkbot edited a comment on issue #11357: [FLINK-16476] [table-planner-blink] Remove PowerMockito to avoid LinkageError in SelectivityEstimatorTest URL: https://github.com/apache/flink/pull/11357#issuecomment-596871878 ## CI report: * 6fb60fcf06b43330aea1ea022423e0be3615b228 UNKNOWN * dbfd74200214880c119e046f04a9a89732f19f15 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #11356: [FLINK-15000] [web] WebUI metrics is very slow in large parallelism
flinkbot edited a comment on issue #11356: [FLINK-15000] [web] WebUI metrics is very slow in large parallelism URL: https://github.com/apache/flink/pull/11356#issuecomment-596856858 ## CI report: * 8917c0690a4d84ac1ca15a4ca4f68af921f8bdc1 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/152561426) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6097)
[GitHub] [flink] TisonKun commented on a change in pull request #10526: [FLINK-15090][build] Reverse the dependency from flink-streaming-java to flink-client
TisonKun commented on a change in pull request #10526: [FLINK-15090][build] Reverse the dependency from flink-streaming-java to flink-client URL: https://github.com/apache/flink/pull/10526#discussion_r390070761 ## File path: flink-clients/src/main/java/org/apache/flink/client/ClientUtils.java ## @@ -128,16 +128,23 @@ public static void executeProgram( LOG.info("Starting program (detached: {})", !configuration.getBoolean(DeploymentOptions.ATTACHED)); - ContextEnvironmentFactory factory = new ContextEnvironmentFactory( + ContextEnvironment benv = new ContextEnvironment( Review comment: I forgot that users can call ExecutionEnvironment multiple times. Generally speaking these envs share the same configuration, but if user code later modifies them they should change independently. The motivation for this change comes from `OptimizerPlanEnvironment`, which returns an identical env instance for all `getEnvironment` calls. However, `StreamPlanEnvironment` returns a different instance. Given the semantics of `getEnvironment`, I think returning a different instance for each call is reasonable. Will update.
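The factory semantics discussed in this review — one cached environment shared by every `getEnvironment` call versus a fresh instance per call — can be sketched generically. The class and field names below are hypothetical illustrations, not Flink's actual API:

```java
import java.util.function.Supplier;

// Hypothetical environment with one mutable per-env setting.
class Env {
    int parallelism = 1;
}

// Style 1: a single cached instance, shared by every caller.
class SharedEnvFactory {
    private final Env instance = new Env();
    Env getEnvironment() { return instance; } // every caller sees the same object
}

// Style 2: a fresh instance per call, so mutations stay local to each caller.
class FreshEnvFactory {
    private final Supplier<Env> creator = Env::new;
    Env getEnvironment() { return creator.get(); }
}

public class EnvFactoryDemo {
    public static void main(String[] args) {
        SharedEnvFactory shared = new SharedEnvFactory();
        shared.getEnvironment().parallelism = 8;
        // the mutation leaks into every later call:
        System.out.println(shared.getEnvironment().parallelism); // prints: 8

        FreshEnvFactory fresh = new FreshEnvFactory();
        fresh.getEnvironment().parallelism = 8;
        // a new instance starts from the default again:
        System.out.println(fresh.getEnvironment().parallelism); // prints: 1
    }
}
```

The review argues for the second style: since user code may mutate each environment it obtains, handing back the same cached instance would let those mutations bleed across independent `getEnvironment` calls.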
[GitHub] [flink-statefun] tzulitai commented on issue #55: [FLINK-16518] [kafka] Set client properties as strings in KafkaSinkProvider
tzulitai commented on issue #55: [FLINK-16518] [kafka] Set client properties as strings in KafkaSinkProvider URL: https://github.com/apache/flink-statefun/pull/55#issuecomment-596877164 cc @igalshilman
[jira] [Updated] (FLINK-16518) Stateful Function's KafkaSinkProvider should use `setProperty` instead of `put` for resolving client properties
[ https://issues.apache.org/jira/browse/FLINK-16518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-16518: --- Labels: pull-request-available (was: ) > Stateful Function's KafkaSinkProvider should use `setProperty` instead of > `put` for resolving client properties > --- > > Key: FLINK-16518 > URL: https://issues.apache.org/jira/browse/FLINK-16518 > Project: Flink > Issue Type: Bug >Reporter: Tzu-Li (Gordon) Tai >Assignee: Tzu-Li (Gordon) Tai >Priority: Blocker > Labels: pull-request-available > > Using the {{put}} method on {{Properties}} is strongly discouraged, since it > allows putting non-string values. > This has already caused a bug, where a long was put into the properties, > while Kafka was expecting an integer: > {code} > org.apache.kafka.common.config.ConfigException: Invalid value 10 for > configuration transaction.timeout.ms: Expected value to be a 32-bit integer, > but it was a java.lang.Long > at > org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:669) > at > org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:471) > at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:464) > at > org.apache.kafka.common.config.AbstractConfig.(AbstractConfig.java:62) > at > org.apache.kafka.common.config.AbstractConfig.(AbstractConfig.java:75) > at > org.apache.kafka.clients.producer.ProducerConfig.(ProducerConfig.java:396) > at > org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:326) > at > org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:298) > at > org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaInternalProducer.(FlinkKafkaInternalProducer.java:76) > {code}
[GitHub] [flink-statefun] tzulitai opened a new pull request #55: [FLINK-16518] [kafka] Set client properties as strings in KafkaSinkProvider
tzulitai opened a new pull request #55: [FLINK-16518] [kafka] Set client properties as strings in KafkaSinkProvider URL: https://github.com/apache/flink-statefun/pull/55 Using the `put` method on `Properties` is strongly discouraged, since it allows putting non-string values. This has already caused a bug, where a long was put into the properties, while Kafka was expecting an integer: ``` org.apache.kafka.common.config.ConfigException: Invalid value 10 for configuration transaction.timeout.ms: Expected value to be a 32-bit integer, but it was a java.lang.Long at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:669) at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:471) at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:464) at org.apache.kafka.common.config.AbstractConfig.(AbstractConfig.java:62) at org.apache.kafka.common.config.AbstractConfig.(AbstractConfig.java:75) at org.apache.kafka.clients.producer.ProducerConfig.(ProducerConfig.java:396) at org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:326) at org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:298) at org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaInternalProducer.(FlinkKafkaInternalProducer.java:76) ```
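The failure mode described in this PR comes from `Properties` extending `Hashtable<Object,Object>`: `put` accepts any value type, while the string-oriented `Properties` accessors silently ignore non-string entries. A minimal sketch of the pitfall — the key is taken from the stack trace above, the timeout value is illustrative:

```java
import java.util.Properties;

public class PropertiesTypeSafety {
    public static void main(String[] args) {
        // put() is inherited from Hashtable and accepts any Object,
        // so a Long sneaks in where a String is expected downstream:
        Properties unsafe = new Properties();
        unsafe.put("transaction.timeout.ms", 900_000L);
        // getProperty() only returns String values; the Long entry is invisible to it:
        System.out.println(unsafe.getProperty("transaction.timeout.ms")); // prints: null

        // setProperty() only accepts Strings, enforced at compile time:
        Properties safe = new Properties();
        safe.setProperty("transaction.timeout.ms", String.valueOf(900_000L));
        System.out.println(safe.getProperty("transaction.timeout.ms")); // prints: 900000
    }
}
```

In the actual bug, Kafka read the raw map entry, found a `java.lang.Long`, and rejected it while validating `transaction.timeout.ms` as a 32-bit integer — which is why the fix converts every client property to a string via `setProperty`.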
[GitHub] [flink-statefun] tzulitai closed pull request #53: [FLINK-16507] [releasing] update_branch_version.sh should also update Python SDK version
tzulitai closed pull request #53: [FLINK-16507] [releasing] update_branch_version.sh should also update Python SDK version URL: https://github.com/apache/flink-statefun/pull/53
[jira] [Commented] (FLINK-16476) SelectivityEstimatorTest logs LinkageErrors
[ https://issues.apache.org/jira/browse/FLINK-16476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055541#comment-17055541 ] godfrey he commented on FLINK-16476: The error message is reported by {{CheckpointCoordinatorFailureTest}} not {{CompletedCheckpointTest}} > SelectivityEstimatorTest logs LinkageErrors > --- > > Key: FLINK-16476 > URL: https://issues.apache.org/jira/browse/FLINK-16476 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.11.0 >Reporter: Robert Metzger >Assignee: godfrey he >Priority: Major > Labels: pull-request-available, test-stability > Time Spent: 10m > Remaining Estimate: 0h > > This is the test run > https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6038=logs=d47ab8d2-10c7-5d9e-8178-ef06a797a0d8=9a1abf5f-7cf4-58c3-bb2a-282a64aebb1f > Log output > {code} > 2020-03-07T00:35:20.1270791Z [INFO] Running > org.apache.flink.table.planner.plan.metadata.SelectivityEstimatorTest > 2020-03-07T00:35:21.6473057Z [INFO] Tests run: 3, Failures: 0, Errors: 0, > Skipped: 0, Time elapsed: 3.408 s - in > org.apache.flink.table.planner.plan.utils.FlinkRexUtilTest > 2020-03-07T00:35:21.6541713Z [INFO] Running > org.apache.flink.table.planner.plan.metadata.FlinkRelMdNonCumulativeCostTest > 2020-03-07T00:35:21.7294613Z [INFO] Tests run: 2, Failures: 0, Errors: 0, > Skipped: 0, Time elapsed: 0.073 s - in > org.apache.flink.table.planner.plan.metadata.FlinkRelMdNonCumulativeCostTest > 2020-03-07T00:35:21.7309958Z [INFO] Running > org.apache.flink.table.planner.plan.metadata.AggCallSelectivityEstimatorTest > 2020-03-07T00:35:23.7443246Z ScriptEngineManager providers.next(): > javax.script.ScriptEngineFactory: Provider > jdk.nashorn.api.scripting.NashornScriptEngineFactory not a subtype > 2020-03-07T00:35:23.8260013Z 2020-03-07 00:35:23,819 main ERROR Could not > reconfigure JMX java.lang.LinkageError: loader constraint violation: loader > (instance of > 
org/powermock/core/classloader/javassist/JavassistMockClassLoader) previously > initiated loading for a different type with name > "javax/management/MBeanServer" > 2020-03-07T00:35:23.8262329Z at java.lang.ClassLoader.defineClass1(Native > Method) > 2020-03-07T00:35:23.8263241Z at > java.lang.ClassLoader.defineClass(ClassLoader.java:757) > 2020-03-07T00:35:23.8264629Z at > org.powermock.core.classloader.javassist.JavassistMockClassLoader.loadUnmockedClass(JavassistMockClassLoader.java:90) > 2020-03-07T00:35:23.8266241Z at > org.powermock.core.classloader.MockClassLoader.loadClassByThisClassLoader(MockClassLoader.java:104) > 2020-03-07T00:35:23.8267808Z at > org.powermock.core.classloader.DeferSupportingClassLoader.loadClass1(DeferSupportingClassLoader.java:147) > 2020-03-07T00:35:23.8269485Z at > org.powermock.core.classloader.DeferSupportingClassLoader.loadClass(DeferSupportingClassLoader.java:98) > 2020-03-07T00:35:23.8270900Z at > java.lang.ClassLoader.loadClass(ClassLoader.java:352) > 2020-03-07T00:35:23.8272000Z at > org.apache.logging.log4j.core.jmx.Server.unregisterAllMatching(Server.java:337) > 2020-03-07T00:35:23.8273779Z at > org.apache.logging.log4j.core.jmx.Server.unregisterLoggerContext(Server.java:261) > 2020-03-07T00:35:23.8275087Z at > org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:165) > 2020-03-07T00:35:23.8276515Z at > org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:141) > 2020-03-07T00:35:23.8278036Z at > org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:590) > 2020-03-07T00:35:23.8279741Z at > org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:651) > 2020-03-07T00:35:23.8281190Z at > org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:668) > 2020-03-07T00:35:23.8282440Z at > org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:253) > 2020-03-07T00:35:23.8283717Z at > 
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153) > 2020-03-07T00:35:23.8285186Z at > org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45) > 2020-03-07T00:35:23.8286575Z at > org.apache.logging.log4j.LogManager.getContext(LogManager.java:194) > 2020-03-07T00:35:23.8287933Z at > org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:138) > 2020-03-07T00:35:23.8289393Z at > org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:45) > 2020-03-07T00:35:23.8290816Z at > org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:48) > 2020-03-07T00:35:23.8292179Z at >
[jira] [Updated] (FLINK-16506) SqlCreateTable can not get the original text when there exists non-ascii char in the column definition
[ https://issues.apache.org/jira/browse/FLINK-16506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Wang updated FLINK-16506: --- Description: We can reproduce this problem in FlinkSqlParserImplTest by adding one more column definition ` x varchar comment 'Flink 社区', \n` ``` @Test public void testCreateTableWithComment() { conformance0 = FlinkSqlConformance.HIVE; check("CREATE TABLE tbl1 (\n" + " a bigint comment 'test column comment AAA.',\n" + " h varchar, \n" + " x varchar comment 'Flink 社区', \n" + " g as 2 * (a + 1), \n" + " ts as toTimestamp(b, '-MM-dd HH:mm:ss'), \n" + " b varchar,\n" + " proc as PROCTIME(), \n" + " PRIMARY KEY (a, b)\n" + ")\n" + "comment 'test table comment ABC.'\n" + "PARTITIONED BY (a, h)\n" + " with (\n" + "'connector' = 'kafka', \n" + "'kafka.topic' = 'log.test'\n" + ")\n", "CREATE TABLE `TBL1` (\n" + " `A` BIGINT COMMENT 'test column comment AAA.',\n" + " `H` VARCHAR,\n" + " `X` VARCHAR COMMENT 'Flink 社区', \n" + " `G` AS (2 * (`A` + 1)),\n" + " `TS` AS `TOTIMESTAMP`(`B`, '-MM-dd HH:mm:ss'),\n" + " `B` VARCHAR,\n" + " `PROC` AS `PROCTIME`(),\n" + " PRIMARY KEY (`A`, `B`)\n" + ")\n" + "COMMENT 'test table comment ABC.'\n" + "PARTITIONED BY (`A`, `H`)\n" + "WITH (\n" + " 'connector' = 'kafka',\n" + " 'kafka.topic' = 'log.test'\n" + ")"); } ``` the actual unparse of the x column will be ` X` VARCHAR COMMENT u&'Flink \793e\533a' instead of our expectation. 
was: We can reproduce this problem in FlinkSqlParserImplTest, add one more column definition ` x varchar comment 'Flink 社区', \n` ``` @Test public void testCreateTableWithComment() { conformance0 = FlinkSqlConformance.HIVE; check("CREATE TABLE tbl1 (\n" + " a bigint comment 'test column comment AAA.',\n" + " h varchar, \n" + " x varchar comment 'Flink 社区', \n" + " g as 2 * (a + 1), \n" + " ts as toTimestamp(b, '-MM-dd HH:mm:ss'), \n" + " b varchar,\n" + " proc as PROCTIME(), \n" + " PRIMARY KEY (a, b)\n" + ")\n" + "comment 'test table comment ABC.'\n" + "PARTITIONED BY (a, h)\n" + " with (\n" + "'connector' = 'kafka', \n" + "'kafka.topic' = 'log.test'\n" + ")\n", "CREATE TABLE `TBL1` (\n" + " `A` BIGINT COMMENT 'test column comment AAA.',\n" + " `H` VARCHAR,\n" + " `X` VARCHAR COMMENT 'Flink 社区', \n" + " `G` AS (2 * (`A` + 1)),\n" + " `TS` AS `TOTIMESTAMP`(`B`, '-MM-dd HH:mm:ss'),\n" + " `B` VARCHAR,\n" + " `PROC` AS `PROCTIME`(),\n" + " PRIMARY KEY (`A`, `B`)\n" + ")\n" + "COMMENT 'test table comment ABC.'\n" + "PARTITIONED BY (`A`, `H`)\n" + "WITH (\n" + " 'connector' = 'kafka',\n" + " 'kafka.topic' = 'log.test'\n" + ")"); } ``` the actual unparse of x column will be ` X` VARCHAR COMMENT u&'Flink \793e\533a' instead of out expection. > SqlCreateTable can not get the original text when there exists non-ascii char > in the
[jira] [Updated] (FLINK-16506) SqlCreateTable can not get the original text when there exists non-ascii char in the column definition
[ https://issues.apache.org/jira/browse/FLINK-16506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Wang updated FLINK-16506: --- Affects Version/s: 1.10.0 > SqlCreateTable can not get the original text when there exists non-ascii char > in the column definition > -- > > Key: FLINK-16506 > URL: https://issues.apache.org/jira/browse/FLINK-16506 > Project: Flink > Issue Type: Bug >Affects Versions: 1.10.0 >Reporter: Terry Wang >Priority: Major > > We can reproduce this problem in FlinkSqlParserImplTest, add one more column > definition > ` x varchar comment 'Flink 社区', \n` > ``` > @Test > public void testCreateTableWithComment() { > conformance0 = FlinkSqlConformance.HIVE; > check("CREATE TABLE tbl1 (\n" + > " a bigint comment 'test column comment > AAA.',\n" + > " h varchar, \n" + > " x varchar comment 'Flink 社区', \n" + > " g as 2 * (a + 1), \n" + > " ts as toTimestamp(b, '-MM-dd HH:mm:ss'), > \n" + > " b varchar,\n" + > " proc as PROCTIME(), \n" + > " PRIMARY KEY (a, b)\n" + > ")\n" + > "comment 'test table comment ABC.'\n" + > "PARTITIONED BY (a, h)\n" + > " with (\n" + > "'connector' = 'kafka', \n" + > "'kafka.topic' = 'log.test'\n" + > ")\n", > "CREATE TABLE `TBL1` (\n" + > " `A` BIGINT COMMENT 'test column comment > AAA.',\n" + > " `H` VARCHAR,\n" + > " `X` VARCHAR COMMENT 'Flink 社区', \n" + > " `G` AS (2 * (`A` + 1)),\n" + > " `TS` AS `TOTIMESTAMP`(`B`, '-MM-dd > HH:mm:ss'),\n" + > " `B` VARCHAR,\n" + > " `PROC` AS `PROCTIME`(),\n" + > " PRIMARY KEY (`A`, `B`)\n" + > ")\n" + > "COMMENT 'test table comment ABC.'\n" + > "PARTITIONED BY (`A`, `H`)\n" + > "WITH (\n" + > " 'connector' = 'kafka',\n" + > " 'kafka.topic' = 'log.test'\n" + > ")"); > } > ``` > the actual unparse of x column will be ` X` VARCHAR COMMENT u&'Flink > \793e\533a' instead of our expectation.
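For reference, the `u&'...'` form in the reported unparse output is the SQL Unicode string-literal syntax, in which each non-ASCII character is replaced by a four-hex-digit code-point escape. A small sketch (a hypothetical helper, not Flink or Calcite code) reproduces the reported literal from the original column comment, confirming that 社 is U+793E and 区 is U+533A:

```java
public class UnicodeEscapeDemo {
    public static void main(String[] args) {
        String comment = "Flink 社区";
        // Build a SQL Unicode literal: ASCII chars pass through,
        // everything else becomes a \XXXX code-point escape.
        StringBuilder escaped = new StringBuilder("u&'");
        for (char c : comment.toCharArray()) {
            if (c < 128) {
                escaped.append(c);
            } else {
                escaped.append(String.format("\\%04x", (int) c));
            }
        }
        escaped.append('\'');
        System.out.println(escaped); // prints: u&'Flink \793e\533a'
    }
}
```

So the unparsed literal is semantically equivalent to the original string; the bug report's complaint is that the original source text of the comment is lost rather than preserved verbatim.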
[GitHub] [flink] TisonKun commented on a change in pull request #10526: [FLINK-15090][build] Reverse the dependency from flink-streaming-java to flink-client
TisonKun commented on a change in pull request #10526: [FLINK-15090][build] Reverse the dependency from flink-streaming-java to flink-client URL: https://github.com/apache/flink/pull/10526#discussion_r390068747 ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/generator/JobGraphGeneratorUtils.java ## @@ -0,0 +1,72 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.runtime.jobgraph.generator; + +import org.apache.flink.api.common.cache.DistributedCache; +import org.apache.flink.api.java.tuple.Tuple2; +import org.apache.flink.core.fs.FileSystem; +import org.apache.flink.core.fs.Path; +import org.apache.flink.runtime.jobgraph.JobGraph; +import org.apache.flink.util.FileUtils; +import org.apache.flink.util.FlinkRuntimeException; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.IOException; +import java.nio.file.Files; +import java.util.Collection; + +/** + * Utilities for generating {@link JobGraph}. + */ +public enum JobGraphGeneratorUtils { Review comment: I agree with `JobGraphUtils`. Since it includes logic for packaging files, I'd like to keep it in a separate place, since `JobGraph` is generally regarded as a data class. 
[jira] [Commented] (FLINK-16476) SelectivityEstimatorTest logs LinkageErrors
[ https://issues.apache.org/jira/browse/FLINK-16476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055531#comment-17055531 ] Jark Wu commented on FLINK-16476: - But it seems that {{CompletedCheckpointTest}} doesn't use {{PowerMock}}. > SelectivityEstimatorTest logs LinkageErrors > --- > > Key: FLINK-16476 > URL: https://issues.apache.org/jira/browse/FLINK-16476 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.11.0 >Reporter: Robert Metzger >Assignee: godfrey he >Priority: Major > Labels: pull-request-available, test-stability > Time Spent: 10m > Remaining Estimate: 0h > > This is the test run > https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6038=logs=d47ab8d2-10c7-5d9e-8178-ef06a797a0d8=9a1abf5f-7cf4-58c3-bb2a-282a64aebb1f > Log output > {code} > 2020-03-07T00:35:20.1270791Z [INFO] Running > org.apache.flink.table.planner.plan.metadata.SelectivityEstimatorTest > 2020-03-07T00:35:21.6473057Z [INFO] Tests run: 3, Failures: 0, Errors: 0, > Skipped: 0, Time elapsed: 3.408 s - in > org.apache.flink.table.planner.plan.utils.FlinkRexUtilTest > 2020-03-07T00:35:21.6541713Z [INFO] Running > org.apache.flink.table.planner.plan.metadata.FlinkRelMdNonCumulativeCostTest > 2020-03-07T00:35:21.7294613Z [INFO] Tests run: 2, Failures: 0, Errors: 0, > Skipped: 0, Time elapsed: 0.073 s - in > org.apache.flink.table.planner.plan.metadata.FlinkRelMdNonCumulativeCostTest > 2020-03-07T00:35:21.7309958Z [INFO] Running > org.apache.flink.table.planner.plan.metadata.AggCallSelectivityEstimatorTest > 2020-03-07T00:35:23.7443246Z ScriptEngineManager providers.next(): > javax.script.ScriptEngineFactory: Provider > jdk.nashorn.api.scripting.NashornScriptEngineFactory not a subtype > 2020-03-07T00:35:23.8260013Z 2020-03-07 00:35:23,819 main ERROR Could not > reconfigure JMX java.lang.LinkageError: loader constraint violation: loader > (instance of > org/powermock/core/classloader/javassist/JavassistMockClassLoader) 
previously > initiated loading for a different type with name > "javax/management/MBeanServer" > 2020-03-07T00:35:23.8262329Z at java.lang.ClassLoader.defineClass1(Native > Method) > 2020-03-07T00:35:23.8263241Z at > java.lang.ClassLoader.defineClass(ClassLoader.java:757) > 2020-03-07T00:35:23.8264629Z at > org.powermock.core.classloader.javassist.JavassistMockClassLoader.loadUnmockedClass(JavassistMockClassLoader.java:90) > 2020-03-07T00:35:23.8266241Z at > org.powermock.core.classloader.MockClassLoader.loadClassByThisClassLoader(MockClassLoader.java:104) > 2020-03-07T00:35:23.8267808Z at > org.powermock.core.classloader.DeferSupportingClassLoader.loadClass1(DeferSupportingClassLoader.java:147) > 2020-03-07T00:35:23.8269485Z at > org.powermock.core.classloader.DeferSupportingClassLoader.loadClass(DeferSupportingClassLoader.java:98) > 2020-03-07T00:35:23.8270900Z at > java.lang.ClassLoader.loadClass(ClassLoader.java:352) > 2020-03-07T00:35:23.8272000Z at > org.apache.logging.log4j.core.jmx.Server.unregisterAllMatching(Server.java:337) > 2020-03-07T00:35:23.8273779Z at > org.apache.logging.log4j.core.jmx.Server.unregisterLoggerContext(Server.java:261) > 2020-03-07T00:35:23.8275087Z at > org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:165) > 2020-03-07T00:35:23.8276515Z at > org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:141) > 2020-03-07T00:35:23.8278036Z at > org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:590) > 2020-03-07T00:35:23.8279741Z at > org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:651) > 2020-03-07T00:35:23.8281190Z at > org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:668) > 2020-03-07T00:35:23.8282440Z at > org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:253) > 2020-03-07T00:35:23.8283717Z at > 
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153) > 2020-03-07T00:35:23.8285186Z at > org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45) > 2020-03-07T00:35:23.8286575Z at > org.apache.logging.log4j.LogManager.getContext(LogManager.java:194) > 2020-03-07T00:35:23.8287933Z at > org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:138) > 2020-03-07T00:35:23.8289393Z at > org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:45) > 2020-03-07T00:35:23.8290816Z at > org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:48) > 2020-03-07T00:35:23.8292179Z at >
[jira] [Assigned] (FLINK-16476) SelectivityEstimatorTest logs LinkageErrors
[ https://issues.apache.org/jira/browse/FLINK-16476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu reassigned FLINK-16476:
---
Assignee: godfrey he

> SelectivityEstimatorTest logs LinkageErrors
> ---
>
> Key: FLINK-16476
> URL: https://issues.apache.org/jira/browse/FLINK-16476
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / Planner
> Affects Versions: 1.11.0
> Reporter: Robert Metzger
> Assignee: godfrey he
> Priority: Major
> Labels: pull-request-available, test-stability
> Time Spent: 10m
> Remaining Estimate: 0h
>
> This is the test run:
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6038=logs=d47ab8d2-10c7-5d9e-8178-ef06a797a0d8=9a1abf5f-7cf4-58c3-bb2a-282a64aebb1f
>
> Log output:
> {code}
> 2020-03-07T00:35:20.1270791Z [INFO] Running org.apache.flink.table.planner.plan.metadata.SelectivityEstimatorTest
> 2020-03-07T00:35:21.6473057Z [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.408 s - in org.apache.flink.table.planner.plan.utils.FlinkRexUtilTest
> 2020-03-07T00:35:21.6541713Z [INFO] Running org.apache.flink.table.planner.plan.metadata.FlinkRelMdNonCumulativeCostTest
> 2020-03-07T00:35:21.7294613Z [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.073 s - in org.apache.flink.table.planner.plan.metadata.FlinkRelMdNonCumulativeCostTest
> 2020-03-07T00:35:21.7309958Z [INFO] Running org.apache.flink.table.planner.plan.metadata.AggCallSelectivityEstimatorTest
> 2020-03-07T00:35:23.7443246Z ScriptEngineManager providers.next(): javax.script.ScriptEngineFactory: Provider jdk.nashorn.api.scripting.NashornScriptEngineFactory not a subtype
> 2020-03-07T00:35:23.8260013Z 2020-03-07 00:35:23,819 main ERROR Could not reconfigure JMX java.lang.LinkageError: loader constraint violation: loader (instance of org/powermock/core/classloader/javassist/JavassistMockClassLoader) previously initiated loading for a different type with name "javax/management/MBeanServer"
> 2020-03-07T00:35:23.8262329Z at java.lang.ClassLoader.defineClass1(Native Method)
> 2020-03-07T00:35:23.8263241Z at java.lang.ClassLoader.defineClass(ClassLoader.java:757)
> 2020-03-07T00:35:23.8264629Z at org.powermock.core.classloader.javassist.JavassistMockClassLoader.loadUnmockedClass(JavassistMockClassLoader.java:90)
> 2020-03-07T00:35:23.8266241Z at org.powermock.core.classloader.MockClassLoader.loadClassByThisClassLoader(MockClassLoader.java:104)
> 2020-03-07T00:35:23.8267808Z at org.powermock.core.classloader.DeferSupportingClassLoader.loadClass1(DeferSupportingClassLoader.java:147)
> 2020-03-07T00:35:23.8269485Z at org.powermock.core.classloader.DeferSupportingClassLoader.loadClass(DeferSupportingClassLoader.java:98)
> 2020-03-07T00:35:23.8270900Z at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
> 2020-03-07T00:35:23.8272000Z at org.apache.logging.log4j.core.jmx.Server.unregisterAllMatching(Server.java:337)
> 2020-03-07T00:35:23.8273779Z at org.apache.logging.log4j.core.jmx.Server.unregisterLoggerContext(Server.java:261)
> 2020-03-07T00:35:23.8275087Z at org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:165)
> 2020-03-07T00:35:23.8276515Z at org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:141)
> 2020-03-07T00:35:23.8278036Z at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:590)
> 2020-03-07T00:35:23.8279741Z at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:651)
> 2020-03-07T00:35:23.8281190Z at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:668)
> 2020-03-07T00:35:23.8282440Z at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:253)
> 2020-03-07T00:35:23.8283717Z at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
> 2020-03-07T00:35:23.8285186Z at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
> 2020-03-07T00:35:23.8286575Z at org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
> 2020-03-07T00:35:23.8287933Z at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:138)
> 2020-03-07T00:35:23.8289393Z at org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:45)
> 2020-03-07T00:35:23.8290816Z at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:48)
> 2020-03-07T00:35:23.8292179Z at org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:30)
> 2020-03-07T00:35:23.8293304Z at
[jira] [Commented] (FLINK-16506) SqlCreateTable can not get the original text when there exists non-ascii char in the column definition
[ https://issues.apache.org/jira/browse/FLINK-16506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055527#comment-17055527 ] Terry Wang commented on FLINK-16506:

cc [~danny0405] to have a look

> SqlCreateTable cannot get the original text when there is a non-ASCII character in the column definition
> --
>
> Key: FLINK-16506
> URL: https://issues.apache.org/jira/browse/FLINK-16506
> Project: Flink
> Issue Type: Bug
> Reporter: Terry Wang
> Priority: Major
>
> We can reproduce this problem in FlinkSqlParserImplTest by adding one more column definition: {{x varchar comment 'Flink 社区',}}
>
> {code:java}
> @Test
> public void testCreateTableWithComment() {
>     conformance0 = FlinkSqlConformance.HIVE;
>     check("CREATE TABLE tbl1 (\n" +
>         "  a bigint comment 'test column comment AAA.',\n" +
>         "  h varchar, \n" +
>         "  x varchar comment 'Flink 社区', \n" +
>         "  g as 2 * (a + 1), \n" +
>         "  ts as toTimestamp(b, 'yyyy-MM-dd HH:mm:ss'), \n" +
>         "  b varchar,\n" +
>         "  proc as PROCTIME(), \n" +
>         "  PRIMARY KEY (a, b)\n" +
>         ")\n" +
>         "comment 'test table comment ABC.'\n" +
>         "PARTITIONED BY (a, h)\n" +
>         " with (\n" +
>         "  'connector' = 'kafka', \n" +
>         "  'kafka.topic' = 'log.test'\n" +
>         ")\n",
>         "CREATE TABLE `TBL1` (\n" +
>         "  `A` BIGINT COMMENT 'test column comment AAA.',\n" +
>         "  `H` VARCHAR,\n" +
>         "  `X` VARCHAR COMMENT 'Flink 社区',\n" +
>         "  `G` AS (2 * (`A` + 1)),\n" +
>         "  `TS` AS `TOTIMESTAMP`(`B`, 'yyyy-MM-dd HH:mm:ss'),\n" +
>         "  `B` VARCHAR,\n" +
>         "  `PROC` AS `PROCTIME`(),\n" +
>         "  PRIMARY KEY (`A`, `B`)\n" +
>         ")\n" +
>         "COMMENT 'test table comment ABC.'\n" +
>         "PARTITIONED BY (`A`, `H`)\n" +
>         "WITH (\n" +
>         "  'connector' = 'kafka',\n" +
>         "  'kafka.topic' = 'log.test'\n" +
>         ")");
> }
> {code}
>
> The actual unparse of the x column will be {{`X` VARCHAR COMMENT u&'Flink \793e\533a'}} instead of our expectation.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-16025) Service could expose blob server port mismatched with JM Container
[ https://issues.apache.org/jira/browse/FLINK-16025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zili Chen updated FLINK-16025:
--
Fix Version/s: (was: 1.11.0) 1.10.1

> Service could expose blob server port mismatched with JM Container
> --
>
> Key: FLINK-16025
> URL: https://issues.apache.org/jira/browse/FLINK-16025
> Project: Flink
> Issue Type: Bug
> Components: Deployment / Kubernetes
> Affects Versions: 1.10.0
> Reporter: Canbin Zheng
> Assignee: Canbin Zheng
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.10.1
>
> Time Spent: 10m
> Remaining Estimate: 0h
>
> When the Service exposes the blob server port, it always exposes 6124, and since we do not explicitly specify a target port while building the ServicePort, the target port is always 6124 too.
>
> {code:java}
> // From ServiceDecorator.java
> servicePorts.add(getServicePort(
>     getPortName(BlobServerOptions.PORT.key()),
>     Constants.BLOB_SERVER_PORT));
>
> private ServicePort getServicePort(String name, int port) {
>     return new ServicePortBuilder()
>         .withName(name)
>         .withPort(port)
>         .build();
> }
> {code}
>
> Meanwhile, the Container of the JM exposes the blob server port that is configured in the Flink configuration:
>
> {code:java}
> // From FlinkMasterDeploymentDecorator.java
> final int blobServerPort = KubernetesUtils.parsePort(flinkConfig, BlobServerOptions.PORT);
> ...
> final Container container = createJobManagerContainer(flinkConfig, mainClass, hasLogback, hasLog4j, blobServerPort);
> {code}
>
> So, when the blob server port is configured to a value other than 6124, the Service exposes a port that differs from the one the JM Container listens on, and in non-HA mode the TMs cannot execute tasks because fetching their dependencies fails.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
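The invariant behind the fix is that the Service port and the Container port must be derived from the same configured value instead of the Service hard-coding 6124. A minimal plain-Java sketch of that invariant (the class and helper method are illustrative, not Flink's actual decorator API; only the config key `blob.server.port` and the default 6124 come from the ticket):

```java
import java.util.Properties;

public class BlobPortConsistencyDemo {
    static final int DEFAULT_BLOB_PORT = 6124;

    // Both the Service and the JM Container should read the port from the
    // same configuration instead of one side hard-coding the default.
    static int blobPortFrom(Properties flinkConfig) {
        return Integer.parseInt(
            flinkConfig.getProperty("blob.server.port", String.valueOf(DEFAULT_BLOB_PORT)));
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty("blob.server.port", "6125"); // user overrides the default

        int servicePort = blobPortFrom(conf);   // what the Service would expose
        int containerPort = blobPortFrom(conf); // what the JM Container would expose
        System.out.println(servicePort == containerPort); // true: no mismatch
    }
}
```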
[GitHub] [flink] flinkbot edited a comment on issue #11142: [FLINK-16167][python][doc] update python_shell document execution
flinkbot edited a comment on issue #11142: [FLINK-16167][python][doc] update python_shell document execution URL: https://github.com/apache/flink/pull/11142#issuecomment-588249138 ## CI report: * 04fbc2dcd76c47ce9fe126469d6b7d96da58651a Travis: [FAILURE](https://travis-ci.com/flink-ci/flink/builds/149639684) * 7ad5d6fcd65591e484cafd878fb1601403a070db Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/152564245) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6099) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Created] (FLINK-16518) Stateful Function's KafkaSinkProvider should use `setProperty` instead of `put` for resolving client properties
Tzu-Li (Gordon) Tai created FLINK-16518:
---
Summary: Stateful Function's KafkaSinkProvider should use `setProperty` instead of `put` for resolving client properties
Key: FLINK-16518
URL: https://issues.apache.org/jira/browse/FLINK-16518
Project: Flink
Issue Type: Bug
Reporter: Tzu-Li (Gordon) Tai
Assignee: Tzu-Li (Gordon) Tai

Using the {{put}} method on {{Properties}} is strongly discouraged as bad practice, since it allows putting non-string values. This has already caused a bug, where a long was put into the properties while Kafka was expecting an integer:

{code}
org.apache.kafka.common.config.ConfigException: Invalid value 10 for configuration transaction.timeout.ms: Expected value to be a 32-bit integer, but it was a java.lang.Long
	at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:669)
	at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:471)
	at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:464)
	at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:62)
	at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:75)
	at org.apache.kafka.clients.producer.ProducerConfig.<init>(ProducerConfig.java:396)
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:326)
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:298)
	at org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaInternalProducer.<init>(FlinkKafkaInternalProducer.java:76)
{code}

-- This message was sent by Atlassian Jira (v8.3.4#803005)
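The pitfall is easy to demonstrate with the JDK alone: {{Properties.getProperty}} only returns values that are actually {{String}}s, so anything inserted via {{put}} with a non-string type becomes invisible to string-based lookups (the property key below is the real Kafka setting from the stack trace; the class and method names are illustrative):

```java
import java.util.Properties;

public class PropertiesPitfall {
    // put() accepts any Object, so a non-String value slips in silently;
    // getProperty() then returns null because it only sees String values.
    public static String viaPut(Object value) {
        Properties props = new Properties();
        props.put("transaction.timeout.ms", value);
        return props.getProperty("transaction.timeout.ms");
    }

    // setProperty() forces the value to be a String, which a config parser
    // like Kafka's can then convert to the type it expects.
    public static String viaSetProperty(String value) {
        Properties props = new Properties();
        props.setProperty("transaction.timeout.ms", value);
        return props.getProperty("transaction.timeout.ms");
    }

    public static void main(String[] args) {
        System.out.println(viaPut(3600000L));          // null: the Long is invisible
        System.out.println(viaSetProperty("3600000")); // 3600000
    }
}
```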
[GitHub] [flink] flinkbot commented on issue #11357: [FLINK-16476] [table-planner-blink] Remove PowerMockito to avoid LinkageError in SelectivityEstimatorTest
flinkbot commented on issue #11357: [FLINK-16476] [table-planner-blink] Remove PowerMockito to avoid LinkageError in SelectivityEstimatorTest URL: https://github.com/apache/flink/pull/11357#issuecomment-596871878

## CI report:

* 6fb60fcf06b43330aea1ea022423e0be3615b228 UNKNOWN
[GitHub] [flink] flinkbot commented on issue #11357: [FLINK-16476] [table-planner-blink] Remove PowerMockito to avoid LinkageError in SelectivityEstimatorTest
flinkbot commented on issue #11357: [FLINK-16476] [table-planner-blink] Remove PowerMockito to avoid LinkageError in SelectivityEstimatorTest URL: https://github.com/apache/flink/pull/11357#issuecomment-596870971

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

## Automated Checks

Last check on commit 6fb60fcf06b43330aea1ea022423e0be3615b228 (Tue Mar 10 02:36:22 UTC 2020)

**Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up to date!
* **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-16476).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work.

Mention the bot in a comment to re-run the automated checks.

## Review Progress

* ❓ 1. The [description] looks good.
* ❓ 2. There is [consensus] that the contribution should go into Flink.
* ❓ 3. Needs [attention] from.
* ❓ 4. The change fits into the overall [architecture].
* ❓ 5. Overall code [quality] is good.

Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands: the @flinkbot bot supports the following commands:

- `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until `architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] dianfu commented on a change in pull request #11342: [FLINK-16483][python] Add Python building blocks to make sure the basic functionality of vectorized Python UDF could work
dianfu commented on a change in pull request #11342: [FLINK-16483][python] Add Python building blocks to make sure the basic functionality of vectorized Python UDF could work URL: https://github.com/apache/flink/pull/11342#discussion_r390058585

## File path: flink-python/pyflink/fn_execution/coder_impl.py

@@ -373,3 +376,45 @@

```python
    def internal_to_timestamp(self, milliseconds, nanoseconds):
        second, microsecond = (milliseconds // 1000,
                               milliseconds % 1000 * 1000 + nanoseconds // 1000)
        return datetime.datetime.utcfromtimestamp(second).replace(microsecond=microsecond)


class ArrowCoderImpl(StreamCoderImpl):

    def __init__(self, schema):
        self._schema = schema
        self._resettable_io = ResettableIO()

    def encode_to_stream(self, cols, out_stream, nested):
        if not hasattr(self, "_batch_writer"):
            self._batch_writer = pa.RecordBatchStreamWriter(self._resettable_io, self._schema)

        self._resettable_io.set_output_stream(out_stream)
        self._batch_writer.write_batch(self._create_batch(cols))

    def decode_from_stream(self, in_stream, nested):
        if not hasattr(self, "_batch_reader"):
            def load_from_stream(stream):
                reader = pa.ipc.open_stream(stream)
                for batch in reader:
                    yield batch

            self._batch_reader = load_from_stream(self._resettable_io)

        self._resettable_io.set_input_bytes(in_stream.read_all())
        table = pa.Table.from_batches([next(self._batch_reader)])
```

Review comment: It's a generator and we will fetch one batch at a time.
[GitHub] [flink] TisonKun commented on issue #11311: [FLINK-16427][api] Remove directly throw ProgramInvocationExceptions in RemoteStreamEnvironment
TisonKun commented on issue #11311: [FLINK-16427][api] Remove directly throw ProgramInvocationExceptions in RemoteStreamEnvironment URL: https://github.com/apache/flink/pull/11311#issuecomment-596870391

Thanks!
[jira] [Updated] (FLINK-16476) SelectivityEstimatorTest logs LinkageErrors
[ https://issues.apache.org/jira/browse/FLINK-16476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-16476:
---
Labels: pull-request-available test-stability (was: test-stability)

> SelectivityEstimatorTest logs LinkageErrors
> ---
>
> Key: FLINK-16476
> URL: https://issues.apache.org/jira/browse/FLINK-16476
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / Planner
> Affects Versions: 1.11.0
> Reporter: Robert Metzger
> Priority: Major
> Labels: pull-request-available, test-stability
[GitHub] [flink] godfreyhe opened a new pull request #11357: [FLINK-16476] [table-planner-blink] Remove PowerMockito to avoid LinkageError in SelectivityEstimatorTest
godfreyhe opened a new pull request #11357: [FLINK-16476] [table-planner-blink] Remove PowerMockito to avoid LinkageError in SelectivityEstimatorTest URL: https://github.com/apache/flink/pull/11357

## What is the purpose of the change

*Remove PowerMockito to avoid LinkageError in SelectivityEstimatorTest*

## Brief change log

- *Remove PowerMockito to avoid LinkageError in SelectivityEstimatorTest*

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

## Does this pull request potentially affect one of the following parts:

- Dependencies (does it add or upgrade a dependency): (yes / **no**)
- The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**)
- The serializers: (yes / **no** / don't know)
- The runtime per-record code paths (performance sensitive): (yes / **no** / don't know)
- Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
- The S3 file system connector: (yes / **no** / don't know)

## Documentation

- Does this pull request introduce a new feature? (yes / **no**)
- If yes, how is the feature documented? (not applicable / docs / JavaDocs / **not documented**)
[jira] [Closed] (FLINK-16514) SQLClientKafkaITCase fails with output mismatch
[ https://issues.apache.org/jira/browse/FLINK-16514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu closed FLINK-16514.
---
Resolution: Duplicate

> SQLClientKafkaITCase fails with output mismatch
> ---
>
> Key: FLINK-16514
> URL: https://issues.apache.org/jira/browse/FLINK-16514
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / Client, Tests
> Reporter: Robert Metzger
> Priority: Major
> Labels: test-stability
>
> Run: https://travis-ci.org/apache/flink/jobs/660152169?utm_medium=notification_source=slack
>
> {code}
> 17:21:00.286 [INFO] ---
> 17:21:00.286 [INFO] T E S T S
> 17:21:00.286 [INFO] ---
> 17:21:01.745 [INFO] Running org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 17:23:15.481 [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 133.732 s - in org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 17:23:15.481 [INFO] Running org.apache.flink.tests.util.kafka.SQLClientKafkaITCase
> 17:25:50.370 [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 154.884 s <<< FAILURE! - in org.apache.flink.tests.util.kafka.SQLClientKafkaITCase
> 17:25:50.376 [ERROR] testKafka[0: kafka-version:0.10 kafka-sql-version:.*kafka-0.10.jar](org.apache.flink.tests.util.kafka.SQLClientKafkaITCase) Time elapsed: 55.178 s <<< FAILURE!
> org.junit.ComparisonFailure:
> expected:<...-03-12 09:00:00.000,[Bob,This was another warning.,1,Success constant folding.
> 2018-03-12 09:00:00.000,Steve,This was another info.,2],Success constant fo...>
> but was:<...-03-12 09:00:00.000,[Steve,This was another info.,2,Success constant folding.
> 2018-03-12 09:00:00.000,Bob,This was another warning.,1],Success constant fo...>
> at org.apache.flink.tests.util.kafka.SQLClientKafkaITCase.checkCsvResultFile(SQLClientKafkaITCase.java:230)
> at org.apache.flink.tests.util.kafka.SQLClientKafkaITCase.testKafka(SQLClientKafkaITCase.java:158)
> 17:25:50.712 [INFO] Results:
> 17:25:50.712 [ERROR] Failures:
> 17:25:50.712 [ERROR] SQLClientKafkaITCase.testKafka:158->checkCsvResultFile:230
> expected:<...-03-12 09:00:00.000,[Bob,This was another warning.,1,Success constant folding.
> 2018-03-12 09:00:00.000,Steve,This was another info.,2],Success constant fo...>
> but was:<...-03-12 09:00:00.000,[Steve,This was another info.,2,Success constant folding.
> 2018-03-12 09:00:00.000,Bob,This was another warning.,1],Success constant fo...>
> 17:25:50.712 [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0
> 17:25:50.714 [INFO] -
> {code}

-- This message was sent by Atlassian Jira (v8.3.4#803005)
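The ComparisonFailure in this ticket is purely an ordering difference: both CSV rows are present, just in swapped positions. When row order is not guaranteed, one common remedy (a sketch only, not necessarily how the duplicated ticket was resolved in Flink) is to compare result lines order-insensitively by sorting both sides first:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class OrderInsensitiveCompare {
    // Compare two lists of CSV result lines, ignoring row order.
    public static boolean sameRows(List<String> expected, List<String> actual) {
        List<String> e = new ArrayList<>(expected);
        List<String> a = new ArrayList<>(actual);
        Collections.sort(e);
        Collections.sort(a);
        return e.equals(a);
    }

    public static void main(String[] args) {
        List<String> expected = Arrays.asList(
            "2018-03-12 09:00:00.000,Bob,This was another warning.,1",
            "2018-03-12 09:00:00.000,Steve,This was another info.,2");
        List<String> actual = Arrays.asList(
            "2018-03-12 09:00:00.000,Steve,This was another info.,2",
            "2018-03-12 09:00:00.000,Bob,This was another warning.,1");
        System.out.println(sameRows(expected, actual)); // true
    }
}
```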
[jira] [Assigned] (FLINK-16083) Translate "Dynamic Table" page of "Streaming Concepts" into Chinese
[ https://issues.apache.org/jira/browse/FLINK-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu reassigned FLINK-16083: --- Assignee: ShijieZhang > Translate "Dynamic Table" page of "Streaming Concepts" into Chinese > --- > > Key: FLINK-16083 > URL: https://issues.apache.org/jira/browse/FLINK-16083 > Project: Flink > Issue Type: Sub-task > Components: chinese-translation, Documentation >Reporter: Jark Wu >Assignee: ShijieZhang >Priority: Major > > The page url is > https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/streaming/dynamic_tables.html > The markdown file is located in > {{flink/docs/dev/table/streaming/dynamic_tables.zh.md}} -- This message was sent by Atlassian Jira (v8.3.4#803005)