[flink-web] branch asf-site updated (33a7059 -> 798f215)
This is an automated email from the ASF dual-hosted git repository.

jark pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.

    from 33a7059  [FLINK-13344][docs-zh] Translate How to Contribute page into Chinese (#338)
     new e33628e  [FLINK-13682][docs-zh] Translate "Code Style - Scala Guide" page into Chinese
     new 798f215  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails. The revisions
listed as "add" were already present in the repository and have only
been added to this reference.

Summary of changes:
 .../contributing/code-style-and-quality-scala.html |  86 +++
 content/zh/contributing/how-to-contribute.html     | 118 +
 contributing/code-style-and-quality-scala.zh.md    |  75 +++--
 3 files changed, 132 insertions(+), 147 deletions(-)
[flink-web] 02/02: Rebuild website
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 798f215185457ab67a0ecbf55be9d7ad695833c5
Author: Jark Wu
AuthorDate: Tue May 12 12:11:44 2020 +0800

    Rebuild website
---
 .../contributing/code-style-and-quality-scala.html |  86 +++
 content/zh/contributing/how-to-contribute.html     | 118 +
 2 files changed, 95 insertions(+), 109 deletions(-)

diff --git a/content/zh/contributing/code-style-and-quality-scala.html b/content/zh/contributing/code-style-and-quality-scala.html
index 83e5c4c..d6e 100644
--- a/content/zh/contributing/code-style-and-quality-scala.html
+++ b/content/zh/contributing/code-style-and-quality-scala.html
@@ -5,7 +5,7 @@
-Apache Flink: Apache Flink Code Style and Quality Guide — Scala
+Apache Flink: Apache Flink 代码样式与质量指南 — Scala
@@ -208,7 +208,7 @@
-Apache Flink Code Style and Quality Guide — Scala
+Apache Flink 代码样式与质量指南 — Scala
@@ -253,101 +253,101 @@
- Scala Language Features
- Where to use (and not use) Scala
- API Parity
- Language Features
- Coding Formatting
+ Scala 语言特性
+ 在哪儿使用(和不使用) Scala
+ API 等价
+ 语言特性
+ 编码格式

-Scala Language Features
+Scala 语言特性

-Where to use (and not use) Scala
+在哪儿使用(和不使用) Scala

-We use Scala for Scala APIs or pure Scala Libraries.
+对于 Scala 的 API 或者纯 Scala libraries,我们会选择使用 Scala。

-We do not use Scala in the core APIs and runtime components. We aim to remove existing Scala use (code and dependencies) from those components.
+在 core API 和 运行时的组件中,我们不使用 Scala。我们的目标是从这些组件中删除现有的 Scala 使用(代码和依赖项)。

-⇒ This is not because we do not like Scala, it is a consequence of “the right tool for the right job” approach (see below).
+⇒ 这并不是因为我们不喜欢 Scala,而是考虑到“用正确的工具做正确的事”的结果(见下文)。

-For APIs, we develop the foundation in Java, and layer Scala on top.
+对于 API,我们使用 Java 开发基础内容,并在上层使用 Scala。

- This has traditionally given the best interoperability for both Java and Scala
- It does mean dedicated effort to keep the Scala API up to date
+ 这在传统上为 Java 和 Scala 提供了最佳的互通性
+ 这意味着要致力于保持 Scala API 的更新

-Why don’t we use Scala in the core APIs and runtime?
+为什么我们不在 Core API 和 Runtime 中使用 Scala ?

- The past has shown that Scala evolves too quickly with tricky changes in functionality. Each Scala version upgrade was a rather big effort process for the Flink community.
- Scala does not always interact nicely with Java classes, e.g. Scala’s visibility scopes work differently and often expose more to Java consumers than desired
- Scala adds an additional layer of complexity to artifact/dependency management.
+ 过去的经验显示, Scala 在功能上的变化太快了。对于 Flink 社区来说,每次 Scala 版本升级都是一个比较棘手的处理过程。
+ Scala 并不总能很好地与 Java 的类交互,例如 Scala 的可见性范围的工作方式不同,而且常常向 Java 消费者公开的内容比预期的要多。
+ 由于使用 Scala ,所以 Flink 的 artifact/dependency 管理增加了一层额外的复杂性。

- We may want to keep Scala dependent libraries like Akka in the runtime, but abstract them via an interface and load them in a separate classloader, to keep them shielded and avoid version conflicts.
+ 我们希望通过接口抽象,同时也在运行时保留像 Akka 这样依赖 Scala 的库,然后将它们加载到单独的类加载器中,以保护它们并避免版本冲突。

- Scala makes it very easy for knowledgeable Scala programmers to write code that is very hard to understand for programmers that are less knowledgeable in Scala. That is especially tricky for an open source project with a broad community of diverse experience levels. Working around this means restricting the Scala feature set by a lot, which defeats a good amount of the purpose of using Scala in the first place.
+ Scala 让懂 Scala 的程序员很容易编写代码,而对于不太懂 Scala 的程序员来说,这些代码很难理解。对于一个拥有不同经验水平的广大社区的开源项目来说,这尤其棘手。解决这个问题意味着大量限制 Scala 特性集,这首先就违背了使用 Scala 的很多目的。

-API Parity
+API 等价

-Keep Java API and Scala API in sync in terms of functionality and code quality.
+保持 Java API 和 Scala API 在功能和代码质量方面的同步。

-The Scala API should cover all the features of the Java APIs as well.
+Scala API 也应该涵盖 Java API 的所有特性。

-Scala APIs should have a “completeness test”, like the following example from the DataStream API: https://github.com/apache/flink/blob/master/flink-streaming-scala/src/test/scala/org/apache/flink/streaming/api/scala/StreamingScalaAPICompletenessTest.scala
+Scala API 应该有一个“完整性测试”,就如下面 DataStream API 的示例中的一样: https://github.com/apache/flink/blob/master/flink-streaming-scala/src/test/scala/org/apache/flink/streaming/api/scala/StreamingScalaAPICompletenessTest.scala

-Language Features
+语言特性

- Avoid
[flink-web] 01/02: [FLINK-13682][docs-zh] Translate "Code Style - Scala Guide" page into Chinese
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit e33628ea470918116bb75aaa4e5580d1cc8edaff
Author: yangjf2019
AuthorDate: Thu Sep 19 14:17:02 2019 +0800

    [FLINK-13682][docs-zh] Translate "Code Style - Scala Guide" page into Chinese

    This closes #267
---
 contributing/code-style-and-quality-scala.zh.md | 75 -
 1 file changed, 37 insertions(+), 38 deletions(-)

diff --git a/contributing/code-style-and-quality-scala.zh.md b/contributing/code-style-and-quality-scala.zh.md
index c0b826c..ab173da 100644
--- a/contributing/code-style-and-quality-scala.zh.md
+++ b/contributing/code-style-and-quality-scala.zh.md
@@ -1,5 +1,5 @@
 ---
-title: "Apache Flink Code Style and Quality Guide — Scala"
+title: "Apache Flink 代码样式与质量指南 — Scala"
 ---

 {% include code-style-navbar.zh.md %}

@@ -8,68 +8,67 @@ title: "Apache Flink Code Style and Quality Guide — Scala"

-## Scala Language Features
+## Scala 语言特性

-### Where to use (and not use) Scala
+### 在哪儿使用(和不使用) Scala

-**We use Scala for Scala APIs or pure Scala Libraries.**
+**对于 Scala 的 API 或者纯 Scala libraries,我们会选择使用 Scala。**

-**We do not use Scala in the core APIs and runtime components. We aim to remove existing Scala use (code and dependencies) from those components.**
+**在 core API 和 运行时的组件中,我们不使用 Scala。我们的目标是从这些组件中删除现有的 Scala 使用(代码和依赖项)。**

-⇒ This is not because we do not like Scala, it is a consequence of “the right tool for the right job” approach (see below).
+⇒ 这并不是因为我们不喜欢 Scala,而是考虑到“用正确的工具做正确的事”的结果(见下文)。

-For APIs, we develop the foundation in Java, and layer Scala on top.
+对于 API,我们使用 Java 开发基础内容,并在上层使用 Scala。

-* This has traditionally given the best interoperability for both Java and Scala
-* It does mean dedicated effort to keep the Scala API up to date
+* 这在传统上为 Java 和 Scala 提供了最佳的互通性
+* 这意味着要致力于保持 Scala API 的更新

-Why don’t we use Scala in the core APIs and runtime?
+为什么我们不在 Core API 和 Runtime 中使用 Scala ?

-* The past has shown that Scala evolves too quickly with tricky changes in functionality. Each Scala version upgrade was a rather big effort process for the Flink community.
-* Scala does not always interact nicely with Java classes, e.g. Scala’s visibility scopes work differently and often expose more to Java consumers than desired
-* Scala adds an additional layer of complexity to artifact/dependency management.
-* We may want to keep Scala dependent libraries like Akka in the runtime, but abstract them via an interface and load them in a separate classloader, to keep them shielded and avoid version conflicts.
-* Scala makes it very easy for knowledgeable Scala programmers to write code that is very hard to understand for programmers that are less knowledgeable in Scala. That is especially tricky for an open source project with a broad community of diverse experience levels. Working around this means restricting the Scala feature set by a lot, which defeats a good amount of the purpose of using Scala in the first place.
+* 过去的经验显示, Scala 在功能上的变化太快了。对于 Flink 社区来说,每次 Scala 版本升级都是一个比较棘手的处理过程。
+* Scala 并不总能很好地与 Java 的类交互,例如 Scala 的可见性范围的工作方式不同,而且常常向 Java 消费者公开的内容比预期的要多。
+* 由于使用 Scala ,所以 Flink 的 artifact/dependency 管理增加了一层额外的复杂性。
+* 我们希望通过接口抽象,同时也在运行时保留像 Akka 这样依赖 Scala 的库,然后将它们加载到单独的类加载器中,以保护它们并避免版本冲突。
+* Scala 让懂 Scala 的程序员很容易编写代码,而对于不太懂 Scala 的程序员来说,这些代码很难理解。对于一个拥有不同经验水平的广大社区的开源项目来说,这尤其棘手。解决这个问题意味着大量限制 Scala 特性集,这首先就违背了使用 Scala 的很多目的。

-### API Parity
+### API 等价

-Keep Java API and Scala API in sync in terms of functionality and code quality.
+保持 Java API 和 Scala API 在功能和代码质量方面的同步。

-The Scala API should cover all the features of the Java APIs as well.
+Scala API 也应该涵盖 Java API 的所有特性。

-Scala APIs should have a “completeness test”, like the following example from the DataStream API: [https://github.com/apache/flink/blob/master/flink-streaming-scala/src/test/scala/org/apache/flink/streaming/api/scala/StreamingScalaAPICompletenessTest.scala](https://github.com/apache/flink/blob/master/flink-streaming-scala/src/test/scala/org/apache/flink/streaming/api/scala/StreamingScalaAPICompletenessTest.scala)
+Scala API 应该有一个“完整性测试”,就如下面 DataStream API 的示例中的一样: [https://github.com/apache/flink/blob/master/flink-streaming-scala/src/test/scala/org/apache/flink/streaming/api/scala/StreamingScalaAPICompletenessTest.scala](https://github.com/apache/flink/blob/master/flink-streaming-scala/src/test/scala/org/apache/flink/streaming/api/scala/StreamingScalaAPICompletenessTest.scala)

-### Language Features
+### 语言特性

-* **Avoid Scala implicits.**
-* Scala’s implicits should only be used for user-facing API improvements such as the Table API expressions or type information extraction.
-* Don’t use them for internal “magic”.
-* **Add explicit types for class members.**
-* Don’t rely on implicit type inference for class fields and methods
[flink] branch master updated (2160c32 -> aa1ede9)
This is an automated email from the ASF dual-hosted git repository.

jqin pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.

    from 2160c32  [FLINK-17252][table] Add Table#execute api and support SELECT statement in TableEnvironment#executeSql
     add aa1ede9  [FLINK-16845] Adds implementation of SourceOperator.

                  This patch does the following:
                  1. Add CoordinatedOperatorFactory interface.
                  2. Rename SourceReaderOperator to SourceOperator, add implementation and connect it to OperatorEventGateway.
                  3. Rename SourceReaderStreamTask to SourceOperatorStreamTask.
                  4. Fix some bugs in StreamTaskMailboxTestHarness.

No new revisions were added by this update.

Summary of changes:
 .../api/connector/source/mocks/MockSource.java      |  14 +-
 .../connector/source/mocks/MockSourceReader.java    | 130
 .../connector/source/mocks/MockSourceSplit.java     |   4 +
 .../flink/state/api/output/BoundedStreamTask.java   |   3 +-
 .../coordination/MockOperatorEventGateway.java      |  24 ++-
 .../api/operators/CoordinatedOperatorFactory.java   |  57 ++
 .../streaming/api/operators/SourceOperator.java     | 220 +
 .../api/operators/SourceOperatorFactory.java        |  80
 .../api/operators/SourceReaderOperator.java         |  34
 .../api/operators/StreamOperatorFactoryUtil.java    |  13 +-
 .../runtime/io/StreamTaskSourceInput.java           |   8 +-
 .../streaming/runtime/tasks/OperatorChain.java      |   6 +-
 ...reamTask.java => SourceOperatorStreamTask.java}  |  36 +++-
 .../runtime/tasks/mailbox/MailboxProcessor.java     |   5 +
 .../api/operators/SourceOperatorTest.java           | 203 +++
 .../tasks/SourceOperatorStreamTaskTest.java         | 176 +
 .../runtime/tasks/SourceReaderStreamTaskTest.java   | 209
 .../tasks/StreamTaskMailboxTestHarness.java         |  14 +-
 .../tasks/StreamTaskMailboxTestHarnessBuilder.java  |  32 ++-
 .../util/AbstractStreamOperatorTestHarness.java     |   4 +-
 20 files changed, 982 insertions(+), 290 deletions(-)
 create mode 100644 flink-core/src/test/java/org/apache/flink/api/connector/source/mocks/MockSourceReader.java
 copy flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/source/reader/splitreader/SplitsChange.java => flink-runtime/src/test/java/org/apache/flink/runtime/operators/coordination/MockOperatorEventGateway.java (62%)
 create mode 100644 flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/CoordinatedOperatorFactory.java
 create mode 100644 flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/SourceOperator.java
 create mode 100644 flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/SourceOperatorFactory.java
 delete mode 100644 flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/SourceReaderOperator.java
 rename flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/{SourceReaderStreamTask.java => SourceOperatorStreamTask.java} (73%)
 create mode 100644 flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/SourceOperatorTest.java
 create mode 100644 flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SourceOperatorStreamTaskTest.java
 delete mode 100644 flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SourceReaderStreamTaskTest.java
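The diffstat above introduces CoordinatedOperatorFactory, the hook through which an operator such as the new SourceOperator is paired with an OperatorCoordinator and reached via the OperatorEventGateway. The file contents are not part of this digest, so the following Java sketch only illustrates the likely shape of that interface; the names come from the diffstat, while the exact signature is an assumption:

import org.apache.flink.runtime.jobgraph.OperatorID;
import org.apache.flink.runtime.operators.coordination.OperatorCoordinator;
import org.apache.flink.streaming.api.operators.StreamOperatorFactory;

// Hedged sketch (signature assumed, not copied from the commit): a stream
// operator factory whose operator is accompanied by a coordinator running
// on the JobManager, e.g. the SourceOperator's split enumerator.
public interface CoordinatedOperatorFactory<OUT> extends StreamOperatorFactory<OUT> {

    // Supplies the coordinator; the runtime connects coordinator and
    // operator through the OperatorEventGateway named in the commit message.
    OperatorCoordinator.Provider getCoordinatorProvider(String operatorName, OperatorID operatorID);
}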
[flink] branch master updated (7cfcd33 -> 2160c32)
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.

    from 7cfcd33  [FLINK-17608][web] Add TM log and stdout page/tab back
     add 2160c32  [FLINK-17252][table] Add Table#execute api and support SELECT statement in TableEnvironment#executeSql

No new revisions were added by this update.

Summary of changes:
 .../flink/connectors/hive/HiveTableSourceTest.java  |  30 ++--
 .../connectors/hive/TableEnvHiveConnectorTest.java  |  47 +++---
 .../table/catalog/hive/HiveCatalogITCase.java       |   9 +-
 .../catalog/hive/HiveCatalogUseBlinkITCase.java     |   9 +-
 .../flink/table/module/hive/HiveModuleTest.java     |  23 ++-
 .../io/jdbc/catalog/PostgresCatalogITCase.java      |  27 ++--
 flink-python/pyflink/table/table.py                 |  15 ++
 flink-python/pyflink/table/table_environment.py     |   1 +
 flink-python/pyflink/table/tests/test_table_api.py  |  52 +++
 .../java/org/apache/flink/table/api/Table.java      |  15 +-
 .../apache/flink/table/api/TableEnvironment.java    |   2 +-
 .../flink/table/api/internal/SelectTableSink.java   |  37 +++--
 .../table/api/internal/TableEnvironmentImpl.java    |  32 +++-
 .../api/internal/TableEnvironmentInternal.java      |   9 ++
 .../apache/flink/table/api/internal/TableImpl.java  |   5 +
 .../flink/table/api/internal/TableResultImpl.java   |  65 +---
 .../org/apache/flink/table/delegation/Planner.java  |  12 +-
 .../org/apache/flink/table/utils/PlannerMock.java   |   7 +
 .../org/apache/flink/table/utils/PrintUtils.java    |  34 -
 .../org/apache/flink/table/api/TableUtils.java      | 165 -
 .../table/planner/sinks/BatchSelectTableSink.java   | 112 ++
 .../sinks/SelectTableSinkSchemaConverter.java       |  61
 .../table/planner/sinks/StreamSelectTableSink.java  |  94
 .../table/planner/delegation/BatchPlanner.scala     |   8 +-
 .../table/planner/delegation/StreamPlanner.scala    |   8 +-
 .../flink/table/api/TableUtilsBatchITCase.java      |  66 -
 .../flink/table/api/TableUtilsStreamingITCase.java  |  75 --
 .../flink/table/api/TableEnvironmentITCase.scala    |  81 +-
 .../flink/table/api/TableEnvironmentTest.scala      |  10 --
 .../org/apache/flink/table/api/TableITCase.scala    | 128
 .../runtime/batch/sql/join/ScalarQueryITCase.scala  |   4 +-
 .../runtime/stream/FsStreamingSinkITCaseBase.scala  |   6 +-
 .../planner/runtime/utils/BatchTestBase.scala       |   4 +-
 .../flink/table/sinks/BatchSelectTableSink.java     | 105 +
 .../sinks/SelectTableSinkSchemaConverter.java       |  56 +++
 .../flink/table/sinks/StreamSelectTableSink.java    |  86 +++
 .../flink/table/api/internal/TableEnvImpl.scala     |  32 +++-
 .../apache/flink/table/planner/StreamPlanner.scala  |   5 +
 .../flink/table/api/TableEnvironmentITCase.scala    |  77 +-
 .../org/apache/flink/table/api/TableITCase.scala    | 120 +++
 .../api/batch/BatchTableEnvironmentTest.scala       |  11 --
 .../runtime/batch/sql/TableEnvironmentITCase.scala  |  33 -
 .../table/runtime/batch/table/TableITCase.scala     |  65
 .../streaming/util/TestStreamEnvironment.java       |   4 +
 44 files changed, 1392 insertions(+), 455 deletions(-)
 create mode 100644 flink-python/pyflink/table/tests/test_table_api.py
 copy flink-core/src/main/java/org/apache/flink/api/common/operators/UnaryOperatorInformation.java => flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/SelectTableSink.java (53%)
 delete mode 100644 flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/api/TableUtils.java
 create mode 100644 flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/sinks/BatchSelectTableSink.java
 create mode 100644 flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/sinks/SelectTableSinkSchemaConverter.java
 create mode 100644 flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/sinks/StreamSelectTableSink.java
 delete mode 100644 flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/api/TableUtilsBatchITCase.java
 delete mode 100644 flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/api/TableUtilsStreamingITCase.java
 create mode 100644 flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/api/TableITCase.scala
 create mode 100644 flink-table/flink-table-planner/src/main/java/org/apache/flink/table/sinks/BatchSelectTableSink.java
 create mode 100644 flink-table/flink-table-planner/src/main/java/org/apache/flink/table/sinks/SelectTableSinkSchemaConverter.java
 create mode 100644 flink-table/flink-table-planner/src/main/java/org/apache/flink/table/sinks/StreamSelectTableSink.java
 create mode 100644 flink-table/flink-table-planner/src/test/scala/org/apache/flink/tab
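The user-visible change in FLINK-17252 is that a SELECT statement can now be executed directly. A minimal usage sketch, assuming a Flink build at roughly this commit's state (blink planner); the datagen source table and the TableResult collect/print calls are illustrative assumptions, not code from this commit:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.TableResult;
import org.apache.flink.types.Row;
import org.apache.flink.util.CloseableIterator;

public class ExecuteSelectSketch {
    public static void main(String[] args) throws Exception {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inBatchMode().build());

        // Hypothetical source table, only so the SELECT has input.
        tEnv.executeSql(
                "CREATE TABLE src (a INT, b STRING) WITH ("
                        + " 'connector' = 'datagen', 'number-of-rows' = '5')");

        // New in this commit: executeSql supports SELECT and returns a TableResult.
        TableResult result = tEnv.executeSql("SELECT a, b FROM src");
        try (CloseableIterator<Row> it = result.collect()) {
            while (it.hasNext()) {
                System.out.println(it.next());
            }
        }

        // Table#execute is the equivalent entry point on the Table API side.
        tEnv.sqlQuery("SELECT a, b FROM src").execute().print();
    }
}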
[flink] branch master updated: [FLINK-17608][web] Add TM log and stdout page/tab back
This is an automated email from the ASF dual-hosted git repository.

gary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

The following commit(s) were added to refs/heads/master by this push:
     new 7cfcd33  [FLINK-17608][web] Add TM log and stdout page/tab back
7cfcd33 is described below

commit 7cfcd33e983c6e07eedf8c0d5514450a565710ff
Author: vthinkxie
AuthorDate: Mon May 11 20:06:13 2020 +0800

    [FLINK-17608][web] Add TM log and stdout page/tab back

    This closes #12085.
---
 .../task-manager-log-detail.component.ts           | 10 +
 .../task-manager-logs.component.html}              |  5 ++-
 .../logs/task-manager-logs.component.less          | 28 ++
 .../task-manager-logs.component.ts}                | 44 +-
 .../status/task-manager-status.component.ts        |  8 +++-
 .../task-manager-stdout.component.html}            |  4 +-
 .../stdout/task-manager-stdout.component.less      | 28 ++
 .../task-manager-stdout.component.ts}              | 44 +-
 .../task-manager/task-manager-routing.module.ts    | 17 ++---
 .../app/pages/task-manager/task-manager.module.ts  |  6 ++-
 .../task-manager-thread-dump.component.html        |  2 +-
 .../src/app/services/task-manager.service.ts       | 42 -
 12 files changed, 174 insertions(+), 64 deletions(-)

diff --git a/flink-runtime-web/web-dashboard/src/app/pages/task-manager/log-detail/task-manager-log-detail.component.ts b/flink-runtime-web/web-dashboard/src/app/pages/task-manager/log-detail/task-manager-log-detail.component.ts
index f589122..4226a87 100644
--- a/flink-runtime-web/web-dashboard/src/app/pages/task-manager/log-detail/task-manager-log-detail.component.ts
+++ b/flink-runtime-web/web-dashboard/src/app/pages/task-manager/log-detail/task-manager-log-detail.component.ts
@@ -37,7 +37,6 @@ export class TaskManagerLogDetailComponent implements OnInit {
   isLoading = false;
   taskManagerDetail: TaskManagerDetailInterface;
   isFullScreen = false;
-  hasLogName = false;
   @ViewChild(MonacoEditorComponent) monacoEditorComponent: MonacoEditorComponent;

   constructor(
@@ -49,7 +48,7 @@ export class TaskManagerLogDetailComponent implements OnInit {
   reloadLog() {
     this.isLoading = true;
     this.cdr.markForCheck();
-    this.taskManagerService.loadLog(this.taskManagerDetail.id, this.logName, this.hasLogName).subscribe(
+    this.taskManagerService.loadLog(this.taskManagerDetail.id, this.logName).subscribe(
       data => {
         this.logs = data.data;
         this.downloadUrl = data.url;
@@ -77,12 +76,7 @@ export class TaskManagerLogDetailComponent implements OnInit {
   ngOnInit() {
     this.taskManagerService.taskManagerDetail$.pipe(first()).subscribe(data => {
       this.taskManagerDetail = data;
-      this.hasLogName = this.activatedRoute.snapshot.data.hasLogName;
-      if (this.hasLogName) {
-        this.logName = this.activatedRoute.snapshot.params.logName;
-      } else {
-        this.logName = `taskmanager_${data.id}_log`;
-      }
+      this.logName = this.activatedRoute.snapshot.params.logName;
       this.reloadLog();
     });
   }

diff --git a/flink-runtime-web/web-dashboard/src/app/pages/task-manager/thread-dump/task-manager-thread-dump.component.html b/flink-runtime-web/web-dashboard/src/app/pages/task-manager/logs/task-manager-logs.component.html
similarity index 75%
copy from flink-runtime-web/web-dashboard/src/app/pages/task-manager/thread-dump/task-manager-thread-dump.component.html
copy to flink-runtime-web/web-dashboard/src/app/pages/task-manager/logs/task-manager-logs.component.html
index 096b6b9..8c07727 100644
--- a/flink-runtime-web/web-dashboard/src/app/pages/task-manager/thread-dump/task-manager-thread-dump.component.html
+++ b/flink-runtime-web/web-dashboard/src/app/pages/task-manager/logs/task-manager-logs.component.html
@@ -16,5 +16,6 @@
   ~ limitations under the License.
   -->
[the remaining markup lines of this hunk were stripped in extraction; only their "-" and "+" markers survived]

diff --git a/flink-runtime-web/web-dashboard/src/app/pages/task-manager/logs/task-manager-logs.component.less b/flink-runtime-web/web-dashboard/src/app/pages/task-manager/logs/task-manager-logs.component.less
new file mode 100644
index 000..df80525
--- /dev/null
+++ b/flink-runtime-web/web-dashboard/src/app/pages/task-manager/logs/task-manager-logs.component.less
@@ -0,0 +1,28 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ distributed under
[flink] branch master updated: [FLINK-17523] Add call expression with a class of UDF as a parameter
This is an automated email from the ASF dual-hosted git repository.

dwysakowicz pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

The following commit(s) were added to refs/heads/master by this push:
     new 3561adf  [FLINK-17523] Add call expression with a class of UDF as a parameter
3561adf is described below

commit 3561adf03deb88bff540773b5a2037c8576c09f8
Author: Dawid Wysakowicz
AuthorDate: Mon May 11 08:46:03 2020 +0200

    [FLINK-17523] Add call expression with a class of UDF as a parameter
---
 .../org/apache/flink/table/api/Expressions.java     | 12 +++
 .../java/org/apache/flink/table/api/Table.java      | 24 --
 .../apache/flink/table/api/WindowGroupedTable.java  |  6 ++
 .../resolver/ExpressionResolverTest.java            | 14 +
 .../org/apache/flink/table/api/expressionDsl.scala  |  9
 5 files changed, 45 insertions(+), 20 deletions(-)

diff --git a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/Expressions.java b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/Expressions.java
index 6071764..37f53d4 100644
--- a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/Expressions.java
+++ b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/Expressions.java
@@ -27,6 +27,7 @@ import org.apache.flink.table.expressions.TimePointUnit;
 import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
 import org.apache.flink.table.functions.FunctionDefinition;
 import org.apache.flink.table.functions.UserDefinedFunction;
+import org.apache.flink.table.functions.UserDefinedFunctionHelper;
 import org.apache.flink.table.types.DataType;
 import org.apache.flink.table.types.utils.TypeConversions;
 import org.apache.flink.table.types.utils.ValueDataTypeConverter;
@@ -526,6 +527,17 @@ public final class Expressions {
 		return apiCall(function, arguments);
 	}

+	/**
+	 * A call to an unregistered, inline function.
+	 *
+	 * For functions that have been registered before and are identified by a name, use
+	 * {@link #call(String, Object...)}.
+	 */
+	public static ApiExpression call(Class<? extends UserDefinedFunction> function, Object... arguments) {
+		final UserDefinedFunction functionInstance = UserDefinedFunctionHelper.instantiateFunction(function);
+		return apiCall(functionInstance, arguments);
+	}
+
 	private static ApiExpression apiCall(FunctionDefinition functionDefinition, Object... args) {
 		List<Expression> arguments = Stream.of(args)

diff --git a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/Table.java b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/Table.java
index d1239e6..4d8219f 100644
--- a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/Table.java
+++ b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/Table.java
@@ -595,8 +595,7 @@ public interface Table {
 *   }
 * }
 *
-* TableFunction<String> split = new MySplitUDTF();
-* table.joinLateral(call(split, $("c")).as("s"))
+* table.joinLateral(call(MySplitUDTF.class, $("c")).as("s"))
 *    .select($("a"), $("b"), $("c"), $("s"));
 * }
 *
@@ -659,8 +658,7 @@ public interface Table {
 *   }
 * }
 *
-* TableFunction<String> split = new MySplitUDTF();
-* table.joinLateral(call(split, $("c")).as("s"), $("a").isEqual($("s")))
+* table.joinLateral(call(MySplitUDTF.class, $("c")).as("s"), $("a").isEqual($("s")))
 *    .select($("a"), $("b"), $("c"), $("s"));
 * }
 *
@@ -725,8 +723,7 @@ public interface Table {
 *   }
 * }
 *
-* TableFunction<String> split = new MySplitUDTF();
-* table.leftOuterJoinLateral(call(split, $("c")).as("s"))
+* table.leftOuterJoinLateral(call(MySplitUDTF.class, $("c")).as("s"))
 *    .select($("a"), $("b"), $("c"), $("s"));
 * }
 *
@@ -791,8 +788,7 @@ public interface Table {
 *   }
 * }
 *
-* TableFunction<String> split = new MySplitUDTF();
-* table.leftOuterJoinLateral(call(split, $("c")).as("s"), $("a").isEqual($("s")))
+* table.leftOuterJoinLateral(call(MySplitUDTF.class, $("c")).as("s"), $("a").isEqual($("s")))
 *    .select($("a"), $("b"), $("c"), $("s"));
 * }
 *
@@ -1267,8 +1263,7 @@ public interface Table {
 *
 *
 * {@code
-* ScalarFunction func = new MyMapFunction();
-* tab.map(call(func, $("c")))
+* tab.map(call(MyMapFunction.class, $("c")))
 * }
 *
 *
@@ -1309,8 +1304,7 @@ public interface Table {
 *
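The overload added above spares users from instantiating inline functions themselves. A short usage sketch assembled from the Javadoc snippets in this very diff (MySplitUDTF and the column names are the commit's own illustrative examples):

import static org.apache.flink.table.api.Expressions.$;
import static org.apache.flink.table.api.Expressions.call;

// Before this commit, an inline table function had to be instantiated first:
//   TableFunction<String> split = new MySplitUDTF();
//   table.joinLateral(call(split, $("c")).as("s"));

// Now the class alone is enough; the framework instantiates and validates it:
table.joinLateral(call(MySplitUDTF.class, $("c")).as("s"))
     .select($("a"), $("b"), $("c"), $("s"));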
[flink] branch master updated (fd0ef6e -> bbf9d95)
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.

    from fd0ef6e  [FLINK-17369][tests] Reduce visibility of internal test methods
     add 5846e54  [FLINK-17416][e2e][k8s] Use fixed v1.16.9 because fabric8 kubernetes-client could not work with higher version under jdk 8u252
     add bbf9d95  [FLINK-17416][e2e][k8s] Enable k8s related tests

No new revisions were added by this update.

Summary of changes:
 flink-end-to-end-tests/run-nightly-tests.sh              | 5 ++---
 flink-end-to-end-tests/test-scripts/common_kubernetes.sh | 3 ++-
 2 files changed, 4 insertions(+), 4 deletions(-)
[flink] branch master updated (ce6b97e -> fd0ef6e)
This is an automated email from the ASF dual-hosted git repository.

gary pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.

    from ce6b97e  [FLINK-17567][python][release] Create a dedicated Python directory in release directory to place Python-related source and binary packages
     add cba4845  [FLINK-17369][tests] In RestartPipelinedRegionFailoverStrategyBuildingTest invoke PipelinedRegionComputeUtil directly
     add 56cc76b  [FLINK-17369][tests] Rename RestartPipelinedRegionFailoverStrategyBuildingTest to PipelinedRegionComputeUtilTest
     add fd0ef6e  [FLINK-17369][tests] Reduce visibility of internal test methods

No new revisions were added by this update.

Summary of changes:
 ...st.java => PipelinedRegionComputeUtilTest.java} | 174 -
 1 file changed, 98 insertions(+), 76 deletions(-)
 rename flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/failover/flip1/{RestartPipelinedRegionFailoverStrategyBuildingTest.java => PipelinedRegionComputeUtilTest.java} (65%)
[flink] branch master updated: [FLINK-17567][python][release] Create a dedicated Python directory in release directory to place Python-related source and binary packages
This is an automated email from the ASF dual-hosted git repository.

dianfu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

The following commit(s) were added to refs/heads/master by this push:
     new ce6b97e  [FLINK-17567][python][release] Create a dedicated Python directory in release directory to place Python-related source and binary packages
ce6b97e is described below

commit ce6b97edf8eb44fc92fb7da09e41aac18d710802
Author: huangxingbo
AuthorDate: Fri May 8 15:19:15 2020 +0800

    [FLINK-17567][python][release] Create a dedicated Python directory in release directory to place Python-related source and binary packages

    This closes #12030.
---
 tools/releasing/create_binary_release.sh | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/tools/releasing/create_binary_release.sh b/tools/releasing/create_binary_release.sh
index 9b58b82..0626e4e 100755
--- a/tools/releasing/create_binary_release.sh
+++ b/tools/releasing/create_binary_release.sh
@@ -51,7 +51,8 @@ cd ..

 FLINK_DIR=`pwd`
 RELEASE_DIR=${FLINK_DIR}/tools/releasing/release
-mkdir -p ${RELEASE_DIR}
+PYTHON_RELEASE_DIR=${RELEASE_DIR}/python
+mkdir -p ${PYTHON_RELEASE_DIR}

 ###

@@ -108,7 +109,7 @@ make_python_release() {
     exit 1
   fi

-  cp ${pyflink_actual_name} "${RELEASE_DIR}/${pyflink_release_name}"
+  cp ${pyflink_actual_name} "${PYTHON_RELEASE_DIR}/${pyflink_release_name}"

   wheel_packages_num=0
   # py35,py36,py37 for mac and linux (6 wheel packages)
@@ -119,7 +120,7 @@ make_python_release() {
       echo -e "\033[31;1mThe file name of the python package: ${wheel_file} is not consistent with given release version: ${PYFLINK_VERSION}!\033[0m"
       exit 1
     fi
-    cp ${wheel_file} "${RELEASE_DIR}/${wheel_file}"
+    cp ${wheel_file} "${PYTHON_RELEASE_DIR}/${wheel_file}"
     wheel_packages_num=$((wheel_packages_num+1))
   done
   if [[ ${wheel_packages_num} != ${EXPECTED_WHEEL_PACKAGES_NUM} ]]; then
@@ -127,7 +128,7 @@ make_python_release() {
     exit 1
   fi

-  cd ${RELEASE_DIR}
+  cd ${PYTHON_RELEASE_DIR}

   # Sign sha the tgz and wheel packages
   if [ "$SKIP_GPG" == "false" ] ; then
[flink] branch master updated (5f744d3 -> d90b5e0)
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.

    from 5f744d3  [FLINK-17454][python] Specify a port number for gateway callback server from python gateway.
     add d90b5e0  [FLINK-17603][table][core] Prepare Hive partitioned streaming source

No new revisions were added by this update.

Summary of changes:
 .../connectors/hive/read/HiveTableInputFormat.java | 23 +--
 .../hive/read/HiveVectorizedOrcSplitReader.java    |  5 ++
 .../read/HiveVectorizedParquetSplitReader.java     |  5 ++
 .../flink/connectors/hive/read/SplitReader.java    | 13
 .../java/org/apache/flink/orc/OrcSplitReader.java  |  7 +++
 .../vector/ParquetColumnarRowSplitReader.java      | 31 +
 .../vector/ParquetColumnarRowSplitReaderTest.java  | 13 +++-
 .../ContinuousFileProcessingMigrationTest.java     |  2 +-
 .../hdfstests/ContinuousFileProcessingTest.java    |  3 +-
 .../environment/StreamExecutionEnvironment.java    |  4 +-
 .../source/ContinuousFileReaderOperator.java       | 73 --
 .../ContinuousFileReaderOperatorFactory.java       | 19 +++---
 .../source/TimestampedFileInputSplit.java          | 36 ---
 .../functions/source/TimestampedInputSplit.java    | 62 ++
 14 files changed, 222 insertions(+), 74 deletions(-)
 create mode 100644 flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/TimestampedInputSplit.java
[flink] branch master updated: [FLINK-17454][python] Specify a port number for gateway callback server from python gateway.
This is an automated email from the ASF dual-hosted git repository.

jincheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

The following commit(s) were added to refs/heads/master by this push:
     new 5f744d3  [FLINK-17454][python] Specify a port number for gateway callback server from python gateway.
5f744d3 is described below

commit 5f744d3f81bcfb8f77164a5ec9caa4594851d4bf
Author: acqua.csq
AuthorDate: Fri May 8 23:29:12 2020 +0800

    [FLINK-17454][python] Specify a port number for gateway callback server from python gateway.

    This closes #12061
---
 flink-python/pyflink/java_gateway.py               | 11 +++--
 .../apache/flink/client/python/PythonEnvUtils.java | 55 +-
 2 files changed, 49 insertions(+), 17 deletions(-)

diff --git a/flink-python/pyflink/java_gateway.py b/flink-python/pyflink/java_gateway.py
index d8e061b..33ab2ae 100644
--- a/flink-python/pyflink/java_gateway.py
+++ b/flink-python/pyflink/java_gateway.py
@@ -49,15 +49,19 @@ def get_gateway():
         # if Java Gateway is already running
         if 'PYFLINK_GATEWAY_PORT' in os.environ:
             gateway_port = int(os.environ['PYFLINK_GATEWAY_PORT'])
-            callback_port = int(os.environ['PYFLINK_CALLBACK_PORT'])
             gateway_param = GatewayParameters(port=gateway_port, auto_convert=True)
             _gateway = JavaGateway(
                 gateway_parameters=gateway_param,
                 callback_server_parameters=CallbackServerParameters(
-                    port=callback_port, daemonize=True, daemonize_connections=True))
+                    port=0, daemonize=True, daemonize_connections=True))
         else:
             _gateway = launch_gateway()

+        callback_server = _gateway.get_callback_server()
+        callback_server_listening_address = callback_server.get_listening_address()
+        callback_server_listening_port = callback_server.get_listening_port()
+        _gateway.jvm.org.apache.flink.client.python.PythonEnvUtils.resetCallbackClient(
+            callback_server_listening_address, callback_server_listening_port)
         # import the flink view
         import_flink_view(_gateway)
         install_exception_handler()
@@ -102,7 +106,6 @@ def launch_gateway():

         with open(conn_info_file, "rb") as info:
             gateway_port = struct.unpack("!I", info.read(4))[0]
-            callback_port = struct.unpack("!I", info.read(4))[0]
     finally:
         shutil.rmtree(conn_info_dir)

@@ -110,7 +113,7 @@ def launch_gateway():
     gateway = JavaGateway(
         gateway_parameters=GatewayParameters(port=gateway_port, auto_convert=True),
         callback_server_parameters=CallbackServerParameters(
-            port=callback_port, daemonize=True, daemonize_connections=True))
+            port=0, daemonize=True, daemonize_connections=True))

     return gateway

diff --git a/flink-python/src/main/java/org/apache/flink/client/python/PythonEnvUtils.java b/flink-python/src/main/java/org/apache/flink/client/python/PythonEnvUtils.java
index cf15b7b..76370dc 100644
--- a/flink-python/src/main/java/org/apache/flink/client/python/PythonEnvUtils.java
+++ b/flink-python/src/main/java/org/apache/flink/client/python/PythonEnvUtils.java
@@ -35,7 +35,10 @@ import py4j.GatewayServer;
 import java.io.File;
 import java.io.IOException;
 import java.lang.reflect.Field;
+import java.lang.reflect.InvocationTargetException;
 import java.lang.reflect.Method;
+import java.net.InetAddress;
+import java.net.UnknownHostException;
 import java.nio.file.FileSystems;
 import java.nio.file.FileVisitResult;
 import java.nio.file.Files;
@@ -277,16 +280,7 @@ final class PythonEnvUtils {
 			.gateway(new Gateway(new ConcurrentHashMap(), new CallbackClient(freePort)))
 			.javaPort(0)
 			.build();
-		CallbackClient callbackClient = (CallbackClient) server.getCallbackClient();
-		// The Java API of py4j does not provide approach to set "daemonize_connections" parameter.
-		// Use reflect to daemonize the connection thread.
-		Field executor = CallbackClient.class.getDeclaredField("executor");
-		executor.setAccessible(true);
-		((ScheduledExecutorService) executor.get(callbackClient)).shutdown();
-		executor.set(callbackClient, Executors.newScheduledThreadPool(1, Thread::new));
-		Method setupCleaner = CallbackClient.class.getDeclaredMethod("setupCleaner");
-		setupCleaner.setAccessible(true);
-		setupCleaner.invoke(callbackClient);
+
[flink] branch master updated (186724f -> 1a2eb5e)
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.

    from 186724f  [FLINK-17286][table][json] Integrate json to file system connector
     add 1a2eb5e  [FLINK-17289][docs] Translate tutorials/etl.md to Chinese

No new revisions were added by this update.

Summary of changes:
 docs/training/etl.zh.md | 263 +++-
 1 file changed, 81 insertions(+), 182 deletions(-)
[flink] branch master updated: [FLINK-17286][table][json] Integrate json to file system connector
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

The following commit(s) were added to refs/heads/master by this push:
     new 186724f  [FLINK-17286][table][json] Integrate json to file system connector
186724f is described below

commit 186724f1625f13482ef1e6362764459129ee84fe
Author: Leonard Xu
AuthorDate: Mon May 11 15:35:31 2020 +0800

    [FLINK-17286][table][json] Integrate json to file system connector

    This closes #12010
---
 .../flink/table/catalog/hive/HiveCatalog.java       |   2 +-
 .../flink/api/common/io/DelimitedInputFormat.java   |   6 +-
 flink-formats/flink-json/pom.xml                    |  17 ++
 .../formats/json/JsonFileSystemFormatFactory.java   | 270 +
 .../org.apache.flink.table.factories.TableFactory  |   1 +
 .../formats/json/JsonBatchFileSystemITCase.java     |  62 +
 .../flink/formats/json/JsonFsStreamSinkITCase.java  |  39 +++
 .../flink/orc/OrcFileSystemFormatFactory.java       |   6 +-
 .../parquet/ParquetFileSystemFormatFactory.java     |   6 +-
 .../table/factories/FileSystemFormatFactory.java    |  61 -
 .../flink/table/utils}/PartitionPathUtils.java      |  83 ++-
 .../utils/TestCsvFileSystemFormatFactory.java       |   4 +-
 .../planner/utils/TestRowDataCsvInputFormat.java    |   2 +-
 .../planner/runtime/FileSystemITCaseBase.scala      |   7 +-
 .../table/filesystem/DynamicPartitionWriter.java    |   2 +-
 .../table/filesystem/FileSystemTableSink.java       |   1 +
 .../table/filesystem/FileSystemTableSource.java     |   1 +
 .../table/filesystem/GroupedPartitionWriter.java    |   2 +-
 .../flink/table/filesystem/PartitionLoader.java     |   4 +-
 .../table/filesystem/PartitionTempFileManager.java  |   2 +-
 .../table/filesystem/SingleDirectoryWriter.java     |   2 +-
 .../table/filesystem/RowPartitionComputerTest.java  |   2 +-
 22 files changed, 558 insertions(+), 24 deletions(-)

diff --git a/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveCatalog.java b/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveCatalog.java
index 6d9dc57..128aa21 100644
--- a/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveCatalog.java
+++ b/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveCatalog.java
@@ -116,7 +116,7 @@ import java.util.stream.Collectors;
 import static org.apache.flink.table.catalog.config.CatalogConfig.FLINK_PROPERTY_PREFIX;
 import static org.apache.flink.table.catalog.hive.util.HiveStatsUtil.parsePositiveIntStat;
 import static org.apache.flink.table.catalog.hive.util.HiveStatsUtil.parsePositiveLongStat;
-import static org.apache.flink.table.filesystem.PartitionPathUtils.unescapePathName;
+import static org.apache.flink.table.utils.PartitionPathUtils.unescapePathName;
 import static org.apache.flink.util.Preconditions.checkArgument;
 import static org.apache.flink.util.Preconditions.checkNotNull;

diff --git a/flink-core/src/main/java/org/apache/flink/api/common/io/DelimitedInputFormat.java b/flink-core/src/main/java/org/apache/flink/api/common/io/DelimitedInputFormat.java
index 977a02c..0c09a4c 100644
--- a/flink-core/src/main/java/org/apache/flink/api/common/io/DelimitedInputFormat.java
+++ b/flink-core/src/main/java/org/apache/flink/api/common/io/DelimitedInputFormat.java
@@ -144,9 +144,9 @@ public abstract class DelimitedInputFormat extends FileInputFormat imple
 	private transient int limit;

-	private transient byte[] currBuffer;	// buffer in which current record byte sequence is found
-	private transient int currOffset;	// offset in above buffer
-	private transient int currLen;	// length of current byte sequence
+	protected transient byte[] currBuffer;	// buffer in which current record byte sequence is found
+	protected transient int currOffset;	// offset in above buffer
+	protected transient int currLen;	// length of current byte sequence

 	private transient boolean overLimit;

diff --git a/flink-formats/flink-json/pom.xml b/flink-formats/flink-json/pom.xml
index bff7fc1..19d5045 100644
--- a/flink-formats/flink-json/pom.xml
+++ b/flink-formats/flink-json/pom.xml
@@ -77,6 +77,23 @@ under the License.
 			<scope>test</scope>
 		</dependency>

+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-table-planner-blink_${scala.binary.version}</artifactId>
+			<version>${project.version}</version>
+			<scope>test</scope>
+			<type>test-jar</type>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-test-utils_${scala.binary.ver
[flink] branch release-1.9 updated: [FLINK-16346][runtime][tests] Use BlockingNoOpInvokable
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git

The following commit(s) were added to refs/heads/release-1.9 by this push:
     new 36440fb  [FLINK-16346][runtime][tests] Use BlockingNoOpInvokable
36440fb is described below

commit 36440fb938b50aecd2c8de607806f381cc2af587
Author: Chesnay Schepler
AuthorDate: Thu May 7 12:19:53 2020 +0200

    [FLINK-16346][runtime][tests] Use BlockingNoOpInvokable
---
 .../java/org/apache/flink/runtime/jobmanager/BlobsCleanupITCase.java | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/BlobsCleanupITCase.java b/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/BlobsCleanupITCase.java
index 4e53b4e..88d66ae 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/BlobsCleanupITCase.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/BlobsCleanupITCase.java
@@ -34,6 +34,7 @@ import org.apache.flink.runtime.jobgraph.JobGraph;
 import org.apache.flink.runtime.jobgraph.JobVertex;
 import org.apache.flink.runtime.jobmaster.JobResult;
 import org.apache.flink.runtime.minicluster.MiniCluster;
+import org.apache.flink.runtime.testtasks.BlockingNoOpInvokable;
 import org.apache.flink.runtime.testtasks.FailingBlockingInvokable;
 import org.apache.flink.runtime.testtasks.NoOpInvokable;
 import org.apache.flink.runtime.testutils.MiniClusterResource;
@@ -231,8 +232,10 @@ public class BlobsCleanupITCase extends TestLogger {
 	@Nonnull
 	private JobGraph createJobGraph(TestCase testCase, int numTasks) {
 		JobVertex source = new JobVertex("Source");
-		if (testCase == TestCase.JOB_FAILS || testCase == TestCase.JOB_IS_CANCELLED) {
+		if (testCase == TestCase.JOB_FAILS) {
 			source.setInvokableClass(FailingBlockingInvokable.class);
+		} else if (testCase == TestCase.JOB_IS_CANCELLED) {
+			source.setInvokableClass(BlockingNoOpInvokable.class);
 		} else {
 			source.setInvokableClass(NoOpInvokable.class);
 		}
[flink] branch release-1.10 updated: [FLINK-16346][runtime][tests] Use BlockingNoOpInvokable
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch release-1.10
in repository https://gitbox.apache.org/repos/asf/flink.git

The following commit(s) were added to refs/heads/release-1.10 by this push:
     new e346215  [FLINK-16346][runtime][tests] Use BlockingNoOpInvokable
e346215 is described below

commit e346215edcf2252cc60c5cef507ea77ce2ac9aca
Author: Chesnay Schepler
AuthorDate: Thu May 7 12:19:53 2020 +0200

    [FLINK-16346][runtime][tests] Use BlockingNoOpInvokable
---
 .../java/org/apache/flink/runtime/jobmanager/BlobsCleanupITCase.java | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/BlobsCleanupITCase.java b/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/BlobsCleanupITCase.java
index fdac262..d9dc9ff 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/BlobsCleanupITCase.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/BlobsCleanupITCase.java
@@ -34,6 +34,7 @@ import org.apache.flink.runtime.jobgraph.JobGraph;
 import org.apache.flink.runtime.jobgraph.JobVertex;
 import org.apache.flink.runtime.jobmaster.JobResult;
 import org.apache.flink.runtime.minicluster.MiniCluster;
+import org.apache.flink.runtime.testtasks.BlockingNoOpInvokable;
 import org.apache.flink.runtime.testtasks.FailingBlockingInvokable;
 import org.apache.flink.runtime.testtasks.NoOpInvokable;
 import org.apache.flink.runtime.testutils.MiniClusterResource;
@@ -237,8 +238,10 @@ public class BlobsCleanupITCase extends TestLogger {
 	@Nonnull
 	private JobGraph createJobGraph(TestCase testCase, int numTasks) {
 		JobVertex source = new JobVertex("Source");
-		if (testCase == TestCase.JOB_FAILS || testCase == TestCase.JOB_IS_CANCELLED) {
+		if (testCase == TestCase.JOB_FAILS) {
 			source.setInvokableClass(FailingBlockingInvokable.class);
+		} else if (testCase == TestCase.JOB_IS_CANCELLED) {
+			source.setInvokableClass(BlockingNoOpInvokable.class);
 		} else {
 			source.setInvokableClass(NoOpInvokable.class);
 		}
[flink] branch master updated (51c7d61 -> 477f39f)
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.

    from 51c7d61  [FLINK-17601][table-planner-blink] Correct the table scan node name in the explain result of TableEnvironmentITCase#testStatementSet
     add 477f39f  [FLINK-16346][runtime][tests] Use BlockingNoOpInvokable

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/flink/runtime/jobmanager/BlobsCleanupITCase.java | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)