[flink-statefun] 02/02: [hotfix] [e2e] Re-enable E2E tests in CI
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-statefun.git

commit 0dab43a81a02603f3aa0a748272473199b536efd
Author: Tzu-Li (Gordon) Tai
AuthorDate: Thu Feb 25 13:12:28 2021 +0800

    [hotfix] [e2e] Re-enable E2E tests in CI
---
 .github/workflows/java8-build.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.github/workflows/java8-build.yml b/.github/workflows/java8-build.yml
index 563ae8e..3cf9fc2 100644
--- a/.github/workflows/java8-build.yml
+++ b/.github/workflows/java8-build.yml
@@ -13,4 +13,4 @@ jobs:
       with:
         java-version: 1.8
     - name: Build
-      run: mvn clean install
\ No newline at end of file
+      run: mvn clean install -Prun-e2e-tests
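For context, `-Prun-e2e-tests` only has an effect if the build defines a Maven profile with that id. A minimal sketch of what such a profile could look like — only the id `run-e2e-tests` is taken from the commit above; the profile body here is an assumption, not the actual flink-statefun configuration:

```xml
<!-- Hypothetical sketch: a profile named "run-e2e-tests" (id from the
     commit above). A profile like this would bind the failsafe plugin so
     that end-to-end/integration tests run as part of `mvn install`. -->
<profiles>
  <profile>
    <id>run-e2e-tests</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-failsafe-plugin</artifactId>
          <executions>
            <execution>
              <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```

With a definition along these lines, `mvn clean install` alone skips the E2E suite, while `mvn clean install -Prun-e2e-tests` activates it.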
[flink-statefun] 01/02: [FLINK-21496] [e2e] Upgrade Testcontainers to 1.15.2
This is an automated email from the ASF dual-hosted git repository. tzulitai pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink-statefun.git commit ec762a7413c3f94470ee13d57edc47350feb1569 Author: Tzu-Li (Gordon) Tai AuthorDate: Thu Feb 25 13:11:37 2021 +0800 [FLINK-21496] [e2e] Upgrade Testcontainers to 1.15.2 --- statefun-e2e-tests/pom.xml| 4 statefun-e2e-tests/statefun-e2e-tests-common/pom.xml | 15 ++- .../statefun-exactly-once-remote-e2e/pom.xml | 1 - statefun-e2e-tests/statefun-sanity-e2e/pom.xml| 4 statefun-e2e-tests/statefun-smoke-e2e/pom.xml | 8 +++- 5 files changed, 25 insertions(+), 7 deletions(-) diff --git a/statefun-e2e-tests/pom.xml b/statefun-e2e-tests/pom.xml index e38a954..94a8994 100644 --- a/statefun-e2e-tests/pom.xml +++ b/statefun-e2e-tests/pom.xml @@ -28,6 +28,10 @@ under the License. statefun-e2e-tests pom + +1.15.2 + + statefun-e2e-tests-common statefun-sanity-e2e diff --git a/statefun-e2e-tests/statefun-e2e-tests-common/pom.xml b/statefun-e2e-tests/statefun-e2e-tests-common/pom.xml index 3fed811..5510f92 100644 --- a/statefun-e2e-tests/statefun-e2e-tests-common/pom.xml +++ b/statefun-e2e-tests/statefun-e2e-tests-common/pom.xml @@ -28,7 +28,6 @@ under the License. statefun-e2e-tests-common -1.12.5 2.4.1 @@ -67,10 +66,24 @@ under the License. com.kohlschutter.junixsocket junixsocket-common + + +net.java.dev.jna +jna + +net.java.dev.jna +jna +5.5.0 + + + com.kohlschutter.junixsocket junixsocket-common ${unixsocket.version} diff --git a/statefun-e2e-tests/statefun-exactly-once-remote-e2e/pom.xml b/statefun-e2e-tests/statefun-exactly-once-remote-e2e/pom.xml index f8b4dcf..0909b16 100644 --- a/statefun-e2e-tests/statefun-exactly-once-remote-e2e/pom.xml +++ b/statefun-e2e-tests/statefun-exactly-once-remote-e2e/pom.xml @@ -28,7 +28,6 @@ under the License. 
statefun-exactly-once-remote-e2e -1.12.5 2.2.0 diff --git a/statefun-e2e-tests/statefun-sanity-e2e/pom.xml b/statefun-e2e-tests/statefun-sanity-e2e/pom.xml index f2ac99a..02f1486 100644 --- a/statefun-e2e-tests/statefun-sanity-e2e/pom.xml +++ b/statefun-e2e-tests/statefun-sanity-e2e/pom.xml @@ -27,10 +27,6 @@ under the License. statefun-sanity-e2e - -1.12.5 - - diff --git a/statefun-e2e-tests/statefun-smoke-e2e/pom.xml b/statefun-e2e-tests/statefun-smoke-e2e/pom.xml index 26318c2..126cf5c 100644 --- a/statefun-e2e-tests/statefun-smoke-e2e/pom.xml +++ b/statefun-e2e-tests/statefun-smoke-e2e/pom.xml @@ -28,7 +28,6 @@ under the License. statefun-smoke-e2e -1.12.5 3.5 target/additional-sources @@ -133,6 +132,13 @@ under the License. statefun-flink-harness ${project.version} test + + + +com.fasterxml.jackson.core +jackson-annotations + +
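The junixsocket change in statefun-e2e-tests-common above uses a standard Maven conflict-resolution pattern: exclude the transitive `jna` artifact and declare it explicitly at a known-good version (5.5.0, per the diff), so it cannot clash with whatever Testcontainers 1.15.2 pulls in. Extracted as a sketch — the coordinates come from the diff, but the surrounding pom structure is omitted:

```xml
<!-- Pattern used in the diff above: exclude the transitive JNA version... -->
<dependency>
  <groupId>com.kohlschutter.junixsocket</groupId>
  <artifactId>junixsocket-common</artifactId>
  <exclusions>
    <exclusion>
      <groupId>net.java.dev.jna</groupId>
      <artifactId>jna</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- ...and pin JNA explicitly instead of inheriting it transitively -->
<dependency>
  <groupId>net.java.dev.jna</groupId>
  <artifactId>jna</artifactId>
  <version>5.5.0</version>
</dependency>
```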
[flink-statefun] branch master updated (ed73ce7 -> 0dab43a)
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink-statefun.git.

from 21fd255 [FLINK-21276] [legal] Mention Protobuf BSD license in statefun-sdk-java... wait
 from ed73ce7  [FLINK-21491] [sdk] Depend on statefun-protocol-shaded in Java SDK
  new ec762a7  [FLINK-21496] [e2e] Upgrade Testcontainers to 1.15.2
  new 0dab43a  [hotfix] [e2e] Re-enable E2E tests in CI

The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference.

Summary of changes:
 .github/workflows/java8-build.yml                    | 2 +-
 statefun-e2e-tests/pom.xml                           | 4
 statefun-e2e-tests/statefun-e2e-tests-common/pom.xml | 15 ++-
 .../statefun-exactly-once-remote-e2e/pom.xml         | 1 -
 statefun-e2e-tests/statefun-sanity-e2e/pom.xml       | 4
 statefun-e2e-tests/statefun-smoke-e2e/pom.xml        | 8 +++-
 6 files changed, 26 insertions(+), 8 deletions(-)
[flink-web] branch asf-site updated (e5b2bc4 -> cca97d4)
This is an automated email from the ASF dual-hosted git repository.

zhuzh pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.

 from e5b2bc4  Rebuild website
  new 357b03c  Add "Zhu Zhu" to community page
  new cca97d4  Rebuild web

The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference.

Summary of changes:
 community.md              | 6 ++
 community.zh.md           | 6 ++
 content/community.html    | 6 ++
 content/zh/community.html | 6 ++
 4 files changed, 24 insertions(+)
[flink-web] 01/02: Add "Zhu Zhu" to community page
This is an automated email from the ASF dual-hosted git repository. zhuzh pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/flink-web.git commit 357b03c467b994b50e0ccc8631a6a60c707c61ae Author: Zhu Zhu AuthorDate: Thu Feb 25 11:44:46 2021 +0800 Add "Zhu Zhu" to community page --- community.md| 6 ++ community.zh.md | 6 ++ 2 files changed, 12 insertions(+) diff --git a/community.md b/community.md index 332390a..5c4be58 100644 --- a/community.md +++ b/community.md @@ -549,6 +549,12 @@ Flink Forward is a conference happening yearly in different locations around the Committer weizhong + +https://avatars1.githubusercontent.com/u/5869249?s=50; class="committer-avatar"> +Zhu Zhu +PMC, Committer +zhuzh + You can reach committers directly at `@apache.org`. A list of all contributors can be found [here]({{ site.FLINK_CONTRIBUTORS_URL }}). diff --git a/community.zh.md b/community.zh.md index bb8de9e..3be879a 100644 --- a/community.zh.md +++ b/community.zh.md @@ -538,6 +538,12 @@ Flink Forward 大会每年都会在世界的不同地方举办。关于大会最 Committer weizhong + +https://avatars1.githubusercontent.com/u/5869249?s=50; class="committer-avatar"> +Zhu Zhu +PMC, Committer +zhuzh + 可以通过 `@apache.org` 直接联系 committer。可以在 [这里]({{ site.FLINK_CONTRIBUTORS_URL }}) 找到所有的贡献者。
[flink-web] 02/02: Rebuild web
This is an automated email from the ASF dual-hosted git repository. zhuzh pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/flink-web.git commit cca97d485af33fbb67721efdf4bac16b1ca7a0a8 Author: Zhu Zhu AuthorDate: Thu Feb 25 11:56:20 2021 +0800 Rebuild web --- content/community.html| 6 ++ content/zh/community.html | 6 ++ 2 files changed, 12 insertions(+) diff --git a/content/community.html b/content/community.html index ac2da00..665124c 100644 --- a/content/community.html +++ b/content/community.html @@ -780,6 +780,12 @@ Committer weizhong + +https://avatars1.githubusercontent.com/u/5869249?s=50; class="committer-avatar" /> +Zhu Zhu +PMC, Committer +zhuzh + You can reach committers directly at apache-id@apache.org. A list of all contributors can be found https://cwiki.apache.org/confluence/display/FLINK/List+of+contributors;>here. diff --git a/content/zh/community.html b/content/zh/community.html index e7927d2..ec67673 100644 --- a/content/zh/community.html +++ b/content/zh/community.html @@ -771,6 +771,12 @@ Committer weizhong + +https://avatars1.githubusercontent.com/u/5869249?s=50; class="committer-avatar" /> +Zhu Zhu +PMC, Committer +zhuzh + 可以通过 apache-id@apache.org 直接联系 committer。可以在 https://cwiki.apache.org/confluence/display/FLINK/List+of+contributors;>这里 找到所有的贡献者。
[flink-web] 01/02: Add Chinese translation for "how to look for what to contribute"
This is an automated email from the ASF dual-hosted git repository. zhuzh pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/flink-web.git commit 8f66969d3fcb7c0ee92750c8a6fa068834ca1cfd Author: Zhu Zhu AuthorDate: Mon Feb 22 09:32:25 2021 +0800 Add Chinese translation for "how to look for what to contribute" --- contributing/contribute-code.zh.md | 7 +++ 1 file changed, 7 insertions(+) diff --git a/contributing/contribute-code.zh.md b/contributing/contribute-code.zh.md index 3955c12..b1e1138 100644 --- a/contributing/contribute-code.zh.md +++ b/contributing/contribute-code.zh.md @@ -12,6 +12,13 @@ Apache Flink 是一个通过志愿者贡献的代码来维护、改进和扩展 {% toc %} +## 寻找可贡献的内容 + +如果你已经有好的想法可以贡献,可以直接参考下面的 "代码贡献步骤"。 +如果你在寻找可贡献的内容,可以通过 [Flink 的问题跟踪列表](https://issues.apache.org/jira/projects/FLINK/issues) 浏览处于 open 状态且未被分配的 Jira 工单,然后根据 "代码贡献步骤" 中的描述来参与贡献。 +如果你是一个刚刚加入到 Flink 项目中的新人,并希望了解 Flink 及其贡献步骤,可以浏览 [适合新手的工单列表](https://issues.apache.org/jira/issues/?filter=12349196) 。 +这个列表中的工单都带有 _starter_ 标记,适合新手参与。 + ## 代码贡献步骤
[flink-web] branch asf-site updated (a5bf420 -> e5b2bc4)
This is an automated email from the ASF dual-hosted git repository.

zhuzh pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.

 from a5bf420  Rebuild website
  new 8f66969  Add Chinese translation for "how to look for what to contribute"
  new e5b2bc4  Rebuild website

The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference.

Summary of changes:
 content/zh/contributing/contribute-code.html | 20 ++--
 contributing/contribute-code.zh.md           | 7 +++
 2 files changed, 21 insertions(+), 6 deletions(-)
[flink-web] 02/02: Rebuild website
This is an automated email from the ASF dual-hosted git repository. zhuzh pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/flink-web.git commit e5b2bc4b16209125eb6aededccc9eb299c9817b7 Author: Zhu Zhu AuthorDate: Mon Feb 22 09:32:56 2021 +0800 Rebuild website --- content/zh/contributing/contribute-code.html | 20 ++-- 1 file changed, 14 insertions(+), 6 deletions(-) diff --git a/content/zh/contributing/contribute-code.html b/content/zh/contributing/contribute-code.html index cb366ab..25cb097 100644 --- a/content/zh/contributing/contribute-code.html +++ b/content/zh/contributing/contribute-code.html @@ -223,18 +223,26 @@ - 代码贡献步骤 + 寻找可贡献的内容 + 代码贡献步骤 1. 创建 Jira 工单并达成共识。 - 2. 实现你的改动 + 2. 实现你的改动 3. 创建 Pull Request - 4. 合并改动 + 4. 合并改动 -代码贡献步骤 +寻找可贡献的内容 + +如果你已经有好的想法可以贡献,可以直接参考下面的 “代码贡献步骤”。 +如果你在寻找可贡献的内容,可以通过 https://issues.apache.org/jira/projects/FLINK/issues;>Flink 的问题跟踪列表 浏览处于 open 状态且未被分配的 Jira 工单,然后根据 “代码贡献步骤” 中的描述来参与贡献。 +如果你是一个刚刚加入到 Flink 项目中的新人,并希望了解 Flink 及其贡献步骤,可以浏览 https://issues.apache.org/jira/issues/?filter=12349196;>适合新手的工单列表 。 +这个列表中的工单都带有 starter 标记,适合新手参与。 + +代码贡献步骤
[flink-statefun] 01/01: [FLINK-21491] [sdk] Depend on statefun-protocol-shaded in Java SDK
This is an automated email from the ASF dual-hosted git repository. tzulitai pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink-statefun.git commit ed73ce72652366b37c2cbfd23c949612a40dcb6c Author: Tzu-Li (Gordon) Tai AuthorDate: Thu Feb 25 00:21:05 2021 +0800 [FLINK-21491] [sdk] Depend on statefun-protocol-shaded in Java SDK This closes #202. --- statefun-sdk-java/pom.xml | 92 +- .../java/com/google/protobuf/MoreByteStrings.java | 39 - .../flink/statefun/sdk/java/ApiExtension.java | 2 +- .../apache/flink/statefun/sdk/java/TypeName.java | 2 +- .../apache/flink/statefun/sdk/java/ValueSpec.java | 2 +- .../handler/ConcurrentRequestReplyHandler.java | 2 +- .../statefun/sdk/java/io/KafkaEgressMessage.java | 2 +- .../statefun/sdk/java/io/KinesisEgressMessage.java | 2 +- .../statefun/sdk/java/message/MessageBuilder.java | 2 +- .../statefun/sdk/java/slice/ByteStringSlice.java | 2 +- .../statefun/sdk/java/slice/SliceProtobufUtil.java | 10 +-- .../flink/statefun/sdk/java/slice/Slices.java | 4 +- .../storage/ConcurrentAddressScopedStorage.java| 2 +- .../flink/statefun/sdk/java/types/Types.java | 8 +- .../sdk/java/slice/SliceProtobufUtilTest.java | 2 +- .../ConcurrentAddressScopedStorageTest.java| 2 +- .../sdk/java/storage/StateValueContextsTest.java | 2 +- .../sdk/java/types/SanityPrimitiveTypeTest.java| 2 +- 18 files changed, 28 insertions(+), 151 deletions(-) diff --git a/statefun-sdk-java/pom.xml b/statefun-sdk-java/pom.xml index f58829a..e621e2c 100644 --- a/statefun-sdk-java/pom.xml +++ b/statefun-sdk-java/pom.xml @@ -31,14 +31,10 @@ under the License. org.apache.flink -statefun-sdk-protos +statefun-protocol-shaded ${project.version} - -com.google.protobuf -protobuf-java -${protobuf.version} - + junit @@ -53,101 +49,21 @@ under the License. 
test + - - -org.apache.maven.plugins -maven-dependency-plugin - - -unpack -generate-sources - -unpack - - - - -org.apache.flink - statefun-sdk-protos -${project.version} -jar - ${additional-sources.dir} - sdk/*.proto,types/*.proto,io/*.proto - - - - - - - - -com.github.os72 -protoc-jar-maven-plugin -${protoc-jar-maven-plugin.version} - - -generate-protobuf-sources -generate-sources - -run - - -true -${protobuf.version} -true - - src/main/protobuf - ${additional-sources.dir} - - ${basedir}/target/generated-sources/protoc-jar - - - - - org.apache.maven.plugins maven-shade-plugin -shade-protobuf +uber-jar package shade false - - - com.google.protobuf:protobuf-java - - - - -com.google.protobuf - org.apache.flink.statefun.sdk.shaded.com.google.protobuf - - - - - - com.google.protobuf:protobuf-java -
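The shade-plugin section deleted above is the heart of FLINK-21491: Protobuf relocation moves out of the SDK jar and into the new statefun-protocol-shaded module, so the SDK can simply depend on already-shaded classes. Reassembled from the deleted lines — whitespace restored and element names inferred, so this is a hedged reconstruction rather than a verbatim copy of either pom — the relocation pattern looks like:

```xml
<!-- Reconstruction of the removed relocation config (element nesting and
     the createDependencyReducedPom flag are assumptions inferred from the
     flattened diff). It bundles protobuf-java into the jar and rewrites
     its package to a StateFun-private namespace. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <id>shade-protobuf</id>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <createDependencyReducedPom>false</createDependencyReducedPom>
        <artifactSet>
          <includes>
            <include>com.google.protobuf:protobuf-java</include>
          </includes>
        </artifactSet>
        <relocations>
          <!-- com.google.protobuf.* becomes
               org.apache.flink.statefun.sdk.shaded.com.google.protobuf.* -->
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.apache.flink.statefun.sdk.shaded.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Relocating the Protobuf package this way lets user code put any other Protobuf version on the classpath without conflicting with the SDK's internal copy.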
[flink-statefun] branch master updated (21fd255 -> ed73ce7)
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink-statefun.git.

 from 21fd255  [FLINK-21276] [legal] Mention Protobuf BSD license in statefun-sdk-java
  add dfe5a58  [FLINK-21483] [build] Remove spotbugs plugin
  add 1c40147  [FLINK-21491] Introduce statefun-shaded
  new ed73ce7  [FLINK-21491] [sdk] Depend on statefun-protocol-shaded in Java SDK

The 1 revision listed above as "new" is entirely new to this repository and will be described in a separate email. The revisions listed as "add" were already present in the repository and have only been added to this reference.

Summary of changes:
 pom.xml                                              | 31 +-
 statefun-sdk-java/pom.xml                            | 92 +
 .../flink/statefun/sdk/java/ApiExtension.java        | 2 +-
 .../apache/flink/statefun/sdk/java/TypeName.java     | 2 +-
 .../apache/flink/statefun/sdk/java/ValueSpec.java    | 2 +-
 .../handler/ConcurrentRequestReplyHandler.java       | 2 +-
 .../statefun/sdk/java/io/KafkaEgressMessage.java     | 2 +-
 .../statefun/sdk/java/io/KinesisEgressMessage.java   | 2 +-
 .../statefun/sdk/java/message/MessageBuilder.java    | 2 +-
 .../statefun/sdk/java/slice/ByteStringSlice.java     | 2 +-
 .../statefun/sdk/java/slice/SliceProtobufUtil.java   | 10 +-
 .../flink/statefun/sdk/java/slice/Slices.java        | 4 +-
 .../storage/ConcurrentAddressScopedStorage.java      | 2 +-
 .../flink/statefun/sdk/java/types/Types.java         | 8 +-
 .../sdk/java/slice/SliceProtobufUtilTest.java        | 2 +-
 .../ConcurrentAddressScopedStorageTest.java          | 2 +-
 .../sdk/java/storage/StateValueContextsTest.java     | 2 +-
 .../sdk/java/types/SanityPrimitiveTypeTest.java      | 2 +-
 statefun-shaded/pom.xml                              | 96 ++
 statefun-shaded/statefun-protobuf-shaded/pom.xml     | 109 +
 .../com/google/protobuf/MoreByteStrings.java         | 2 +-
 .../statefun-protocol-shaded}/pom.xml                | 81 ---
 tools/maven/spotbugs-exclude.xml                     | 108
 23 files changed, 281 insertions(+), 286 deletions(-)
 create mode 100644 statefun-shaded/pom.xml
 create mode 100644
statefun-shaded/statefun-protobuf-shaded/pom.xml
 rename {statefun-sdk-java/src/main/java => statefun-shaded/statefun-protobuf-shaded/src/main/java/org/apache/flink/statefun/sdk/shaded}/com/google/protobuf/MoreByteStrings.java (95%)
 copy {statefun-flink/statefun-flink-io => statefun-shaded/statefun-protocol-shaded}/pom.xml (62%)
 delete mode 100644 tools/maven/spotbugs-exclude.xml
[flink] branch master updated (d2be34e -> 2180355)
This is an automated email from the ASF dual-hosted git repository. sjwiesman pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git. from d2be34e [FLINK-21358][docs] Adds savepoint 1.12.x to savepoint compatibility diagram add f0fdd2c [FLINK-19466][runtime / state backends] Add default savepoint configuration to StreamExecutionEnvironment add 16f62c5 [FLINK-19466][runtime / state backends] Add JobManagerCheckpointStorage and FileSystemCheckpointStorage add 0a76dab [FLINK-19467][runtime / state backends] Implement HashMapStateBackend and EmbeddedRocksDBStateBackend add daba0ac [FLINK-19467][examples] Update examples to new API add 0b57ba2 [FLINK-19467][docs] Regenerate configurations add 53216ec [FLINK-19467][e2e] Migrate end-to-end tests to the modern API add 2180355 [hotfix][docs] fix version tag in kubernetes docs No new revisions were added by this update. Summary of changes: .../resource-providers/native_kubernetes.md| 4 +- .../resource-providers/native_kubernetes.md| 4 +- .../generated/checkpointing_configuration.html | 36 +- .../generated/common_state_backends_section.html | 12 +- .../generated/expert_state_backends_section.html | 6 +- .../flink/configuration/CheckpointingOptions.java | 86 ++- .../tests/DataStreamAllroundTestJobFactory.java| 22 +- .../StickyAllocationAndLocalRecoveryTestJob.java | 13 +- .../flink/streaming/tests/StubStateBackend.java| 24 +- flink-end-to-end-tests/run-nightly-tests.sh| 42 +- .../test-scripts/test_resume_savepoint.sh | 4 +- .../streaming/examples/async/AsyncIOExample.java | 3 +- .../examples/statemachine/StateMachineExample.java | 14 +- ...st_stream_execution_environment_completeness.py | 2 +- .../flink/runtime/checkpoint/Checkpoints.java | 23 +- .../executiongraph/ExecutionGraphBuilder.java | 1 + .../flink/runtime/state/CheckpointStorage.java | 45 +- .../runtime/state/CheckpointStorageLoader.java | 74 ++- .../apache/flink/runtime/state/StateBackend.java | 39 +- 
.../flink/runtime/state/StateBackendLoader.java| 65 ++- .../state/filesystem/AbstractFileStateBackend.java | 5 + .../AbstractFsCheckpointStorageAccess.java | 27 +- .../runtime/state/filesystem/FsStateBackend.java | 20 +- .../runtime/state/hashmap/HashMapStateBackend.java | 224 .../state/hashmap/HashMapStateBackendFactory.java | 35 ++ .../runtime/state/memory/MemoryStateBackend.java | 25 +- .../storage/ExternalizedSnapshotLocation.java | 157 ++ .../state/storage/FileSystemCheckpointStorage.java | 374 + .../state/storage/JobManagerCheckpointStorage.java | 271 ++ .../checkpoint/CheckpointCoordinatorTest.java | 4 +- .../runtime/state/CheckpointStorageLoaderTest.java | 343 +++- .../state/HashMapStateBackendMigrationTest.java| 65 +++ .../runtime/state/HashMapStateBackendTest.java | 115 .../HeapKeyedStateBackendAsyncByDefaultTest.java | 7 + .../runtime/state/StateBackendLoadingTest.java | 25 +- .../streaming/state/AbstractRocksDBState.java | 2 +- .../state/DefaultConfigurableOptionsFactory.java | 2 +- ...ckend.java => EmbeddedRocksDBStateBackend.java} | 270 +++--- .../state/EmbeddedRocksDBStateBackendFactory.java | 35 ++ .../contrib/streaming/state/LegacyEnumBridge.java | 50 ++ .../contrib/streaming/state/PredefinedOptions.java | 6 +- .../state/RocksDBKeyedStateBackendBuilder.java | 10 +- .../contrib/streaming/state/RocksDBListState.java | 2 +- .../contrib/streaming/state/RocksDBOptions.java| 6 +- .../streaming/state/RocksDBOptionsFactory.java | 4 +- .../streaming/state/RocksDBResourceContainer.java | 4 +- .../streaming/state/RocksDBStateBackend.java | 586 +++-- .../EmbeddedRocksDBStateBackendMigrationTest.java | 79 +++ ...t.java => EmbeddedRocksDBStateBackendTest.java} | 65 ++- .../streaming/state/RocksDBAsyncSnapshotTest.java | 7 +- .../contrib/streaming/state/RocksDBInitTest.java | 10 +- .../state/RocksDBStateBackendConfigTest.java | 29 +- .../state/RocksDBStateBackendFactoryTest.java | 68 +-- .../state/RocksDBStateBackendMigrationTest.java| 2 +- 
.../state/RocksDBStateMisuseOptionTest.java| 4 +- .../contrib/streaming/state/RocksDBTestUtils.java | 12 +- .../benchmark/StateBackendBenchmarkUtils.java | 4 +- .../api/environment/CheckpointConfig.java | 91 .../environment/StreamExecutionEnvironment.java| 80 ++- .../flink/streaming/api/graph/StreamConfig.java| 20 + .../flink/streaming/api/graph/StreamGraph.java | 10 + .../streaming/api/graph/StreamGraphGenerator.java | 10 + .../api/graph/StreamingJobGraphGenerator.java | 2 + .../flink/streaming/runtime/tasks/StreamTask.java | 3 +
[flink-web] 01/02: Add weizhong to community page
This is an automated email from the ASF dual-hosted git repository. dianfu pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/flink-web.git commit 1d68442dcdd80cb8018c6ca04ad64144cb5f Author: WeiZhong94 AuthorDate: Wed Feb 24 20:09:21 2021 +0800 Add weizhong to community page --- community.md| 7 ++- community.zh.md | 6 ++ 2 files changed, 12 insertions(+), 1 deletion(-) diff --git a/community.md b/community.md index 0c529d1..332390a 100644 --- a/community.md +++ b/community.md @@ -543,7 +543,12 @@ Flink Forward is a conference happening yearly in different locations around the Committer tangyun - + +https://avatars1.githubusercontent.com/u/44194288?s=50; class="committer-avatar"> +Wei Zhong +Committer +weizhong + You can reach committers directly at `@apache.org`. A list of all contributors can be found [here]({{ site.FLINK_CONTRIBUTORS_URL }}). diff --git a/community.zh.md b/community.zh.md index 8d78119..bb8de9e 100644 --- a/community.zh.md +++ b/community.zh.md @@ -532,6 +532,12 @@ Flink Forward 大会每年都会在世界的不同地方举办。关于大会最 Committer tangyun + +https://avatars1.githubusercontent.com/u/44194288?s=50; class="committer-avatar"> +Wei Zhong +Committer +weizhong + 可以通过 `@apache.org` 直接联系 committer。可以在 [这里]({{ site.FLINK_CONTRIBUTORS_URL }}) 找到所有的贡献者。
[flink-web] 02/02: Rebuild website
This is an automated email from the ASF dual-hosted git repository. dianfu pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/flink-web.git commit a5bf4201ef534feef11b91ddb7cf6d66f8043b48 Author: WeiZhong94 AuthorDate: Wed Feb 24 20:58:53 2021 +0800 Rebuild website --- content/community.html| 7 ++- content/zh/community.html | 6 ++ 2 files changed, 12 insertions(+), 1 deletion(-) diff --git a/content/community.html b/content/community.html index c0c5f9f..ac2da00 100644 --- a/content/community.html +++ b/content/community.html @@ -774,7 +774,12 @@ Committer tangyun - + +https://avatars1.githubusercontent.com/u/44194288?s=50; class="committer-avatar" /> +Wei Zhong +Committer +weizhong + You can reach committers directly at apache-id@apache.org. A list of all contributors can be found https://cwiki.apache.org/confluence/display/FLINK/List+of+contributors;>here. diff --git a/content/zh/community.html b/content/zh/community.html index c4a551d..e7927d2 100644 --- a/content/zh/community.html +++ b/content/zh/community.html @@ -765,6 +765,12 @@ Committer tangyun + +https://avatars1.githubusercontent.com/u/44194288?s=50; class="committer-avatar" /> +Wei Zhong +Committer +weizhong + 可以通过 apache-id@apache.org 直接联系 committer。可以在 https://cwiki.apache.org/confluence/display/FLINK/List+of+contributors;>这里 找到所有的贡献者。
[flink-web] branch asf-site updated (e75c703 -> a5bf420)
This is an automated email from the ASF dual-hosted git repository.

dianfu pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.

 from e75c703  Revert "Add Apache Flink 1.12.2 release"
  new 1d68442  Add weizhong to community page
  new a5bf420  Rebuild website

The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference.

Summary of changes:
 community.md              | 7 ++-
 community.zh.md           | 6 ++
 content/community.html    | 7 ++-
 content/zh/community.html | 6 ++
 4 files changed, 24 insertions(+), 2 deletions(-)
[flink] branch release-1.12 updated: [FLINK-21358][docs] Adds savepoint 1.12.x to savepoint compatibility diagram
This is an automated email from the ASF dual-hosted git repository. dianfu pushed a commit to branch release-1.12 in repository https://gitbox.apache.org/repos/asf/flink.git The following commit(s) were added to refs/heads/release-1.12 by this push: new f01ca78 [FLINK-21358][docs] Adds savepoint 1.12.x to savepoint compatibility diagram f01ca78 is described below commit f01ca78482fba0f74182a83f086f41fb163d91c6 Author: Matthias Pohl AuthorDate: Wed Feb 10 18:56:15 2021 +0100 [FLINK-21358][docs] Adds savepoint 1.12.x to savepoint compatibility diagram This closes #14920. --- docs/ops/upgrading.md| 28 docs/ops/upgrading.zh.md | 28 2 files changed, 56 insertions(+) diff --git a/docs/ops/upgrading.md b/docs/ops/upgrading.md index 146cea6..cc618d1 100644 --- a/docs/ops/upgrading.md +++ b/docs/ops/upgrading.md @@ -219,6 +219,7 @@ Savepoints are compatible across Flink versions as indicated by the table below: 1.9.x 1.10.x 1.11.x + 1.12.x Limitations @@ -236,6 +237,7 @@ Savepoints are compatible across Flink versions as indicated by the table below: + The maximum parallelism of a job that was migrated from Flink 1.1.x to 1.2.x+ is currently fixed as the parallelism of the job. This means that the parallelism can not be increased after migration. This limitation might be removed in a future bugfix release. @@ -253,6 +255,7 @@ Savepoints are compatible across Flink versions as indicated by the table below: O O O + O When migrating from Flink 1.2.x to Flink 1.3.x+, changing parallelism at the same time is not supported. Users have to first take a savepoint after migrating to Flink 1.3.x+, and then change @@ -276,6 +279,7 @@ Savepoints are compatible across Flink versions as indicated by the table below: O O O + O Migrating from Flink 1.3.0 to Flink 1.4.[0,1] will fail if the savepoint contains Scala case classes. Users have to directly migrate to 1.4.2+ instead. 
@@ -291,6 +295,7 @@ Savepoints are compatible across Flink versions as indicated by the table below: O O O + O @@ -306,6 +311,7 @@ Savepoints are compatible across Flink versions as indicated by the table below: O O O + O There is a known issue with resuming broadcast state created with 1.5.x in versions 1.6.x up to 1.6.2, and 1.7.0: https://issues.apache.org/jira/browse/FLINK-11087;>FLINK-11087. Users upgrading to 1.6.x or 1.7.x series need to directly migrate to minor versions higher than 1.6.2 and 1.7.0, @@ -324,6 +330,7 @@ Savepoints are compatible across Flink versions as indicated by the table below: O O O + O @@ -339,6 +346,7 @@ Savepoints are compatible across Flink versions as indicated by the table below: O O O + O @@ -354,6 +362,7 @@ Savepoints are compatible across Flink versions as indicated by the table below: O O O + O @@ -369,6 +378,7 @@ Savepoints are compatible across Flink versions as indicated by the table below: O O O + O @@ -384,6 +394,7 @@ Savepoints are compatible across Flink versions as indicated by the table below: O O + O @@ -399,8 +410,25 @@ Savepoints are compatible across Flink versions as indicated by the table below: O + O + + 1.12.x + + + + + + + + + + + + O + + diff --git a/docs/ops/upgrading.zh.md b/docs/ops/upgrading.zh.md index b49b83a..6885c09 100644 --- a/docs/ops/upgrading.zh.md +++ b/docs/ops/upgrading.zh.md @@ -217,6 +217,7 @@ Savepoints are compatible across Flink versions as indicated by the table below: 1.9.x 1.10.x 1.11.x + 1.12.x Limitations @@ -234,6 +235,7 @@ Savepoints are compatible across Flink versions as indicated by the table below: + The maximum parallelism of a job that was migrated from Flink 1.1.x to 1.2.x+ is currently fixed as the parallelism of the job. This means that the parallelism can not be increased after migration. This limitation might be removed in a future bugfix release. @@ -251,6 +253,7 @@ Savepoints are
[flink] branch master updated (f322de7 -> d2be34e)
This is an automated email from the ASF dual-hosted git repository.

dianfu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.

 from f322de7  [FLINK-21453][checkpointing] Do not ignore endOfInput when terminating a job with savepoint
  add d2be34e  [FLINK-21358][docs] Adds savepoint 1.12.x to savepoint compatibility diagram

No new revisions were added by this update.

Summary of changes:
 docs/content.zh/docs/ops/upgrading.md | 28
 docs/content/docs/ops/upgrading.md    | 28
 2 files changed, 56 insertions(+)
[flink] branch master updated (7c286d7 -> f322de7)
This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git.

    from 7c286d7  [FLINK-21399][coordination][tests] Provide enough slots for job deployment
     add fbdc2c0  [FLINK-21484][rest] Do not expose internal CheckpointType enum via the REST API
     add 144cd80  [hotfix][task] Rename isStoppingBySyncSavepoint to ignoreEndOfInput
     add 44e9627  [FLINK-21453][checkpointing][refactor] Replace advanceToEndOfTime with new CheckpointType.SAVEPOINT_TERMINATE
     add f322de7  [FLINK-21453][checkpointing] Do not ignore endOfInput when terminating a job with savepoint

No new revisions were added by this update.

Summary of changes:
 .../runtime/checkpoint/CheckpointCoordinator.java  | 44 ++-
 .../runtime/checkpoint/CheckpointProperties.java   | 19 ---
 .../flink/runtime/checkpoint/CheckpointType.java   | 41 +++---
 .../flink/runtime/dispatcher/Dispatcher.java       |  7 +--
 .../flink/runtime/executiongraph/Execution.java    | 27 +++---
 .../network/api/serialization/EventSerializer.java |  4 +-
 .../runtime/jobgraph/tasks/AbstractInvokable.java  |  6 +--
 .../jobmanager/slots/TaskManagerGateway.java       |  5 +-
 .../apache/flink/runtime/jobmaster/JobMaster.java  |  6 +--
 .../flink/runtime/jobmaster/JobMasterGateway.java  |  5 +-
 .../runtime/jobmaster/RpcTaskManagerGateway.java   |  9 +---
 .../flink/runtime/minicluster/MiniCluster.java     |  4 +-
 .../runtime/minicluster/MiniClusterJobClient.java  |  4 +-
 .../handler/job/savepoints/SavepointHandlers.java  |  4 +-
 .../messages/checkpoints/CheckpointStatistics.java | 45
 .../flink/runtime/scheduler/SchedulerBase.java     |  4 +-
 .../flink/runtime/scheduler/SchedulerNG.java       |  3 +-
 .../scheduler/adaptive/AdaptiveScheduler.java      |  5 +-
 .../adaptive/StateWithExecutionGraph.java          |  3 +-
 .../flink/runtime/taskexecutor/TaskExecutor.java   |  9 ++--
 .../runtime/taskexecutor/TaskExecutorGateway.java  |  5 +-
 .../org/apache/flink/runtime/taskmanager/Task.java |  8 +--
 .../flink/runtime/webmonitor/RestfulGateway.java   |  5 +-
 .../checkpoint/CheckpointCoordinatorTest.java      | 15 ++
 .../CheckpointCoordinatorTestingUtils.java         |  3 +-
 .../CheckpointCoordinatorTriggeringTest.java       | 33 ++--
 .../checkpoint/CheckpointRequestDeciderTest.java   |  7 +--
 .../runtime/checkpoint/CheckpointTypeTest.java     |  2 +-
 .../DefaultCompletedCheckpointStoreTest.java       |  3 +-
 .../runtime/checkpoint/PendingCheckpointTest.java  |  2 +-
 .../utils/SimpleAckingTaskManagerGateway.java      | 20 ++-
 .../serialization/CheckpointSerializationTest.java |  2 +-
 .../jobmaster/utils/TestingJobMasterGateway.java   |  6 +--
 .../CoordinatorEventsExactlyOnceITCase.java        |  4 +-
 .../checkpoints/CheckpointingStatisticsTest.java   |  9 ++--
 .../runtime/scheduler/DefaultSchedulerTest.java    |  7 +--
 .../runtime/scheduler/TestingSchedulerNG.java      |  3 +-
 .../taskexecutor/TestingTaskExecutorGateway.java   |  3 +-
 .../runtime/taskmanager/TaskAsyncCallTest.java     | 16 ++
 .../runtime/webmonitor/TestingRestfulGateway.java  |  2 +-
 .../streaming/state/RocksDBAsyncSnapshotTest.java  |  6 +--
 .../streaming/api/operators/StreamSource.java      |  2 +-
 .../runtime/tasks/MultipleInputStreamTask.java     |  4 +-
 .../streaming/runtime/tasks/OperatorChain.java     | 10 ++--
 .../runtime/tasks/SourceOperatorStreamTask.java    |  9 ++--
 .../streaming/runtime/tasks/SourceStreamTask.java  | 11 ++--
 .../flink/streaming/runtime/tasks/StreamTask.java  | 40 +-
 .../AbstractUdfStreamOperatorLifecycleTest.java    |  3 +-
 .../api/operators/async/AsyncWaitOperatorTest.java |  5 +-
 .../checkpointing/CheckpointSequenceValidator.java |  4 +-
 .../checkpointing/ValidatingCheckpointHandler.java |  4 +-
 ...tStreamTaskChainedSourcesCheckpointingTest.java | 12 ++---
 .../runtime/tasks/OneInputStreamTaskTest.java      |  4 +-
 .../runtime/tasks/RestoreStreamTaskTest.java       |  2 +-
 .../tasks/SourceExternalCheckpointTriggerTest.java |  6 +--
 .../tasks/SourceOperatorStreamTaskTest.java        |  4 +-
 .../runtime/tasks/SourceStreamTaskTest.java        | 14 ++---
 .../runtime/tasks/SourceTaskTerminationTest.java   | 14 ++---
 .../tasks/StreamTaskExecutionDecorationTest.java   |  3 +-
 .../runtime/tasks/StreamTaskTerminationTest.java   |  3 +-
 .../streaming/runtime/tasks/StreamTaskTest.java    | 62 +++---
 .../runtime/tasks/SynchronousCheckpointITCase.java | 12 ++---
 .../runtime/tasks/SynchronousCheckpointTest.java   |  5 +-
 .../jobmaster/JobMasterStopWithSavepointIT.java    | 12 ++---
 .../jobmaster/JobMasterTriggerSavepointITCase.java |  3 +-
 .../state/StatefulOperatorChainedTaskTest.java     |  4 +-
 66 files changed, 286 insertions(+), 381 deletions(-)
[flink] branch master updated (c77672a -> 7c286d7)
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git.

    from c77672a  [FLINK-21481][build] Move git-commit-id-plugin execution to flink-runtime
     new 87c3933  [FLINK-21399][coordination][tests] Refactor registerSlotsRequiredForJobExecution
     new 7c286d7  [FLINK-21399][coordination][tests] Provide enough slots for job deployment

The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference.

Summary of changes:
 .../jobmaster/JobMasterQueryableStateTest.java | 41 +-
 .../flink/runtime/jobmaster/JobMasterTest.java | 26 +--
 .../runtime/jobmaster/JobMasterTestUtils.java  | 87 ++
 3 files changed, 108 insertions(+), 46 deletions(-)
 create mode 100644 flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/JobMasterTestUtils.java
[flink] 01/02: [FLINK-21399][coordination][tests] Refactor registerSlotsRequiredForJobExecution
This is an automated email from the ASF dual-hosted git repository. chesnay pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git commit 87c3933b44e38242e80717758261fd34487d185c Author: Chesnay Schepler AuthorDate: Mon Feb 22 08:30:24 2021 +0100 [FLINK-21399][coordination][tests] Refactor registerSlotsRequiredForJobExecution --- .../jobmaster/JobMasterQueryableStateTest.java | 41 +--- .../runtime/jobmaster/JobMasterTestUtils.java | 74 ++ 2 files changed, 77 insertions(+), 38 deletions(-) diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/JobMasterQueryableStateTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/JobMasterQueryableStateTest.java index fa53ebb..0de4ec9 100644 --- a/flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/JobMasterQueryableStateTest.java +++ b/flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/JobMasterQueryableStateTest.java @@ -22,8 +22,6 @@ import org.apache.flink.api.common.JobID; import org.apache.flink.api.common.JobStatus; import org.apache.flink.api.common.time.Time; import org.apache.flink.queryablestate.KvStateID; -import org.apache.flink.runtime.clusterframework.types.AllocationID; -import org.apache.flink.runtime.clusterframework.types.ResourceProfile; import org.apache.flink.runtime.jobgraph.JobGraph; import org.apache.flink.runtime.jobgraph.JobType; import org.apache.flink.runtime.jobgraph.JobVertex; @@ -37,11 +35,6 @@ import org.apache.flink.runtime.query.UnknownKvStateLocation; import org.apache.flink.runtime.rpc.RpcUtils; import org.apache.flink.runtime.rpc.TestingRpcService; import org.apache.flink.runtime.state.KeyGroupRange; -import org.apache.flink.runtime.taskexecutor.TaskExecutorGateway; -import org.apache.flink.runtime.taskexecutor.TestingTaskExecutorGatewayBuilder; -import org.apache.flink.runtime.taskexecutor.slot.SlotOffer; -import org.apache.flink.runtime.taskmanager.LocalUnresolvedTaskManagerLocation; 
-import org.apache.flink.runtime.taskmanager.UnresolvedTaskManagerLocation; import org.apache.flink.util.ExceptionUtils; import org.apache.flink.util.TestLogger; @@ -53,10 +46,7 @@ import org.junit.Test; import java.net.InetAddress; import java.net.InetSocketAddress; import java.net.UnknownHostException; -import java.util.Collection; import java.util.concurrent.ExecutionException; -import java.util.stream.Collectors; -import java.util.stream.IntStream; import static org.apache.flink.core.testutils.FlinkMatchers.containsCause; import static org.hamcrest.CoreMatchers.either; @@ -318,35 +308,10 @@ public class JobMasterQueryableStateTest extends TestLogger { } } -private void registerSlotsRequiredForJobExecution(JobMasterGateway jobMasterGateway) +private static void registerSlotsRequiredForJobExecution(JobMasterGateway jobMasterGateway) throws ExecutionException, InterruptedException { - -final TaskExecutorGateway taskExecutorGateway = -new TestingTaskExecutorGatewayBuilder().createTestingTaskExecutorGateway(); -final UnresolvedTaskManagerLocation unresolvedTaskManagerLocation = -new LocalUnresolvedTaskManagerLocation(); - -rpcService.registerGateway(taskExecutorGateway.getAddress(), taskExecutorGateway); - -jobMasterGateway -.registerTaskManager( -taskExecutorGateway.getAddress(), -unresolvedTaskManagerLocation, -testingTimeout) -.get(); - -Collection slotOffers = -IntStream.range(0, PARALLELISM) -.mapToObj( -index -> -new SlotOffer( -new AllocationID(), index, ResourceProfile.ANY)) -.collect(Collectors.toList()); - -jobMasterGateway -.offerSlots( -unresolvedTaskManagerLocation.getResourceID(), slotOffers, testingTimeout) -.get(); +JobMasterTestUtils.registerTaskExecutorAndOfferSlots( +rpcService, jobMasterGateway, PARALLELISM, testingTimeout); } private static void registerKvState( diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/JobMasterTestUtils.java 
b/flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/JobMasterTestUtils.java new file mode 100644 index 000..cd9609f --- /dev/null +++ b/flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/JobMasterTestUtils.java @@ -0,0 +1,74 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this
[flink] 02/02: [FLINK-21399][coordination][tests] Provide enough slots for job deployment
This is an automated email from the ASF dual-hosted git repository. chesnay pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git commit 7c286d78826a3bac7a3a99c0335854292217e231 Author: Chesnay Schepler AuthorDate: Mon Feb 22 08:31:40 2021 +0100 [FLINK-21399][coordination][tests] Provide enough slots for job deployment --- .../flink/runtime/jobmaster/JobMasterTest.java | 26 +++--- .../runtime/jobmaster/JobMasterTestUtils.java | 15 - 2 files changed, 32 insertions(+), 9 deletions(-) diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/JobMasterTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/JobMasterTest.java index a1e6255..6f83dc1 100644 --- a/flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/JobMasterTest.java +++ b/flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/JobMasterTest.java @@ -1065,7 +1065,6 @@ public class JobMasterTest extends TestLogger { } @Test -@Category(FailsWithAdaptiveScheduler.class) // FLINK-21399 public void testRequestNextInputSplitWithGlobalFailover() throws Exception { configuration.setInteger(RestartStrategyOptions.RESTART_STRATEGY_FIXED_DELAY_ATTEMPTS, 1); configuration.set( @@ -1100,7 +1099,7 @@ public class JobMasterTest extends TestLogger { source.setInvokableClass(AbstractInvokable.class); final JobGraph inputSplitJobGraph = new JobGraph(source); -jobGraph.setJobType(JobType.STREAMING); +inputSplitJobGraph.setJobType(JobType.STREAMING); final ExecutionConfig executionConfig = new ExecutionConfig(); executionConfig.setRestartStrategy(RestartStrategies.fixedDelayRestart(100, 0)); @@ -1119,6 +1118,10 @@ public class JobMasterTest extends TestLogger { final JobMasterGateway jobMasterGateway = jobMaster.getSelfGateway(JobMasterGateway.class); +registerSlotsRequiredForJobExecution(jobMasterGateway, parallelism); + +waitUntilAllExecutionsAreScheduledOrDeployed(jobMasterGateway); + final JobVertexID sourceId = source.getID(); 
final List executions = getExecutions(jobMasterGateway, sourceId); @@ -1139,8 +1142,6 @@ public class JobMasterTest extends TestLogger { allRequestedInputSplits, containsInAnyOrder(allInputSplits.toArray(EMPTY_TESTING_INPUT_SPLITS))); -waitUntilAllExecutionsAreScheduled(jobMasterGateway); - // fail the first execution to trigger a failover jobMasterGateway .updateTaskExecutionState( @@ -1148,7 +1149,7 @@ public class JobMasterTest extends TestLogger { .get(); // wait until the job has been recovered -waitUntilAllExecutionsAreScheduled(jobMasterGateway); +waitUntilAllExecutionsAreScheduledOrDeployed(jobMasterGateway); final ExecutionAttemptID restartedAttemptId = getFirstExecution(jobMasterGateway, sourceId).getAttemptId(); @@ -1181,8 +1182,8 @@ public class JobMasterTest extends TestLogger { return () -> getInputSplit(jobMasterGateway, jobVertexID, initialAttemptId); } -private void waitUntilAllExecutionsAreScheduled(final JobMasterGateway jobMasterGateway) -throws Exception { +private void waitUntilAllExecutionsAreScheduledOrDeployed( +final JobMasterGateway jobMasterGateway) throws Exception { final Duration duration = Duration.ofMillis(testingTimeout.toMilliseconds()); final Deadline deadline = Deadline.fromNow(duration); @@ -1191,7 +1192,9 @@ public class JobMasterTest extends TestLogger { getExecutions(jobMasterGateway).stream() .allMatch( execution -> -execution.getState() == ExecutionState.SCHEDULED), +execution.getState() == ExecutionState.SCHEDULED +|| execution.getState() +== ExecutionState.DEPLOYING), deadline); } @@ -1977,4 +1980,11 @@ public class JobMasterTest extends TestLogger { @Override public void disposeStorageLocation() throws IOException {} } + +private static void registerSlotsRequiredForJobExecution( +JobMasterGateway jobMasterGateway, int numSlots) +throws ExecutionException, InterruptedException { +JobMasterTestUtils.registerTaskExecutorAndOfferSlots( +rpcService, jobMasterGateway, numSlots, testingTimeout); +} } diff --git
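The commit above also renames the wait helper to `waitUntilAllExecutionsAreScheduledOrDeployed`, widening a SCHEDULED-only check to accept DEPLOYING as well. The deadline-based polling pattern that helper uses (Flink's test utilities provide their own `Deadline`/wait-condition classes; this stand-alone sketch uses only the JDK and is an illustration, not Flink's actual utility) looks roughly like this:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

/** Minimal sketch of deadline-based condition polling, as used by the test's wait helper. */
public final class PollUntil {

    /** Polls {@code condition} until it holds or {@code timeout} elapses. */
    public static void waitUntil(Supplier<Boolean> condition, Duration timeout)
            throws InterruptedException {
        Instant deadline = Instant.now().plus(timeout);
        while (!condition.get()) {
            if (Instant.now().isAfter(deadline)) {
                throw new IllegalStateException("condition not met within " + timeout);
            }
            Thread.sleep(10); // back off between polls instead of busy-spinning
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        // Condition becomes true after ~50 ms, well within the 5 s deadline.
        waitUntil(() -> System.nanoTime() - start > 50_000_000L, Duration.ofSeconds(5));
        System.out.println("condition met");
    }
}
```

A condition such as "all executions are SCHEDULED *or* DEPLOYING" would be the `Supplier` here; with the adaptive scheduler, executions can transition to DEPLOYING before the test observes them, which is why the original SCHEDULED-only predicate was flaky.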
[flink] branch master updated: [FLINK-21481][build] Move git-commit-id-plugin execution to flink-runtime
This is an automated email from the ASF dual-hosted git repository. chesnay pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git The following commit(s) were added to refs/heads/master by this push: new c77672a [FLINK-21481][build] Move git-commit-id-plugin execution to flink-runtime c77672a is described below commit c77672a1ed654902042d0882e46c67b526a7ef5a Author: Chesnay Schepler AuthorDate: Wed Feb 24 10:48:30 2021 +0100 [FLINK-21481][build] Move git-commit-id-plugin execution to flink-runtime --- flink-runtime/pom.xml | 27 .../streaming/util/PseudoRandomValueSelector.java | 2 +- pom.xml| 29 +- 3 files changed, 29 insertions(+), 29 deletions(-) diff --git a/flink-runtime/pom.xml b/flink-runtime/pom.xml index d3c1038..d12e26d 100644 --- a/flink-runtime/pom.xml +++ b/flink-runtime/pom.xml @@ -350,6 +350,33 @@ under the License. + + pl.project13.maven + git-commit-id-plugin + 4.0.2 + + + get-the-git-infos + validate + + revision + + + + + false + false + false + + + + true + + + + + org.apache.maven.plugins maven-enforcer-plugin diff --git a/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/streaming/util/PseudoRandomValueSelector.java b/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/streaming/util/PseudoRandomValueSelector.java index 625f713..c765d68 100644 --- a/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/streaming/util/PseudoRandomValueSelector.java +++ b/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/streaming/util/PseudoRandomValueSelector.java @@ -83,7 +83,7 @@ class PseudoRandomValueSelector { private static String getGlobalSeed() { // manual seed or set by maven final String seed = System.getProperty("test.randomization.seed"); -if (seed != null) { +if (seed != null && !seed.isEmpty()) { return seed; } diff --git a/pom.xml b/pom.xml index 8310016..3e5c120 100644 --- a/pom.xml +++ b/pom.xml @@ -159,7 +159,7 @@ under the 
License. 2.4.2 - ${git.commit.id} + **/*Test.* @@ -1376,33 +1376,6 @@ under the License. - - pl.project13.maven - git-commit-id-plugin - 4.0.2 - - - get-the-git-infos - validate - - revision - - - - - false - false - false - - - - true - - - - - org.apache.rat apache-rat-plugin 0.12
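The commit above also tightens the seed lookup in `PseudoRandomValueSelector` from `seed != null` to `seed != null && !seed.isEmpty()`. The reason (inferred from the diff comment "manual seed or set by maven"): build tooling can pass a system property as an empty string even when it was never meaningfully set, and an empty seed must fall through to the default. A self-contained sketch of that guard:

```java
/**
 * Sketch of the guard added in PseudoRandomValueSelector: an empty
 * test.randomization.seed property must behave like an unset one.
 */
public final class SeedLookup {

    /** Returns {@code property} only when it is both non-null and non-empty. */
    static String globalSeed(String property, String fallback) {
        // The original check was only `property != null`; an empty string slipped through.
        if (property != null && !property.isEmpty()) {
            return property;
        }
        return fallback;
    }

    public static void main(String[] args) {
        System.out.println(globalSeed(null, "default")); // default
        System.out.println(globalSeed("", "default"));   // default (the case the fix adds)
        System.out.println(globalSeed("42", "default")); // 42
    }
}
```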
[flink] branch master updated: [FLINK-20659][yarn][tests] Add flink-yarn-test README
This is an automated email from the ASF dual-hosted git repository. chesnay pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git The following commit(s) were added to refs/heads/master by this push: new 3436381 [FLINK-20659][yarn][tests] Add flink-yarn-test README 3436381 is described below commit 3436381117ef8fb21f7e02ef63d6d6ec07792203 Author: Matthias Pohl AuthorDate: Wed Feb 24 18:15:32 2021 +0100 [FLINK-20659][yarn][tests] Add flink-yarn-test README --- flink-yarn-tests/README.md | 17 + .../flink/yarn/YARNSessionCapacitySchedulerITCase.java | 14 ++ 2 files changed, 31 insertions(+) diff --git a/flink-yarn-tests/README.md b/flink-yarn-tests/README.md new file mode 100644 index 000..998b03d --- /dev/null +++ b/flink-yarn-tests/README.md @@ -0,0 +1,17 @@ +# Flink YARN tests + +`flink-yarn-test` collects test cases which are deployed to a local Apache Hadoop YARN cluster. +There are several things to consider when running these tests locally: + +* `YarnTestBase` spins up a `MiniYARNCluster`. This cluster spawns processes outside of the IDE's JVM + to run the workers on. `JAVA_HOME` needs to be set to make this work. +* The Flink cluster within each test is deployed using the `flink-dist` binaries. Any changes made + to the code will only take effect after rebuilding the `flink-dist` module. +* Each `YARN*ITCase` will have a local working directory for resources like logs to be stored. These + working directories are located in `flink-yarn-tests/target/` (see + `find flink-yarn-tests/target -name "*.err" -or -name "*.out"` for the test's output). +* There is a known problem causing test instabilities due to our usage of Hadoop 2.8.3 executing the + tests. This is caused by a bug [YARN-7007](https://issues.apache.org/jira/browse/YARN-7007) that + got fixed in [Hadoop 2.8.6](https://issues.apache.org/jira/projects/YARN/versions/12344056). 
See + [FLINK-15534](https://issues.apache.org/jira/browse/FLINK-15534) for further details on the + related discussion. diff --git a/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNSessionCapacitySchedulerITCase.java b/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNSessionCapacitySchedulerITCase.java index 1f7d226..b8b328a 100644 --- a/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNSessionCapacitySchedulerITCase.java +++ b/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNSessionCapacitySchedulerITCase.java @@ -232,6 +232,10 @@ public class YARNSessionCapacitySchedulerITCase extends YarnTestBase { * This ensures that with (any) pre-allocated off-heap memory by us, there is some off-heap * memory remaining for Flink's libraries. Creating task managers will thus fail if no off-heap * memory remains. + * + * @throws NullPointerException There is a known Hadoop bug (YARN-7007) that got fixed in Hadoop + * 2.8.6 but might cause test instabilities. See FLINK-20659/FLINK-15534 for further + * information. */ @Test public void perJobYarnClusterOffHeap() throws Exception { @@ -289,6 +293,10 @@ public class YARNSessionCapacitySchedulerITCase extends YarnTestBase { * * Hint: If you think it is a good idea to add more assertions to this test, think * again! + * + * @throws NullPointerException There is a known Hadoop bug (YARN-7007) that got fixed in Hadoop + * 2.8.6 but might cause test instabilities. See FLINK-13009/FLINK-15534 for further + * information. */ @Test public void @@ -451,6 +459,9 @@ public class YARNSessionCapacitySchedulerITCase extends YarnTestBase { * Test deployment to non-existing queue & ensure that the system logs a WARN message for the * user. (Users had unexpected behavior of Flink on YARN because they mistyped the target queue. 
* With an error message, we can help users identifying the issue) + * + * @throws NullPointerException There is a known Hadoop bug (YARN-7007) that got fixed in Hadoop + * 2.8.6 but might cause test instabilities. See FLINK-15534 for further information. */ @Test public void testNonexistingQueueWARNmessage() throws Exception { @@ -493,6 +504,9 @@ public class YARNSessionCapacitySchedulerITCase extends YarnTestBase { /** * Test per-job yarn cluster with the parallelism set at the CliFrontend instead of the YARN * client. + * + * @throws NullPointerException There is a known Hadoop bug (YARN-7007) that got fixed in Hadoop + * 2.8.6 but might cause test instabilities. See FLINK-15534 for further information. */ @Test public void perJobYarnClusterWithParallelism() throws Exception {
[flink] 08/09: [hotfix] Fix possible null pointer exception in RocksStatesPerKeyGroupMergeIterator
This is an automated email from the ASF dual-hosted git repository.

dwysakowicz pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git

commit 6ad54a5352ba4f30cd94f4f97b53f51da834109e
Author: Dawid Wysakowicz
AuthorDate: Tue Feb 9 18:47:30 2021 +0100

    [hotfix] Fix possible null pointer exception in RocksStatesPerKeyGroupMergeIterator
---
 .../streaming/state/iterator/RocksStatesPerKeyGroupMergeIterator.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/iterator/RocksStatesPerKeyGroupMergeIterator.java b/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/iterator/RocksStatesPerKeyGroupMergeIterator.java
index ed8cc0d..2f062e5 100644
--- a/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/iterator/RocksStatesPerKeyGroupMergeIterator.java
+++ b/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/iterator/RocksStatesPerKeyGroupMergeIterator.java
@@ -236,6 +236,8 @@ public class RocksStatesPerKeyGroupMergeIterator implements KeyValueStateIterato
     public void close() {
         IOUtils.closeQuietly(closeableRegistry);
-        heap.clear();
+        if (heap != null) {
+            heap.clear();
+        }
     }
 }
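The hotfix guards `heap.clear()` against a null `heap`, presumably because `close()` can run before the field is assigned (for example, when construction fails partway). A minimal stand-alone sketch of this null-guarded-close pattern (class and field names mirror the diff but the class itself is illustrative):

```java
import java.util.PriorityQueue;

/** Sketch of the null-guarded close() pattern from the hotfix. */
public final class GuardedClose implements AutoCloseable {

    // May still be null if construction failed before the heap was built.
    private PriorityQueue<byte[]> heap;

    @Override
    public void close() {
        if (heap != null) { // the fix: clear() on a null heap threw NullPointerException
            heap.clear();
        }
    }

    public static void main(String[] args) {
        try (GuardedClose it = new GuardedClose()) {
            // heap was never assigned; close() must still succeed
        }
        System.out.println("closed without NPE");
    }
}
```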
[flink] 07/09: [FLINK-21344] Do not store heap timers in raw operator state for a savepoint
This is an automated email from the ASF dual-hosted git repository. dwysakowicz pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git commit 80066185648a243853b532350393ce93952f49b3 Author: Dawid Wysakowicz AuthorDate: Mon Feb 8 18:56:27 2021 +0100 [FLINK-21344] Do not store heap timers in raw operator state for a savepoint We do no longer serialize the heap timers in RocksDB state backend when taking a savepoint. We still do it for checkpoints though. There is one gotcha in the PR, that the StateConfigUtil#isStateImmutableInStateBackend assumes the knowledge that checkpoints behave differently for heap timers than savepoints. This closes #14913 --- .../runtime/state/AbstractKeyedStateBackend.java | 3 +- .../state/ttl/mock/MockKeyedStateBackend.java | 3 +- .../streaming/state/RocksDBKeyedStateBackend.java | 6 +- .../state/HeapTimersSnapshottingTest.java | 103 + .../contrib/streaming/state/RocksDBTestUtils.java | 11 ++- .../api/operators/InternalTimeServiceManager.java | 12 +-- .../operators/InternalTimeServiceManagerImpl.java | 25 + .../api/operators/StreamOperatorStateHandler.java | 9 +- .../BatchExecutionInternalTimeServiceManager.java | 5 - .../util/AbstractStreamOperatorTestHarness.java| 16 +++- .../flink/table/runtime/util/StateConfigUtil.java | 3 +- 11 files changed, 148 insertions(+), 48 deletions(-) diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/state/AbstractKeyedStateBackend.java b/flink-runtime/src/main/java/org/apache/flink/runtime/state/AbstractKeyedStateBackend.java index 1ded0dc..6ba970a 100644 --- a/flink-runtime/src/main/java/org/apache/flink/runtime/state/AbstractKeyedStateBackend.java +++ b/flink-runtime/src/main/java/org/apache/flink/runtime/state/AbstractKeyedStateBackend.java @@ -26,6 +26,7 @@ import org.apache.flink.api.common.state.StateDescriptor; import org.apache.flink.api.common.typeutils.TypeSerializer; import org.apache.flink.core.fs.CloseableRegistry; import 
org.apache.flink.runtime.checkpoint.CheckpointOptions; +import org.apache.flink.runtime.checkpoint.CheckpointType; import org.apache.flink.runtime.query.TaskKvStateRegistry; import org.apache.flink.runtime.state.heap.InternalKeyContext; import org.apache.flink.runtime.state.internal.InternalKvState; @@ -348,7 +349,7 @@ public abstract class AbstractKeyedStateBackend } // TODO remove this once heap-based timers are working with RocksDB incremental snapshots! -public boolean requiresLegacySynchronousTimerSnapshots() { +public boolean requiresLegacySynchronousTimerSnapshots(CheckpointType checkpointOptions) { return false; } } diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockKeyedStateBackend.java b/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockKeyedStateBackend.java index c946365..d995ba3 100644 --- a/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockKeyedStateBackend.java +++ b/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockKeyedStateBackend.java @@ -30,6 +30,7 @@ import org.apache.flink.api.common.typeutils.TypeSerializer; import org.apache.flink.api.java.tuple.Tuple2; import org.apache.flink.core.fs.CloseableRegistry; import org.apache.flink.runtime.checkpoint.CheckpointOptions; +import org.apache.flink.runtime.checkpoint.CheckpointType; import org.apache.flink.runtime.query.TaskKvStateRegistry; import org.apache.flink.runtime.state.AbstractKeyedStateBackend; import org.apache.flink.runtime.state.CheckpointStreamFactory; @@ -181,7 +182,7 @@ public class MockKeyedStateBackend extends AbstractKeyedStateBackend { } @Override -public boolean requiresLegacySynchronousTimerSnapshots() { +public boolean requiresLegacySynchronousTimerSnapshots(CheckpointType checkpointOptions) { return false; } diff --git a/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackend.java 
b/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackend.java index 0f53955..9dbc5a6 100644 --- a/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackend.java +++ b/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackend.java @@ -40,6 +40,7 @@ import org.apache.flink.core.fs.CloseableRegistry; import org.apache.flink.core.memory.DataInputDeserializer; import org.apache.flink.core.memory.DataOutputSerializer; import org.apache.flink.runtime.checkpoint.CheckpointOptions; +import org.apache.flink.runtime.checkpoint.CheckpointType; import
[flink] 04/09: [refactor] Extract common interface for a single Rocks state
This is an automated email from the ASF dual-hosted git repository. dwysakowicz pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git commit f5fbb64dbfc0d872d5574a10cb7ae035f5d5405a Author: Dawid Wysakowicz AuthorDate: Fri Feb 5 16:43:41 2021 +0100 [refactor] Extract common interface for a single Rocks state This commit introduces an interface for iterating over a single state in RocksDB state backend. This is a prerequisite for storing heap timers along with other states from RocksDB. --- .../state/iterator/RocksSingleStateIterator.java | 29 ++-- .../RocksStatesPerKeyGroupMergeIterator.java | 39 ++ .../state/iterator/SingleStateIterator.java| 37 3 files changed, 73 insertions(+), 32 deletions(-) diff --git a/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/iterator/RocksSingleStateIterator.java b/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/iterator/RocksSingleStateIterator.java index 3c0aa82..4608acb 100644 --- a/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/iterator/RocksSingleStateIterator.java +++ b/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/iterator/RocksSingleStateIterator.java @@ -23,13 +23,11 @@ import org.apache.flink.util.IOUtils; import javax.annotation.Nonnull; -import java.io.Closeable; - /** * Wraps a RocksDB iterator to cache it's current key and assigns an id for the key/value state to * the iterator. Used by {@link RocksStatesPerKeyGroupMergeIterator}. 
*/ -class RocksSingleStateIterator implements Closeable { +class RocksSingleStateIterator implements SingleStateIterator { /** * @param iterator underlying {@link RocksIteratorWrapper} @@ -45,19 +43,30 @@ class RocksSingleStateIterator implements Closeable { private byte[] currentKey; private final int kvStateId; -public byte[] getCurrentKey() { -return currentKey; +@Override +public void next() { +iterator.next(); +if (iterator.isValid()) { +currentKey = iterator.key(); +} +} + +@Override +public boolean isValid() { +return iterator.isValid(); } -public void setCurrentKey(byte[] currentKey) { -this.currentKey = currentKey; +@Override +public byte[] key() { +return currentKey; } -@Nonnull -public RocksIteratorWrapper getIterator() { -return iterator; +@Override +public byte[] value() { +return iterator.value(); } +@Override public int getKvStateId() { return kvStateId; } diff --git a/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/iterator/RocksStatesPerKeyGroupMergeIterator.java b/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/iterator/RocksStatesPerKeyGroupMergeIterator.java index 2f970c9..613d181 100644 --- a/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/iterator/RocksStatesPerKeyGroupMergeIterator.java +++ b/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/iterator/RocksStatesPerKeyGroupMergeIterator.java @@ -40,14 +40,14 @@ import java.util.PriorityQueue; public class RocksStatesPerKeyGroupMergeIterator implements KeyValueStateIterator { private final CloseableRegistry closeableRegistry; -private final PriorityQueue heap; +private final PriorityQueue heap; private final int keyGroupPrefixByteCount; private boolean newKeyGroup; private boolean newKVState; private boolean valid; -private RocksSingleStateIterator currentSubIterator; +private 
SingleStateIterator currentSubIterator; -private static final List> COMPARATORS; +private static final List> COMPARATORS; static { int maxBytes = 2; @@ -57,8 +57,7 @@ public class RocksStatesPerKeyGroupMergeIterator implements KeyValueStateIterato COMPARATORS.add( (o1, o2) -> { int arrayCmpRes = -compareKeyGroupsForByteArrays( -o1.getCurrentKey(), o2.getCurrentKey(), currentBytes); +compareKeyGroupsForByteArrays(o1.key(), o2.key(), currentBytes); return arrayCmpRes == 0 ? o1.getKvStateId() - o2.getKvStateId() : arrayCmpRes; @@ -103,18 +102,14 @@ public class RocksStatesPerKeyGroupMergeIterator implements KeyValueStateIterato newKeyGroup = false; newKVState =
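Commit 04/09 puts every per-state cursor behind the `SingleStateIterator` interface so `RocksStatesPerKeyGroupMergeIterator` can later merge heap-timer state alongside RocksDB states. The underlying k-way merge idea — a priority queue of cursors ordered by their current key, re-inserted after each advance — can be sketched independently of RocksDB (simplified here to integer keys instead of key-group byte arrays; not Flink's actual code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

/** Sketch of the k-way merge behind RocksStatesPerKeyGroupMergeIterator. */
public final class KWayMerge {

    /** A cursor over one sorted state — the role SingleStateIterator plays in the diff. */
    static final class Cursor {
        final Iterator<Integer> it;
        Integer current; // null once exhausted, i.e. no longer "valid"

        Cursor(List<Integer> sorted) {
            this.it = sorted.iterator();
            advance();
        }

        void advance() {
            current = it.hasNext() ? it.next() : null;
        }
    }

    static List<Integer> merge(List<List<Integer>> states) {
        PriorityQueue<Cursor> heap =
                new PriorityQueue<>(Comparator.comparingInt((Cursor c) -> c.current));
        for (List<Integer> s : states) {
            Cursor c = new Cursor(s);
            if (c.current != null) {
                heap.add(c); // only valid iterators enter the heap
            }
        }
        List<Integer> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            Cursor c = heap.poll(); // smallest current key across all states
            out.add(c.current);
            c.advance();
            if (c.current != null) {
                heap.add(c); // re-insert while still valid
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(merge(Arrays.asList(
                Arrays.asList(1, 4, 7), Arrays.asList(2, 5), Arrays.asList(3, 6, 8))));
        // prints [1, 2, 3, 4, 5, 6, 8] in sorted order: [1, 2, 3, 4, 5, 6, 7, 8]
    }
}
```

In the real iterator, the comparator additionally breaks ties on the kv-state id so that, within one key group, states are emitted in a stable order.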
[flink] 09/09: [FLINK-21344] Test legacy heap timers
This is an automated email from the ASF dual-hosted git repository. dwysakowicz pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git commit f5193466c5c9b92b52e3b4e81d0dffe27d351b34 Author: Dawid Wysakowicz AuthorDate: Tue Feb 9 19:57:30 2021 +0100 [FLINK-21344] Test legacy heap timers --- .../test/checkpointing/TimersSavepointITCase.java | 229 + .../_metadata | Bin 0 -> 5391 bytes 2 files changed, 229 insertions(+) diff --git a/flink-tests/src/test/java/org/apache/flink/test/checkpointing/TimersSavepointITCase.java b/flink-tests/src/test/java/org/apache/flink/test/checkpointing/TimersSavepointITCase.java new file mode 100644 index 000..d1d9e83 --- /dev/null +++ b/flink-tests/src/test/java/org/apache/flink/test/checkpointing/TimersSavepointITCase.java @@ -0,0 +1,229 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.test.checkpointing; + +import org.apache.flink.api.common.eventtime.WatermarkStrategy; +import org.apache.flink.api.common.state.ListState; +import org.apache.flink.api.common.state.ListStateDescriptor; +import org.apache.flink.api.common.typeutils.base.IntSerializer; +import org.apache.flink.client.program.ClusterClient; +import org.apache.flink.configuration.CheckpointingOptions; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.contrib.streaming.state.RocksDBOptions; +import org.apache.flink.contrib.streaming.state.RocksDBStateBackend.PriorityQueueStateType; +import org.apache.flink.core.testutils.OneShotLatch; +import org.apache.flink.runtime.jobgraph.JobGraph; +import org.apache.flink.runtime.jobgraph.SavepointRestoreSettings; +import org.apache.flink.runtime.state.FunctionInitializationContext; +import org.apache.flink.runtime.state.FunctionSnapshotContext; +import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration; +import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; +import org.apache.flink.streaming.api.functions.KeyedProcessFunction; +import org.apache.flink.streaming.api.functions.sink.DiscardingSink; +import org.apache.flink.streaming.api.functions.source.SourceFunction; +import org.apache.flink.test.util.MiniClusterWithClientResource; +import org.apache.flink.util.Collector; + +import org.apache.commons.io.FileUtils; +import org.junit.ClassRule; +import org.junit.Rule; +import org.junit.Test; +import org.junit.rules.TemporaryFolder; + +import java.io.File; +import java.io.IOException; +import java.net.URI; +import java.net.URL; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.TimeUnit; + +/** Tests for restoring {@link PriorityQueueStateType#HEAP} timers stored in raw operator state. 
*/ +public class TimersSavepointITCase { +private static final int PARALLELISM = 4; + +private static final OneShotLatch savepointLatch = new OneShotLatch(); +private static final OneShotLatch resultLatch = new OneShotLatch(); + +@ClassRule public static final TemporaryFolder TMP_FOLDER = new TemporaryFolder(); + +// We use a single past Flink version as we verify heap timers stored in raw state +// Starting from 1.13 we do not store heap timers in raw state, but we keep them in +// managed state +public static final String SAVEPOINT_FILE_NAME = "legacy-raw-state-heap-timers-rocks-db-1.12"; + +/** + * This test runs in either of two modes: 1) we want to generate the binary savepoint, i.e. we + * have to run the checkpointing functions 2) we want to verify restoring, so we have to run the + * checking functions. + */ +public enum ExecutionMode { +PERFORM_SAVEPOINT, +VERIFY_SAVEPOINT +} + +// TODO change this to PERFORM_SAVEPOINT to regenerate binary savepoints +// TODO Note: You should generate the savepoint based on the release branch instead of the +// master. +private final ExecutionMode executionMode = ExecutionMode.VERIFY_SAVEPOINT; + +@Rule +public final MiniClusterWithClientResource miniClusterResource = +
[flink] 06/09: [FLINK-21344] Handle heap timers in Rocks state
This is an automated email from the ASF dual-hosted git repository. dwysakowicz pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git

commit a9fef44654b0c154af573f5c27398e27d3351cf9
Author: Dawid Wysakowicz
AuthorDate: Mon Feb 8 17:09:19 2021 +0100

[FLINK-21344] Handle heap timers in Rocks state

We serialize the heap timers into the same format as if they were actually stored in RocksDB, instead of storing them in raw operator state. This lets users switch between heap and RocksDB timers.
---
.../runtime/state/HeapPriorityQueuesManager.java | 110 +
.../runtime/state/heap/HeapKeyedStateBackend.java | 73 +-
.../state/heap/HeapMetaInfoRestoreOperation.java | 5 +-
.../HeapPriorityQueueSnapshotRestoreWrapper.java | 5 +-
.../state/heap/HeapPriorityQueueStateSnapshot.java | 5 +
.../state/heap/HeapSavepointRestoreOperation.java | 6 +-
.../streaming/state/RocksDBKeyedStateBackend.java | 26 ++-
.../state/RocksDBKeyedStateBackendBuilder.java | 37 ++-
.../state/iterator/RocksQueueIterator.java | 141 
.../RocksStatesPerKeyGroupMergeIterator.java | 23 +-
.../state/restore/RocksDBFullRestoreOperation.java | 30 +--
.../RocksDBHeapTimersFullRestoreOperation.java | 255 +
.../snapshot/RocksDBFullSnapshotResources.java | 26 ++-
.../state/snapshot/RocksFullSnapshotStrategy.java | 17 ++
...RocksKeyGroupsRocksSingleStateIteratorTest.java | 6 +-
.../flink/test/state/BackendSwitchSpecs.java | 16 +-
.../RocksSavepointStateBackendSwitchTest.java | 22 +-
17 files changed, 696 insertions(+), 107 deletions(-)

diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/state/HeapPriorityQueuesManager.java b/flink-runtime/src/main/java/org/apache/flink/runtime/state/HeapPriorityQueuesManager.java
new file mode 100644
index 000..27d500d
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/state/HeapPriorityQueuesManager.java
@@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more
contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.runtime.state; + +import org.apache.flink.annotation.Internal; +import org.apache.flink.api.common.typeutils.TypeSerializer; +import org.apache.flink.api.common.typeutils.TypeSerializerSchemaCompatibility; +import org.apache.flink.runtime.state.heap.HeapPriorityQueueElement; +import org.apache.flink.runtime.state.heap.HeapPriorityQueueSet; +import org.apache.flink.runtime.state.heap.HeapPriorityQueueSetFactory; +import org.apache.flink.runtime.state.heap.HeapPriorityQueueSnapshotRestoreWrapper; +import org.apache.flink.util.FlinkRuntimeException; +import org.apache.flink.util.StateMigrationException; + +import javax.annotation.Nonnull; + +import java.util.Map; + +/** Manages creating heap priority queues along with their counterpart meta info. 
*/ +@Internal +public class HeapPriorityQueuesManager { + +private final Map> registeredPQStates; +private final HeapPriorityQueueSetFactory priorityQueueSetFactory; +private final KeyGroupRange keyGroupRange; +private final int numberOfKeyGroups; + +public HeapPriorityQueuesManager( +Map> registeredPQStates, +HeapPriorityQueueSetFactory priorityQueueSetFactory, +KeyGroupRange keyGroupRange, +int numberOfKeyGroups) { +this.registeredPQStates = registeredPQStates; +this.priorityQueueSetFactory = priorityQueueSetFactory; +this.keyGroupRange = keyGroupRange; +this.numberOfKeyGroups = numberOfKeyGroups; +} + +@SuppressWarnings("unchecked") +@Nonnull +public & Keyed> +KeyGroupedInternalPriorityQueue createOrUpdate( +@Nonnull String stateName, +@Nonnull TypeSerializer byteOrderedElementSerializer) { + +final HeapPriorityQueueSnapshotRestoreWrapper existingState = +(HeapPriorityQueueSnapshotRestoreWrapper) registeredPQStates.get(stateName); + +if (existingState != null) { +TypeSerializerSchemaCompatibility compatibilityResult = +existingState +
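The commit message above notes that heap timers are now serialized "into the same format as if they were actually stored in RocksDB". That works because RocksDB timer state is keyed by byte-ordered encodings, where unsigned lexicographic byte order equals firing order, so heap timers written in that format slot directly into the timer column family on restore. A simplified, self-contained sketch of the ordering property (an assumption-laden illustration: Flink's real layout additionally prefixes the key group and serializes the key with a `TypeSerializer`):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class TimerKeyEncoding {
    // Encode (timestamp, key) so that unsigned lexicographic byte order
    // equals firing order -- the property RocksDB's default comparator relies on.
    static byte[] encode(long timestamp, String key) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(Long.BYTES + keyBytes.length);
        // Flip the sign bit so negative timestamps sort before positive ones
        // under an unsigned byte comparison.
        buf.putLong(timestamp ^ Long.MIN_VALUE);
        buf.put(keyBytes);
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] early = encode(5L, "key-1");
        byte[] late = encode(42L, "key-0");
        // The earlier timer sorts first regardless of the key suffix.
        System.out.println(Arrays.compareUnsigned(early, late) < 0); // true
    }
}
```

Because ordering is preserved at the byte level, a full snapshot can interleave heap-timer entries with regular RocksDB entries and a restoring backend of either kind can consume them.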
[flink] 05/09: [hotfix] Fix RocksIncrementalCheckpointRescalingTest
This is an automated email from the ASF dual-hosted git repository. dwysakowicz pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git

commit be628c67b150b03b316b55f09a7292939de21c0c
Author: Dawid Wysakowicz
AuthorDate: Thu Feb 4 09:09:11 2021 +0100

[hotfix] Fix RocksIncrementalCheckpointRescalingTest

A few of the cases checked in the test were actually illegal combinations: they tested keys that should never end up in a given subtask, because those keys do not belong to a key group owned by that task.
---
.../RocksIncrementalCheckpointRescalingTest.java | 42 --
1 file changed, 42 deletions(-)

diff --git a/flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/contrib/streaming/state/RocksIncrementalCheckpointRescalingTest.java b/flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/contrib/streaming/state/RocksIncrementalCheckpointRescalingTest.java
index 580c35b..baf418f 100644
--- a/flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/contrib/streaming/state/RocksIncrementalCheckpointRescalingTest.java
+++ b/flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/contrib/streaming/state/RocksIncrementalCheckpointRescalingTest.java
@@ -187,12 +187,6 @@ public class RocksIncrementalCheckpointRescalingTest extends TestLogger {
snapshot2 = AbstractStreamOperatorTestHarness.repackageState( harness2[0].snapshot(0, 0), harness2[1].snapshot(0, 0));
-
-validHarnessResult( -harness2[0], 1, records[5], records[6], records[7], records[8], records[9]);
-
-validHarnessResult( -harness2[1], 1, records[0], records[1], records[2], records[3], records[4]);
} finally { closeHarness(harness2); }
@@ -253,36 +247,6 @@ public class RocksIncrementalCheckpointRescalingTest extends TestLogger {
validHarnessResult(harness3[0], 3, records[0], records[1], records[2], records[3]);
validHarnessResult(harness3[1], 3, records[4], records[5], records[6]);
validHarnessResult(harness3[2], 3, records[7], records[8], records[9]); - -validHarnessResult( -harness3[0], -1, -records[4], -records[5], -records[6], -records[7], -records[8], -records[9]); -validHarnessResult( -harness3[1], -1, -records[0], -records[1], -records[2], -records[3], -records[7], -records[8], -records[9]); -validHarnessResult( -harness3[2], -1, -records[0], -records[1], -records[2], -records[3], -records[4], -records[5], -records[6]); } finally { closeHarness(harness3); } @@ -390,12 +354,6 @@ public class RocksIncrementalCheckpointRescalingTest extends TestLogger { snapshot2 = AbstractStreamOperatorTestHarness.repackageState( harness2[0].snapshot(0, 0), harness2[1].snapshot(0, 0)); - -validHarnessResult( -harness2[0], 1, records[5], records[6], records[7], records[8], records[9]); - -validHarnessResult( -harness2[1], 1, records[0], records[1], records[2], records[3], records[4]); } finally { closeHarness(harness2); }
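The deleted assertions above fed records to subtasks that could never own them. Whether a subtask owns a key follows from Flink's key-group arithmetic; a simplified sketch of that computation (an assumption: the real `KeyGroupRangeAssignment` additionally applies a murmur hash to `key.hashCode()` before the modulo):

```java
public class KeyGroupAssignment {
    // Key -> key group, out of maxParallelism groups (simplified:
    // Flink murmur-hashes key.hashCode() before taking the modulo).
    static int assignToKeyGroup(Object key, int maxParallelism) {
        return Math.floorMod(key.hashCode(), maxParallelism);
    }

    // Key group -> owning operator subtask, mirroring the shape of
    // computeOperatorIndexForKeyGroup: a contiguous range per subtask.
    static int operatorIndexForKeyGroup(int maxParallelism, int parallelism, int keyGroup) {
        return keyGroup * parallelism / maxParallelism;
    }

    public static void main(String[] args) {
        int maxParallelism = 128, parallelism = 2;
        int keyGroup = assignToKeyGroup("record-7", maxParallelism);
        int subtask = operatorIndexForKeyGroup(maxParallelism, parallelism, keyGroup);
        // Handing this record to any other subtask is exactly the illegal
        // combination the removed test assertions exercised.
        System.out.println("key group " + keyGroup + " -> subtask " + subtask);
    }
}
```

Since each subtask owns a fixed, contiguous key-group range, a test that routes a key to a subtask outside that range asserts behavior that can never occur in a running job.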
[flink] 03/09: [refactor] Remove AbstractRocksDBRestoreOperation
This is an automated email from the ASF dual-hosted git repository. dwysakowicz pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git

commit 3ed5c1a26f53b9481d5616669c91c0f272bdc949
Author: Dawid Wysakowicz
AuthorDate: Mon Feb 8 16:32:25 2021 +0100

[refactor] Remove AbstractRocksDBRestoreOperation

So far, both RocksFullSnapshotRestoreOperation and RocksIncrementalRestoreOperation extended AbstractRocksDBRestoreOperation in order to share some functions. However, this required, for example, unnecessary parameters to be passed just to fulfill the requirements of the base class. Moreover, a base class makes it harder to extend the classes independently. This commit switches the shared code from inheritance to composition.
---
.../state/RocksDBKeyedStateBackendBuilder.java | 18 +-
.../state/restore/RocksDBFullRestoreOperation.java | 57 +++---
...sDBRestoreOperation.java => RocksDBHandle.java} | 201 +++--
.../RocksDBIncrementalRestoreOperation.java| 191 +++-
.../state/restore/RocksDBNoneRestoreOperation.java | 58 +++---
.../state/restore/RocksDBRestoreOperation.java | 3 +-
6 files changed, 261 insertions(+), 267 deletions(-)

diff --git a/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackendBuilder.java b/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackendBuilder.java
index 5f6426c..ce90d05 100644
--- a/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackendBuilder.java
+++ b/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackendBuilder.java
@@ -21,10 +21,10 @@ package org.apache.flink.contrib.streaming.state;
import org.apache.flink.annotation.VisibleForTesting;
import org.apache.flink.api.common.ExecutionConfig;
import
org.apache.flink.api.common.typeutils.TypeSerializer; -import org.apache.flink.contrib.streaming.state.restore.AbstractRocksDBRestoreOperation; import org.apache.flink.contrib.streaming.state.restore.RocksDBFullRestoreOperation; import org.apache.flink.contrib.streaming.state.restore.RocksDBIncrementalRestoreOperation; import org.apache.flink.contrib.streaming.state.restore.RocksDBNoneRestoreOperation; +import org.apache.flink.contrib.streaming.state.restore.RocksDBRestoreOperation; import org.apache.flink.contrib.streaming.state.restore.RocksDBRestoreResult; import org.apache.flink.contrib.streaming.state.snapshot.RocksDBSnapshotStrategyBase; import org.apache.flink.contrib.streaming.state.snapshot.RocksFullSnapshotStrategy; @@ -250,7 +250,7 @@ public class RocksDBKeyedStateBackendBuilder extends AbstractKeyedStateBacken LinkedHashMap kvStateInformation = new LinkedHashMap<>(); RocksDB db = null; -AbstractRocksDBRestoreOperation restoreOperation = null; +RocksDBRestoreOperation restoreOperation = null; RocksDbTtlCompactFiltersManager ttlCompactFiltersManager = new RocksDbTtlCompactFiltersManager(ttlTimeProvider); @@ -393,7 +393,7 @@ public class RocksDBKeyedStateBackendBuilder extends AbstractKeyedStateBacken writeBatchSize); } -private AbstractRocksDBRestoreOperation getRocksDBRestoreOperation( +private RocksDBRestoreOperation getRocksDBRestoreOperation( int keyGroupPrefixBytes, CloseableRegistry cancelStreamRegistry, LinkedHashMap kvStateInformation, @@ -401,20 +401,12 @@ public class RocksDBKeyedStateBackendBuilder extends AbstractKeyedStateBacken DBOptions dbOptions = optionsContainer.getDbOptions(); if (restoreStateHandles.isEmpty()) { return new RocksDBNoneRestoreOperation<>( -keyGroupRange, -keyGroupPrefixBytes, -numberOfTransferingThreads, -cancelStreamRegistry, -userCodeClassLoader, kvStateInformation, -keySerializerProvider, -instanceBasePath, instanceRocksDBPath, dbOptions, columnFamilyOptionsFactory, nativeMetricOptions, metricGroup, 
-restoreStateHandles, ttlCompactFiltersManager, optionsContainer.getWriteBufferManagerCapacity()); } @@ -442,13 +434,9 @@ public class RocksDBKeyedStateBackendBuilder extends AbstractKeyedStateBacken } else { return new RocksDBFullRestoreOperation<>( keyGroupRange, -keyGroupPrefixBytes, -
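The composition-over-inheritance shape described in the commit message can be sketched as follows. All behavior is stubbed out, and only the `RocksDBHandle` name comes from the actual commit: the shared logic lives in a helper object that both restore operations hold, rather than in an abstract base class they must extend.

```java
public class CompositionSketch {
    // Shared code lives in a helper object (RocksDBHandle in the real commit),
    // not in a base class, so neither operation pays for parameters it
    // does not need.
    static class RocksDBHandle {
        String openDB() { return "db opened"; } // stub for opening RocksDB
    }

    static class FullRestoreOperation {
        private final RocksDBHandle handle; // composition
        FullRestoreOperation(RocksDBHandle handle) { this.handle = handle; }
        String restore() { return handle.openDB() + ", full snapshot read"; }
    }

    static class IncrementalRestoreOperation {
        private final RocksDBHandle handle; // same helper, no base-class constraints
        IncrementalRestoreOperation(RocksDBHandle handle) { this.handle = handle; }
        String restore() { return handle.openDB() + ", sst files re-linked"; }
    }

    public static void main(String[] args) {
        RocksDBHandle handle = new RocksDBHandle();
        System.out.println(new FullRestoreOperation(handle).restore());
        System.out.println(new IncrementalRestoreOperation(handle).restore());
    }
}
```

With this shape, each operation can evolve its constructor and restore flow independently, which is the extensibility argument the commit message makes against the old base class.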
[flink] 02/09: [hotfix] Cleanup raw types around PriorityQueueSetFactory
This is an automated email from the ASF dual-hosted git repository. dwysakowicz pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git commit 7f3aa390892bed6e00ab254e311f6a46c623a1d5 Author: Dawid Wysakowicz AuthorDate: Tue Feb 2 17:37:54 2021 +0100 [hotfix] Cleanup raw types around PriorityQueueSetFactory --- .../org/apache/flink/runtime/state/PriorityQueueSetFactory.java | 2 +- .../apache/flink/runtime/state/heap/HeapKeyedStateBackend.java| 8 .../flink/runtime/state/heap/HeapPriorityQueueSetFactory.java | 2 +- .../flink/runtime/state/ttl/mock/MockKeyedStateBackend.java | 2 +- .../flink/contrib/streaming/state/RocksDBKeyedStateBackend.java | 2 +- .../contrib/streaming/state/RocksDBPriorityQueueSetFactory.java | 2 +- .../operators/sorted/state/BatchExecutionKeyedStateBackend.java | 4 ++-- 7 files changed, 11 insertions(+), 11 deletions(-) diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/state/PriorityQueueSetFactory.java b/flink-runtime/src/main/java/org/apache/flink/runtime/state/PriorityQueueSetFactory.java index baeb591..96ce98b 100644 --- a/flink-runtime/src/main/java/org/apache/flink/runtime/state/PriorityQueueSetFactory.java +++ b/flink-runtime/src/main/java/org/apache/flink/runtime/state/PriorityQueueSetFactory.java @@ -36,7 +36,7 @@ public interface PriorityQueueSetFactory { * @return the queue with the specified unique name. 
*/ @Nonnull - + & Keyed> KeyGroupedInternalPriorityQueue create( @Nonnull String stateName, @Nonnull TypeSerializer byteOrderedElementSerializer); diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/HeapKeyedStateBackend.java b/flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/HeapKeyedStateBackend.java index 0b42a32..8e6c356 100644 --- a/flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/HeapKeyedStateBackend.java +++ b/flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/HeapKeyedStateBackend.java @@ -157,13 +157,13 @@ public class HeapKeyedStateBackend extends AbstractKeyedStateBackend { @SuppressWarnings("unchecked") @Nonnull @Override -public +public & Keyed> KeyGroupedInternalPriorityQueue create( @Nonnull String stateName, @Nonnull TypeSerializer byteOrderedElementSerializer) { -final HeapPriorityQueueSnapshotRestoreWrapper existingState = -registeredPQStates.get(stateName); +final HeapPriorityQueueSnapshotRestoreWrapper existingState = +(HeapPriorityQueueSnapshotRestoreWrapper) registeredPQStates.get(stateName); if (existingState != null) { // TODO we implement the simple way of supporting the current functionality, mimicking @@ -197,7 +197,7 @@ public class HeapKeyedStateBackend extends AbstractKeyedStateBackend { } @Nonnull -private +private & Keyed> KeyGroupedInternalPriorityQueue createInternal( RegisteredPriorityQueueStateBackendMetaInfo metaInfo) { diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/HeapPriorityQueueSetFactory.java b/flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/HeapPriorityQueueSetFactory.java index 8074c1a..6646d5f 100644 --- a/flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/HeapPriorityQueueSetFactory.java +++ b/flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/HeapPriorityQueueSetFactory.java @@ -50,7 +50,7 @@ public class HeapPriorityQueueSetFactory implements PriorityQueueSetFactory { @Nonnull 
@Override -public +public & Keyed> HeapPriorityQueueSet create( @Nonnull String stateName, @Nonnull TypeSerializer byteOrderedElementSerializer) { diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockKeyedStateBackend.java b/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockKeyedStateBackend.java index d3d3757..c946365 100644 --- a/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockKeyedStateBackend.java +++ b/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockKeyedStateBackend.java @@ -278,7 +278,7 @@ public class MockKeyedStateBackend extends AbstractKeyedStateBackend { @Nonnull @Override -public +public & Keyed> KeyGroupedInternalPriorityQueue create( @Nonnull String stateName, @Nonnull TypeSerializer byteOrderedElementSerializer) { diff --git a/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackend.java
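The raw-type cleanup above hinges on a multi-bound type parameter on `create(...)`: the element type must satisfy several interfaces at once, which removes the need for raw types at the call sites. A self-contained sketch of that signature style with simplified bounds (the real bounds in Flink are `HeapPriorityQueueElement`, `PriorityComparable`, and `Keyed`):

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class BoundedGenerics {
    interface Keyed { String getKey(); }
    interface Prioritized { long priority(); }

    // One bounded type parameter instead of raw types: T must satisfy
    // both bounds, and callers get a fully typed queue back.
    static <T extends Prioritized & Keyed> PriorityQueue<T> create(String stateName) {
        // stateName would select/register backend state in the real factory
        return new PriorityQueue<>(Comparator.comparingLong(Prioritized::priority));
    }

    static class Timer implements Prioritized, Keyed {
        final String key; final long ts;
        Timer(String key, long ts) { this.key = key; this.ts = ts; }
        public String getKey() { return key; }
        public long priority() { return ts; }
    }

    static String demo() {
        PriorityQueue<Timer> queue = create("timer-state");
        queue.add(new Timer("a", 42));
        queue.add(new Timer("b", 5));
        return queue.peek().getKey(); // smallest timestamp first
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```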
[flink] branch master updated (9b84132 -> f519346)
This is an automated email from the ASF dual-hosted git repository. dwysakowicz pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git. from 9b84132 [hotfix][docs] reintroduce build_docs.sh script new 2f16bff [hotfix] Remove unnecessary if in RocksIncrementalSnapshotStrategy new 7f3aa39 [hotfix] Cleanup raw types around PriorityQueueSetFactory new 3ed5c1a [refactor] Remove AbstractRocksDBRestoreOperation new f5fbb64 [refactor] Extract common interface for a single Rocks state new be628c6 [hotfix] Fix RocksIncrementalCheckpointRescalingTest new a9fef44 [FLINK-21344] Handle heap timers in Rocks state new 8006618 [FLINK-21344] Do not store heap timers in raw operator state for a savepoint new 6ad54a5 [hotfix] Fix possible null pointer exception in RocksStatesPerKeyGroupMergeIterator new f519346 [FLINK-21344] Test legacy heap timers The 9 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. 
Summary of changes: .../runtime/state/AbstractKeyedStateBackend.java | 3 +- .../runtime/state/HeapPriorityQueuesManager.java | 110 + .../runtime/state/PriorityQueueSetFactory.java | 2 +- .../runtime/state/heap/HeapKeyedStateBackend.java | 75 +- .../state/heap/HeapMetaInfoRestoreOperation.java | 5 +- .../state/heap/HeapPriorityQueueSetFactory.java| 2 +- .../HeapPriorityQueueSnapshotRestoreWrapper.java | 5 +- .../state/heap/HeapPriorityQueueStateSnapshot.java | 5 + .../state/heap/HeapSavepointRestoreOperation.java | 6 +- .../state/ttl/mock/MockKeyedStateBackend.java | 5 +- .../streaming/state/RocksDBKeyedStateBackend.java | 34 ++- .../state/RocksDBKeyedStateBackendBuilder.java | 55 +++-- .../state/RocksDBPriorityQueueSetFactory.java | 2 +- .../state/iterator/RocksQueueIterator.java | 141 .../state/iterator/RocksSingleStateIterator.java | 29 ++- .../RocksStatesPerKeyGroupMergeIterator.java | 66 +++--- .../state/iterator/SingleStateIterator.java| 19 +- .../state/restore/RocksDBFullRestoreOperation.java | 85 +++ ...sDBRestoreOperation.java => RocksDBHandle.java} | 201 .../RocksDBHeapTimersFullRestoreOperation.java | 255 + .../RocksDBIncrementalRestoreOperation.java| 191 +++ .../state/restore/RocksDBNoneRestoreOperation.java | 58 ++--- .../state/restore/RocksDBRestoreOperation.java | 3 +- .../snapshot/RocksDBFullSnapshotResources.java | 26 ++- .../state/snapshot/RocksFullSnapshotStrategy.java | 17 ++ .../snapshot/RocksIncrementalSnapshotStrategy.java | 8 +- .../state/HeapTimersSnapshottingTest.java | 103 + .../contrib/streaming/state/RocksDBTestUtils.java | 11 +- .../RocksIncrementalCheckpointRescalingTest.java | 42 ...RocksKeyGroupsRocksSingleStateIteratorTest.java | 6 +- .../api/operators/InternalTimeServiceManager.java | 12 +- .../operators/InternalTimeServiceManagerImpl.java | 25 +- .../api/operators/StreamOperatorStateHandler.java | 9 +- .../BatchExecutionInternalTimeServiceManager.java | 5 - .../state/BatchExecutionKeyedStateBackend.java | 4 +- 
.../util/AbstractStreamOperatorTestHarness.java| 16 +- .../flink/table/runtime/util/StateConfigUtil.java | 3 +- .../test/checkpointing/TimersSavepointITCase.java | 229 ++ .../flink/test/state/BackendSwitchSpecs.java | 16 +- .../RocksSavepointStateBackendSwitchTest.java | 22 +- .../_metadata | Bin 0 -> 5391 bytes 41 files changed, 1395 insertions(+), 516 deletions(-) create mode 100644 flink-runtime/src/main/java/org/apache/flink/runtime/state/HeapPriorityQueuesManager.java create mode 100644 flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/iterator/RocksQueueIterator.java copy flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobStoreService.java => flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/iterator/SingleStateIterator.java (71%) rename flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/restore/{AbstractRocksDBRestoreOperation.java => RocksDBHandle.java} (55%) create mode 100644 flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/restore/RocksDBHeapTimersFullRestoreOperation.java create mode 100644 flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/contrib/streaming/state/HeapTimersSnapshottingTest.java create mode 100644
[flink] 01/09: [hotfix] Remove unnecessary if in RocksIncrementalSnapshotStrategy
This is an automated email from the ASF dual-hosted git repository. dwysakowicz pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git commit 2f16bff7547c81539f9f34eff1ae380e20efea13 Author: Dawid Wysakowicz AuthorDate: Tue Feb 2 13:28:54 2021 +0100 [hotfix] Remove unnecessary if in RocksIncrementalSnapshotStrategy --- .../state/snapshot/RocksIncrementalSnapshotStrategy.java | 8 +--- 1 file changed, 1 insertion(+), 7 deletions(-) diff --git a/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/snapshot/RocksIncrementalSnapshotStrategy.java b/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/snapshot/RocksIncrementalSnapshotStrategy.java index 0921924..682a3f7 100644 --- a/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/snapshot/RocksIncrementalSnapshotStrategy.java +++ b/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/snapshot/RocksIncrementalSnapshotStrategy.java @@ -177,18 +177,12 @@ public class RocksIncrementalSnapshotStrategy return registry -> SnapshotResult.empty(); } -List stateMetaInfoSnapshots = -snapshotResources.stateMetaInfoSnapshots; -if (stateMetaInfoSnapshots.isEmpty()) { -return snapshotCloseableRegistry -> SnapshotResult.empty(); -} - return new RocksDBIncrementalSnapshotOperation( checkpointId, checkpointStreamFactory, snapshotResources.snapshotDirectory, snapshotResources.baseSstFiles, -stateMetaInfoSnapshots); +snapshotResources.stateMetaInfoSnapshots); } @Override
[flink] branch master updated (808cae6 -> 9b84132)
This is an automated email from the ASF dual-hosted git repository. sjwiesman pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git.

from 808cae6 [hotfix] Upgrade the os-maven-plugin dependency to version 1.7.0
add fd6d7cc [FLINK-21489][docs] Hugo docs add two anchor links to headers
add 9b84132 [hotfix][docs] reintroduce build_docs.sh script

No new revisions were added by this update.

Summary of changes:
.../mesos-bin/mesos-appmaster-job.sh => docs/build_docs.sh| 11 ---
docs/static/js/flink.js | 7 +--
2 files changed, 13 insertions(+), 5 deletions(-)
copy flink-dist/src/main/flink-bin/mesos-bin/mesos-appmaster-job.sh => docs/build_docs.sh (81%)
[flink-statefun] branch master updated: [FLINK-21276] [legal] Mention Protobuf BSD license in statefun-sdk-java
This is an automated email from the ASF dual-hosted git repository. tzulitai pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink-statefun.git The following commit(s) were added to refs/heads/master by this push: new 21fd255 [FLINK-21276] [legal] Mention Protobuf BSD license in statefun-sdk-java 21fd255 is described below commit 21fd2554b3c72adef5e1e7250b1bbfe7b86b3a6b Author: Tzu-Li (Gordon) Tai AuthorDate: Thu Feb 4 15:56:08 2021 +0800 [FLINK-21276] [legal] Mention Protobuf BSD license in statefun-sdk-java This closes #198. --- .../src/main/resources/META-INF/NOTICE | 10 +++ .../META-INF/licenses/LICENSE.protobuf-java| 32 ++ 2 files changed, 42 insertions(+) diff --git a/statefun-sdk-java/src/main/resources/META-INF/NOTICE b/statefun-sdk-java/src/main/resources/META-INF/NOTICE new file mode 100644 index 000..4218101 --- /dev/null +++ b/statefun-sdk-java/src/main/resources/META-INF/NOTICE @@ -0,0 +1,10 @@ +statefun-sdk-java +Copyright 2014-2020 The Apache Software Foundation + +This product includes software developed at +The Apache Software Foundation (http://www.apache.org/). + +This project bundles the following dependencies under the BSD license. +See bundled license files under "META-INF/licenses" for details. + +- com.google.protobuf:protobuf-java:3.7.1 diff --git a/statefun-sdk-java/src/main/resources/META-INF/licenses/LICENSE.protobuf-java b/statefun-sdk-java/src/main/resources/META-INF/licenses/LICENSE.protobuf-java new file mode 100644 index 000..97a6e3d --- /dev/null +++ b/statefun-sdk-java/src/main/resources/META-INF/licenses/LICENSE.protobuf-java @@ -0,0 +1,32 @@ +Copyright 2008 Google Inc. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +* Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. 
+* Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. +* Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +Code generated by the Protocol Buffer compiler is owned by the owner +of the input file used when generating it. This code is not +standalone and requires a support library to be linked with it. This +support library is itself covered by the above license. \ No newline at end of file
[flink] branch master updated (7f1853d -> 808cae6)
This is an automated email from the ASF dual-hosted git repository. rmetzger pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git.

from 7f1853d [FLINK-21451][coordination] Remove JobID from TaskExecutionState
add 808cae6 [hotfix] Upgrade the os-maven-plugin dependency to version 1.7.0

No new revisions were added by this update.

Summary of changes:
flink-formats/flink-parquet/pom.xml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
[flink-statefun] branch master updated (ec69df6 -> 20b521e)
This is an automated email from the ASF dual-hosted git repository. tzulitai pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink-statefun.git. from ec69df6 [hotfix] Temporary disable E2E tests in CI add 96cd004 [FLINK-21457] Add support to differentiate a zero length value bytes and non existing value new 20b521e [FLINK-21459] Implement remote Java SDK for Stateful Functions The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: .../flink/common/types/TypedValueUtil.java | 1 + .../protorouter/AutoRoutableProtobufRouter.java| 6 +- .../reqreply/PersistedRemoteFunctionValues.java| 3 +- .../PersistedRemoteFunctionValuesTest.java | 6 +- .../core/reqreply/RequestReplyFunctionTest.java| 2 + statefun-sdk-java/pom.xml | 19 +- .../java/com/google/protobuf/MoreByteStrings.java | 23 +- .../apache/flink/statefun/sdk/java}/Address.java | 16 +- .../statefun/sdk/java/AddressScopedStorage.java| 16 +- .../flink/statefun/sdk/java/ApiExtension.java | 20 +- .../apache/flink/statefun/sdk/java/Context.java| 31 +- .../flink/statefun/sdk/java}/Expiration.java | 8 +- .../flink/statefun/sdk/java/StatefulFunction.java | 26 +- .../statefun/sdk/java/StatefulFunctionSpec.java| 78 .../flink/statefun/sdk/java/StatefulFunctions.java | 36 +- .../apache/flink/statefun/sdk/java/TypeName.java | 59 ++- .../apache/flink/statefun/sdk/java/ValueSpec.java | 115 ++ .../statefun/sdk/java/annotations/Internal.java| 8 +- .../sdk/java/handler/ConcurrentContext.java| 147 +++ .../handler/ConcurrentRequestReplyHandler.java | 150 +++ .../statefun/sdk/java/handler/MoreFutures.java | 66 +++ .../statefun/sdk/java/handler/ProtoUtils.java | 97 + .../sdk/java/handler/RequestReplyHandler.java | 16 +- .../statefun/sdk/java/io/KafkaEgressMessage.java | 127 ++ 
.../statefun/sdk/java/io/KinesisEgressMessage.java | 146 +++ .../statefun/sdk/java/message/EgressMessage.java | 26 +- .../sdk/java/message/EgressMessageWrapper.java | 55 +++ .../flink/statefun/sdk/java/message/Message.java | 43 +- .../statefun/sdk/java/message/MessageBuilder.java | 113 + .../statefun/sdk/java/message/MessageWrapper.java | 133 ++ .../statefun/sdk/java/slice/ByteStringSlice.java | 80 .../flink/statefun/sdk/java/slice/Slice.java | 25 +- .../flink/statefun/sdk/java/slice/SliceOutput.java | 108 + .../statefun/sdk/java/slice/SliceProtobufUtil.java | 55 +++ .../flink/statefun/sdk/java/slice/Slices.java | 61 +++ .../storage/ConcurrentAddressScopedStorage.java| 347 .../storage/IllegalStorageAccessException.java | 10 +- .../sdk/java/storage/StateValueContexts.java | 131 ++ .../flink/statefun/sdk/java/types/SimpleType.java | 104 + .../apache/flink/statefun/sdk/java/types/Type.java | 17 +- .../sdk/java/types/TypeCharacteristics.java| 6 +- .../statefun/sdk/java/types/TypeSerializer.java| 9 +- .../flink/statefun/sdk/java/types/Types.java | 456 + .../handler/ConcurrentRequestReplyHandlerTest.java | 116 ++ .../statefun/sdk/java/handler/MoreFuturesTest.java | 93 + .../flink/statefun/sdk/java/handler/TestUtils.java | 96 + .../statefun/sdk/java/slice/SliceOutputTest.java | 144 +++ .../sdk/java/slice/SliceProtobufUtilTest.java | 20 +- .../ConcurrentAddressScopedStorageTest.java| 206 ++ .../sdk/java/storage/StateValueContextsTest.java | 150 +++ .../statefun/sdk/java/storage/TestMutableType.java | 77 .../sdk/java/types/SanityPrimitiveTypeTest.java| 194 + .../src/main/protobuf/sdk/request-reply.proto | 5 +- 53 files changed, 3919 insertions(+), 183 deletions(-) copy statefun-examples/statefun-ridesharing-example/statefun-ridesharing-example-simulator/src/main/java/org/apache/flink/statefun/examples/ridesharing/simulator/simulation/engine/LifecycleMessages.java => statefun-sdk-java/src/main/java/com/google/protobuf/MoreByteStrings.java (63%) copy 
{statefun-sdk/src/main/java/org/apache/flink/statefun/sdk => statefun-sdk-java/src/main/java/org/apache/flink/statefun/sdk/java}/Address.java (81%) copy statefun-flink/statefun-flink-core/src/main/java/org/apache/flink/statefun/flink/core/jsonmodule/FunctionSpec.java => statefun-sdk-java/src/main/java/org/apache/flink/statefun/sdk/java/AddressScopedStorage.java (77%) copy statefun-flink/statefun-flink-core/src/main/java/org/apache/flink/statefun/flink/core/common/ManagingResources.java => statefun-sdk-java/src/main/java/org/apache/flink/statefun/sdk/java/ApiExtension.java (67%) copy
[flink] branch master updated (41ad173 -> 7f1853d)
This is an automated email from the ASF dual-hosted git repository. chesnay pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git. from 41ad173 [hotfix][docs] Fix broken gh_link usage add 7f1853d [FLINK-21451][coordination] Remove JobID from TaskExecutionState No new revisions were added by this update. Summary of changes: .../TaskExecutionStateTransition.java | 5 --- .../apache/flink/runtime/jobmaster/JobMaster.java | 1 - .../flink/runtime/scheduler/SchedulerBase.java | 2 +- ...pdateSchedulerNgOnInternalFailuresListener.java | 9 +--- .../scheduler/adaptive/AdaptiveScheduler.java | 2 +- .../flink/runtime/taskexecutor/TaskExecutor.java | 1 - .../org/apache/flink/runtime/taskmanager/Task.java | 4 +- .../runtime/taskmanager/TaskExecutionState.java| 43 --- .../ExecutionGraphCheckpointCoordinatorTest.java | 4 +- .../executiongraph/ArchivedExecutionGraphTest.java | 1 - .../ExecutionGraphDeploymentTest.java | 6 +-- .../ExecutionGraphPartitionReleaseTest.java| 28 - .../scheduler/UpdatePartitionConsumersTest.java| 5 +-- .../jobmaster/JobMasterPartitionReleaseTest.java | 4 +- .../flink/runtime/jobmaster/JobMasterTest.java | 15 ++- .../DefaultSchedulerBatchSchedulingTest.java | 4 +- .../runtime/scheduler/DefaultSchedulerTest.java| 48 ++ .../runtime/scheduler/SchedulerTestingUtils.java | 19 +++-- .../scheduler/adaptive/AdaptiveSchedulerTest.java | 4 +- .../runtime/scheduler/adaptive/CancelingTest.java | 1 - .../runtime/scheduler/adaptive/ExecutingTest.java | 16 +++- .../runtime/scheduler/adaptive/FailingTest.java| 1 - .../taskmanager/TaskExecutionStateTest.java| 17 .../apache/flink/runtime/taskmanager/TaskTest.java | 1 - .../streaming/runtime/tasks/StreamTaskTest.java| 3 +- 25 files changed, 63 insertions(+), 181 deletions(-)
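The FLINK-21451 change above removes the JobID field from `TaskExecutionState`. A minimal sketch of the idea, using hypothetical, simplified stand-ins for Flink's runtime types (not the actual classes): the execution attempt id alone already identifies a task execution, so carrying a JobID alongside it in every state-transition message is redundant.

```java
import java.util.Objects;
import java.util.UUID;

// Hypothetical, simplified stand-in for Flink's ExecutionAttemptID:
// a globally unique id for one execution attempt of one task.
final class ExecutionAttemptID {
    private final UUID id = UUID.randomUUID();

    UUID value() { return id; }
}

enum ExecutionState { RUNNING, FINISHED, FAILED }

// Sketch of the post-FLINK-21451 shape: the state transition no longer
// carries a JobID; a receiver (e.g. the scheduler) that needs the owning
// job can resolve it from the attempt id.
final class TaskExecutionState {
    private final ExecutionAttemptID executionId;
    private final ExecutionState state;

    TaskExecutionState(ExecutionAttemptID executionId, ExecutionState state) {
        this.executionId = Objects.requireNonNull(executionId);
        this.state = Objects.requireNonNull(state);
    }

    ExecutionAttemptID getID() { return executionId; }

    ExecutionState getExecutionState() { return state; }
}
```

This matches the diffstat above, where most of the churn is test call sites dropping a now-unneeded JobID argument.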
[flink] branch master updated (fab2e55 -> 41ad173)
This is an automated email from the ASF dual-hosted git repository. aljoscha pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git. from fab2e55 [FLINK-19970] Update documentation regarding backwards compatibility add 41ad173 [hotfix][docs] Fix broken gh_link usage No new revisions were added by this update. Summary of changes: docs/content.zh/docs/dev/dataset/zip_elements_guide.md | 3 +-- docs/content.zh/docs/dev/datastream/overview.md| 12 ++-- docs/content.zh/docs/dev/table/functions/udfs.md | 2 +- docs/content.zh/docs/ops/metrics.md| 2 +- docs/content/docs/dev/dataset/zip_elements_guide.md| 3 +-- docs/content/docs/dev/datastream/overview.md | 13 ++--- docs/content/docs/ops/metrics.md | 2 +- 7 files changed, 17 insertions(+), 20 deletions(-)
[flink] branch master updated (e2fe4fb -> fab2e55)
This is an automated email from the ASF dual-hosted git repository. dwysakowicz pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git. from e2fe4fb [hotfix][table-planner-blink] Fix the failed FileSystemTableSinkTest add 4f6a94f [hotfix] Drop compatibility with Flink <= 1.5 add 5a22de9 [FLINK-19970] State leak in CEP Operators add fab2e55 [FLINK-19970] Update documentation regarding backwards compatibility No new revisions were added by this update. Summary of changes: docs/content.zh/docs/libs/cep.md | 23 +--- docs/content/docs/libs/cep.md | 28 + .../main/java/org/apache/flink/cep/nfa/NFA.java| 21 ++-- .../cep/nfa/aftermatch/AfterMatchSkipStrategy.java | 4 +- .../flink/cep/nfa/sharedbuffer/Lockable.java | 4 + .../flink/cep/nfa/sharedbuffer/SharedBuffer.java | 114 +- .../cep/nfa/sharedbuffer/SharedBufferAccessor.java | 44 +-- .../cep/nfa/sharedbuffer/SharedBufferNode.java | 32 -- .../sharedbuffer/SharedBufferNodeSerializer.java | 127 + .../SharedBufferNodeSerializerSnapshotV2.java | 39 +++ .../org/apache/flink/cep/operator/CepOperator.java | 27 + .../apache/flink/cep/NFASerializerUpgradeTest.java | 11 +- .../java/org/apache/flink/cep/nfa/NFAITCase.java | 31 + .../cep/nfa/sharedbuffer/SharedBufferTest.java | 40 +++ .../flink/cep/operator/CEPMigrationTest.java | 3 - ...cep-migration-after-branching-flink1.3-snapshot | Bin 21980 -> 0 bytes ...cep-migration-after-branching-flink1.4-snapshot | Bin 19058 -> 0 bytes ...cep-migration-after-branching-flink1.5-snapshot | Bin 19390 -> 0 bytes .../cep-migration-conditions-flink1.3-snapshot | Bin 22425 -> 0 bytes .../cep-migration-conditions-flink1.4-snapshot | Bin 19503 -> 0 bytes .../cep-migration-conditions-flink1.5-snapshot | Bin 19835 -> 0 bytes ...ion-single-pattern-afterwards-flink1.3-snapshot | Bin 19770 -> 0 bytes ...ion-single-pattern-afterwards-flink1.4-snapshot | Bin 16848 -> 0 bytes ...ion-single-pattern-afterwards-flink1.5-snapshot | Bin 17180 -> 0 bytes 
...igration-starting-new-pattern-flink1.3-snapshot | Bin 21788 -> 0 bytes ...igration-starting-new-pattern-flink1.4-snapshot | Bin 18866 -> 0 bytes ...igration-starting-new-pattern-flink1.5-snapshot | Bin 19198 -> 0 bytes 27 files changed, 374 insertions(+), 174 deletions(-) create mode 100644 flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/sharedbuffer/SharedBufferNodeSerializer.java copy flink-table/flink-table-planner/src/main/java/org/apache/flink/table/runtime/types/CRowSerializerSnapshot.java => flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/sharedbuffer/SharedBufferNodeSerializerSnapshotV2.java (52%) delete mode 100644 flink-libraries/flink-cep/src/test/resources/cep-migration-after-branching-flink1.3-snapshot delete mode 100644 flink-libraries/flink-cep/src/test/resources/cep-migration-after-branching-flink1.4-snapshot delete mode 100644 flink-libraries/flink-cep/src/test/resources/cep-migration-after-branching-flink1.5-snapshot delete mode 100644 flink-libraries/flink-cep/src/test/resources/cep-migration-conditions-flink1.3-snapshot delete mode 100644 flink-libraries/flink-cep/src/test/resources/cep-migration-conditions-flink1.4-snapshot delete mode 100644 flink-libraries/flink-cep/src/test/resources/cep-migration-conditions-flink1.5-snapshot delete mode 100644 flink-libraries/flink-cep/src/test/resources/cep-migration-single-pattern-afterwards-flink1.3-snapshot delete mode 100644 flink-libraries/flink-cep/src/test/resources/cep-migration-single-pattern-afterwards-flink1.4-snapshot delete mode 100644 flink-libraries/flink-cep/src/test/resources/cep-migration-single-pattern-afterwards-flink1.5-snapshot delete mode 100644 flink-libraries/flink-cep/src/test/resources/cep-migration-starting-new-pattern-flink1.3-snapshot delete mode 100644 flink-libraries/flink-cep/src/test/resources/cep-migration-starting-new-pattern-flink1.4-snapshot delete mode 100644 
flink-libraries/flink-cep/src/test/resources/cep-migration-starting-new-pattern-flink1.5-snapshot
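The FLINK-19970 diffstat above touches `Lockable` and `SharedBuffer`, which implement reference counting for shared CEP state. As an illustrative sketch (not Flink's actual code), the leak-avoidance idea is: each shared node tracks how many partial matches still reference it, and the entry is evicted from state once the count drops to zero.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative refcount wrapper, loosely modeled on CEP's Lockable.
final class Lockable<T> {
    private final T element;
    private int refCount;

    Lockable(T element, int refCount) {
        this.element = element;
        this.refCount = refCount;
    }

    void lock() { refCount++; }

    /** Returns true when the last reference was released. */
    boolean release() { return --refCount <= 0; }

    T getElement() { return element; }
}

// Illustrative buffer: entries are dropped as soon as they are unreferenced,
// which is what prevents state from accumulating forever (the "state leak").
final class SharedBuffer<K, V> {
    private final Map<K, Lockable<V>> entries = new HashMap<>();

    void put(K key, V value) { entries.put(key, new Lockable<>(value, 1)); }

    void lock(K key) { entries.get(key).lock(); }

    void release(K key) {
        Lockable<V> entry = entries.get(key);
        if (entry != null && entry.release()) {
            entries.remove(key); // evict once no partial match references it
        }
    }

    int size() { return entries.size(); }
}
```

The commit also adds a `SharedBufferNodeSerializerSnapshotV2`, since changing what is stored per node requires a new serializer snapshot for savepoint compatibility.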
[flink] branch master updated: [hotfix][table-planner-blink] Fix the failed FileSystemTableSinkTest
This is an automated email from the ASF dual-hosted git repository. jark pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git The following commit(s) were added to refs/heads/master by this push: new e2fe4fb [hotfix][table-planner-blink] Fix the failed FileSystemTableSinkTest e2fe4fb is described below commit e2fe4fb2c9e121385b01d65392ed7e4d2df4e4b6 Author: Jark Wu AuthorDate: Wed Feb 24 16:16:20 2021 +0800 [hotfix][table-planner-blink] Fix the failed FileSystemTableSinkTest --- ...stFileSystemTableSinkWithParallelismInBatch.out | 19 +++-- ...stemTableSinkWithParallelismInStreamingSql0.out | 23 +--- ...stemTableSinkWithParallelismInStreamingSql1.out | 31 +++--- 3 files changed, 20 insertions(+), 53 deletions(-) diff --git a/flink-table/flink-table-planner-blink/src/test/resources/explain/filesystem/testFileSystemTableSinkWithParallelismInBatch.out b/flink-table/flink-table-planner-blink/src/test/resources/explain/filesystem/testFileSystemTableSinkWithParallelismInBatch.out index 4cf716b..81df7d7 100644 --- a/flink-table/flink-table-planner-blink/src/test/resources/explain/filesystem/testFileSystemTableSinkWithParallelismInBatch.out +++ b/flink-table/flink-table-planner-blink/src/test/resources/explain/filesystem/testFileSystemTableSinkWithParallelismInBatch.out @@ -14,32 +14,21 @@ Sink(table=[default_catalog.default_database.test_sink_table], fields=[id, real_ == Physical Execution Plan == { "nodes" : [ { -"id" : 1, +"id" : , "type" : "Source: TableSourceScan(table=[[default_catalog, default_database, test_source_table]], fields=[id, real_col, double_col, decimal_col])", "pact" : "Data Source", "contents" : "Source: TableSourceScan(table=[[default_catalog, default_database, test_source_table]], fields=[id, real_col, double_col, decimal_col])", "parallelism" : 1 }, { -"id" : 2, -"type" : "Filter", -"pact" : "Operator", -"contents" : "Filter", -"parallelism" : 8, -"predecessors" : [ { - "id" : 1, - "ship_strategy" : 
"REBALANCE", - "side" : "second" -} ] - }, { -"id" : 3, +"id" : , "type" : "Sink: Filesystem", "pact" : "Data Sink", "contents" : "Sink: Filesystem", "parallelism" : 5, "predecessors" : [ { - "id" : 2, + "id" : , "ship_strategy" : "REBALANCE", "side" : "second" } ] } ] -} +} \ No newline at end of file diff --git a/flink-table/flink-table-planner-blink/src/test/resources/explain/filesystem/testFileSystemTableSinkWithParallelismInStreamingSql0.out b/flink-table/flink-table-planner-blink/src/test/resources/explain/filesystem/testFileSystemTableSinkWithParallelismInStreamingSql0.out index 4be2d33..6ab8bc2 100644 --- a/flink-table/flink-table-planner-blink/src/test/resources/explain/filesystem/testFileSystemTableSinkWithParallelismInStreamingSql0.out +++ b/flink-table/flink-table-planner-blink/src/test/resources/explain/filesystem/testFileSystemTableSinkWithParallelismInStreamingSql0.out @@ -14,43 +14,32 @@ Sink(table=[default_catalog.default_database.test_sink_table], fields=[id, real_ == Physical Execution Plan == { "nodes" : [ { -"id" : 1, +"id" : , "type" : "Source: TableSourceScan(table=[[default_catalog, default_database, test_source_table]], fields=[id, real_col, double_col, decimal_col])", "pact" : "Data Source", "contents" : "Source: TableSourceScan(table=[[default_catalog, default_database, test_source_table]], fields=[id, real_col, double_col, decimal_col])", "parallelism" : 1 }, { -"id" : 2, -"type" : "Filter", -"pact" : "Operator", -"contents" : "Filter", -"parallelism" : 8, -"predecessors" : [ { - "id" : 1, - "ship_strategy" : "REBALANCE", - "side" : "second" -} ] - }, { -"id" : 3, +"id" : , "type" : "StreamingFileWriter", "pact" : "Operator", "contents" : "StreamingFileWriter", "parallelism" : 5, "predecessors" : [ { - "id" : 2, + "id" : , "ship_strategy" : "REBALANCE", "side" : "second" } ] }, { -"id" : 4, +"id" : , "type" : "Sink: end", "pact" : "Data Sink", "contents" : "Sink: end", "parallelism" : 1, "predecessors" : [ { - "id" : 3, + "id" : , 
"ship_strategy" : "REBALANCE", "side" : "second" } ] } ] -} +} \ No newline at end of file diff --git a/flink-table/flink-table-planner-blink/src/test/resources/explain/filesystem/testFileSystemTableSinkWithParallelismInStreamingSql1.out b/flink-table/flink-table-planner-blink/src/test/resources/explain/filesystem/testFileSystemTableSinkWithParallelismInStreamingSql1.out index b63945f..fcfe456 100644 --- a/flink-table/flink-table-planner-blink/src/test/resources/explain/filesystem/testFileSystemTableSinkWithParallelismInStreamingSql1.out +++
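The `.out` diffs above replace concrete node ids (`"id" : 1`, `"id" : 2`, ...) with a blank placeholder, so the expected physical-plan JSON no longer depends on how ids happen to be assigned. A hedged sketch of how a test could apply the same normalization to the actual plan before comparing; `PlanNormalizer` is a hypothetical helper, not part of Flink's test utilities.

```java
import java.util.regex.Pattern;

// Hypothetical helper: blank out operator ids in plan JSON so expected and
// actual plans can be compared without depending on id assignment order.
final class PlanNormalizer {
    private static final Pattern ID = Pattern.compile("\"id\"\\s*:\\s*\\d+");

    static String normalizeIds(String planJson) {
        return ID.matcher(planJson).replaceAll("\"id\" : ");
    }
}
```

With this, regenerating the plan after an optimizer change (here: the dropped `Filter` node) only shifts structure, not ids, in the expected files.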