[flink-web] 01/02: clarified flink config in log4j cve blog post
This is an automated email from the ASF dual-hosted git repository.

sjwiesman pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 59d18f50205c45e50f9bf2beb731579b6d42ed54
Author: Konstantin Knauf
AuthorDate: Fri Dec 10 20:30:56 2021 +0100

    clarified flink config in log4j cve blog post
---
 _posts/2021-12-10-log4j-cve.md | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/_posts/2021-12-10-log4j-cve.md b/_posts/2021-12-10-log4j-cve.md
index deaafe0..163036d 100644
--- a/_posts/2021-12-10-log4j-cve.md
+++ b/_posts/2021-12-10-log4j-cve.md
@@ -13,8 +13,15 @@
 It is by now tracked under [CVE-2021-44228](https://nvd.nist.gov/vuln/detail/CVE-2021-44228).

 Apache Flink is bundling a version of Log4j that is affected by this vulnerability.
 We recommend users to follow the [advisory](https://logging.apache.org/log4j/2.x/security.html) of the Apache Log4j Community.
-For Apache Flink this currently translates to "setting system property `log4j2.formatMsgNoLookups` to `true`" until Log4j has been upgraded to 2.15.0 in Apache Flink.
+For Apache Flink this currently translates to setting the following property in your flink-conf.yaml:
+
+```yaml
+env.java.opts: -Dlog4j2.formatMsgNoLookups=true
+```
+
+If you are already setting `env.java.opts.jobmanager`, `env.java.opts.taskmanager`, `env.java.opts.client`, or `env.java.opts.historyserver`, you should instead add the system property to those existing parameter lists.
+
+As soon as Log4j has been upgraded to 2.15.0 in Apache Flink, this is not necessary anymore.
 This effort is tracked in [FLINK-25240](https://issues.apache.org/jira/browse/FLINK-25240).
 It will be included in Flink 1.15.0, Flink 1.14.1 and Flink 1.13.3.
 We expect Flink 1.14.1 to be released in the next 1-2 weeks.
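For readers who already use the component-specific JVM option keys mentioned in the commit above, the "append to existing parameter lists" advice can be sketched concretely. The flink-conf.yaml fragment below is hypothetical — the `-XX:+UseG1GC` flag stands in for whatever options you already have — and only illustrates appending the mitigation flag to each existing list instead of setting `env.java.opts`:

```yaml
# Hypothetical existing per-component JVM options (the GC flag is a
# placeholder); the Log4j mitigation flag is appended to each list.
env.java.opts.jobmanager: -XX:+UseG1GC -Dlog4j2.formatMsgNoLookups=true
env.java.opts.taskmanager: -XX:+UseG1GC -Dlog4j2.formatMsgNoLookups=true
env.java.opts.client: -XX:+UseG1GC -Dlog4j2.formatMsgNoLookups=true
env.java.opts.historyserver: -XX:+UseG1GC -Dlog4j2.formatMsgNoLookups=true
```

Per the post's guidance, the flag should end up on every JVM Flink starts, whichever of these keys you use.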
[flink-web] 02/02: rebuild website
This is an automated email from the ASF dual-hosted git repository.

sjwiesman pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit e89e37d16c1438847a35fe987b0d2ab4bd61ac43
Author: Konstantin Knauf
AuthorDate: Fri Dec 10 20:33:49 2021 +0100

    rebuild website
---
 content/2021/12/10/log4j-cve.html | 9 +++++++--
 content/blog/feed.xml             | 9 +++++++--
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/content/2021/12/10/log4j-cve.html b/content/2021/12/10/log4j-cve.html
index e7f27e9..11b8a9e 100644
--- a/content/2021/12/10/log4j-cve.html
+++ b/content/2021/12/10/log4j-cve.html
@@ -206,9 +206,14 @@
 It is by now tracked under <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-44228">CVE-2021-44228</a>.</p>

 <p>Apache Flink is bundling a version of Log4j that is affected by this vulnerability.
 We recommend users to follow the <a href="https://logging.apache.org/log4j/2.x/security.html">advisory</a> of the Apache Log4j Community.
-For Apache Flink this currently translates to “setting system property <code>log4j2.formatMsgNoLookups</code> to <code>true</code>” until Log4j has been upgraded to 2.15.0 in Apache Flink.</p>
+For Apache Flink this currently translates to setting the following property in your flink-conf.yaml:</p>

-<p>This effort is tracked in <a href="https://issues.apache.org/jira/browse/FLINK-25240">FLINK-25240</a>.
+<pre><code>env.java.opts: -Dlog4j2.formatMsgNoLookups=true</code></pre>
+
+<p>If you are already setting <code>env.java.opts.jobmanager</code>, <code>env.java.opts.taskmanager</code>, <code>env.java.opts.client</code>, or <code>env.java.opts.historyserver</code>, you should instead add the system property to those existing parameter lists.</p>
+
+<p>As soon as Log4j has been upgraded to 2.15.0 in Apache Flink, this is not necessary anymore.
+This effort is tracked in <a href="https://issues.apache.org/jira/browse/FLINK-25240">FLINK-25240</a>.
 It will be included in Flink 1.15.0, Flink 1.14.1 and Flink 1.13.3.
 We expect Flink 1.14.1 to be released in the next 1-2 weeks.
 The other releases will follow in their regular cadence.</p>
diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index daa45f3..c666cb2 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -13,9 +13,14 @@
 It is by now tracked under <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-44228">CVE-2021-44228</a>.</p>

 <p>Apache Flink is bundling a version of Log4j that is affected by this vulnerability.
 We recommend users to follow the <a href="https://logging.apache.org/log4j/2.x/security.html">advisory</a> of the Apache Log4j Community.
-For Apache Flink this currently translates to “setting system property <code>log4j2.formatMsgNoLookups</code> to <code>true</code>” until Log4j has been upgraded to 2.15.0 in Apache Flink.</p>
+For Apache Flink this currently translates to setting the following property in your flink-conf.yaml:</p>

-<p>This effort is tracked in <a href="https://issues.apache.org/jira/browse/FLINK-25240">FLINK-25240</a>.
+<div class="highlight"><pre><code class="language-yaml"><span class="l-Scalar-Plain">env.java.opts</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">-Dlog4j2.formatMsgNoLookups=true</span></code></pre></div>
+
+<p>If you are already setting <code>env.java.opts.jobmanager</code>, <code>env.java.opts.taskmanager</code>, <code>env.java.opts.client</code>, or <code>env.java.opts.historyserver</code> you should instead add the system property to those existing parameter lists.</p>
+
+<p>As soon as Log4j has been upgraded to 2.15.0 in Apache Flink, this is not necessary anymore.
+This effort is tracked in <a href="https://issues.apache.org/jira/browse/FLINK-25240">FLINK-25240</a>.
 It will be included in Flink 1.15.0, Flink 1.14.1 and Flink 1.13.3.
 We expect Flink 1.14.1 to be released in the next 1-2 weeks.
 The other releases will follow in their regular cadence.</p>
[flink-web] branch asf-site updated (f00f0e8 -> e89e37d)
This is an automated email from the ASF dual-hosted git repository.

sjwiesman pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.

 from f00f0e8  rebuild website
  new 59d18f5  clarified flink config in log4j cve blog post
  new e89e37d  rebuild website

The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference.

Summary of changes:
 _posts/2021-12-10-log4j-cve.md    | 9 ++++++++-
 content/2021/12/10/log4j-cve.html | 9 +++++++--
 content/blog/feed.xml             | 9 +++++++--
 3 files changed, 22 insertions(+), 5 deletions(-)
[flink-web] branch asf-site updated (c69db27 -> f00f0e8)
This is an automated email from the ASF dual-hosted git repository.

knaufk pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.

 from c69db27  rebuild website
  new b17c8c5  [hotfix] fix yptos in Log4j CVE blog post
  new f00f0e8  rebuild website

The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference.

Summary of changes:
 _posts/2021-12-10-log4j-cve.md    | 6 +++---
 content/2021/12/10/log4j-cve.html | 4 ++--
 content/blog/feed.xml             | 4 ++--
 content/blog/index.html           | 2 +-
 content/index.html                | 2 +-
 content/zh/index.html             | 2 +-
 6 files changed, 10 insertions(+), 10 deletions(-)
[flink-web] 02/02: rebuild website
This is an automated email from the ASF dual-hosted git repository.

knaufk pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit f00f0e87be5cb9bfe1935611667da7ba42e616ce
Author: Konstantin Knauf
AuthorDate: Fri Dec 10 17:37:00 2021 +0100

    rebuild website
---
 content/2021/12/10/log4j-cve.html | 4 ++--
 content/blog/feed.xml             | 4 ++--
 content/blog/index.html           | 2 +-
 content/index.html                | 2 +-
 content/zh/index.html             | 2 +-
 5 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/content/2021/12/10/log4j-cve.html b/content/2021/12/10/log4j-cve.html
index bf61d5e..e7f27e9 100644
--- a/content/2021/12/10/log4j-cve.html
+++ b/content/2021/12/10/log4j-cve.html
@@ -204,8 +204,8 @@
 Yesterday, a new Zero Day for Apache Log4j was <a href="https://www.cyberkendra.com/2021/12/apache-log4j-vulnerability-details-and.html">reported</a>.
 It is by now tracked under <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-44228">CVE-2021-44228</a>.</p>

-<p>Apache Flink is bundling a version of Log4j that is affeced by this vulnerability.
-We recommend users to follow the <a href="https://logging.apache.org/log4j/2.x/security.html">adivsory</a> of the Apache Log4j Community.
+<p>Apache Flink is bundling a version of Log4j that is affected by this vulnerability.
+We recommend users to follow the <a href="https://logging.apache.org/log4j/2.x/security.html">advisory</a> of the Apache Log4j Community.
 For Apache Flink this currently translates to “setting system property <code>log4j2.formatMsgNoLookups</code> to <code>true</code>” until Log4j has been upgraded to 2.15.0 in Apache Flink.</p>

 <p>This effort is tracked in <a href="https://issues.apache.org/jira/browse/FLINK-25240">FLINK-25240</a>.

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 56e5931..daa45f3 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -11,8 +11,8 @@
 Yesterday, a new Zero Day for Apache Log4j was <a href="https://www.cyberkendra.com/2021/12/apache-log4j-vulnerability-details-and.html">reported</a>.
 It is by now tracked under <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-44228">CVE-2021-44228</a>.</p>

-<p>Apache Flink is bundling a version of Log4j that is affeced by this vulnerability.
-We recommend users to follow the <a href="https://logging.apache.org/log4j/2.x/security.html">adivsory</a> of the Apache Log4j Community.
+<p>Apache Flink is bundling a version of Log4j that is affected by this vulnerability.
+We recommend users to follow the <a href="https://logging.apache.org/log4j/2.x/security.html">advisory</a> of the Apache Log4j Community.
 For Apache Flink this currently translates to “setting system property <code>log4j2.formatMsgNoLookups</code> to <code>true</code>” until Log4j has been upgraded to 2.15.0 in Apache Flink.</p>

 <p>This effort is tracked in <a href="https://issues.apache.org/jira/browse/FLINK-25240">FLINK-25240</a>.

diff --git a/content/blog/index.html b/content/blog/index.html
index a47d95c..f7c959d 100644
--- a/content/blog/index.html
+++ b/content/blog/index.html
@@ -206,7 +206,7 @@
 10 Dec 2021 Konstantin Knauf

-Advise on Apache Log4j Zero Day (CVE-2021-44228)
+Apache Flink is affected by an Apache Log4j Zero Day (CVE-2021-44228). This blog post contains advise for users on how to address this.

 Continue reading

diff --git a/content/index.html b/content/index.html
index fcc6015..a6ee695 100644
--- a/content/index.html
+++ b/content/index.html
@@ -366,7 +366,7 @@
 Advise on Apache Log4j Zero Day (CVE-2021-44228)

-Advise on Apache Log4j Zero Day (CVE-2021-44228)
+Apache Flink is affected by an Apache Log4j Zero Day (CVE-2021-44228). This blog post contains advise for users on how to address this.

 Flink Backward - The Apache Flink Retrospective
 A look back at the development cycle for Flink 1.14

diff --git a/content/zh/index.html b/content/zh/index.html
index 1007408..655d399 100644
--- a/content/zh/index.html
+++ b/content/zh/index.html
@@ -363,7 +363,7 @@
 Advise on Apache Log4j Zero Day (CVE-2021-44228)

-Advise on Apache Log4j Zero Day (CVE-2021-44228)
+Apache Flink is affected by an Apache Log4j Zero Day (CVE-2021-44228). This blog post contains advise for users on how to address this.

 Flink Backward - The Apache Flink Retrospective
 A look back at the development cycle for Flink 1.14
[flink-web] 01/02: [hotfix] fix yptos in Log4j CVE blog post
This is an automated email from the ASF dual-hosted git repository.

knaufk pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit b17c8c568053ef2c2731beec9e46fc6b1ca9e71f
Author: Konstantin Knauf
AuthorDate: Fri Dec 10 17:36:01 2021 +0100

    [hotfix] fix yptos in Log4j CVE blog post
---
 _posts/2021-12-10-log4j-cve.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/_posts/2021-12-10-log4j-cve.md b/_posts/2021-12-10-log4j-cve.md
index 574ec66..deaafe0 100644
--- a/_posts/2021-12-10-log4j-cve.md
+++ b/_posts/2021-12-10-log4j-cve.md
@@ -5,14 +5,14 @@
 date: 2021-12-10 00:00:00
 authors:
 - knaufk:
   name: "Konstantin Knauf"
-excerpt: "Advise on Apache Log4j Zero Day (CVE-2021-44228)"
+excerpt: "Apache Flink is affected by an Apache Log4j Zero Day (CVE-2021-44228). This blog post contains advise for users on how to address this."
 ---

 Yesterday, a new Zero Day for Apache Log4j was [reported](https://www.cyberkendra.com/2021/12/apache-log4j-vulnerability-details-and.html).
 It is by now tracked under [CVE-2021-44228](https://nvd.nist.gov/vuln/detail/CVE-2021-44228).

-Apache Flink is bundling a version of Log4j that is affeced by this vulnerability.
-We recommend users to follow the [adivsory](https://logging.apache.org/log4j/2.x/security.html) of the Apache Log4j Community.
+Apache Flink is bundling a version of Log4j that is affected by this vulnerability.
+We recommend users to follow the [advisory](https://logging.apache.org/log4j/2.x/security.html) of the Apache Log4j Community.
 For Apache Flink this currently translates to "setting system property `log4j2.formatMsgNoLookups` to `true`" until Log4j has been upgraded to 2.15.0 in Apache Flink.

 This effort is tracked in [FLINK-25240](https://issues.apache.org/jira/browse/FLINK-25240).
[flink] branch master updated (5776483 -> cf1e8c3)
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.

 from 5776483  [FLINK-25126][kafka] Reset internal transaction state of FlinkKafkaInternalProducer if transaction finalization fails
  add cf1e8c3  [FLINK-25257][TE] Let TaskExecutor use TaskExecutorBlobService interface instead of concrete BlobCacheService implementation

No new revisions were added by this update.

Summary of changes:
 .../flink/runtime/blob/BlobCacheService.java       |  2 +-
 .../JobPermanentBlobService.java}                  | 32 ++-
 .../flink/runtime/blob/PermanentBlobCache.java     |  4 ++-
 .../runtime/blob/TaskExecutorBlobService.java}     | 20 +++-
 .../flink/runtime/taskexecutor/TaskExecutor.java   | 24 ---
 .../runtime/taskexecutor/TaskManagerRunner.java    |  7 +++--
 ...rvice.java => NoOpJobPermanentBlobService.java} | 20 +---
 .../runtime/blob/NoOpTaskExecutorBlobService.java} | 36 +++---
 ...cutorExecutionDeploymentReconciliationTest.java |  5 ++-
 .../TaskExecutorPartitionLifecycleTest.java        |  5 ++-
 .../taskexecutor/TaskExecutorSlotLifetimeTest.java |  5 ++-
 .../runtime/taskexecutor/TaskExecutorTest.java     | 17 ++
 .../taskexecutor/TaskManagerRunnerStartupTest.java |  5 ++-
 .../TaskSubmissionTestEnvironment.java             | 12 ++++
 .../runtime/taskexecutor/TestingTaskExecutor.java  |  6 ++--
 15 files changed, 104 insertions(+), 96 deletions(-)
 copy flink-runtime/src/main/java/org/apache/flink/runtime/{jobmanager/ThrowingJobGraphWriter.java => blob/JobPermanentBlobService.java} (59%)
 copy flink-runtime/src/{test/java/org/apache/flink/runtime/checkpoint/NoOpFailJobCall.java => main/java/org/apache/flink/runtime/blob/TaskExecutorBlobService.java} (62%)
 copy flink-runtime/src/test/java/org/apache/flink/runtime/blob/{VoidPermanentBlobService.java => NoOpJobPermanentBlobService.java} (69%)
 copy flink-runtime/src/{main/java/org/apache/flink/runtime/blob/VoidBlobStore.java => test/java/org/apache/flink/runtime/blob/NoOpTaskExecutorBlobService.java} (53%)
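The idea behind the FLINK-25257 change above — having the TaskExecutor depend on a narrow blob-service interface rather than the concrete BlobCacheService — can be sketched in miniature. All names below are simplified stand-ins for illustration, not Flink's actual signatures:

```java
// Sketch of programming against an interface: the executor only needs
// blob lookup, so it takes a small interface rather than the full
// cache service; tests can then substitute any other implementation.
interface TaskExecutorBlobServiceSketch {
    byte[] getBlob(String key);
}

final class BlobCacheServiceSketch implements TaskExecutorBlobServiceSketch {
    private final java.util.Map<String, byte[]> cache = new java.util.HashMap<>();

    void put(String key, byte[] data) {
        cache.put(key, data);
    }

    @Override
    public byte[] getBlob(String key) {
        return cache.get(key); // null if the blob is unknown
    }
}

final class TaskExecutorSketch {
    private final TaskExecutorBlobServiceSketch blobService; // interface, not concrete type

    TaskExecutorSketch(TaskExecutorBlobServiceSketch blobService) {
        this.blobService = blobService;
    }

    byte[] fetch(String key) {
        return blobService.getBlob(key);
    }
}
```

The payoff, visible in the commit's test-file churn, is that test environments can hand the executor a no-op implementation instead of wiring up a real cache service.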
[flink-web] branch asf-site updated: rebuild website
This is an automated email from the ASF dual-hosted git repository.

knaufk pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

The following commit(s) were added to refs/heads/asf-site by this push:
 new c69db27  rebuild website
c69db27 is described below

commit c69db27156fdbec781bb01c5275a7aa0b5e121c3
Author: Konstantin Knauf
AuthorDate: Fri Dec 10 17:25:45 2021 +0100

    rebuild website
---
 content/{index.html => 2021/12/10/log4j-cve.html} | 276 +++---
 content/blog/feed.xml                             | 153 ++--
 content/blog/index.html                           |  38 +--
 content/blog/page10/index.html                    |  38 +--
 content/blog/page11/index.html                    |  40 ++--
 content/blog/page12/index.html                    |  40 ++--
 content/blog/page13/index.html                    |  40 ++--
 content/blog/page14/index.html                    |  40 ++--
 content/blog/page15/index.html                    |  43 ++--
 content/blog/page16/index.html                    |  43 ++--
 content/blog/page17/index.html                    |  25 ++
 content/blog/page2/index.html                     |  38 ++-
 content/blog/page3/index.html                     |  36 ++-
 content/blog/page4/index.html                     |  36 ++-
 content/blog/page5/index.html                     |  36 ++-
 content/blog/page6/index.html                     |  36 ++-
 content/blog/page7/index.html                     |  38 +--
 content/blog/page8/index.html                     |  41 ++--
 content/blog/page9/index.html                     |  39 ++-
 content/index.html                                |   6 +-
 content/zh/index.html                             |   6 +-
 21 files changed, 471 insertions(+), 617 deletions(-)

diff --git a/content/index.html b/content/2021/12/10/log4j-cve.html
similarity index 54%
copy from content/index.html
copy to content/2021/12/10/log4j-cve.html
index b169a83..bf61d5e 100644
--- a/content/index.html
+++ b/content/2021/12/10/log4j-cve.html
@@ -5,7 +5,7 @@
-<title>Apache Flink: Stateful Computations over Data Streams</title>
+<title>Apache Flink: Advise on Apache Log4j Zero Day (CVE-2021-44228)</title>

-<h1>Apache Flink® — Stateful Computations over Data Streams</h1>
-(removed copied home-page feature tiles: All streaming use cases; Guaranteed correctness; Layered APIs; Operational Focus; Scales to any use case; Excellent Performance)
+<h1>Advise on Apache Log4j Zero Day (CVE-2021-44228)</h1>
+
+<p>10 Dec 2021 Konstantin Knauf</p>
+
+<p>Yesterday, a new Zero Day for Apache Log4j was <a href="https://www.cyberkendra.com/2021/12/apache-log4j-vulnerability-details-and.html">reported</a>.
+It is by now tracked under <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-44228">CVE-2021-44228</a>.</p>
+
+<p>Apache Flink is bundling a version of Log4j that is affeced by this vulnerability.
+We recommend users to follow the <a href="https://logging.apache.org/log4j/2.x/security.html">adivsory</a> of the Apache Log4j Community.
+For Apache Flink this currently translates to “setting system property <code>log4j2.formatMsgNoLookups</code> to <code>true</code>” until Log4j has been upgraded to 2.15.0 in Apache Flink.</p>
+
+<p>This effort is tracked in <a href="https://issues.apache.org/jira/browse/FLINK-25240">FLINK-25240</a>.
+It will be included in Flink 1.15.0, Flink 1.14.1 and Flink 1.13.3.
+We expect Flink 1.14.1 to be released in the next 1-2 weeks.
+The other releases will follow in their regular cadence.</p>
[flink-web] branch asf-site updated: Add blog post about Log4j Zero Day
This is an automated email from the ASF dual-hosted git repository.

knaufk pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

The following commit(s) were added to refs/heads/asf-site by this push:
 new 4c7ca4b  Add blog post about Log4j Zero Day
4c7ca4b is described below

commit 4c7ca4b1c1a28d70501ee5c25d314a5cde713ce1
Author: Konstantin Knauf
AuthorDate: Fri Dec 10 16:15:34 2021 +0100

    Add blog post about Log4j Zero Day
---
 _posts/2021-12-10-log4j-cve.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/_posts/2021-12-10-log4j-cve.md b/_posts/2021-12-10-log4j-cve.md
new file mode 100644
index 0000000..574ec66
--- /dev/null
+++ b/_posts/2021-12-10-log4j-cve.md
@@ -0,0 +1,21 @@
+---
+layout: post
+title: "Advise on Apache Log4j Zero Day (CVE-2021-44228)"
+date: 2021-12-10 00:00:00
+authors:
+- knaufk:
+  name: "Konstantin Knauf"
+excerpt: "Advise on Apache Log4j Zero Day (CVE-2021-44228)"
+---
+
+Yesterday, a new Zero Day for Apache Log4j was [reported](https://www.cyberkendra.com/2021/12/apache-log4j-vulnerability-details-and.html).
+It is by now tracked under [CVE-2021-44228](https://nvd.nist.gov/vuln/detail/CVE-2021-44228).
+
+Apache Flink is bundling a version of Log4j that is affeced by this vulnerability.
+We recommend users to follow the [adivsory](https://logging.apache.org/log4j/2.x/security.html) of the Apache Log4j Community.
+For Apache Flink this currently translates to "setting system property `log4j2.formatMsgNoLookups` to `true`" until Log4j has been upgraded to 2.15.0 in Apache Flink.
+
+This effort is tracked in [FLINK-25240](https://issues.apache.org/jira/browse/FLINK-25240).
+It will be included in Flink 1.15.0, Flink 1.14.1 and Flink 1.13.3.
+We expect Flink 1.14.1 to be released in the next 1-2 weeks.
+The other releases will follow in their regular cadence.
[flink] branch master updated: [FLINK-25126][kafka] Reset internal transaction state of FlinkKafkaInternalProducer if transaction finalization fails
This is an automated email from the ASF dual-hosted git repository.

fpaul pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

The following commit(s) were added to refs/heads/master by this push:
 new 5776483  [FLINK-25126][kafka] Reset internal transaction state of FlinkKafkaInternalProducer if transaction finalization fails
5776483 is described below

commit 577648379c2abb429259ac1a46ca6a04550f3dbd
Author: Fabian Paul
AuthorDate: Fri Dec 3 15:07:05 2021 +0100

    [FLINK-25126][kafka] Reset internal transaction state of FlinkKafkaInternalProducer if transaction finalization fails

    In the KafkaCommitter we retry transactions if they failed during committing. Since we reuse the KafkaProducers, we update the used transactionalId to continue committing other transactions. To prevent accidental overwrites, we track the transaction state inside the FlinkKafkaInternalProducer.

    Before this change, the state was not reset on a failure during transaction finalization, and setting a new transactionalId failed. The state is now always reset, regardless of whether finalizing the transaction (commit or abort) fails.
--- .../kafka/sink/FlinkKafkaInternalProducer.java | 8 ++- .../sink/FlinkKafkaInternalProducerITCase.java | 82 -- 2 files changed, 64 insertions(+), 26 deletions(-) diff --git a/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/FlinkKafkaInternalProducer.java b/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/FlinkKafkaInternalProducer.java index 65616d2..a023cdd 100644 --- a/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/FlinkKafkaInternalProducer.java +++ b/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/FlinkKafkaInternalProducer.java @@ -85,16 +85,16 @@ class FlinkKafkaInternalProducer extends KafkaProducer { public void abortTransaction() throws ProducerFencedException { LOG.debug("abortTransaction {}", transactionalId); checkState(inTransaction, "Transaction was not started"); -super.abortTransaction(); inTransaction = false; +super.abortTransaction(); } @Override public void commitTransaction() throws ProducerFencedException { LOG.debug("commitTransaction {}", transactionalId); checkState(inTransaction, "Transaction was not started"); -super.commitTransaction(); inTransaction = false; +super.commitTransaction(); } public boolean isInTransaction() { @@ -152,7 +152,9 @@ class FlinkKafkaInternalProducer extends KafkaProducer { public void setTransactionId(String transactionalId) { if (!transactionalId.equals(this.transactionalId)) { -checkState(!inTransaction); +checkState( +!inTransaction, +String.format("Another transaction %s is still open.", transactionalId)); LOG.debug("Change transaction id from {} to {}", this.transactionalId, transactionalId); Object transactionManager = getTransactionManager(); synchronized (transactionManager) { diff --git a/flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/FlinkKafkaInternalProducerITCase.java 
b/flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/FlinkKafkaInternalProducerITCase.java index 0a68433..ca2c6b7 100644 --- a/flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/FlinkKafkaInternalProducerITCase.java +++ b/flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/FlinkKafkaInternalProducerITCase.java @@ -19,6 +19,8 @@ package org.apache.flink.connector.kafka.sink; import org.apache.flink.util.TestLogger; +import org.apache.flink.shaded.guava30.com.google.common.collect.Lists; + import org.apache.kafka.clients.CommonClientConfigs; import org.apache.kafka.clients.consumer.ConsumerConfig; import org.apache.kafka.clients.consumer.ConsumerRecords; @@ -26,9 +28,12 @@ import org.apache.kafka.clients.consumer.KafkaConsumer; import org.apache.kafka.clients.producer.ProducerConfig; import org.apache.kafka.clients.producer.ProducerRecord; import org.apache.kafka.common.TopicPartition; +import org.apache.kafka.common.errors.ProducerFencedException; import org.apache.kafka.common.serialization.StringDeserializer; import org.apache.kafka.common.serialization.StringSerializer; import org.junit.jupiter.api.Test; +import org.junit.jupiter.params.ParameterizedTest; +import org.junit.jupiter.params.provider.MethodSource; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.testcontainers.containers.KafkaContainer; @@ -38,18 +43,17 @@ import
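The core of the fix in the diff above is ordering: the producer's in-transaction flag is cleared before delegating to the Kafka call that may throw, so the producer remains reusable with a new transactional id afterwards. A minimal model of that ordering, using hypothetical stand-in names rather than the real KafkaProducer API:

```java
// Minimal model of the FLINK-25126 ordering fix: reset internal state
// first, then attempt the finalization that may fail.
final class TxnProducerSketch {
    boolean inTransaction;
    boolean failCommit; // simulates a broker-side commit failure

    void beginTransaction() {
        inTransaction = true;
    }

    void commitTransaction() {
        if (!inTransaction) {
            throw new IllegalStateException("Transaction was not started");
        }
        inTransaction = false; // reset BEFORE the call that may throw
        if (failCommit) {
            throw new RuntimeException("commit failed");
        }
    }

    void setTransactionId(String id) {
        // Before the fix, this check failed after a failed commit/abort
        // because inTransaction was still true.
        if (inTransaction) {
            throw new IllegalStateException("Another transaction is still open.");
        }
    }
}
```

With the reset done first, a commit that throws still leaves the sketch in a state where `setTransactionId` succeeds, mirroring the retry path described in the commit message.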
[flink] branch master updated: [FLINK-24186][table-planner] Allow multiple rowtime attributes for collect() and print()
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

The following commit(s) were added to refs/heads/master by this push:
 new f27e53a  [FLINK-24186][table-planner] Allow multiple rowtime attributes for collect() and print()
f27e53a is described below

commit f27e53a03516ca7de7ec6c86a905f7d8a88b1271
Author: Timo Walther
AuthorDate: Thu Dec 9 13:35:06 2021 +0100

    [FLINK-24186][table-planner] Allow multiple rowtime attributes for collect() and print()

    This closes #17217.
---
 .../planner/connectors/CollectDynamicSink.java   |  2 +-
 .../plan/nodes/exec/batch/BatchExecSink.java     |  3 +-
 .../plan/nodes/exec/common/CommonExecSink.java   |  2 +-
 .../plan/nodes/exec/stream/StreamExecSink.java   | 13
 .../org/apache/flink/table/api/TableITCase.scala | 35 +-
 .../runtime/stream/table/TableSinkITCase.scala   |  5 ++--
 6 files changed, 49 insertions(+), 11 deletions(-)

diff --git a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/CollectDynamicSink.java b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/CollectDynamicSink.java
index 98fcf8b..be59089 100644
--- a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/CollectDynamicSink.java
+++ b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/CollectDynamicSink.java
@@ -49,7 +49,7 @@ import java.util.function.Function;

 /** Table sink for {@link TableResult#collect()}.
*/ @Internal -final class CollectDynamicSink implements DynamicTableSink { +public final class CollectDynamicSink implements DynamicTableSink { private final ObjectIdentifier tableIdentifier; private final DataType consumedDataType; diff --git a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/batch/BatchExecSink.java b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/batch/BatchExecSink.java index 3633628..64a1c0c 100644 --- a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/batch/BatchExecSink.java +++ b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/batch/BatchExecSink.java @@ -56,6 +56,7 @@ public class BatchExecSink extends CommonExecSink implements BatchExecNode translateToPlanInternal(PlannerBase planner) { final Transformation inputTransform = (Transformation) getInputEdges().get(0).translateToPlan(planner); -return createSinkTransformation(planner, inputTransform, -1, false); +final DynamicTableSink tableSink = tableSinkSpec.getTableSink(planner.getFlinkContext()); +return createSinkTransformation(planner, inputTransform, tableSink, -1, false); } } diff --git a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/common/CommonExecSink.java b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/common/CommonExecSink.java index 9c1870f..65500b9 100644 --- a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/common/CommonExecSink.java +++ b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/common/CommonExecSink.java @@ -117,9 +117,9 @@ public abstract class CommonExecSink extends ExecNodeBase protected Transformation createSinkTransformation( PlannerBase planner, Transformation inputTransform, +DynamicTableSink tableSink, 
int rowtimeFieldIndex, boolean upsertMaterialize) { -final DynamicTableSink tableSink = tableSinkSpec.getTableSink(planner.getFlinkContext()); final ResolvedSchema schema = tableSinkSpec.getCatalogTable().getResolvedSchema(); final SinkRuntimeProvider runtimeProvider = tableSink.getSinkRuntimeProvider(new SinkRuntimeProviderContext(isBounded)); diff --git a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecSink.java b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecSink.java index c145b59..848779c 100644 --- a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecSink.java +++ b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecSink.java @@ -23,6 +23,7 @@ import org.apache.flink.table.api.TableException; import org.apache.flink.table.connector.ChangelogMode; import org.apache.flink.table.connector.sink.DynamicTableSink; import org.apache.flink.table.data.RowData;
[flink] 02/02: [FLINK-24987][docs] Improve ExternalizedCheckpointCleanup documentation
This is an automated email from the ASF dual-hosted git repository.

airblader pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit ef30fe5b0d01ec901e7e666ee3c5bc0cb51e430d
Author: Nicolaus Weidner
AuthorDate: Mon Nov 29 15:04:17 2021 +0100

    [FLINK-24987][docs] Improve ExternalizedCheckpointCleanup documentation
---
 .../execution_checkpointing_configuration.html |  2 +-
 .../api/environment/CheckpointConfig.java      | 28 ++++++++++++++++------
 2 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/docs/layouts/shortcodes/generated/execution_checkpointing_configuration.html b/docs/layouts/shortcodes/generated/execution_checkpointing_configuration.html
index 836b2e0..a9fa7ca 100644
--- a/docs/layouts/shortcodes/generated/execution_checkpointing_configuration.html
+++ b/docs/layouts/shortcodes/generated/execution_checkpointing_configuration.html
@@ -24,7 +24,7 @@
 execution.checkpointing.externalized-checkpoint-retention
 NO_EXTERNALIZED_CHECKPOINTS
 Enum
-Externalized checkpoints write their meta data out to persistent storage and are not automatically cleaned up when the owning job fails or is suspended (terminating with job status JobStatus#FAILED or JobStatus#SUSPENDED). In this case, you have to manually clean up the checkpoint state, both the meta data and actual program state. The mode defines how an externalized checkpoint shoul [...]
+Externalized checkpoints write their meta data out to persistent storage and are not automatically cleaned up when the owning job fails or is suspended (terminating with job status JobStatus#FAILED or JobStatus#SUSPENDED). In this case, you have to manually clean up the checkpoint state, both the meta data and actual program state. The mode defines how an externalized checkpoint shoul [...]
execution.checkpointing.interval

diff --git a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/CheckpointConfig.java b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/CheckpointConfig.java
index b2a2d8a..8983baa 100644
--- a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/CheckpointConfig.java
+++ b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/CheckpointConfig.java
@@ -19,11 +19,14 @@ package org.apache.flink.streaming.api.environment;
 
 import org.apache.flink.annotation.Experimental;
+import org.apache.flink.annotation.Internal;
 import org.apache.flink.annotation.Public;
 import org.apache.flink.annotation.PublicEvolving;
 import org.apache.flink.api.common.JobStatus;
 import org.apache.flink.configuration.CheckpointingOptions;
+import org.apache.flink.configuration.DescribedEnum;
 import org.apache.flink.configuration.ReadableConfig;
+import org.apache.flink.configuration.description.InlineElement;
 import org.apache.flink.core.fs.Path;
 import org.apache.flink.runtime.state.CheckpointStorage;
 import org.apache.flink.runtime.state.StateBackend;
@@ -40,6 +43,7 @@ import java.net.URI;
 import java.time.Duration;
 
 import static java.util.Objects.requireNonNull;
+import static org.apache.flink.configuration.description.TextElement.text;
 import static org.apache.flink.runtime.checkpoint.CheckpointFailureManager.UNLIMITED_TOLERABLE_FAILURE_NUMBER;
 import static org.apache.flink.runtime.jobgraph.tasks.CheckpointCoordinatorConfiguration.MINIMAL_CHECKPOINT_TIME;
 import static org.apache.flink.util.Preconditions.checkNotNull;
@@ -733,7 +737,7 @@ public class CheckpointConfig implements java.io.Serializable {
 
     /** Cleanup behaviour for externalized checkpoints when the job is cancelled.
     */
    @PublicEvolving
-    public enum ExternalizedCheckpointCleanup {
+    public enum ExternalizedCheckpointCleanup implements DescribedEnum {
 
        /**
         * Delete externalized checkpoints on job cancellation.
@@ -745,7 +749,10 @@ public class CheckpointConfig implements java.io.Serializable {
         * Note that checkpoint state is always kept if the job terminates with state {@link
         * JobStatus#FAILED}.
         */
-        DELETE_ON_CANCELLATION,
+        DELETE_ON_CANCELLATION(
+                text(
+                        "Checkpoint state is only kept when the owning job fails. It is deleted if "
+                                + "the job is cancelled.")),
 
        /**
         * Retain externalized checkpoints on job cancellation.
@@ -756,10 +763,17 @@ public class CheckpointConfig implements java.io.Serializable {
         * Note that checkpoint state is always kept if the job terminates with state {@link
         * JobStatus#FAILED}.
         */
-        RETAIN_ON_CANCELLATION,
+        RETAIN_ON_CANCELLATION(
+                text("Checkpoint state is kept when the owning job is cancelled or fails.")),
 
        /** Externalized
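The diff above attaches a human-readable description to each enum constant via the `DescribedEnum` interface, so the config-docs generator can render the text into the HTML reference table. A minimal, runnable sketch of the pattern (hypothetical `*Sketch` types, not Flink's classes; Flink's real `DescribedEnum` returns an `InlineElement` built with `text(...)`, here a plain `String` stands in):

```java
// Simplified stand-ins for Flink's DescribedEnum pattern: each enum constant
// carries its own documentation string, queryable at runtime.
interface DescribedEnumSketch {
    String getDescription();
}

enum ExternalizedCheckpointCleanupSketch implements DescribedEnumSketch {
    DELETE_ON_CANCELLATION(
            "Checkpoint state is only kept when the owning job fails. It is deleted if "
                    + "the job is cancelled."),
    RETAIN_ON_CANCELLATION(
            "Checkpoint state is kept when the owning job is cancelled or fails.");
    // The diff is truncated above; the real enum also gains a
    // NO_EXTERNALIZED_CHECKPOINTS constant with its own description.

    private final String description;

    ExternalizedCheckpointCleanupSketch(String description) {
        this.description = description;
    }

    @Override
    public String getDescription() {
        return description;
    }
}
```

A docs generator can then iterate `values()` and emit each constant's name alongside `getDescription()`, which is what keeps the generated HTML table in sync with the code.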
[flink] 01/02: [FLINK-24987][streaming-java] Add explicit enum value NO_EXTERNAL_CHECKPOINTS as default for externalized-checkpoint-retention
This is an automated email from the ASF dual-hosted git repository. airblader pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git commit f4db43a3a8d7147f3ebd1279addcb35fb2c5e38b Author: Nicolaus Weidner AuthorDate: Wed Nov 24 09:17:34 2021 +0100 [FLINK-24987][streaming-java] Add explicit enum value NO_EXTERNAL_CHECKPOINTS as default for externalized-checkpoint-retention --- .../execution_checkpointing_configuration.html | 4 +- .../reader/CoordinatedSourceRescaleITCase.java | 2 +- .../tests/DataStreamAllroundTestJobFactory.java| 2 +- .../StickyAllocationAndLocalRecoveryTestJob.java | 2 +- .../pyflink/datastream/checkpoint_config.py| 57 +++-- .../datastream/tests/test_check_point_config.py| 3 +- .../api/environment/CheckpointConfig.java | 58 -- .../environment/ExecutionCheckpointingOptions.java | 6 ++- .../CheckpointConfigFromConfigurationTest.java | 2 +- .../test/checkpointing/RegionFailoverITCase.java | 2 +- .../ResumeCheckpointManuallyITCase.java| 2 +- .../UnalignedCheckpointCompatibilityITCase.java| 2 +- .../UnalignedCheckpointStressITCase.java | 2 +- .../checkpointing/UnalignedCheckpointTestBase.java | 2 +- 14 files changed, 113 insertions(+), 33 deletions(-) diff --git a/docs/layouts/shortcodes/generated/execution_checkpointing_configuration.html b/docs/layouts/shortcodes/generated/execution_checkpointing_configuration.html index 04e00bd..836b2e0 100644 --- a/docs/layouts/shortcodes/generated/execution_checkpointing_configuration.html +++ b/docs/layouts/shortcodes/generated/execution_checkpointing_configuration.html @@ -22,9 +22,9 @@ execution.checkpointing.externalized-checkpoint-retention -(none) +NO_EXTERNALIZED_CHECKPOINTS Enum -Externalized checkpoints write their meta data out to persistent storage and are not automatically cleaned up when the owning job fails or is suspended (terminating with job status JobStatus#FAILED or JobStatus#SUSPENDED. 
In this case, you have to manually clean up the checkpoint state, both the meta data and actual program state.The mode defines how an externalized checkpoint should [...] +Externalized checkpoints write their meta data out to persistent storage and are not automatically cleaned up when the owning job fails or is suspended (terminating with job status JobStatus#FAILED or JobStatus#SUSPENDED). In this case, you have to manually clean up the checkpoint state, both the meta data and actual program state.The mode defines how an externalized checkpoint shoul [...] execution.checkpointing.interval diff --git a/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/source/reader/CoordinatedSourceRescaleITCase.java b/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/source/reader/CoordinatedSourceRescaleITCase.java index f9e0e27..0120fee 100644 --- a/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/source/reader/CoordinatedSourceRescaleITCase.java +++ b/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/source/reader/CoordinatedSourceRescaleITCase.java @@ -118,7 +118,7 @@ public class CoordinatedSourceRescaleITCase extends TestLogger { StreamExecutionEnvironment.createLocalEnvironment(p, conf); env.enableCheckpointing(100); env.getCheckpointConfig() -.enableExternalizedCheckpoints( +.setExternalizedCheckpointCleanup( CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION); env.setRestartStrategy(RestartStrategies.noRestart()); diff --git a/flink-end-to-end-tests/flink-datastream-allround-test/src/main/java/org/apache/flink/streaming/tests/DataStreamAllroundTestJobFactory.java b/flink-end-to-end-tests/flink-datastream-allround-test/src/main/java/org/apache/flink/streaming/tests/DataStreamAllroundTestJobFactory.java index 83e2cde..e219c46 100644 --- 
a/flink-end-to-end-tests/flink-datastream-allround-test/src/main/java/org/apache/flink/streaming/tests/DataStreamAllroundTestJobFactory.java +++ b/flink-end-to-end-tests/flink-datastream-allround-test/src/main/java/org/apache/flink/streaming/tests/DataStreamAllroundTestJobFactory.java @@ -295,7 +295,7 @@ public class DataStreamAllroundTestJobFactory { "Unknown clean up mode for externalized checkpoints: " + cleanupModeConfig); } - env.getCheckpointConfig().enableExternalizedCheckpoints(cleanupMode); + env.getCheckpointConfig().setExternalizedCheckpointCleanup(cleanupMode); final int tolerableDeclinedCheckpointNumber =
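The call sites above switch from `enableExternalizedCheckpoints` to `setExternalizedCheckpointCleanup`, and the option now has the explicit default `NO_EXTERNALIZED_CHECKPOINTS`. A minimal stand-in (hypothetical `CheckpointConfigSketch`, not Flink's class) illustrating the deprecate-and-delegate shape of such a rename:

```java
// Hypothetical sketch of a config object renaming a setter: the old entry
// point is kept as a deprecated delegate so existing call sites keep working.
public class CheckpointConfigSketch {
    public enum ExternalizedCheckpointCleanup {
        DELETE_ON_CANCELLATION,
        RETAIN_ON_CANCELLATION,
        NO_EXTERNALIZED_CHECKPOINTS
    }

    // Explicit enum default instead of "(none)", mirroring the commit's intent.
    private ExternalizedCheckpointCleanup cleanup =
            ExternalizedCheckpointCleanup.NO_EXTERNALIZED_CHECKPOINTS;

    /** Old name, kept for compatibility; forwards to the new setter. */
    @Deprecated
    public void enableExternalizedCheckpoints(ExternalizedCheckpointCleanup mode) {
        setExternalizedCheckpointCleanup(mode);
    }

    /** New name used by the updated call sites in this commit. */
    public void setExternalizedCheckpointCleanup(ExternalizedCheckpointCleanup mode) {
        this.cleanup = mode;
    }

    public ExternalizedCheckpointCleanup getExternalizedCheckpointCleanup() {
        return cleanup;
    }
}
```

The delegate keeps old callers source-compatible while the codebase migrates to the new name, which is exactly the mechanical change the diff applies across the test jobs.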
[flink] branch master updated (dfb0bfc -> ef30fe5)
This is an automated email from the ASF dual-hosted git repository. airblader pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git. from dfb0bfc [FLINK-25241][hotfix] Remove resolved violations new f4db43a [FLINK-24987][streaming-java] Add explicit enum value NO_EXTERNAL_CHECKPOINTS as default for externalized-checkpoint-retention new ef30fe5 [FLINK-24987][docs] Improve ExternalizedCheckpointCleanup documentation The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: .../execution_checkpointing_configuration.html | 4 +- .../reader/CoordinatedSourceRescaleITCase.java | 2 +- .../tests/DataStreamAllroundTestJobFactory.java| 2 +- .../StickyAllocationAndLocalRecoveryTestJob.java | 2 +- .../pyflink/datastream/checkpoint_config.py| 57 ++-- .../datastream/tests/test_check_point_config.py| 3 +- .../api/environment/CheckpointConfig.java | 76 ++ .../environment/ExecutionCheckpointingOptions.java | 6 +- .../CheckpointConfigFromConfigurationTest.java | 2 +- .../test/checkpointing/RegionFailoverITCase.java | 2 +- .../ResumeCheckpointManuallyITCase.java| 2 +- .../UnalignedCheckpointCompatibilityITCase.java| 2 +- .../UnalignedCheckpointStressITCase.java | 2 +- .../checkpointing/UnalignedCheckpointTestBase.java | 2 +- 14 files changed, 132 insertions(+), 32 deletions(-)
[flink] 01/02: [FLINK-25241][architecture] Add missing prefix for ArchUnit
This is an automated email from the ASF dual-hosted git repository.

airblader pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 214e10018b17eda9c941585c19847bda3096ccf0
Author: Ingo Bürk
AuthorDate: Fri Dec 10 08:46:33 2021 +0100

    [FLINK-25241][architecture] Add missing prefix for ArchUnit

    This fails the CI if violations are removed, ensuring that the
    violation store is updated properly.
---
 azure-pipelines.yml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index 6299c4f..e836ac1 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -77,7 +77,7 @@ stages:
         vmImage: 'ubuntu-20.04'
       e2e_pool_definition:
         vmImage: 'ubuntu-20.04'
-      environment: PROFILE="-Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.12 -Dfreeze.store.default.allowStoreUpdate=false"
+      environment: PROFILE="-Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.12 -Darchunit.freeze.store.default.allowStoreUpdate=false"
       run_end_to_end: false
       container: flink-build-container
       jdk: 8
@@ -94,5 +94,5 @@ stages:
   - template: tools/azure-pipelines/build-python-wheels.yml
     parameters:
       stage_name: cron_python_wheels
      environment: PROFILE="-Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.12 -Dfreeze.store.default.allowStoreUpdate=false"
+      environment: PROFILE="-Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.12 -Darchunit.freeze.store.default.allowStoreUpdate=false"
       container: flink-build-container
[flink] 02/02: [FLINK-25241][hotfix] Remove resolved violations
This is an automated email from the ASF dual-hosted git repository. airblader pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git commit dfb0bfc085ab8aacc84f6d581c5e44137e8daff9 Author: Ingo Bürk AuthorDate: Fri Dec 10 08:47:17 2021 +0100 [FLINK-25241][hotfix] Remove resolved violations --- .../violations/5b9eed8a-5fb6-4373-98ac-3be2a71941b8| 7 --- .../violations/e5126cae-f3fe-48aa-b6fb-60ae6cc3fcd5| 1 - 2 files changed, 8 deletions(-) diff --git a/flink-architecture-tests/violations/5b9eed8a-5fb6-4373-98ac-3be2a71941b8 b/flink-architecture-tests/violations/5b9eed8a-5fb6-4373-98ac-3be2a71941b8 index 961af9f..34b4b88 100644 --- a/flink-architecture-tests/violations/5b9eed8a-5fb6-4373-98ac-3be2a71941b8 +++ b/flink-architecture-tests/violations/5b9eed8a-5fb6-4373-98ac-3be2a71941b8 @@ -101,13 +101,6 @@ org.apache.flink.connector.file.src.reader.BulkFormat.createReader(org.apache.fl org.apache.flink.connector.file.src.reader.BulkFormat.restoreReader(org.apache.flink.configuration.Configuration, org.apache.flink.connector.file.src.FileSourceSplit): Returned leaf type org.apache.flink.connector.file.src.reader.BulkFormat$Reader does not satisfy: reside outside of package 'org.apache.flink..' or annotated with @Public or annotated with @PublicEvolving or annotated with @Deprecated org.apache.flink.connector.file.src.reader.FileRecordFormat.createReader(org.apache.flink.configuration.Configuration, org.apache.flink.core.fs.Path, long, long): Returned leaf type org.apache.flink.connector.file.src.reader.FileRecordFormat$Reader does not satisfy: reside outside of package 'org.apache.flink..' 
or annotated with @Public or annotated with @PublicEvolving or annotated with @Deprecated org.apache.flink.connector.file.src.reader.FileRecordFormat.restoreReader(org.apache.flink.configuration.Configuration, org.apache.flink.core.fs.Path, long, long, long): Returned leaf type org.apache.flink.connector.file.src.reader.FileRecordFormat$Reader does not satisfy: reside outside of package 'org.apache.flink..' or annotated with @Public or annotated with @PublicEvolving or annotated with @Deprecated -org.apache.flink.connector.file.src.reader.SimpleStreamFormat.createReader(org.apache.flink.configuration.Configuration, org.apache.flink.core.fs.FSDataInputStream): Returned leaf type org.apache.flink.connector.file.src.reader.StreamFormat$Reader does not satisfy: reside outside of package 'org.apache.flink..' or annotated with @Public or annotated with @PublicEvolving or annotated with @Deprecated -org.apache.flink.connector.file.src.reader.SimpleStreamFormat.createReader(org.apache.flink.configuration.Configuration, org.apache.flink.core.fs.FSDataInputStream, long, long): Returned leaf type org.apache.flink.connector.file.src.reader.StreamFormat$Reader does not satisfy: reside outside of package 'org.apache.flink..' or annotated with @Public or annotated with @PublicEvolving or annotated with @Deprecated -org.apache.flink.connector.file.src.reader.SimpleStreamFormat.restoreReader(org.apache.flink.configuration.Configuration, org.apache.flink.core.fs.FSDataInputStream, long, long, long): Returned leaf type org.apache.flink.connector.file.src.reader.StreamFormat$Reader does not satisfy: reside outside of package 'org.apache.flink..' 
or annotated with @Public or annotated with @PublicEvolving or annotated with @Deprecated -org.apache.flink.connector.file.src.reader.StreamFormat.createReader(org.apache.flink.configuration.Configuration, org.apache.flink.core.fs.FSDataInputStream, long, long): Returned leaf type org.apache.flink.connector.file.src.reader.StreamFormat$Reader does not satisfy: reside outside of package 'org.apache.flink..' or annotated with @Public or annotated with @PublicEvolving or annotated with @Deprecated -org.apache.flink.connector.file.src.reader.StreamFormat.restoreReader(org.apache.flink.configuration.Configuration, org.apache.flink.core.fs.FSDataInputStream, long, long, long): Returned leaf type org.apache.flink.connector.file.src.reader.StreamFormat$Reader does not satisfy: reside outside of package 'org.apache.flink..' or annotated with @Public or annotated with @PublicEvolving or annotated with @Deprecated -org.apache.flink.connector.file.src.reader.TextLineFormat.createReader(org.apache.flink.configuration.Configuration, org.apache.flink.core.fs.FSDataInputStream): Returned leaf type org.apache.flink.connector.file.src.reader.StreamFormat$Reader does not satisfy: reside outside of package 'org.apache.flink..' or annotated with @Public or annotated with @PublicEvolving or annotated with @Deprecated -org.apache.flink.connector.file.src.reader.TextLineFormat.createReader(org.apache.flink.configuration.Configuration, org.apache.flink.core.fs.FSDataInputStream): Returned leaf type org.apache.flink.connector.file.src.reader.TextLineFormat$Reader does not satisfy: reside outside of
[flink] branch master updated (417a710 -> dfb0bfc)
This is an automated email from the ASF dual-hosted git repository. airblader pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git. from 417a710 [FLINK-25038][testutils] Refactor FlinkContainer to split JM and TMs to individual containers and supports HA new 214e100 [FLINK-25241][architecture] Add missing prefix for ArchUnit new dfb0bfc [FLINK-25241][hotfix] Remove resolved violations The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: azure-pipelines.yml| 4 ++-- .../violations/5b9eed8a-5fb6-4373-98ac-3be2a71941b8| 7 --- .../violations/e5126cae-f3fe-48aa-b6fb-60ae6cc3fcd5| 1 - 3 files changed, 2 insertions(+), 10 deletions(-)
[flink] branch master updated (825f036 -> 417a710)
This is an automated email from the ASF dual-hosted git repository. fpaul pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git. from 825f036 [FLINK-17510][tests] Enable StreamingKafkaITCase add 417a710 [FLINK-25038][testutils] Refactor FlinkContainer to split JM and TMs to individual containers and supports HA No new revisions were added by this update. Summary of changes: .../flink/tests/util/kafka/KafkaSourceE2ECase.java | 2 +- .../util/kafka/SQLClientSchemaRegistryITCase.java | 17 +- .../flink/tests/util/flink/FlinkContainer.java | 447 - .../util/flink/FlinkContainerTestEnvironment.java | 49 ++- .../util/flink/container/FlinkContainers.java | 320 +++ .../flink/container/FlinkContainersBuilder.java| 321 +++ .../util/flink/container/FlinkImageBuilder.java| 305 ++ .../util/flink/container/ImageBuildException.java | 11 +- .../util/pulsar/PulsarSourceOrderedE2ECase.java| 3 +- .../util/pulsar/PulsarSourceUnorderedE2ECase.java | 3 +- .../kinesis/test/KinesisTableApiITCase.java| 39 +- 11 files changed, 1025 insertions(+), 492 deletions(-) delete mode 100644 flink-end-to-end-tests/flink-end-to-end-tests-common/src/main/java/org/apache/flink/tests/util/flink/FlinkContainer.java create mode 100644 flink-end-to-end-tests/flink-end-to-end-tests-common/src/main/java/org/apache/flink/tests/util/flink/container/FlinkContainers.java create mode 100644 flink-end-to-end-tests/flink-end-to-end-tests-common/src/main/java/org/apache/flink/tests/util/flink/container/FlinkContainersBuilder.java create mode 100644 flink-end-to-end-tests/flink-end-to-end-tests-common/src/main/java/org/apache/flink/tests/util/flink/container/FlinkImageBuilder.java copy flink-filesystems/flink-s3-fs-hadoop/src/main/java/org/apache/flink/fs/s3hadoop/S3AFileSystemFactory.java => flink-end-to-end-tests/flink-end-to-end-tests-common/src/main/java/org/apache/flink/tests/util/flink/container/ImageBuildException.java (72%)
[flink] branch master updated (f6b2200 -> 825f036)
This is an automated email from the ASF dual-hosted git repository. chesnay pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git. from f6b2200 [FLINK-25096] Fixes empty exception history for JobInitializationException (#17967) add 825f036 [FLINK-17510][tests] Enable StreamingKafkaITCase No new revisions were added by this update. Summary of changes: .../java/org/apache/flink/tests/util/kafka/StreamingKafkaITCase.java| 2 -- 1 file changed, 2 deletions(-)
[flink] branch release-1.14 updated: [FLINK-25126][kafka] Reset internal transaction state of FlinkKafkaInternalProducer if transaction finalization fails
This is an automated email from the ASF dual-hosted git repository.

fpaul pushed a commit to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git

The following commit(s) were added to refs/heads/release-1.14 by this push:
     new dccd7f0  [FLINK-25126][kafka] Reset internal transaction state of FlinkKafkaInternalProducer if transaction finalization fails
dccd7f0 is described below

commit dccd7f08cde2b13ba4549c94ebbc04ff2c0c5152
Author: Fabian Paul
AuthorDate: Fri Dec 3 15:07:05 2021 +0100

    [FLINK-25126][kafka] Reset internal transaction state of FlinkKafkaInternalProducer if transaction finalization fails

    In the KafkaCommitter we retry transactions that failed during committing.
    Since we reuse the KafkaProducers, we update the transactionalId in use to
    continue committing other transactions. To prevent accidental overwrites,
    we track the transaction state inside the FlinkKafkaInternalProducer.

    Before this change, the state was not reset on a failure during transaction
    finalization, so setting a new transactionalId failed. The state is now
    always reset, regardless of whether finalizing the transaction (commit or
    abort) fails.
---
 .../kafka/sink/FlinkKafkaInternalProducer.java     |  8 ++-
 .../sink/FlinkKafkaInternalProducerITCase.java     | 69 +-
 2 files changed, 58 insertions(+), 19 deletions(-)

diff --git a/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/FlinkKafkaInternalProducer.java b/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/FlinkKafkaInternalProducer.java
index aec1edf..19eed71 100644
--- a/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/FlinkKafkaInternalProducer.java
+++ b/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/FlinkKafkaInternalProducer.java
@@ -86,16 +86,16 @@ class FlinkKafkaInternalProducer extends KafkaProducer {
     public void abortTransaction() throws ProducerFencedException {
         LOG.debug("abortTransaction {}", transactionalId);
         checkState(inTransaction, "Transaction was not started");
-        super.abortTransaction();
         inTransaction = false;
+        super.abortTransaction();
     }
 
     @Override
     public void commitTransaction() throws ProducerFencedException {
         LOG.debug("commitTransaction {}", transactionalId);
         checkState(inTransaction, "Transaction was not started");
-        super.commitTransaction();
         inTransaction = false;
+        super.commitTransaction();
     }
 
     public boolean isInTransaction() {
@@ -159,7 +159,9 @@ class FlinkKafkaInternalProducer extends KafkaProducer {
 
     public void setTransactionId(String transactionalId) {
         if (!transactionalId.equals(this.transactionalId)) {
-            checkState(!inTransaction);
+            checkState(
+                    !inTransaction,
+                    String.format("Another transaction %s is still open.", transactionalId));
             LOG.debug("Change transaction id from {} to {}", this.transactionalId, transactionalId);
             Object transactionManager = getTransactionManager();
             synchronized (transactionManager) {

diff --git a/flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/FlinkKafkaInternalProducerITCase.java b/flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/FlinkKafkaInternalProducerITCase.java
index 0a68433..cf07311 100644
--- a/flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/FlinkKafkaInternalProducerITCase.java
+++ b/flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/FlinkKafkaInternalProducerITCase.java
@@ -19,6 +19,8 @@ package org.apache.flink.connector.kafka.sink;
 
 import org.apache.flink.util.TestLogger;
 
+import org.apache.flink.shaded.guava30.com.google.common.collect.Lists;
+
 import org.apache.kafka.clients.CommonClientConfigs;
 import org.apache.kafka.clients.consumer.ConsumerConfig;
 import org.apache.kafka.clients.consumer.ConsumerRecords;
@@ -26,9 +28,12 @@ import org.apache.kafka.clients.consumer.KafkaConsumer;
 import org.apache.kafka.clients.producer.ProducerConfig;
 import org.apache.kafka.clients.producer.ProducerRecord;
 import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.errors.ProducerFencedException;
 import org.apache.kafka.common.serialization.StringDeserializer;
 import org.apache.kafka.common.serialization.StringSerializer;
 import org.junit.jupiter.api.Test;
+import org.junit.jupiter.params.ParameterizedTest;
+import org.junit.jupiter.params.provider.MethodSource;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.testcontainers.containers.KafkaContainer;
@@ -38,6 +43,7 @@ import
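The core of the fix is the reordering in `commitTransaction`/`abortTransaction`: the `inTransaction` flag is cleared *before* delegating to the Kafka producer, so an exception thrown by the delegate cannot leave the flag stuck at `true` and make a later `setTransactionId` call fail its `!inTransaction` precondition. A self-contained sketch of that failure mode (hypothetical `TransactionalWrapperSketch`, not Flink's class):

```java
// Hypothetical wrapper isolating the ordering fix: reset internal state
// first, then delegate, so a failing delegate leaves no stale state behind.
class TransactionalWrapperSketch {
    private boolean inTransaction = true;
    private final boolean commitThrows;

    TransactionalWrapperSketch(boolean commitThrows) {
        this.commitThrows = commitThrows;
    }

    void commitTransaction() {
        // Reset first (the FLINK-25126 change); with the old order
        // (delegate first), a thrown exception would skip this line.
        inTransaction = false;
        if (commitThrows) {
            throw new IllegalStateException("broker rejected commit");
        }
    }

    void setTransactionId(String newId) {
        // Mirrors the checkState(!inTransaction, ...) precondition above.
        if (inTransaction) {
            throw new IllegalStateException("Another transaction is still open.");
        }
        // ... switch the underlying transactional id here ...
    }

    boolean isInTransaction() {
        return inTransaction;
    }
}
```

With the reset-first ordering, a failed commit still allows the committer to move the reused producer on to the next transactional id and retry, which is the behavior the new parameterized ITCase exercises.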
[flink] branch release-1.14 updated (7c380ba -> b194e83)
This is an automated email from the ASF dual-hosted git repository. trohrmann pushed a change to branch release-1.14 in repository https://gitbox.apache.org/repos/asf/flink.git. from 7c380ba [FLINK-25096] Fixes empty exception history for JobInitializationException (#18035) add ff96e93 [FLINK-23946][clients] Dispatcher in application mode should be able to recover after losing and regaining leadership. add 4567d87 [FLINK-23946][FLINK-24038][tests] Harden ZooKeeperLeaderElectionITCase. add 0a99256 [FLINK-23946][clients] Code review based fixes. add b194e83 [FLINK-23946] Flink 1.14 compatibility + disable a flaky test `ZooKeeperLeaderElectionITCase#testJobExecutionOnClusterWithLeaderChange` (FLINK-25235) No new revisions were added by this update. Summary of changes: .../ApplicationDispatcherBootstrap.java| 70 + .../ApplicationDispatcherBootstrapITCase.java | 150 ++ .../ApplicationDispatcherBootstrapTest.java| 172 +++-- .../apache/flink/client/testjar/BlockingJob.java | 78 ++ .../flink/client/testjar/MultiExecuteJob.java | 24 +++ .../flink/runtime/minicluster/MiniCluster.java | 13 +- .../runtime/jobmaster/JobExecutionITCase.java | 3 +- .../LeaderChangeClusterComponentsTest.java | 13 +- .../runtime/minicluster/TestingMiniCluster.java| 69 - .../ZooKeeperLeaderElectionITCase.java | 90 --- 10 files changed, 532 insertions(+), 150 deletions(-) create mode 100644 flink-clients/src/test/java/org/apache/flink/client/deployment/application/ApplicationDispatcherBootstrapITCase.java create mode 100644 flink-clients/src/test/java/org/apache/flink/client/testjar/BlockingJob.java