ruibinx opened a new pull request, #24008:
URL: https://github.com/apache/flink/pull/24008

   - Update for 1.18.0
   - [FLINK-32827][table-runtime] Fix the issue that operator fusion codegen may not take effect when enabling runtime filter
   - [FLINK-32296][table] Support cast of collections with row element types
   - [FLINK-32905][table-runtime] Fix the bug that broadcast hash join doesn't support spilling to disk when operator fusion codegen is enabled
   - [FLINK-32523][test] Guarantee all operators trigger the checkpoint decline together for NotifyCheckpointAbortedITCase#testNotifyCheckpointAborted
   - [FLINK-32523][test] Increase the tolerable checkpoint failure number to avoid aborting after the job fails for NotifyCheckpointAbortedITCase#testNotifyCheckpointAborted (#23283)
   - [BP-1.18][FLINK-32760][Connectors/Hive] Reshade shaded parquet in 
flink-sql-connector-hive (#23288)
   - [FLINK-32869][e2e] Fix Kubernetes tests on aarch64
   - [hotfix][table-planner] Fix the wrong traitSet and inputs in runtime 
filter related exec node
   - [FLINK-32831][table-planner] RuntimeFilterProgram should be aware of the join type when looking for the build side
   - [FLINK-32831][table-planner] Adjust default value of max build data size 
of runtime filter to cover more cases
   - [FLINK-32831][table-planner] Add check to ensure all tests in RuntimeFilterITCase have applied runtime filter
   - [FLINK-32824][table] Port CALCITE-1898 to fix the sql LIKE operator which 
must match '.' (period) literally
   - [FLINK-32907][Tests] Fix CheckpointAfterAllTasksFinishedITCase hangs on AZP
   - [FLINK-32945][runtime] Fix NPE when task reached end-of-data but 
checkpoint failed
   - [FLINK-32758][python] Remove PyFlink dependencies' upper bounds
   - [FLINK-32758][python] Limit fastavro != 1.8.0
   - [hotfix][test] Refactors test to remove version and offset server-side 
checks
   - [hotfix][JUnit5] Migrates CollectSinkOperatorCoordinatorTest to JUnit5 and AssertJ
   - [FLINK-32751][streaming] Refactors CollectSinkOperatorCoordinator to 
improve its testability
   - [FLINK-32751][streaming] Fixes bug where the enqueued requests are not 
properly cancelled
   - [hotfix][network] Fix the notification of inaccurate data availability in 
Hybrid Shuffle
   - [FLINK-32865][table-planner] Add ExecutionOrderEnforcer to exec plan and put it into BatchExecMultipleInput
   - [FLINK-32865][table-planner] Make ExecutionOrderEnforcer support OFCG
   - [FLINK-32994][runtime] Adds proper toString() implementations to the 
LeaderElectionDriver implementations to have human-readable versions of the 
driver in log messages
   - [FLINK-32982][doc] Do not suggest swapping the flink-table-planner lib for using the Hive dialect. This closes #23324
   - [hotfix][doc] Fix wrong documentation of INSERT OVERWRITE statement for 
Hive dialect
   - [hotfix][flink-examples] Fix wrong filtering and remove Scala version 
suffix for state-machine example pom
   - [FLINK-32821][examples] Include flink-connector-datagen for streaming 
examples
   - [FLINK-32821][examples] Add integrated tests for streaming examples
   - [FLINK-32909][dist] Fix the bug that passing arguments to jobmanager.sh fails (#23278)
   - [FLINK-32996][Tests] Fix failing CheckpointAfterAllTasksFinishedITCase.testFailoverAfterSomeTasksFinished
   - [FLINK-28513] Fix Flink Table API CSV streaming sink throws
   - [FLINK-32475][docs] Add doc for time travel (#23109) (#23346)
   - [FLINK-32962][python] Remove pip version check on installing dependencies.
   - [FLINK-32796][table] Try to create catalog store path if it does not exist (#23293)
   - [FLINK-32796][docs] Extend catalog store documentation (#23294)
   - [FLINK-31889][docs] Add documentation for implementing/loading enrichers
   - [FLINK-32999][connectors/hbase] Remove HBase connector code from main repo 
(#23343)
   - [FLINK-32952][table-planner] Fix wrong watermark error when scan reuse is combined with readable metadata and watermark push-down (#23338)
   - [FLINK-33063][table-runtime] Fix UDAF with complex user-defined POJO throwing an error while generating the record equaliser (#23388)
   - [hotfix][network] Fix the close method for the tiered producer client
   - [FLINK-32870][network] Tiered storage supports reading multiple small 
buffers by reading and slicing one large buffer
   - [FLINK-33010][table] GREATEST/LEAST should work with other functions as 
input (#23393)
   - [FLINK-30025][doc] Improve the option description with more precise content
   - [FLINK-32731][e2e] Add retry mechanism when fails to start the namenode 
(#23267)
   - [FLINK-32731][e2e] Fix NameNode uses port 9870 in hadoop3
   - [FLINK-33086] Protect failure enrichment against unhandled exceptions
   - [FLINK-33088][network] Fix NullPointerException in RemoteTierConsumerAgent 
for tiered storage
   - [hotfix][network] Optimize the backlog calculation logic in Hybrid Shuffle
   - [hotfix][network] Fix the bug of triggering disk writing in Hybrid Shuffle
   - [FLINK-33071][metrics,checkpointing] Log a json dump of checkpoint 
statistics when checkpoint completes or fails
   - [FLINK-15736][docs] Add Java compatibility page
   - [hotfix][network] Flush writing buffers when closing 
HashSubpartitionBufferAccumulator of tiered storage
   - [FLINK-33044][network] Reduce the frequency of triggering flush for the 
disk tier of tiered storage
   - [FLINK-32974][client] Avoid creating a new temporary directory every time 
for RestClusterClient
   - [FLINK-33050][table] Prompt the user to disable atomicity when it is not supported
   - [FLINK-33050][docs] Add notice of data duplicates for RTAS in docs. This closes #23448
   - [FLINK-33119][table] The POJO result returned by a procedure should be a Row of the POJO's fields instead of the whole POJO object (#23450)
   - [BP-1.18][FLINK-33149][build] Bump snappy to 1.1.10.4
   - [FLINK-33158] Cryptic exception when there is a StreamExecSort in JsonPlan
   - [hotfix][docs] Update KDA to MSF in vendor solutions docs
   - [FLINK-33000][sql-gateway] SqlGatewayServiceITCase should utilize 
TestExecutorExtension instead of using a ThreadFactory
   - [FLINK-33000][sql-gateway] OperationManagerTest should utilize 
TestExecutorExtension instead of using a ThreadFactory
   - [FLINK-33000][sql-gateway] ResultFetcherTest should utilize 
TestExecutorExtension instead of using a ThreadFactory
   - [FLINK-33223] MATCH_RECOGNIZE AFTER MATCH clause cannot be deserialised from a compiled plan
   - [FLINK-33291][build] Sets the enforced range for Maven and JDK within the 
release profile
   - [FLINK-33116][tests] Fix CliClientTest.testCancelExecutionInteractiveMode failing with NPE
   - [FLINK-33316][runtime] Avoid unnecessary heavy getStreamOperatorFactory
   - [FLINK-33238][Formats/Avro] Upgrade used AVRO version to 1.11.3. This 
closes #23559
   - [FLINK-32671] Document Externalized Declarative Resource Management + minor Elastic Scaling page restructuring.
   - [FLINK-33342][ci] Adds target version to Java 17 CI build
   - [FLINK-33346][runtime][test] Removes timeout
   - [FLINK-33352][rest][docs] Add schema mappings to discriminator properties
   - [FLINK-33274][release] Add release note for version 1.18
   - [FLINK-33369] Use Java 17 docker image for e2e tests on Java 17
   - [FLINK-33360][connector/common] Clean up finishedReaders after switch to 
next Enumerator
   - [FLINK-26624][Runtime] Running HA (hashmap, async) end-to-end test failed on azure
   - [hotfix][python] Fix Kafka csv example
   - [FLINK-32107][test] Adds retry to artifact downloads (#23637)
   - [FLINK-33041][docs] Add an article in English website to guide users to 
migrate their DataSet jobs to DataStream
   - [FLINK-33171][table-planner] Consistent implicit type coercion support for 
equal and non-equal comparisons for codegen
   - [FLINK-33474][runtime-web] Fix undefined error of show plan in the job submit page
   - [hotfix] Remove Kafka documentation for SQL/Table API, since this is now 
externalized
   - [FLINK-33529][python] Updated GET_SITE_PACKAGES_SCRIPT to include purelib
   - [hotfix][docs] Fix git permission issue attempt
   - [hotfix] Move permission fix to correct line
   - [FLINK-33276][ci] Merges connect_1 and connect_2 stages into a single one
   - [FLINK-32913][tests] Updates base version for japicmp check for 1.18.0
   - [FLINK-32922][docs] Update compatibility matrix for release 1.18
   - [FLINK-33567][Documentation] Only display connector download links if they are available. This closes #23732
   - [hotfix] Remove debug print line in connector artifact shortcode
   - [hotfix] Add support for Scala-suffixed artifacts in connector download links
   - [FLINK-33395][table-planner] Fix the join hint not working when it appears in a subquery
   - [FLINK-33589][docs] Fix connector_artifact to prevent generation of broken 
layout
   - [hotfix] Set available Flink Cassandra connector for 1.18 to v3.1
   - [FLINK-18356] Update CI image
   - [FLINK-32918][release] Generate reference data for state migration tests 
based on release-1.18.0
   - [hotfix][python][docs] Fix broken syntax in Flink Table API query example
   - [FLINK-33225][python] Parse `JVM_ARGS` as an array
   - [FLINK-33598][k8s] Watch HA configmap via name instead of labels to reduce pressure on the API server
   - [FLINK-33613][python] Port Beam DefaultJobBundleFactory class to 
flink-python module
   - [FLINK-33613][python] Make sure the gRPC server is shut down gracefully if Python process startup fails
   - [FLINK-33418][test] Uses getHost()
   - [FLINK-33501][ci] Makes use of Maven wrapper
   - [hotfix][docs] Aligns Chinese documentation with English version on Maven 
version
   - [FLINK-31339][table][tests] Compare with the materialized result when mini-batch is enabled to fix unstable SQL e2e tests
   - [docs][hotfix] Update AWS connector docs to v4.2
   - [FLINK-33752][JUnit5 migration] Migrate TimeUtilsPrettyPrintingTest to JUnit5 and AssertJ
   - [FLINK-33752][Configuration] Change the displayed time unit to days when the duration is an integral multiple of 1 day
   - [FLINK-33693][checkpoint] Force aligned barrier works with timeoutable 
aligned checkpoint barrier
   - Revert "[FLINK-31835][table-planner] Fix the array type that can't be 
converted from the external primitive array"
   - [FLINK-33313][table] Fix RexNodeExtractor#extractConjunctiveConditions throwing an exception when handling binary literals
   - [FLINK-25565][Formats][Parquet] Write and read parquet int64 timestamp (#18304) (#23887)
   - [FLINK-33541][table-planner] RAND and RAND_INTEGER should return a nullable type if the arguments are nullable
   - [FLINK-33704][Filesystems] Update GCS filesystems to latest available versions. This closes #23935
   - [FLINK-33531][python] Remove cython upper bounds
   - [FLINK-31650][metrics][rest] Remove transient metrics for subtasks in 
terminal state
   - Fix NullArgumentException of getQuantile method in 
DescriptiveStatisticsHistogramStatistics
   - [FLINK-33872] Retrieve checkpoint history for jobs in completed state
   - [FLINK-33534][runtime] Support configuring PARALLELISM_OVERRIDES during 
job submission
   - [FLINK-33902][ci] Adds -legacy to openssl command
   - [FLINK-27082][ci] Adds a github-actions profile that disables certain 
tests that do not run in GHA
   - [docs][hotfix] Set available Flink Pulsar connector for 1.18 to v4.1
   - [FLINK-33942][configuration][junit5-migration] Migrate DelegatingConfigurationTest to JUnit5 and AssertJ
   - [FLINK-33942][configuration] Fix the bug that DelegatingConfiguration misses the prefix in some get methods (see the sketch after this list)
   - [FLINK-33942][configuration][refactor] Using ConfigOption instead of 
string key in DelegatingConfiguration
   - [FLINK-33863] Fix restoring compressed operator state
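
   A minimal, hypothetical Java sketch of the prefix-delegation idea behind the FLINK-33942 fix listed above: every get method must apply the key prefix before delegating to the backing configuration. The class and method names below are invented for illustration only; this is not Flink's actual `DelegatingConfiguration` API.

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Toy stand-in for a delegating configuration that prepends a fixed key
   // prefix in its get methods (hypothetical names, not Flink's real class).
   final class PrefixedConfig {
       private final Map<String, String> backing;
       private final String prefix;

       PrefixedConfig(Map<String, String> backing, String prefix) {
           this.backing = backing;
           this.prefix = prefix;
       }

       String getString(String key, String defaultValue) {
           // Correct: the prefix is applied before the lookup.
           return backing.getOrDefault(prefix + key, defaultValue);
       }

       int getInt(String key, int defaultValue) {
           // The bug class referenced above: delegating with the raw key
           // (backing.get(key)) would silently ignore the prefix and always
           // fall back to the default, so the prefix must be applied here too.
           String value = backing.get(prefix + key);
           return value == null ? defaultValue : Integer.parseInt(value);
       }

       public static void main(String[] args) {
           Map<String, String> raw = new HashMap<>();
           raw.put("taskmanager.numberOfTaskSlots", "4");
           PrefixedConfig conf = new PrefixedConfig(raw, "taskmanager.");
           System.out.println(conf.getInt("numberOfTaskSlots", -1)); // prints 4
       }
   }
   ```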
   
   <!--
   *Thank you very much for contributing to Apache Flink - we are happy that 
you want to help us improve Flink. To help the community review your 
contribution in the best possible way, please go through the checklist below, 
which will get the contribution into a shape in which it can be best reviewed.*
   
   *Please understand that we do not do this to make contributions to Flink a 
hassle. In order to uphold a high standard of quality for code contributions, 
while at the same time managing a large number of contributions, we need 
contributors to prepare the contributions well, and give reviewers enough 
contextual information for the review. Please also understand that 
contributions that do not follow this guide will take longer to review and thus 
typically be picked up with lower priority by the community.*
   
   ## Contribution Checklist
   
     - Make sure that the pull request corresponds to a [JIRA 
issue](https://issues.apache.org/jira/projects/FLINK/issues). Exceptions are 
made for typos in JavaDoc or documentation files, which need no JIRA issue.
     
     - Name the pull request in the form "[FLINK-XXXX] [component] Title of the 
pull request", where *FLINK-XXXX* should be replaced by the actual issue 
number. Skip *component* if you are unsure about which is the best component.
     Typo fixes that have no associated JIRA issue should be named following 
this pattern: `[hotfix] [docs] Fix typo in event time introduction` or 
`[hotfix] [javadocs] Expand JavaDoc for PunctuatedWatermarkGenerator`.
   
     - Fill out the template below to describe the changes contributed by the 
pull request. That will give reviewers the context they need to do the review.
     
     - Make sure that the change passes the automated tests, i.e., `mvn clean 
verify` passes. You can set up Azure Pipelines CI to do that following [this 
guide](https://cwiki.apache.org/confluence/display/FLINK/Azure+Pipelines#AzurePipelines-Tutorial:SettingupAzurePipelinesforaforkoftheFlinkrepository).
   
     - Each pull request should address only one issue, not mix up code from 
multiple issues.
     
     - Each commit in the pull request has a meaningful commit message 
(including the JIRA id)
   
     - Once all items of the checklist are addressed, remove the above text and 
this checklist, leaving only the filled out template below.
   
   
   **(The sections below can be removed for hotfixes of typos)**
   -->
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
     - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
     - *Deployments RPC transmits only the blob storage reference*
     - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow the conventions defined in our code quality guide: https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick one of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
     - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
     - *Extended integration test for recovery after master (JobManager) 
failure*
     - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
     - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (yes / no)
     - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
     - The serializers: (yes / no / don't know)
     - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
     - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
     - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (yes / no)
     - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   

