This is an automated email from the ASF dual-hosted git repository.

akudinkin pushed a change to branch release-0.12.2-blockers-candidate
in repository https://gitbox.apache.org/repos/asf/hudi.git


 discard 51af3e5f943 [HUDI-5348] Cache file slices in HoodieBackedTableMetadata (#7436)
 discard 5a6b4de0e04 [HUDI-5296] Allow disable schema on read after enabling (#7421)
 discard 7538e7e1512 [HUDI-5078] Fixing isTableService for replace commits (#7037)
 discard 1c0c379df92 [HUDI-5353] Close file readers (#7412)
 discard f11edd53c78 [MINOR] Fix Out of Bounds Exception for DayBasedCompactionStrategy (#7360)
 discard 4bb8f31d64b [HUDI-5372] Fix NPE caused by alter table add column. (#7236)
 discard 25571aa03d0 [HUDI-5347] Cleaned up transient state from `ExpressionPayload` making it non-serializable (#7424)
 discard b09e361723d [HUDI-5336] Fixing parsing of log files while building file groups (#7393)
 discard e647096286d [HUDI-5338] Adjust coalesce behavior within NONE sort mode for bulk insert (#7396)
 discard f33a430b054 [HUDI-5342] Add new bulk insert sort modes repartitioning data by partition path (#7402)
 discard ff817d9009c [HUDI-5358] Fix flaky tests in TestCleanerInsertAndCleanByCommits (#7420)
 discard c7c74e127d7 [HUDI-5350] Fix oom cause compaction event lost problem (#7408)
 discard 34537d29375 [HUDI-5346][HUDI-5320] Fixing Create Table as Select (CTAS) performance gaps (#7370)
 discard aecfb40a99a [HUDI-5291] Fixing NPE in MOR column stats accounting (#7349)
 discard e56631f34c0 [HUDI-5345] Avoid fs.exists calls for metadata table in HFileBootstrapIndex (#7404)
 discard 7ccdbaedb45 [HUDI-5347] FIxing performance traps in Spark SQL `MERGE INTO` implementation (#7395)
 discard 025a8db3f1d [HUDI-5344] Fix CVE - upgrade protobuf-java (#6960)
 discard 94860a41dfd [HUDI-5163] Fix failure handling with spark datasource write (#7140)
 discard fa2fd8e97ed [HUDI-5344] Fix CVE - upgrade protobuf-java to 3.18.2 (#6957)
 discard 14004c83f63 [HUDI-5151] Fix bug with broken flink data skipping caused by ClassNotFoundException of InLineFileSystem (#7124)
 discard 0c963205084 [HUDI-5253] HoodieMergeOnReadTableInputFormat could have duplicate records issue if it contains delta files while still splittable (#7264)
 discard 7e3451269a8 [HUDI-5242] Do not fail Meta sync in Deltastreamer when inline table service fails (#7243)
 discard 6948ab10020 [HUDI-5277] Close HoodieWriteClient before exiting RunClusteringProcedure (#7300)
 discard 44bbfef9a3a [HUDI-5260] Fix insert into sql command with strict sql insert mode (#7269)
 discard 7614443d518 [HUDI-5252] ClusteringCommitSink supports to rollback clustering (#7263)
     add d4ec501f755 [HUDI-5260] Fix insert into sql command with strict sql insert mode (#7269)
     add 5230a11f15d [HUDI-5277] Close HoodieWriteClient before exiting RunClusteringProcedure (#7300)
     add a78cb091f94 [HUDI-5242] Do not fail Meta sync in Deltastreamer when inline table service fails (#7243)
     add 4ccee729d29 [HUDI-5253] HoodieMergeOnReadTableInputFormat could have duplicate records issue if it contains delta files while still splittable (#7264)
     add 64a359b5bd8 [HUDI-5151] Fix bug with broken flink data skipping caused by ClassNotFoundException of InLineFileSystem (#7124)
     add ab80838fd35 [HUDI-5344] Fix CVE - upgrade protobuf-java to 3.18.2 (#6957)
     add e3c956284ed [HUDI-5163] Fix failure handling with spark datasource write (#7140)
     add 8b294b05639 [HUDI-5344] Fix CVE - upgrade protobuf-java (#6960)
     add 8510aacba8e [HUDI-5347] FIxing performance traps in Spark SQL `MERGE INTO` implementation (#7395)
     add 4a28b8389f9 [HUDI-5345] Avoid fs.exists calls for metadata table in HFileBootstrapIndex (#7404)
     add 725a9b210a1 [HUDI-5291] Fixing NPE in MOR column stats accounting (#7349)
     add d15bbbb6989 [HUDI-5346][HUDI-5320] Fixing Create Table as Select (CTAS) performance gaps (#7370)
     add f1d643e8f9e [HUDI-5350] Fix oom cause compaction event lost problem (#7408)
     add 172c438d64b [HUDI-5358] Fix flaky tests in TestCleanerInsertAndCleanByCommits (#7420)
     add bca85f376c1 [HUDI-5342] Add new bulk insert sort modes repartitioning data by partition path (#7402)
     add 7b8a7208602 [HUDI-5338] Adjust coalesce behavior within NONE sort mode for bulk insert (#7396)
     add 08b414dd15c [HUDI-5336] Fixing parsing of log files while building file groups (#7393)
     add 438f3ab6ae3 [HUDI-5347] Cleaned up transient state from `ExpressionPayload` making it non-serializable (#7424)
     add 39031d3c9ef [HUDI-5372] Fix NPE caused by alter table add column. (#7236)
     add 68361fae88e [MINOR] Fix Out of Bounds Exception for DayBasedCompactionStrategy (#7360)
     add 6e6940fc59e [HUDI-5353] Close file readers (#7412)
     add 4085f27cfb0 [HUDI-5078] Fixing isTableService for replace commits (#7037)
     add 70e4615c26a [HUDI-5296] Allow disable schema on read after enabling (#7421)
     add 51f15f500b3 [HUDI-5348] Cache file slices in HoodieBackedTableMetadata (#7436)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (51af3e5f943)
            \
             N -- N -- N   refs/heads/release-0.12.2-blockers-candidate (51f15f500b3)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.
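
As a rough illustration (not generated by the notification itself), the discard/add split above could be reproduced locally with a sketch along these lines, assuming git is installed, the working directory is a clone of https://gitbox.apache.org/repos/asf/hudi.git, and both the old tip (51af3e5f943) and the new tip (51f15f500b3) are still reachable locally (for example, because the old tip was fetched before the force push):

#!/usr/bin/env python3
# Minimal sketch: list the commits a force push discarded and added.
# Assumes git is on PATH and the script runs inside a clone of the hudi
# repository where both tips below are still reachable.
import subprocess

OLD_TIP = "51af3e5f943"  # branch tip before the force push (see diagram above)
NEW_TIP = "51f15f500b3"  # refs/heads/release-0.12.2-blockers-candidate after the push

def one_line_log(rev_range):
    """Return one-line summaries for the commits in the given revision range."""
    result = subprocess.run(
        ["git", "log", "--oneline", rev_range],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

# Reachable from the old tip but not the new one: the "discard" commits (O).
discarded = one_line_log(f"{NEW_TIP}..{OLD_TIP}")
# Reachable from the new tip but not the old one: the "add" commits (N).
added = one_line_log(f"{OLD_TIP}..{NEW_TIP}")

print(f"discarded: {len(discarded)} commits")
print(f"added:     {len(added)} commits")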

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .../hudi/sink/clustering/ClusteringCommitSink.java |   4 +-
 .../java/org/apache/hudi/util/ClusteringUtil.java  |  17 ---
 .../org/apache/hudi/utils/TestClusteringUtil.java  | 127 ---------------------
 3 files changed, 2 insertions(+), 146 deletions(-)
 delete mode 100644 hudi-flink-datasource/hudi-flink/src/test/java/org/apache/hudi/utils/TestClusteringUtil.java
