danny0405 commented on code in PR #9153:
URL: https://github.com/apache/hudi/pull/9153#discussion_r1257656038


##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/SparkRDDTableServiceClient.java:
##########
@@ -291,6 +287,13 @@ private void completeClustering(HoodieReplaceCommitMetadata metadata,
     LOG.info("Clustering successfully on commit " + clusteringCommitTime);
   }
 
+  private void handleWriteErrors(List<HoodieWriteStat> writeStats, TableServiceType tableServiceType) {
+    if (writeStats.stream().mapToLong(HoodieWriteStat::getTotalWriteErrors).sum() > 0) {
+      throw new HoodieClusteringException(tableServiceType + " failed to write to files:"
+          + writeStats.stream().filter(s -> s.getTotalWriteErrors() > 0L).map(HoodieWriteStat::getFileId).collect(Collectors.joining(",")));

Review Comment:
   Wondering whether we need a config option to control this behavior? If all the exceptions are resolvable, it's okay to throw the exception directly.
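
   A config-gated variant of the new `handleWriteErrors` could look roughly like the sketch below. This is only an illustration of the reviewer's idea, not Hudi code: the `failOnWriteError` flag, the `WriteStat` stub, and the `WriteErrorHandler` class are all hypothetical stand-ins for a real `ConfigProperty` and `HoodieWriteStat`.

   ```java
   import java.util.List;
   import java.util.stream.Collectors;

   // Hypothetical stand-in for HoodieWriteStat, kept minimal for illustration.
   class WriteStat {
     private final String fileId;
     private final long totalWriteErrors;

     WriteStat(String fileId, long totalWriteErrors) {
       this.fileId = fileId;
       this.totalWriteErrors = totalWriteErrors;
     }

     String getFileId() { return fileId; }
     long getTotalWriteErrors() { return totalWriteErrors; }
   }

   public class WriteErrorHandler {
     // Hypothetical config option: when true, any write error fails the
     // table service; when false, errors are reported but tolerated.
     private final boolean failOnWriteError;

     WriteErrorHandler(boolean failOnWriteError) {
       this.failOnWriteError = failOnWriteError;
     }

     /**
      * Returns null when there are no write errors. When errors exist,
      * either throws (strict mode) or returns the error message so the
      * caller can log it and continue (lenient mode).
      */
     String handleWriteErrors(List<WriteStat> writeStats, String tableServiceType) {
       long errorCount = writeStats.stream()
           .mapToLong(WriteStat::getTotalWriteErrors).sum();
       if (errorCount == 0) {
         return null;
       }
       String failedFiles = writeStats.stream()
           .filter(s -> s.getTotalWriteErrors() > 0L)
           .map(WriteStat::getFileId)
           .collect(Collectors.joining(","));
       String msg = tableServiceType + " failed to write to files:" + failedFiles;
       if (failOnWriteError) {
         throw new RuntimeException(msg);
       }
       return msg;
     }
   }
   ```

   In lenient mode the caller would typically log the returned message and leave the failed file groups for a later retry, rather than aborting the whole clustering commit.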



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
