phet commented on code in PR #4089:
URL: https://github.com/apache/gobblin/pull/4089#discussion_r1900552161
##########
gobblin-temporal/src/main/java/org/apache/gobblin/temporal/ddm/activity/impl/GenerateWorkUnitsImpl.java:
##########
@@ -127,6 +130,9 @@ public GenerateWorkUnitsResult generateWorkUnits(Properties jobProps, EventSubmi
     int numSizeSummaryQuantiles = getConfiguredNumSizeSummaryQuantiles(jobState);
     WorkUnitsSizeSummary wuSizeSummary = digestWorkUnitsSize(workUnits).asSizeSummary(numSizeSummaryQuantiles);
     log.info("Discovered WorkUnits: {}", wuSizeSummary);
+    // IMPORTANT: send prior to `writeWorkUnits`, so the volume of work discovered (and bin packed) gets durably measured.  even if serialization were to
+    // exceed available memory and this activity execution were to fail, a subsequent re-attempt would know the amount of work, to guide re-config/attempt
+    createWorkPreparedSizeDistillationTimer(wuSizeSummary, eventSubmitterContext).stop();
Review Comment:
having WU planning separate from WU serialization is worth considering: it would allow a re-attempt of only the serialization, without rerunning the planning. there's no concern with sending the GTE again on a re-attempt - that's not the motivation. rather, the benefit would be to expedite the re-attempt by skipping a repeat of already-successful WU planning.

to separate into two activities, we'd need to succeed in persisting some intermediate form of the WU planning, so the planning activity could "pass input" to the serialization activity, since the two won't execute together - possibly not even on the same host. that intermediate form clearly ought to be more likely to succeed than serializing all the WUs themselves - the very failure we're trying to address.

this major design choice awaits us if we decide to pursue such larger rework.
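
for illustration only, a rough sketch of the shape such a split could take, against the Temporal Java SDK's `@ActivityInterface`. all names below (`PlanWorkUnits`, `SerializeWorkUnits`, `WorkUnitsPlanRef`, `planUri`) are hypothetical, not existing Gobblin or PR APIs; the key assumption is that the planning activity can persist a compact plan somewhere durable and hand back only a small reference.

```java
// hypothetical sketch only - none of these types exist in gobblin today.
// assumes the Temporal Java SDK (io.temporal:temporal-sdk) on the classpath;
// in practice each type would live in its own file.
import io.temporal.activity.ActivityInterface;
import io.temporal.activity.ActivityMethod;

import java.util.Properties;

/** compact, durable handle to the persisted WU planning output (e.g. a state-store path). */
class WorkUnitsPlanRef {
  private String planUri;

  public WorkUnitsPlanRef() { }  // no-arg ctor for payload (de)serialization
  public WorkUnitsPlanRef(String planUri) { this.planUri = planUri; }

  public String getPlanUri() { return planUri; }
  public void setPlanUri(String planUri) { this.planUri = planUri; }
}

/** activity 1: discover + bin-pack WUs, persist a compact intermediate plan, return its handle. */
@ActivityInterface
interface PlanWorkUnits {
  @ActivityMethod
  WorkUnitsPlanRef planWorkUnits(Properties jobProps);
}

/** activity 2: load the persisted plan and serialize the full WUs; retryable on its own. */
@ActivityInterface
interface SerializeWorkUnits {
  @ActivityMethod
  int serializeWorkUnits(Properties jobProps, WorkUnitsPlanRef planRef);
}
```

the workflow would call `planWorkUnits` once and, if serialization fails (e.g. OOM), retry only `serializeWorkUnits` with the same `WorkUnitsPlanRef` - which is exactly the "skip repeating successful WU planning" benefit described above.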