yihua commented on code in PR #13726:
URL: https://github.com/apache/hudi/pull/13726#discussion_r2294773664
##########
hudi-flink-datasource/hudi-flink/src/test/java/org/apache/hudi/table/TestHoodieFileGroupReaderOnFlink.java:
##########
@@ -283,15 +283,15 @@ public void testReadLogFilesOnlyInMergeOnReadTable(RecordMergeMode recordMergeMo
     commitToTable(initialRecords, UPSERT.value(), true, writeConfigs, TRIP_EXAMPLE_SCHEMA);
validateOutputFromFileGroupReader(
getStorageConf(), getBasePath(), false, 1, recordMergeMode,
- initialRecords, initialRecords);
+ initialRecords, initialRecords, new String[]{ORDERING_FIELD_NAME});
Review Comment:
nit: should there be a space between `[]` and `{` based on the checkstyle?
```suggestion
        initialRecords, initialRecords, new String[] {ORDERING_FIELD_NAME});
```
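For context on the nit: both spellings compile to identical arrays, so the question is purely about the project's checkstyle whitespace convention. A minimal standalone illustration (plain Java, no Hudi dependencies; class and field names are invented for this sketch):

```java
public class ArrayInitStyle {
  // Both initializers produce the same array; only the whitespace
  // between "[]" and "{" differs, which is what checkstyle polices.
  static final String[] COMPACT = new String[]{"ts"};
  static final String[] SPACED = new String[] {"ts"};

  public static void main(String[] args) {
    System.out.println(java.util.Arrays.equals(COMPACT, SPACED)); // prints "true"
  }
}
```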
##########
hudi-common/src/test/java/org/apache/hudi/common/testutils/HoodieTestDataGenerator.java:
##########
@@ -1322,17 +1285,13 @@ public static RecordIdentifier clone(RecordIdentifier toClone, String orderingVa
     return new RecordIdentifier(toClone.recordKey, toClone.partitionPath, orderingVal, toClone.riderValue);
   }
-  public static RecordIdentifier fromTripTestPayload(RawTripTestPayload payload) {
-    try {
-      String recordKey = payload.getRowKey();
-      String partitionPath = payload.getPartitionPath();
-      Comparable orderingValue = payload.getOrderingValue();
-      String orderingValStr = orderingValue.toString();
-      String riderValue = payload.getJsonDataAsMap().getOrDefault("rider", "").toString();
-      return new RecordIdentifier(recordKey, partitionPath, orderingValStr, riderValue);
-    } catch (IOException ex) {
-      throw new HoodieIOException("Failed to parse payload", ex);
-    }
+  public static RecordIdentifier fromTripTestPayload(HoodieAvroIndexedRecord record, String[] orderingValues) {
Review Comment:
```suggestion
  public static RecordIdentifier fromTripTestPayload(HoodieAvroIndexedRecord record, String[] orderingFields) {
```
##########
hudi-spark-datasource/hudi-spark-common/src/main/java/org/apache/hudi/DataSourceUtils.java:
##########
@@ -241,15 +212,13 @@ public static HoodieWriteResult doWriteOperation(SparkRDDWriteClient client, Jav
}
}
-  public static HoodieWriteResult doDeleteOperation(SparkRDDWriteClient client, JavaRDD<Tuple2<HoodieKey, scala.Option<HoodieRecordLocation>>> hoodieKeysAndLocations,
+  public static HoodieWriteResult doDeleteOperation(SparkRDDWriteClient client, JavaRDD<Tuple2<HoodieKey, Option<HoodieRecordLocation>>> hoodieKeysAndLocations,
String instantTime, boolean isPrepped) {
if (isPrepped) {
      HoodieRecord.HoodieRecordType recordType = client.getConfig().getRecordMerger().getRecordType();
JavaRDD<HoodieRecord> records = hoodieKeysAndLocations.map(tuple -> {
-      HoodieRecord record = recordType == HoodieRecord.HoodieRecordType.AVRO
-          ? new HoodieAvroRecord(tuple._1, new EmptyHoodieRecordPayload())
-          : new HoodieEmptyRecord(tuple._1, HoodieRecord.HoodieRecordType.SPARK);
+      HoodieRecord record = new HoodieEmptyRecord(tuple._1, recordType);
Review Comment:
Is `EmptyHoodieRecordPayload` going to be replaced by `HoodieEmptyRecord` as much as possible? For custom payload implementations, are the deletes also transformed into `HoodieEmptyRecord` instances?
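To make the question concrete: the diff collapses an engine-specific branch (a payload-based delete marker for AVRO vs. an empty record for SPARK) into a single empty-record constructor parameterized by record type. A simplified, hypothetical sketch of that pattern, with invented class and method names rather than Hudi's actual API (plain Java, no Hudi dependencies):

```java
import java.util.Optional;

// Hypothetical sketch: a delete is modeled as a record that carries a key and
// a record type but no data. Readers that find no data treat the record as a
// delete, so no payload class is needed for either engine type.
public class EmptyRecordSketch {
  enum RecordType { AVRO, SPARK }

  static class EmptyRecord {
    final String key;
    final RecordType type;

    EmptyRecord(String key, RecordType type) {
      this.key = key;
      this.type = type;
    }

    // An empty record has no payload to return.
    Optional<Object> getData() {
      return Optional.empty();
    }
  }

  // Before the change, callers branched on record type to choose between a
  // payload-based sentinel and an empty record; after it, one constructor
  // covers both types uniformly.
  static EmptyRecord deleteFor(String key, RecordType type) {
    return new EmptyRecord(key, type);
  }

  public static void main(String[] args) {
    EmptyRecord avroDelete = deleteFor("key1", RecordType.AVRO);
    EmptyRecord sparkDelete = deleteFor("key2", RecordType.SPARK);
    System.out.println(avroDelete.getData().isPresent());  // prints "false"
    System.out.println(sparkDelete.getData().isPresent()); // prints "false"
  }
}
```

The open question in the comment above is whether this uniform representation also applies when a custom payload class is configured, or only on the prepped-delete path shown in the diff.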
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]