lengkristy opened a new issue, #8610:
URL: https://github.com/apache/iceberg/issues/8610
### Apache Iceberg version
None
### Query engine
None
### Please describe the bug 🐞
Java code like this:

```java
import java.time.OffsetDateTime;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.AppendFiles;
import org.apache.iceberg.FileFormat;
import org.apache.iceberg.PartitionKey;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Snapshot;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.data.GenericAppenderFactory;
import org.apache.iceberg.data.GenericRecord;
import org.apache.iceberg.data.Record;
import org.apache.iceberg.hadoop.HadoopCatalog;
import org.apache.iceberg.io.OutputFileFactory;
import org.apache.iceberg.io.PartitionedFanoutWriter;
import org.apache.iceberg.types.Types;

Configuration configuration = new Configuration();
// this is a local file catalog
HadoopCatalog hadoopCatalog = new HadoopCatalog(configuration, icebergWareHousePath);
TableIdentifier name = TableIdentifier.of("logging", "logs");
Schema schema = new Schema(
    Types.NestedField.required(1, "level", Types.StringType.get()),
    Types.NestedField.required(2, "event_time", Types.TimestampType.withZone()),
    Types.NestedField.required(3, "message", Types.StringType.get()),
    Types.NestedField.optional(4, "call_stack",
        Types.ListType.ofRequired(5, Types.StringType.get())));
PartitionSpec spec = PartitionSpec.builderFor(schema)
    .hour("event_time")
    .identity("level")
    .build();
Table table = hadoopCatalog.createTable(name, schema, spec);

GenericAppenderFactory appenderFactory = new GenericAppenderFactory(table.schema());
int partitionId = 1, taskId = 1;
OutputFileFactory outputFileFactory = OutputFileFactory.builderFor(table, partitionId, taskId)
    .format(FileFormat.PARQUET)
    .build();
final PartitionKey partitionKey = new PartitionKey(table.spec(), table.spec().schema());

// partitionedFanoutWriter automatically partitions each record and keeps
// one open writer per partition
PartitionedFanoutWriter<Record> partitionedFanoutWriter =
    new PartitionedFanoutWriter<Record>(table.spec(), FileFormat.PARQUET,
        appenderFactory, outputFileFactory, table.io(), TARGET_FILE_SIZE_IN_BYTES) {
      @Override
      protected PartitionKey partition(Record record) {
        partitionKey.partition(record);
        return partitionKey;
      }
    };

Random random = new Random();
List<String> levels = Arrays.asList("info", "debug", "error", "warn");
GenericRecord genericRecord = GenericRecord.create(table.schema());
// write 1000 sample records
for (int i = 0; i < 1000; i++) {
  GenericRecord record = genericRecord.copy();
  record.setField("level", levels.get(random.nextInt(levels.size())));
  record.setField("event_time", OffsetDateTime.now());
  record.setField("message", "Iceberg is a great table format");
  record.setField("call_stack", Arrays.asList("NullPointerException"));
  partitionedFanoutWriter.write(record);
}
// close the writer so the data files are finalized before collecting them
partitionedFanoutWriter.close();

// append the completed data files to the table
AppendFiles appendFiles = table.newAppend();
Arrays.stream(partitionedFanoutWriter.dataFiles()).forEach(appendFiles::appendFile);
// commit the new snapshot
Snapshot newSnapshot = appendFiles.apply();
appendFiles.commit();
```
Data loss may occur when writing to Iceberg with high concurrency: when several writers commit to the same table at once, one writer's appended files can go missing from the table.
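For context on why this can happen with a `HadoopCatalog`: its commits rely on an atomic rename of the table metadata file, so on storage where that rename is not atomic, two concurrent commits can both read the same current version and one silently overwrites the other. The following is a hypothetical, self-contained simulation of that unguarded read-modify-write race (it is not Iceberg code; `LostCommitDemo`, `race`, and the temp "version-hint" file are made-up names for illustration):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LostCommitDemo {
    // Two "writers" each read the current version, bump it, and write it
    // back with no locking or compare-and-swap. Returns the final version.
    static int race() throws Exception {
        Path hint = Files.createTempFile("version-hint", ".text");
        Files.writeString(hint, "1");

        CountDownLatch bothRead = new CountDownLatch(2);
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Callable<Integer> writer = () -> {
            int current = Integer.parseInt(Files.readString(hint).trim());
            bothRead.countDown();
            bothRead.await(); // force the race: both threads saw the same version
            Files.writeString(hint, Integer.toString(current + 1)); // unguarded overwrite
            return current + 1;
        };
        pool.invokeAll(List.of(writer, writer)); // blocks until both finish
        pool.shutdown();
        return Integer.parseInt(Files.readString(hint).trim());
    }

    public static void main(String[] args) throws Exception {
        // Two commits ran, but the version advanced only once:
        // one writer's update was silently lost.
        System.out.println("final version = " + race());
    }
}
```

Running this always ends with version 2 even though two commits executed, which is the "lost commit" shape of the data loss reported above; a catalog with an atomic compare-and-swap commit (or a lock manager) prevents it.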