lurnagao-dahua commented on code in PR #10661:
URL: https://github.com/apache/iceberg/pull/10661#discussion_r1669863853
##########
mr/src/main/java/org/apache/iceberg/mr/mapreduce/IcebergInputFormat.java:
##########
@@ -144,21 +147,32 @@ public List<InputSplit> getSplits(JobContext context) {
InputFormatConfig.InMemoryDataModel model =
conf.getEnum(
InputFormatConfig.IN_MEMORY_DATA_MODEL,
InputFormatConfig.InMemoryDataModel.GENERIC);
-    try (CloseableIterable<CombinedScanTask> tasksIterable = scan.planTasks()) {
- Table serializableTable = SerializableTable.copyOf(table);
- tasksIterable.forEach(
- task -> {
- if (applyResidual
- && (model == InputFormatConfig.InMemoryDataModel.HIVE
- || model == InputFormatConfig.InMemoryDataModel.PIG)) {
-          // TODO: We do not support residual evaluation for HIVE and PIG in memory data model
-          // yet
- checkResiduals(task);
- }
- splits.add(new IcebergSplit(serializableTable, conf, task));
- });
-    } catch (IOException e) {
-      throw new UncheckedIOException(String.format("Failed to close table scan: %s", scan), e);
+ final ExecutorService workerPool =
+ ThreadPools.newWorkerPool(
+ "iceberg-plan-worker-pool",
+ conf.getInt(
+ SystemConfigs.WORKER_THREAD_POOL_SIZE.propertyKey(),
+ ThreadPools.WORKER_THREAD_POOL_SIZE));
+ try {
+ scan = scan.planWith(workerPool);
Review Comment:
Hi, my test case runs a simple query as a fetch task, with
`hive.fetch.task.conversion=more`.
With `hive.fetch.task.conversion=none`, the query runs as an MR task and there
is no problem (I think the same goes for the Tez task).
When the query runs as a task, each task has only one user, so this issue does
not occur.
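
The concern discussed in this thread — a worker pool created per planning call in a long-lived process such as HiveServer2 — can be sketched in plain JDK terms. This is an illustrative sketch only: Iceberg's actual pool comes from `ThreadPools.newWorkerPool`, and the class name, pool size, and task payloads below are assumptions, not the PR's code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the pattern: create a bounded worker pool for parallel split
// planning, then shut it down in a finally block so repeated queries in the
// same JVM (e.g. HiveServer2 fetch tasks) do not leak threads.
public class PlanWorkerPoolSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService workerPool = Executors.newFixedThreadPool(4);
    try {
      // Stand-in for parallel split planning: submit independent tasks
      // and collect their results in submission order.
      List<Future<String>> futures = new ArrayList<>();
      for (int i = 0; i < 8; i++) {
        final int task = i;
        futures.add(workerPool.submit(() -> "split-" + task));
      }
      for (Future<String> f : futures) {
        System.out.println(f.get());
      }
    } finally {
      // Without this, each query leaves a live pool behind in a
      // long-running server process.
      workerPool.shutdown();
    }
  }
}
```

In an MR or Tez task the JVM exits when the task finishes, which is why the leak only shows up in the fetch-task path.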
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]