10183974 commented on issue #1057:
URL: https://github.com/apache/arrow-java/issues/1057#issuecomment-4028022541
> ```java
> ScanOptions options = new ScanOptions(/*batchSize=*/ 65_536);
>
> try (RootAllocator allocator = new RootAllocator(512 * 1024 * 1024);
>      NativeMemoryPool nativePool = NativeMemoryPool.createListenable(
>          DirectReservationListener.instance());
>      FileSystemDatasetFactory datasetFactory = new FileSystemDatasetFactory(
>          allocator, nativePool, FileFormat.PARQUET, hdfsUri);
>      Dataset dataset = datasetFactory.finish();
>      Scanner scanner = dataset.newScan(options);
>      ArrowReader reader = scanner.scanBatches()) {
>
>     long totalRows = 0;
>     int batchCount = 0;
>
>     VectorSchemaRoot root = reader.getVectorSchemaRoot();
>
>     while (reader.loadNextBatch()) {
>         int rowCount = root.getRowCount();
>         totalRows += rowCount;
>         batchCount++;
>         System.out.println("Batch " + batchCount + " - rows: " + rowCount
>             + ", total: " + totalRows);
>     }
>
>     System.out.println("Total batches: " + batchCount);
>     System.out.println("Total rows: " + totalRows);
> } catch (Exception e) {
>     logger.error("Error processing parquet file", e);
> }
> ```
Thank you for the suggestions. I re-ran the application with your recommended code changes. The process consumed more batches than the previous attempt, but it ultimately failed with the same underlying issue.

Here are the detailed results and environment configuration:
1. Environment Configuration

Container Memory Limit: 15 GB

JVM Arguments:

```bash
java \
  -Xms1g -Xmx3g \
  -XX:MaxDirectMemorySize=3g \
  -XX:MaxMetaspaceSize=256m \
  -XX:+UseG1GC \
  -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps \
  -Xloggc:gc.log \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=./heap_dump.hprof \
  -Darrow.memory.debug.allocator=true \
  -jar arrow-test-1.0-SNAPSHOT.jar
```
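Since `DirectReservationListener` books Arrow's native allocations against the JVM direct-memory budget set by `-XX:MaxDirectMemorySize`, it can help to log the direct buffer pool's usage alongside each batch. A minimal plain-JDK sketch (the class name `DirectMemoryProbe` is mine, not part of the application above):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectMemoryProbe {
    // Bytes currently used by the JVM "direct" buffer pool, which is
    // the pool that -XX:MaxDirectMemorySize caps. Returns -1 if the
    // pool is not exposed by this JVM.
    static long directBytesUsed() {
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(pool.getName())) {
                return pool.getMemoryUsed();
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println("direct bytes used: " + directBytesUsed());
    }
}
```

Calling `directBytesUsed()` inside the `while (reader.loadNextBatch())` loop would show whether the reservations grow monotonically across batches or are released between them, which narrows down where the 3g limit is being exhausted.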
2. Execution Behavior

The application processed more batches before crashing, indicating some improvement, but it still hit a hard limit:

- It successfully processed up to Batch 26 (Total rows: ~1,007,544).
- The crash occurred consistently when attempting to reserve memory for subsequent operations.
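As a rough sanity check on those numbers (my arithmetic, not output from the run): the observed average rows per batch is far below the requested `batchSize` of 65,536, which can happen when the Parquet file's row groups are smaller than the batch size, and the per-batch share of the 3g direct-memory budget comes out surprisingly large if nothing is released between batches.

```java
public class BatchMath {
    public static void main(String[] args) {
        long totalRows = 1_007_544L; // rows consumed before the crash
        int batches = 26;            // batches completed before the crash

        // Average rows per batch (integer division).
        long avgRows = totalRows / batches;
        System.out.println("avg rows/batch: " + avgRows); // ~38,751, well under 65,536

        // If none of the native reservations were freed between batches,
        // the 3 GiB direct limit would allow roughly this much per batch.
        long directLimit = 3L * 1024 * 1024 * 1024;
        long perBatchMiB = directLimit / batches / (1024 * 1024);
        System.out.println("budget/batch if never freed: ~" + perBatchMiB + " MiB");
    }
}
```

If the per-batch native footprint is anywhere near that figure, it would point at reservations accumulating across batches rather than a single oversized batch.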
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]