iemejia commented on a change in pull request #14229:
URL: https://github.com/apache/beam/pull/14229#discussion_r594343385



##########
File path: sdks/java/io/parquet/src/main/java/org/apache/beam/sdk/io/parquet/ParquetIO.java
##########
@@ -762,83 +762,86 @@ public void processElement(
               conf, new Schema.Parser().parse(requestSchemaString));
         }
         ParquetReadOptions options = HadoopReadOptions.builder(conf).build();
-        ParquetFileReader reader =
-            ParquetFileReader.open(new BeamParquetInputFile(file.openSeekable()), options);
-        Filter filter = checkNotNull(options.getRecordFilter(), "filter");
-        Configuration hadoopConf = ((HadoopReadOptions) options).getConf();
-        FileMetaData parquetFileMetadata = reader.getFooter().getFileMetaData();
-        MessageType fileSchema = parquetFileMetadata.getSchema();
-        Map<String, String> fileMetadata = parquetFileMetadata.getKeyValueMetaData();
-        ReadSupport.ReadContext readContext =
-            readSupport.init(
-                new InitContext(
-                    hadoopConf, Maps.transformValues(fileMetadata, ImmutableSet::of), fileSchema));
-        ColumnIOFactory columnIOFactory = new ColumnIOFactory(parquetFileMetadata.getCreatedBy());
-
-        RecordMaterializer<GenericRecord> recordConverter =
-            readSupport.prepareForRead(hadoopConf, fileMetadata, fileSchema, readContext);
-        reader.setRequestedSchema(readContext.getRequestedSchema());
-        MessageColumnIO columnIO =
-            columnIOFactory.getColumnIO(readContext.getRequestedSchema(), fileSchema, true);
-        long currentBlock = tracker.currentRestriction().getFrom();
-        for (int i = 0; i < currentBlock; i++) {
-          reader.skipNextRowGroup();
-        }
-        while (tracker.tryClaim(currentBlock)) {
-          PageReadStore pages = reader.readNextRowGroup();
-          LOG.debug("block {} read in memory. row count = {}", currentBlock, pages.getRowCount());
-          currentBlock += 1;
-          RecordReader<GenericRecord> recordReader =
-              columnIO.getRecordReader(
-                  pages, recordConverter, options.useRecordFilter() ? filter : FilterCompat.NOOP);
-          long currentRow = 0;
-          long totalRows = pages.getRowCount();
-          while (currentRow < totalRows) {
-            try {
-              GenericRecord record;
-              currentRow += 1;
+        try (ParquetFileReader reader =

Review comment:
       Yes, it is only that: the reader is now scoped by try-with-resources so it gets closed.
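
       For readers following the thread, here is a minimal sketch of the
       try-with-resources shape this hunk moves to. It reuses the names from
       the diff above (file, options, tracker, BeamParquetInputFile), and the
       row-group loop body is abbreviated, so treat it as an illustration
       rather than the final code:

           try (ParquetFileReader reader =
               ParquetFileReader.open(new BeamParquetInputFile(file.openSeekable()), options)) {
             long currentBlock = tracker.currentRestriction().getFrom();
             for (int i = 0; i < currentBlock; i++) {
               reader.skipNextRowGroup(); // fast-forward to the claimed restriction
             }
             while (tracker.tryClaim(currentBlock)) {
               PageReadStore pages = reader.readNextRowGroup();
               // ... materialize the records in this row group, as in the original loop ...
               currentBlock += 1;
             }
           } // reader.close() runs here, even if reading a row group threw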

##########
File path: sdks/java/io/parquet/src/main/java/org/apache/beam/sdk/io/parquet/ParquetIO.java
##########
@@ -1012,14 +1016,15 @@ public void processElement(ProcessContext processContext) throws Exception {
         try (ParquetReader<GenericRecord> reader = builder.build()) {
           GenericRecord read;
           while ((read = reader.read()) != null) {
-            processContext.output(parseFn.apply(read));
+            receiver.output(parseFn.apply(read));

Review comment:
       Shorter code (one line less), plus it uses the nicer Beam style of injecting an OutputReceiver instead of going through the ProcessContext.
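
       To illustrate the style difference with a hedged, hypothetical DoFn
       (not the actual ParquetIO code): Beam injects @ProcessElement
       parameters, so the receiver variant needs no context object at all.

           import org.apache.avro.generic.GenericRecord;
           import org.apache.beam.sdk.transforms.DoFn;

           // Both DoFns are equivalent; they differ only in how output is emitted.
           class OldStyleFn extends DoFn<GenericRecord, String> {
             @ProcessElement
             public void processElement(ProcessContext c) {
               c.output(c.element().toString()); // element and output both go through the context
             }
           }

           class NewStyleFn extends DoFn<GenericRecord, String> {
             @ProcessElement
             public void processElement(@Element GenericRecord record, OutputReceiver<String> out) {
               out.output(record.toString()); // only the pieces the method needs are injected
             }
           }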



