HyukjinKwon commented on a change in pull request #29542:
URL: https://github.com/apache/spark/pull/29542#discussion_r569022628



##########
File path: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
##########
@@ -92,67 +88,23 @@
   public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptContext)
       throws IOException, InterruptedException {
     Configuration configuration = taskAttemptContext.getConfiguration();
-    ParquetInputSplit split = (ParquetInputSplit)inputSplit;
+    FileSplit split = (FileSplit) inputSplit;
     this.file = split.getPath();
-    long[] rowGroupOffsets = split.getRowGroupOffsets();
-
-    ParquetMetadata footer;
-    List<BlockMetaData> blocks;
 
-    // if task.side.metadata is set, rowGroupOffsets is null
-    if (rowGroupOffsets == null) {
-      // then we need to apply the predicate push down filter
-      footer = readFooter(configuration, file, range(split.getStart(), split.getEnd()));
-      MessageType fileSchema = footer.getFileMetaData().getSchema();
-      FilterCompat.Filter filter = getFilter(configuration);
-      blocks = filterRowGroups(filter, footer.getBlocks(), fileSchema);
-    } else {

Review comment:
       Okay, removal is fine by me.

##########
File path: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
##########
@@ -199,12 +151,21 @@ public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptCont
    */
   protected void initialize(String path, List<String> columns) throws IOException {
     Configuration config = new Configuration();
-    config.set("spark.sql.parquet.binaryAsString", "false");
-    config.set("spark.sql.parquet.int96AsTimestamp", "false");
+    config.setBoolean(SQLConf.PARQUET_BINARY_AS_STRING().key(), false);
+    config.setBoolean(SQLConf.PARQUET_INT96_AS_TIMESTAMP().key(), false);
 
     this.file = new Path(path);
     long length = this.file.getFileSystem(config).getFileStatus(this.file).getLen();
-    ParquetMetadata footer = readFooter(config, file, range(0, length));
+    ParquetReadOptions options = HadoopReadOptions
+      .builder(config)
+      .withRange(0, length)
+      .build();
+
+    ParquetMetadata footer;
+    try (ParquetFileReader reader = ParquetFileReader

Review comment:
       The change seems okay, but can we do a microbenchmark to confirm there's no performance impact here? I am still confused about why I concluded that using the new Parquet API caused a performance regression when I tried it myself a while ago ... (sorry, I forgot all about it).
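A microbenchmark along the lines this comment asks for could be sketched with a minimal hand-rolled harness like the one below (in practice Spark's own `Benchmark` utility or JMH would be the more rigorous choice). The class name and the two `Runnable` workloads are hypothetical placeholders: in the real comparison they would wrap the deprecated `readFooter(...)` call and the new `ParquetFileReader` path against the same Parquet file.

```java
public class FooterReadBenchmark {

  // Times a task after a warmup phase; returns average nanoseconds per call.
  static long benchmark(int warmupIters, int measuredIters, Runnable task) {
    for (int i = 0; i < warmupIters; i++) {
      task.run(); // give the JIT a chance to compile the hot path first
    }
    long start = System.nanoTime();
    for (int i = 0; i < measuredIters; i++) {
      task.run();
    }
    return (System.nanoTime() - start) / measuredIters;
  }

  public static void main(String[] args) {
    // Hypothetical placeholder workloads: the real benchmark would call the
    // old readFooter(config, file, range(0, length)) path in one Runnable and
    // the new ParquetFileReader-based path in the other, on the same file.
    Runnable oldApi = () -> { };
    Runnable newApi = () -> { };
    System.out.println("old API: " + benchmark(1_000, 10_000, oldApi) + " ns/op");
    System.out.println("new API: " + benchmark(1_000, 10_000, newApi) + " ns/op");
  }
}
```

Since footer reading is I/O-bound and runs once per split, even a rough harness like this should make any meaningful regression visible; averaging over many iterations and discarding warmup rounds keeps JIT compilation out of the measurement.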



