Github user viirya commented on the issue:

    https://github.com/apache/spark/pull/13371
  
    @liancheng 
    
    I reran the benchmark, excluding the time spent writing the Parquet file:
    
        test("Benchmark for Parquet") {
          val N = 1 << 50
          withParquetTable((0 until N).map(i => (101, i)), "t") {
            val benchmark = new Benchmark("Parquet reader", N)
            benchmark.addCase("reading Parquet file", 10) { iter =>
              sql("SELECT _1 FROM t WHERE t._1 < 100").collect()
            }
            benchmark.run()
          }
        }
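    One caveat about `val N = 1 << 50` above: since the literal `1` is an `Int`, the JVM masks the shift count to its low five bits, so the expression evaluates to `1 << (50 & 31)` = `1 << 18` = 262144 rows, not 2^50. A quick sketch of this behavior (the `Long` variant is shown only for contrast):

```scala
// Int shifts on the JVM use only the low 5 bits of the shift count,
// so 1 << 50 is really 1 << 18.
val n: Int = 1 << 50
println(n)               // 262144
println(n == (1 << 18))  // true

// A Long shift masks to the low 6 bits, so 1L << 50 really is 2^50.
val big: Long = 1L << 50
println(big)             // 1125899906842624
```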
    
    By default, `withParquetTable` runs the test for both the vectorized and non-vectorized readers. I let it run only the vectorized reader.
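    Without modifying `withParquetTable` itself, roughly the same effect can be had by setting the vectorized-reader flag directly (a sketch, assuming a `spark` SparkSession is in scope):

```scala
// Force the vectorized Parquet reader for the benchmarked query.
spark.conf.set("spark.sql.parquet.enableVectorizedReader", "true")
sql("SELECT _1 FROM t WHERE t._1 < 100").collect()
```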
    
    After this patch:
    
        Java HotSpot(TM) 64-Bit Server VM 1.8.0_25-b17 on Linux 3.13.0-57-generic
        Westmere E56xx/L56xx/X56xx (Nehalem-C)
        Parquet reader:                          Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
        ------------------------------------------------------------------------------------------------
        reading Parquet file                            76 /   88          3.4        291.0       1.0X
    
    Before this patch:
    
        Java HotSpot(TM) 64-Bit Server VM 1.8.0_25-b17 on Linux 3.13.0-57-generic
        Westmere E56xx/L56xx/X56xx (Nehalem-C)
        Parquet reader:                          Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
        ------------------------------------------------------------------------------------------------
        reading Parquet file                            81 /   91          3.2        310.2       1.0X
    
    Next, I ran the benchmark for the non-pushdown case, using the same benchmark code but with the pushdown configuration disabled.
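    For reference, the pushdown toggle I mean here is the standard Spark SQL flag (a sketch, assuming a `spark` SparkSession is in scope):

```scala
// Disable Parquet filter pushdown so the predicate is evaluated by Spark
// rather than by the Parquet reader.
spark.conf.set("spark.sql.parquet.filterPushdown", "false")
```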
    
    After this patch:
    
        Parquet reader:                          Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
        ------------------------------------------------------------------------------------------------
        reading Parquet file                            80 /   95          3.3        306.5       1.0X
    
    Before this patch:
    
        Parquet reader:                          Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
        ------------------------------------------------------------------------------------------------
        reading Parquet file                            80 /  103          3.3        306.7       1.0X
    
    For the non-pushdown case, judging from the results, I think this patch doesn't affect the normal code path.

