alamb opened a new issue, #9964:
URL: https://github.com/apache/arrow-datafusion/issues/9964

   ### Is your feature request related to a problem or challenge?
   
   The `ListingTable` works quite well in practice, but like all software it could be made better. I am writing up this ticket to enumerate some areas for improvement, in the hope that people who are interested can collaborate / coordinate their efforts.
   
   ### Background
   
   DataFusion has a [`ListingTable`](https://github.com/apache/arrow-datafusion/blob/2dad90425bacb98a3c2a4214faad53850c93104e/datafusion/core/src/datasource/listing/table.rs#L443-L508) that handles reading tables stored as one or more files in a "hive partitioned" directory structure.
   
   So, for example, given files like this:
   ```
   /path/to/my_table/file1.parquet
   /path/to/my_table/file2.parquet
   /path/to/my_table/file3.parquet
   ```
   
   You can create a table with a command like
   
   ```sql
   CREATE EXTERNAL TABLE my_table
   LOCATION '/path/to/my_table'
   ```
   
   And the `ListingTable` will handle figuring out the schema and running queries against those files as though they were a single table.
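   For completeness, the equivalent through the Rust API looks roughly like this (a minimal sketch; the `STORED AS PARQUET` clause and the paths are illustrative):

   ```rust
   use datafusion::prelude::*;

   #[tokio::main]
   async fn main() -> datafusion::error::Result<()> {
       let ctx = SessionContext::new();
       // Register the directory of files as a single table
       ctx.sql(
           "CREATE EXTERNAL TABLE my_table \
            STORED AS PARQUET \
            LOCATION '/path/to/my_table/'",
       )
       .await?;
       // Query across all the files as though they were one table
       ctx.sql("SELECT count(*) FROM my_table").await?.show().await?;
       Ok(())
   }
   ```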
   
   
   
   ### Describe the solution you'd like
   
   Here are some things I suspect could be improved:
   
   
   ## All Formats
   ### Object store list caching
   For large tables (many files) on remote stores, the actual object store call to [`LIST`](https://github.com/apache/arrow-datafusion/blob/2b0a7db0ce64950864e07edaddfa80756fe0ffd5/datafusion/core/src/datasource/listing/url.rs#L218) may be non-trivially expensive and thus could perhaps be cached.
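   To make the idea concrete, here is a minimal sketch of what such a cache might look like (the `FileEntry` type, `ListCache` name, and TTL policy are all illustrative, not existing DataFusion APIs):

   ```rust
   use std::collections::HashMap;
   use std::sync::Mutex;
   use std::time::{Duration, Instant};

   /// Stand-in for the metadata returned by an object store LIST call
   #[derive(Clone)]
   struct FileEntry {
       path: String,
       size: u64,
   }

   /// Caches LIST results per prefix, with a TTL to bound staleness
   struct ListCache {
       ttl: Duration,
       entries: Mutex<HashMap<String, (Instant, Vec<FileEntry>)>>,
   }

   impl ListCache {
       fn new(ttl: Duration) -> Self {
           Self {
               ttl,
               entries: Mutex::new(HashMap::new()),
           }
       }

       /// Return the cached listing for `prefix`, or call `list` and cache it
       fn get_or_list(
           &self,
           prefix: &str,
           list: impl FnOnce() -> Vec<FileEntry>,
       ) -> Vec<FileEntry> {
           let mut map = self.entries.lock().unwrap();
           if let Some((fetched_at, files)) = map.get(prefix) {
               // Serve from the cache while the entry is still fresh
               if fetched_at.elapsed() < self.ttl {
                   return files.clone();
               }
           }
           let files = list();
           map.insert(prefix.to_string(), (Instant::now(), files.clone()));
           files
       }
   }
   ```

   The hard part is invalidation rather than the cache itself: a TTL is a blunt instrument, and tables that are appended to frequently would need something smarter.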
   
   ## Parquet Specific
   ### Metadata caching
   
   `ListingTable` ([code link](https://github.com/apache/arrow-datafusion/blob/2dad90425bacb98a3c2a4214faad53850c93104e/datafusion/core/src/datasource/listing/table.rs#L829-L857)) prunes files based on statistics, and then the `ParquetExec` itself ([link](https://github.com/apache/arrow-datafusion/blob/2dad90425bacb98a3c2a4214faad53850c93104e/datafusion/core/src/datasource/physical_plan/parquet/mod.rs#L527-L569)) prunes again, dropping row groups and data pages based on the same metadata. Today each pass reads the Parquet footer separately.
   
   ### IO granularity
   
   I have heard it said that the DataFusion `ParquetExec` reader reads a page at a time -- this is fine if the parquet file is on local disk, but it is likely quite inefficient if each page must be fetched with an individual remote object store request. This assertion needs to be researched, but if true we could make queries on remote parquet files much faster by making fewer, larger requests.
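   If the assertion holds, one standard mitigation is to coalesce nearby byte ranges into a single request, trading some over-read for fewer round trips (the `object_store` crate's `get_ranges` does something along these lines). A minimal sketch of the idea, with the `max_gap` threshold as an illustrative parameter:

   ```rust
   use std::ops::Range;

   /// Merge byte ranges whose gaps are smaller than `max_gap`, so several
   /// small page reads become one larger object store request
   fn coalesce_ranges(mut ranges: Vec<Range<u64>>, max_gap: u64) -> Vec<Range<u64>> {
       ranges.sort_by_key(|r| r.start);
       let mut merged: Vec<Range<u64>> = Vec::new();
       for r in ranges {
           match merged.last_mut() {
               // Close enough to the previous range: extend it instead
               Some(last) if r.start <= last.end + max_gap => {
                   last.end = last.end.max(r.end);
               }
               _ => merged.push(r),
           }
       }
       merged
   }

   fn main() {
       // Two page reads within 1 KiB of each other become one request
       let pages = vec![0..4_096, 5_000..9_000, 100_000..104_096];
       assert_eq!(
           coalesce_ranges(pages, 1_024),
           vec![0..9_000, 100_000..104_096]
       );
   }
   ```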
   
   ### Describe alternatives you've considered
   
   @Ted-Jiang added some caching APIs in https://github.com/apache/arrow-datafusion/pull/7570 (https://github.com/apache/arrow-datafusion/blob/2b0a7db0ce64950864e07edaddfa80756fe0ffd5/datafusion/execution/src/cache/mod.rs), but there aren't any default implementations in DataFusion, so the metadata is read multiple times.
   
   Maybe we can add a default implementation of the caches in `SessionContext` with a simple policy (like LRU with some maximum size).
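   As a sketch of what that simple policy could look like (using the third-party `lru` crate purely for illustration; none of these names exist in DataFusion today):

   ```rust
   use std::num::NonZeroUsize;
   use std::sync::Mutex;

   use lru::LruCache;

   /// A bounded cache with LRU eviction: the kind of "simple policy" a
   /// default implementation in SessionContext could ship with
   struct BoundedCache<V> {
       inner: Mutex<LruCache<String, V>>,
   }

   impl<V: Clone> BoundedCache<V> {
       fn new(max_entries: usize) -> Self {
           let cap = NonZeroUsize::new(max_entries).expect("capacity must be non-zero");
           Self {
               inner: Mutex::new(LruCache::new(cap)),
           }
       }

       fn get(&self, key: &str) -> Option<V> {
           // `get` also bumps the entry to most-recently-used
           self.inner.lock().unwrap().get(key).cloned()
       }

       fn put(&self, key: String, value: V) {
           // Evicts the least-recently-used entry once `max_entries` is reached
           self.inner.lock().unwrap().put(key, value);
       }
   }
   ```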
   
   Another potential way to improve performance is to cache the decoded metadata from the Parquet footer, rather than reading it once to prune files and then again to prune row groups / pages.
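   A rough sketch of that idea, keying the decoded footer by file path (the cache type is hypothetical; `ParquetMetaData` is the real decoded-footer type from the `parquet` crate):

   ```rust
   use std::collections::HashMap;
   use std::sync::{Arc, Mutex};

   use parquet::file::metadata::ParquetMetaData;

   /// Hypothetical cache so that file-level pruning and the later
   /// row-group / page pruning in ParquetExec share one decoded footer
   #[derive(Default)]
   struct ParquetMetadataCache {
       inner: Mutex<HashMap<String, Arc<ParquetMetaData>>>,
   }

   impl ParquetMetadataCache {
       fn get(&self, path: &str) -> Option<Arc<ParquetMetaData>> {
           self.inner.lock().unwrap().get(path).cloned()
       }

       fn put(&self, path: &str, metadata: Arc<ParquetMetaData>) {
           self.inner.lock().unwrap().insert(path.to_string(), metadata);
       }
   }
   ```

   Handing out `Arc<ParquetMetaData>` means the second pruning pass pays neither the fetch nor the decode cost.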
   
   ### Additional context
   
   @matthewmturner mentioned interest in improving listing table performance: 
https://github.com/apache/arrow-datafusion/issues/9899#issuecomment-2030139830
   
   Note that we don't use `ListingTable` in InfluxDB, for some of the reasons described above.

