[ https://issues.apache.org/jira/browse/IMPALA-9228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Zoltán Borók-Nagy updated IMPALA-9228:
--------------------------------------
Description:
The ORC scanner uses an external library to read ORC files. The library reads the file contents into its own memory representation, which is a vectorized representation similar to the Arrow format.

Impala needs to convert the ORC row batch to an Impala row batch. Currently the conversion happens row-wise, value by value, via virtual function calls:
[https://github.com/apache/impala/blob/85425b81f04c856d7d5ec375242303f78ec7964e/be/src/exec/hdfs-orc-scanner.cc#L671]
[https://github.com/apache/impala/blob/85425b81f04c856d7d5ec375242303f78ec7964e/be/src/exec/orc-column-readers.cc#L352]

Instead of this approach, it could work similarly to the Parquet scanner, which fills the columns one by one into a scratch batch and then evaluates the conjuncts on the scratch batch. For more details see HdfsParquetScanner::AssembleRows():
[https://github.com/apache/impala/blob/85425b81f04c856d7d5ec375242303f78ec7964e/be/src/exec/parquet/hdfs-parquet-scanner.cc#L1077-L1088]

This way we would need far fewer virtual function calls, and the memory reads/writes would be much more localized and predictable. (A sketch of both conversion strategies follows after the issue details below.)

was:
The ORC scanner uses an external library to read ORC files. The library reads the file contents into its own memory representation, which is a vectorized representation similar to the Arrow format.

Impala needs to convert the ORC row batch to an Impala row batch. Currently the conversion happens row-wise and column-by-column via a virtual function call:
[https://github.com/apache/impala/blob/85425b81f04c856d7d5ec375242303f78ec7964e/be/src/exec/hdfs-orc-scanner.cc#L671]
[https://github.com/apache/impala/blob/85425b81f04c856d7d5ec375242303f78ec7964e/be/src/exec/orc-column-readers.cc#L352]

Instead of this approach, it could work similarly to the Parquet scanner, which fills the columns one by one into a scratch batch and then evaluates the conjuncts on the scratch batch. For more details see HdfsParquetScanner::AssembleRows():
[https://github.com/apache/impala/blob/85425b81f04c856d7d5ec375242303f78ec7964e/be/src/exec/parquet/hdfs-parquet-scanner.cc#L1077-L1088]

This way we would need far fewer virtual function calls, and the memory reads/writes would be much more localized and predictable.


> ORC scanner could be vectorized
> -------------------------------
>
>                 Key: IMPALA-9228
>                 URL: https://issues.apache.org/jira/browse/IMPALA-9228
>             Project: IMPALA
>          Issue Type: Improvement
>            Reporter: Zoltán Borók-Nagy
>            Priority: Major
>              Labels: orc
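Below is a minimal, self-contained C++ sketch contrasting the two conversion strategies. It is an illustration only, not Impala's actual code: every name in it (OrcColumn, Tuple, OrcColumnReader, TransferRowWise, ScratchBatch, TransferColumnWise) is a hypothetical stand-in; the real code is in the hdfs-orc-scanner.cc, orc-column-readers.cc, and hdfs-parquet-scanner.cc files linked above.

{code:cpp}
// Hypothetical stand-ins only; the real code lives in the files linked above.
#include <cstdint>
#include <cstdio>
#include <memory>
#include <vector>

// Stand-in for one column of an orc::ColumnVectorBatch, simplified to a
// single non-nullable BIGINT column.
struct OrcColumn {
  std::vector<int64_t> data;
};

// Stand-in for an Impala tuple: one slot per column.
struct Tuple {
  std::vector<int64_t> slots;
};

// ---- Current shape: row-wise, one virtual call per value ----

class OrcColumnReader {
 public:
  explicit OrcColumnReader(int slot_idx) : slot_idx_(slot_idx) {}
  virtual ~OrcColumnReader() = default;

  // In the current scanner, an analogous method is reached through
  // virtual dispatch once per row per column.
  virtual void ReadValue(const OrcColumn& col, int row, Tuple* tuple) const {
    tuple->slots[slot_idx_] = col.data[row];
  }

 private:
  const int slot_idx_;
};

void TransferRowWise(
    const std::vector<OrcColumn>& cols,
    const std::vector<std::unique_ptr<OrcColumnReader>>& readers,
    std::vector<Tuple>* tuples) {
  for (size_t row = 0; row < tuples->size(); ++row) {
    for (size_t c = 0; c < readers.size(); ++c) {
      // One virtual call per value, and the reads hop across columns,
      // so the memory traffic is neither localized nor predictable.
      readers[c]->ReadValue(cols[c], static_cast<int>(row), &(*tuples)[row]);
    }
  }
}

// ---- Proposed shape: column-wise into a scratch batch ----

// Stand-in for a Parquet-style scratch batch: one contiguous array per column.
struct ScratchBatch {
  std::vector<std::vector<int64_t>> columns;
  int num_rows = 0;
};

void TransferColumnWise(const std::vector<OrcColumn>& cols, int num_rows,
                        ScratchBatch* scratch) {
  scratch->columns.resize(cols.size());
  scratch->num_rows = num_rows;
  for (size_t c = 0; c < cols.size(); ++c) {
    // One tight loop per column: sequential reads and writes, no virtual
    // call per value (a single dispatch per column per batch would pick
    // this loop in a real reader hierarchy).
    scratch->columns[c].assign(cols[c].data.begin(),
                               cols[c].data.begin() + num_rows);
  }
  // The conjuncts would then be evaluated over the scratch batch and the
  // surviving rows compacted into the output row batch, mirroring
  // HdfsParquetScanner::AssembleRows().
}

int main() {
  const std::vector<OrcColumn> cols = {{{1, 2, 3}}, {{10, 20, 30}}};

  std::vector<std::unique_ptr<OrcColumnReader>> readers;
  readers.push_back(std::make_unique<OrcColumnReader>(0));
  readers.push_back(std::make_unique<OrcColumnReader>(1));
  std::vector<Tuple> tuples(3, Tuple{std::vector<int64_t>(2)});
  TransferRowWise(cols, readers, &tuples);

  ScratchBatch scratch;
  TransferColumnWise(cols, 3, &scratch);

  std::printf("row-wise: %lld  column-wise: %lld\n",
              static_cast<long long>(tuples[1].slots[1]),
              static_cast<long long>(scratch.columns[1][1]));
  return 0;
}
{code}

The point of the column-wise shape is that the per-value cost drops from a virtual call to a plain copy in a tight per-column loop that the compiler can unroll or vectorize; that is the "far fewer virtual function calls" and "localized, predictable memory reads/writes" benefit the description refers to.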
--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org