[ 
https://issues.apache.org/jira/browse/FLINK-11899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17014906#comment-17014906
 ] 

Zhenqiu Huang commented on FLINK-11899:
---------------------------------------

[~lzljs3620320] 
I want to leverage the existing
org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedPrimitiveColumnReader. So
ParquetColumnarRowSplitReader will need a wrapper around the Hive column vectors,
as is done for ORC (see the sketch below). I'm not sure what the most efficient
way is to read the data directly into Flink vectors.
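
For reference, a minimal sketch of the kind of wrapper I have in mind. As I understand it, VectorizedPrimitiveColumnReader fills Hive ColumnVector instances (LongColumnVector etc.), so the wrapper only delegates index lookups; FlinkLongColumnVector below is a hypothetical stand-in for the blink-runtime vector abstraction, not the actual interface.

{code:java}
// Sketch only: FlinkLongColumnVector is a hypothetical stand-in for the
// blink-runtime long vector interface; real names/signatures may differ.
import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;

interface FlinkLongColumnVector {
    boolean isNullAt(int i);
    long getLong(int i);
}

/** Exposes a Hive LongColumnVector through a Flink-style view without copying data. */
class HiveLongColumnVectorWrapper implements FlinkLongColumnVector {
    private final LongColumnVector hiveVector;

    HiveLongColumnVectorWrapper(LongColumnVector hiveVector) {
        this.hiveVector = hiveVector;
    }

    @Override
    public boolean isNullAt(int i) {
        // Hive vectors use noNulls + isNull[]; isRepeating means slot 0 holds
        // the value / null flag for every row in the batch.
        int idx = hiveVector.isRepeating ? 0 : i;
        return !hiveVector.noNulls && hiveVector.isNull[idx];
    }

    @Override
    public long getLong(int i) {
        int idx = hiveVector.isRepeating ? 0 : i;
        return hiveVector.vector[idx];
    }
}
{code}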

> Introduce vectorized parquet InputFormat for blink runtime
> ----------------------------------------------------------
>
>                 Key: FLINK-11899
>                 URL: https://issues.apache.org/jira/browse/FLINK-11899
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Table SQL / Runtime
>            Reporter: Jingsong Lee
>            Assignee: Zhenqiu Huang
>            Priority: Major
>             Fix For: 1.11.0
>
>
> VectorizedParquetInputFormat is introduced to read Parquet data in batches.
> When returning each row of data, instead of actually retrieving each field,
> we use BaseRow's abstraction to return a columnar, row-like view.
> This greatly improves downstream filtering scenarios, since there is no need
> to access redundant fields of filtered-out data.
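
The description above hinges on field access being lazy: a row that is filtered out never touches its other columns. A minimal illustration of such a columnar, row-like view (all names here are hypothetical, not the actual BaseRow API; FlinkLongColumnVector is the stand-in interface from the sketch in the comment above):

{code:java}
// Sketch only: ColumnarRowView is a hypothetical illustration of the
// "columnar row-like view" described above, not the blink-runtime API.
class ColumnarRowView {
    private final FlinkLongColumnVector[] columns; // one vector per field
    private int rowId;                             // which row this view points at

    ColumnarRowView(FlinkLongColumnVector[] columns) {
        this.columns = columns;
    }

    void setRowId(int rowId) {
        this.rowId = rowId;
    }

    // Field access is lazy: a filtered-out row never reads its other columns.
    boolean isNullAt(int pos) {
        return columns[pos].isNullAt(rowId);
    }

    long getLong(int pos) {
        return columns[pos].getLong(rowId);
    }
}
{code}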



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
