Vaibhav Gumashta created HIVE-21470:
---------------------------------------

             Summary: ACID: Optimize RecordReader creation when SearchArgument is provided
                 Key: HIVE-21470
                 URL: https://issues.apache.org/jira/browse/HIVE-21470
             Project: Hive
          Issue Type: Bug
          Components: Transactions
    Affects Versions: 2.3.4, 3.1.1
            Reporter: Vaibhav Gumashta


Consider the following query:
{code}
select col1 from tbl1 where year_partition=2019;
{code}
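For context, a minimal sketch of what a SearchArgument for a predicate like the one above could look like when it is handed to the reader (illustrative only; whether and how Hive pushes this particular predicate down, and the exact column/type/literal, depend on the planner):
{code}
import org.apache.hadoop.hive.ql.io.sarg.PredicateLeaf;
import org.apache.hadoop.hive.ql.io.sarg.SearchArgument;
import org.apache.hadoop.hive.ql.io.sarg.SearchArgumentFactory;

public class SargSketch {
  // Roughly "where year_partition = 2019" expressed as a SearchArgument.
  public static SearchArgument yearPartitionSarg() {
    return SearchArgumentFactory.newBuilder()
        .startAnd()
        .equals("year_partition", PredicateLeaf.Type.LONG, 2019L)
        .end()
        .build();
  }
}
{code}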
 
If the table has a lot of columns, we currently end up creating a TreeReader for every column, even when the data won't pass the SearchArgument:
{code}
TreeReaderFactory.createTreeReader(TypeDescription, TreeReaderFactory$Context) line: 2339
TreeReaderFactory$StructTreeReader.<init>(int, TypeDescription, TreeReaderFactory$Context) line: 1974
TreeReaderFactory.createTreeReader(TypeDescription, TreeReaderFactory$Context) line: 2390
RecordReaderImpl(RecordReaderImpl).<init>(ReaderImpl, Reader$Options) line: 267
RecordReaderImpl.<init>(ReaderImpl, Reader$Options, Configuration) line: 67
ReaderImpl.rowsOptions(Reader$Options, Configuration) line: 83
OrcRawRecordMerger$OriginalReaderPairToRead.<init>(OrcRawRecordMerger$ReaderKey, Reader, int, RecordIdentifier, RecordIdentifier, Reader$Options, OrcRawRecordMerger$Options, Configuration, ValidWriteIdList, int) line: 446
OrcRawRecordMerger.<init>(Configuration, boolean, Reader, boolean, int, ValidWriteIdList, Reader$Options, Path[], OrcRawRecordMerger$Options) line: 1057
OrcInputFormat.getReader(InputSplit, Options) line: 2108
OrcInputFormat.getRecordReader(InputSplit, JobConf, Reporter) line: 2006
FetchOperator$FetchInputFormatSplit.getRecordReader(JobConf) line: 776
{code}
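For reference, a minimal sketch of the call site implied by the stack above, using the core org.apache.orc API rather than Hive's org.apache.hadoop.hive.ql.io.orc wrapper (the file path, projected column id, and SARG column names below are assumptions): both the column projection and the SearchArgument are already attached to Reader.Options before rows()/rowsOptions() is invoked, i.e. before any TreeReader is built, so the information needed to skip or defer per-column construction is available at that point.
{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.io.sarg.SearchArgument;
import org.apache.orc.OrcFile;
import org.apache.orc.Reader;
import org.apache.orc.RecordReader;
import org.apache.orc.TypeDescription;

public class ReaderOptionsSketch {
  public static RecordReader openWithSarg(Configuration conf, Path orcFile,
                                          SearchArgument sarg) throws IOException {
    Reader reader = OrcFile.createReader(orcFile, OrcFile.readerOptions(conf));
    TypeDescription schema = reader.getSchema();

    // Project only the columns the query needs; index 0 is the root struct.
    boolean[] include = new boolean[schema.getMaximumId() + 1];
    include[0] = true;
    include[1] = true; // hypothetical column id of col1

    Reader.Options opts = new Reader.Options(conf)
        .include(include)
        .searchArgument(sarg, new String[]{"year_partition"});

    // Per the stack trace above, this is the call whose constructor path
    // eagerly builds a TreeReader for each column.
    return reader.rows(opts);
  }
}
{code}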

If the table has 1000 columns and spans N splits, we will end up creating 1000*N TreeReader objects when we might need only N (1 per split).
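One possible direction, purely as an illustration (this is not existing Hive/ORC code, and the class below is hypothetical): put per-column reader construction behind a lazy factory, so that a split or row group the SearchArgument eliminates never pays the per-column construction cost.
{code}
import java.util.function.Supplier;

// Hypothetical sketch: build the underlying reader at most once, and only
// when it is actually needed for data that survives SARG evaluation.
public class LazyColumnReader<T> {
  private final Supplier<T> factory;
  private T reader;

  public LazyColumnReader(Supplier<T> factory) {
    this.factory = factory;
  }

  public T get() {
    if (reader == null) {
      reader = factory.get(); // e.g. wraps TreeReaderFactory.createTreeReader(...)
    }
    return reader;
  }
}
{code}
With something along these lines, only the columns actually read on surviving splits would trigger TreeReader construction, bringing the count closer to N than to 1000*N.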



