[ 
https://issues.apache.org/jira/browse/HIVE-10217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14499598#comment-14499598
 ] 

Gopal V commented on HIVE-10217:
--------------------------------

I guess we can read this data in at an arbitrary size, since the layout 
after decompression will match reading it in 256 KB chunks (or whatever 
size is ideal for the allocator).
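
A minimal sketch of what such chunked reads could look like, assuming a fixed 
256 KB chunk size and plain byte arrays as stand-ins for allocator buffers 
(illustrative only, not the actual LLAP allocator API):

{code}
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

class ChunkedReader {
  // 256 KB, or whatever size is ideal for the allocator.
  static final int CHUNK_SIZE = 256 * 1024;

  // Reads 'length' bytes from 'in' into fixed-size chunks, so the cached
  // layout does not depend on any per-file compression buffer size.
  static List<byte[]> readInChunks(InputStream in, long length) throws IOException {
    List<byte[]> chunks = new ArrayList<>();
    long remaining = length;
    while (remaining > 0) {
      int toRead = (int) Math.min(CHUNK_SIZE, remaining);
      byte[] buf = new byte[toRead];
      int off = 0;
      while (off < toRead) {
        int n = in.read(buf, off, toRead - off);
        if (n < 0) {
          throw new IOException("Unexpected EOF");
        }
        off += n;
      }
      chunks.add(buf);
      remaining -= toRead;
    }
    return chunks;
  }
}
{code}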

> LLAP: Support caching of uncompressed ORC data
> ----------------------------------------------
>
>                 Key: HIVE-10217
>                 URL: https://issues.apache.org/jira/browse/HIVE-10217
>             Project: Hive
>          Issue Type: Sub-task
>    Affects Versions: llap
>            Reporter: Gopal V
>            Assignee: Sergey Shelukhin
>             Fix For: llap
>
>
> {code}
> Caused by: java.io.IOException: ORC compression buffer size (0) is smaller 
> than LLAP low-level cache minimum allocation size (131072). Decrease the 
> value for hive.llap.io.cache.orc.alloc.min
>         at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:137)
>         at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:48)
>         at 
> org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
>         ... 4 more
> {code}
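
For context, the trace above shows a reported compression buffer size of 0 for 
this (uncompressed) file, which trips a minimum-allocation check along these 
lines. This is a sketch with illustrative names, not the actual 
OrcEncodedDataReader code:

{code}
import java.io.IOException;

class AllocationCheck {
  // minAllocation corresponds to hive.llap.io.cache.orc.alloc.min
  // (131072 in the trace above).
  static void validateBufferSize(int compressionBufferSize, int minAllocation)
      throws IOException {
    if (compressionBufferSize < minAllocation) {
      throw new IOException("ORC compression buffer size (" + compressionBufferSize
          + ") is smaller than LLAP low-level cache minimum allocation size ("
          + minAllocation + "). Decrease the value for hive.llap.io.cache.orc.alloc.min");
    }
  }
}
{code}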



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)