[ https://issues.apache.org/jira/browse/FLINK-19025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17184918#comment-17184918 ]

Rui Li commented on FLINK-19025:
--------------------------------

Hey [~McClone], are you writing streaming data into a Hive ORC table? If so, 
there's indeed a known issue, which has been fixed in FLINK-18659. You should be 
able to run this use case if you apply that patch and set 
{{table.exec.hive.fallback-mapred-writer=true}}, which is the default setting.
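
For reference, a minimal sketch of how that option can be set from the Table API 
(assuming Flink 1.11 with the Blink planner and the FLINK-18659 patch applied; 
the catalog/table setup is omitted):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Create a streaming TableEnvironment (Flink 1.11 style).
EnvironmentSettings settings = EnvironmentSettings.newInstance()
        .useBlinkPlanner()
        .inStreamingMode()
        .build();
TableEnvironment tEnv = TableEnvironment.create(settings);

// Fall back to Hive's native (mapred) record writer, so ORC files are
// written with Hive's own ORC libraries and stay readable by Hive 2.1.1.
// This is the default value, but setting it explicitly makes the intent clear.
tEnv.getConfig().getConfiguration()
        .setBoolean("table.exec.hive.fallback-mapred-writer", true);
{code}

The same option can also be set in the SQL client with 
{{SET table.exec.hive.fallback-mapred-writer=true;}}.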

If you hit the issue when writing batch data into Hive, please provide the 
stack trace of the exception.

> table sql write orc file but hive2.1.1 can not read
> ---------------------------------------------------
>
>                 Key: FLINK-19025
>                 URL: https://issues.apache.org/jira/browse/FLINK-19025
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / ORC
>    Affects Versions: 1.11.0
>            Reporter: McClone
>            Priority: Major
>
> Table SQL writes an ORC file, but a Hive 2.1.1 external table created on top 
> of it cannot read the data, because Flink uses orc-core-1.5.6.jar while Hive 
> 2.1.1 uses its own ORC jar.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
