>>>>
>>>>> If you generate the table using Hive but read it back as a data frame,
>>>>> it may have some compatibility issues.
>>>>>
>>>>> Thanks
>>>>>
>>>>> Zhan Zhang
>>>>>
>>>>
>>>>
>>>> Sent from my iPhone
>>>>
>>>> > On Sep 29, 2015, at 1:47 PM, unk1102 <umesh.ka...@gmail.com> wrote:
>>>> >
>>>> > Hi, I have a Spark job which creates Hive tables in ORC format with
>>>> > partitions. It works well; I can read the data back into the Hive table
>>>> > using the Hive console. But if I try to further process the ORC files
>>>> > generated by the Spark job by loading them into a DataFrame, then I get:
>>>> >
>>>> > ...st:9000/user/hive/warehouse/partorc/part_tiny.txt. Invalid
>>>> > postscript.
>>>> >
>>>> > DataFrame df = hiveContext.read().format("orc").load("to/path");
>>>> >
>>>> > Please guide.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Hive-ORC-Malformed-while-loading-into-spark-data-frame-tp24876.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: ...
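[Editor's note] The "Invalid postscript" error usually means the ORC reader hit a file that is not actually ORC — note the offending path ends in part_tiny.txt, a text file sitting inside the warehouse directory that load() is scanning. ORC files begin with the 3-byte magic "ORC", so a quick way to spot the culprit is to check magic bytes before pointing Spark at the directory. The sketch below is an illustrative heuristic, not part of the Spark or Hive API; the function names and the skip-list for files like _SUCCESS are assumptions:

```python
import os

ORC_MAGIC = b"ORC"  # ORC files start with this 3-byte magic

def looks_like_orc(path):
    """Heuristic check: ORC files begin with the 3-byte magic 'ORC'."""
    with open(path, "rb") as f:
        return f.read(3) == ORC_MAGIC

def find_non_orc_files(directory):
    """Return files under `directory` that fail the magic-byte check.

    Any such file (e.g. a stray .txt) will make Spark's ORC reader
    fail with errors like 'Invalid postscript'.
    """
    bad = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            if name.startswith(("_", ".")):  # skip _SUCCESS, hidden files
                continue
            full = os.path.join(root, name)
            if not looks_like_orc(full):
                bad.append(full)
    return bad
```

If this reports files such as part_tiny.txt, move them out of the directory (or rewrite the table) so only ORC part files remain before loading. A full ORC validity check would also parse the postscript at the file's tail; the header magic alone is enough to catch plain-text strays like the one in the error above.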