Your Hive metadata now reflects the new data type of the column, changed
from type A to type B.

The old partitions still have data stored as type A. Those files have not changed.

If you run a query on an old partition, you will probably need to use the
CAST function to convert the column from B to A so it is compatible with
what is stored in the old partitions. I believe that will work.
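
A minimal sketch of that workaround, assuming a hypothetical table `sales`
partitioned by `dt`, where the column `amount` was changed from STRING
(type A) to DECIMAL (type B) — the table, column, and partition values here
are illustrative only, not from the original thread:

```sql
-- Metadata now declares `amount` as DECIMAL(10,2) (type B), but the
-- old partitions still store it as STRING (type A).
SELECT id,
       CAST(amount AS STRING) AS amount  -- cast back to the old type A
FROM   sales
WHERE  dt < '2018-01-01';                -- restrict to the old partitions
```

Whether the cast is enough depends on the file format: for ORC the reader
resolves the physical encoding before the cast is applied, so a metadata-level
CAST may not get past a reader-side encoding error.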

HTH


Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 19 January 2018 at 21:37, Raghuraman Murugaiyan <
[email protected]> wrote:

> Hi All,
>
> I have a table partitioned on a date column. We changed the data type of
> one of the fields, so the older partitions hold the old type while the
> newer partitions match the table definition. When I try to run a simple
> SELECT query on the older partitions, the query fails with the below
> error:
>
>
> 2018-01-19 16:17:58,766 FATAL [IPC Server handler 23 on 38645] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1497186993127_459726_m_000112_0 - exited : java.io.IOException: java.lang.reflect.InvocationTargetException
>     at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
>     at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
>     at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:269)
>     at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(HadoopShimsSecure.java:216)
>     at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:343)
>     at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:681)
>     at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:169)
>     at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
>     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
>     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.reflect.InvocationTargetException
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>     at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:255)
>     ... 11 more
> Caused by: java.io.IOException: Unknown encoding kind: DICTIONARY_V2 dictionarySize: 6 in column 219
>
>
> Can you help me debug this error? I am using Hive 2.2.
>
> Regards,
> Raghu M
>
