[ https://issues.apache.org/jira/browse/HIVE-26233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17538744#comment-17538744 ]

Peter Vary commented on HIVE-26233:
-----------------------------------

Users reading back such timestamps get the following exception:
{code}
Caused by: java.text.ParseException: Unparseable date: "+10000-01-01 04:59:59.999999"
        at java.text.DateFormat.parse(DateFormat.java:366) ~[?:1.8.0_232]
        at org.apache.hadoop.hive.common.type.TimestampTZUtil.convertTimestampToZone(TimestampTZUtil.java:180)
        at org.apache.hadoop.hive.ql.io.parquet.timestamp.NanoTimeUtils.getTimestamp(NanoTimeUtils.java:122)
        at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter$9$2.convert(ETypeConverter.java:710)
        at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter$9$2.convert(ETypeConverter.java:692)
        at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter$BinaryConverter.setDictionary(ETypeConverter.java:933)
        at org.apache.parquet.column.impl.ColumnReaderBase.<init>(ColumnReaderBase.java:385)
        at org.apache.parquet.column.impl.ColumnReaderImpl.<init>(ColumnReaderImpl.java:46)
        at org.apache.parquet.column.impl.ColumnReadStoreImpl.getColumnReader(ColumnReadStoreImpl.java:84)
        at org.apache.parquet.io.RecordReaderImplementation.<init>(RecordReaderImplementation.java:271)
        at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:147)
        at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:109)
        at org.apache.parquet.filter2.compat.FilterCompat$NoOpFilter.accept(FilterCompat.java:165)
        at org.apache.parquet.io.MessageColumnIO.getRecordReader(MessageColumnIO.java:109)
        at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:137)
        at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
        at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
        at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:98)
        at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:60)
        at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:93)
        at org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:810)
        at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:365)
        at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:576)
        at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:545)
        at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:150)
        at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:912)
        at org.apache.hadoop.hive.ql.reexec.ReExecDriver.getResults(ReExecDriver.java:243)
        at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:476)
        ... 13 more
{code}
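
For illustration, here is a minimal, self-contained sketch (plain java.time and java.text only, not the actual Hive Timestamp/TimestampTZUtil code) of how the rendered timestamp picks up the leading {{+}} once the year needs more than 4 digits, and why the legacy java.text parser in the read path then rejects it:
{code}
import java.text.SimpleDateFormat;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.format.SignStyle;
import java.time.temporal.ChronoField;

public class PlusSignTimestampDemo {

    public static void main(String[] args) {
        // A timestamp just past the 4-digit-year range, like the one in the trace above.
        LocalDateTime farFuture = LocalDateTime.of(10000, 1, 1, 4, 59, 59, 999_999_000);

        // With SignStyle.EXCEEDS_PAD the year is printed with a leading '+'
        // as soon as it no longer fits into the 4-character pad width.
        DateTimeFormatter printer = new DateTimeFormatterBuilder()
                .appendValue(ChronoField.YEAR, 4, 10, SignStyle.EXCEEDS_PAD)
                .appendPattern("-MM-dd HH:mm:ss")
                .appendFraction(ChronoField.NANO_OF_SECOND, 0, 9, true)
                .toFormatter();

        String rendered = printer.format(farFuture);
        System.out.println(rendered);   // +10000-01-01 04:59:59.999999

        // A java.text parser cannot handle the '+' in front of the year and
        // fails the same way as the stack trace above.
        try {
            new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").parse(rendered);
        } catch (java.text.ParseException e) {
            System.out.println(e.getMessage());   // Unparseable date: "+10000-01-01 04:59:59.999999"
        }
    }
}
{code}
The point of the sketch is only that a {{+}}-prefixed 5-digit year cannot be round-tripped through the java.text based parsing on the read path; the exact formatter and parser used by Hive differ in detail.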

> Problems reading back PARQUET timestamps above 10000 years
> ----------------------------------------------------------
>
>                 Key: HIVE-26233
>                 URL: https://issues.apache.org/jira/browse/HIVE-26233
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Peter Vary
>            Assignee: Peter Vary
>            Priority: Major
>              Labels: backwards-compatibility, pull-request-available, timestamp
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> Timestamp values above the year 10000 are not supported, but during the migration 
> from Hive2 to Hive3 some might appear because of TZ issues. We should at least be 
> able to read these tables before rewriting the data.
> For this, we need to change Timestamp.PRINT_FORMATTER so that no {{+}} sign is 
> appended to the timestamp when the year exceeds 4 digits.
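
A minimal sketch of the kind of formatter change described above (illustrative only, not the actual patch): switching the year field from SignStyle.EXCEEDS_PAD to SignStyle.NORMAL keeps the output sign-free for positive years, even above 9999.
{code}
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.format.SignStyle;
import java.time.temporal.ChronoField;

public class NoPlusYearFormatterDemo {

    public static void main(String[] args) {
        // SignStyle.NORMAL only emits a sign for negative years, so a 5-digit
        // year is printed without the '+' that breaks the read path.
        DateTimeFormatter noPlusPrinter = new DateTimeFormatterBuilder()
                .appendValue(ChronoField.YEAR, 4, 10, SignStyle.NORMAL)
                .appendPattern("-MM-dd HH:mm:ss")
                .appendFraction(ChronoField.NANO_OF_SECOND, 0, 9, true)
                .toFormatter();

        LocalDateTime ts = LocalDateTime.of(10000, 1, 1, 4, 59, 59, 999_999_000);
        System.out.println(noPlusPrinter.format(ts));   // 10000-01-01 04:59:59.999999
    }
}
{code}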


