[jira] [Commented] (HIVE-21987) Hive is unable to read Parquet int32 annotated with decimal

2019-10-04 Thread Dmitry Romanenko (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944940#comment-16944940
 ] 

Dmitry Romanenko commented on HIVE-21987:
-

Any chance this will be backported to the 3.x tree? This seems like quite a major 
problem affecting multiple trees.

> Hive is unable to read Parquet int32 annotated with decimal
> ---
>
> Key: HIVE-21987
> URL: https://issues.apache.org/jira/browse/HIVE-21987
> Project: Hive
>  Issue Type: Improvement
>Reporter: Nándor Kollár
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21987.1.patch, HIVE-21987.2.patch, 
> HIVE-21987.3.patch, HIVE-21987.4.patch, HIVE-21987.5.patch, 
> part-0-e5287735-8dcf-4dda-9c6e-4d5c98dc15f2-c000.snappy.parquet
>
>
> When I tried to read a Parquet file from a Hive table (with the Tez execution 
> engine) that has a small decimal column, I got the following exception:
> {code}
> Caused by: java.lang.UnsupportedOperationException: 
> org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter$8$1
>   at 
> org.apache.parquet.io.api.PrimitiveConverter.addInt(PrimitiveConverter.java:98)
>   at 
> org.apache.parquet.column.impl.ColumnReaderImpl$2$3.writeValue(ColumnReaderImpl.java:248)
>   at 
> org.apache.parquet.column.impl.ColumnReaderImpl.writeCurrentValueToConverter(ColumnReaderImpl.java:367)
>   at 
> org.apache.parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:406)
>   at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:226)
>   ... 28 more
> {code}
> Steps to reproduce:
> - Create a Hive table with a single decimal(4, 2) column
> - Create a Parquet file with an int32 column annotated with the decimal(4, 2) 
> logical type and put it into the previously created table's location (or use 
> the attached Parquet file; in that case the column should be named 'd' so that 
> the Hive schema matches the Parquet schema in the file; a minimal writer sketch 
> follows this description)
> - Execute a {{select *}} on this table
> Also, I'm afraid that similar problems can happen with int64 decimals too. 
> [Parquet specification | 
> https://github.com/apache/parquet-format/blob/master/LogicalTypes.md] allows 
> both of these cases.
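A minimal sketch of the reproduction using the parquet-mr example API (an assumption
on my part, not part of the report; the class name, output path, and values are made
up, and any writer that produces an int32 column annotated as decimal(4, 2) will do):

{code:java}
// Illustrative only: writes a Parquet file whose single int32 column carries
// the decimal(4, 2) logical type. Requires parquet-hadoop (and its Hadoop
// dependencies) on the classpath.
import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.example.data.simple.SimpleGroupFactory;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.example.ExampleParquetWriter;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

public class WriteInt32DecimalParquet {
  public static void main(String[] args) throws Exception {
    // decimal(4, 2) backed by a plain int32, which the Parquet format allows
    MessageType schema = MessageTypeParser.parseMessageType(
        "message repro { required int32 d (DECIMAL(4,2)); }");
    SimpleGroupFactory groups = new SimpleGroupFactory(schema);
    try (ParquetWriter<Group> writer = ExampleParquetWriter
        .builder(new Path("/tmp/int32_decimal.parquet"))
        .withType(schema)
        .build()) {
      writer.write(groups.newGroup().append("d", 1234)); // unscaled 1234 -> 12.34
      writer.write(groups.newGroup().append("d", 99));   // unscaled 99   -> 0.99
    }
  }
}
{code}

Copying the resulting file into the location of a table created with 
{{CREATE TABLE repro (d decimal(4,2)) STORED AS PARQUET}} and running 
{{select * from repro}} triggers the exception above.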



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21987) Hive is unable to read Parquet int32 annotated with decimal

2019-10-01 Thread Marta Kuczora (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16941756#comment-16941756
 ] 

Marta Kuczora commented on HIVE-21987:
--

Pushed to master. Thanks a lot [~pvary] for the review.
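For anyone hitting this on other branches: the stack trace in the description comes 
from the base {{org.apache.parquet.io.api.PrimitiveConverter#addInt}}, which throws 
{{UnsupportedOperationException}} because the Hive decimal converter did not override 
{{addInt}} for int32-backed decimals. The committed change extends {{ETypeConverter}}; 
a simplified, hypothetical converter (illustration only, not the actual patch, which 
produces Hive writables) would look roughly like this:

{code:java}
import java.math.BigDecimal;
import java.math.BigInteger;
import org.apache.parquet.io.api.PrimitiveConverter;

// Hypothetical sketch of a converter for decimal(precision, scale) values
// stored as int32: it overrides addInt and rescales the raw integer.
public class Int32BackedDecimalConverter extends PrimitiveConverter {
  private final int scale;
  private BigDecimal current;

  public Int32BackedDecimalConverter(int scale) {
    this.scale = scale;
  }

  @Override
  public void addInt(int value) {
    // raw 1234 with scale 2 becomes 12.34; without this override the base
    // class throws the UnsupportedOperationException reported in this issue
    current = new BigDecimal(BigInteger.valueOf(value), scale);
  }

  public BigDecimal current() {
    return current;
  }
}
{code}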



[jira] [Commented] (HIVE-21987) Hive is unable to read Parquet int32 annotated with decimal

2019-10-01 Thread Marta Kuczora (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16941746#comment-16941746
 ] 

Marta Kuczora commented on HIVE-21987:
--

Got +1 from [~pvary] on review board.



[jira] [Commented] (HIVE-21987) Hive is unable to read Parquet int32 annotated with decimal

2019-09-30 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16941167#comment-16941167
 ] 

Hive QA commented on HIVE-21987:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12981785/HIVE-21987.5.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17015 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18792/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18792/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18792/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12981785 - PreCommit-HIVE-Build



[jira] [Commented] (HIVE-21987) Hive is unable to read Parquet int32 annotated with decimal

2019-09-30 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16941161#comment-16941161
 ] 

Hive QA commented on HIVE-21987:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
25s{color} | {color:blue} ql in master has 1550 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
59s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
40s{color} | {color:red} ql: The patch generated 1 new + 15 unchanged - 0 fixed 
= 16 total (was 15) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
58s{color} | {color:red} root: The patch generated 1 new + 15 unchanged - 0 
fixed = 16 total (was 15) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18792/dev-support/hive-personality.sh
 |
| git revision | master / aacc830 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18792/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18792/yetus/diff-checkstyle-root.txt
 |
| modules | C: ql . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18792/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HIVE-21987) Hive is unable to read Parquet int32 annotated with decimal

2019-09-30 Thread Marta Kuczora (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16940907#comment-16940907
 ] 

Marta Kuczora commented on HIVE-21987:
--

The test failure is not related, so I reattached the patch to run the tests again.



[jira] [Commented] (HIVE-21987) Hive is unable to read Parquet int32 annotated with decimal

2019-09-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923802#comment-16923802
 ] 

Hive QA commented on HIVE-21987:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12979531/HIVE-21987.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16747 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=111)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18462/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18462/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18462/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12979531 - PreCommit-HIVE-Build



[jira] [Commented] (HIVE-21987) Hive is unable to read Parquet int32 annotated with decimal

2019-09-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923796#comment-16923796
 ] 

Hive QA commented on HIVE-21987:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
27s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
44s{color} | {color:blue} ql in master has 2246 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
34s{color} | {color:red} ql: The patch generated 1 new + 15 unchanged - 0 fixed 
= 16 total (was 15) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
48s{color} | {color:red} root: The patch generated 1 new + 15 unchanged - 0 
fixed = 16 total (was 15) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18462/dev-support/hive-personality.sh
 |
| git revision | master / 0213afb |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18462/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18462/yetus/diff-checkstyle-root.txt
 |
| modules | C: ql . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18462/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HIVE-21987) Hive is unable to read Parquet int32 annotated with decimal

2019-09-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923179#comment-16923179
 ] 

Hive QA commented on HIVE-21987:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12979387/HIVE-21987.3.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18449/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18449/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18449/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-09-05 08:22:37.248
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-18449/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-09-05 08:22:37.251
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at ebcc9bc HIVE-22161: UDF: FunctionRegistry synchronizes on 
org.apache.hadoop.hive.ql.udf.UDFType class (Gopal V, reviewed by Ashutosh 
Chauhan)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at ebcc9bc HIVE-22161: UDF: FunctionRegistry synchronizes on 
org.apache.hadoop.hive.ql.udf.UDFType class (Gopal V, reviewed by Ashutosh 
Chauhan)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-09-05 08:22:38.293
+ rm -rf ../yetus_PreCommit-HIVE-Build-18449
+ mkdir ../yetus_PreCommit-HIVE-Build-18449
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-18449
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-18449/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: cannot apply binary patch to 'data/files/parquet_int_decimal_1.parquet' 
without full index line
Falling back to three-way merge...
error: cannot apply binary patch to 'data/files/parquet_int_decimal_1.parquet' 
without full index line
error: data/files/parquet_int_decimal_1.parquet: patch does not apply
error: cannot apply binary patch to 'data/files/parquet_int_decimal_2.parquet' 
without full index line
Falling back to three-way merge...
error: cannot apply binary patch to 'data/files/parquet_int_decimal_2.parquet' 
without full index line
error: data/files/parquet_int_decimal_2.parquet: patch does not apply
error: cannot apply binary patch to 'files/parquet_int_decimal_1.parquet' 
without full index line
Falling back to three-way merge...
error: cannot apply binary patch to 'files/parquet_int_decimal_1.parquet' 
without full index line
error: files/parquet_int_decimal_1.parquet: patch does not apply
error: cannot apply binary patch to 'files/parquet_int_decimal_2.parquet' 
without full index line
Falling back to three-way merge...
error: cannot apply binary patch to 'files/parquet_int_decimal_2.parquet' 
without full index line
error: files/parquet_int_decimal_2.parquet: patch does not apply
error: 
src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java: does 
not exist in index
error: 
src/java/org/apache/hadoop/hive/ql/io/parquet/vector/ParquetDataColumnReaderFactory.java:
 does not exist in index
error: src/test/results/clientpositive/type_change_test_fraction.q.out: does 
not exist in index
error: cannot apply binary patch to 'parquet_int_decimal_1.parquet' without 
full index line
Falling back to three-way merge...
error: cannot apply binary patch to 'parquet_int_decimal_1.parquet' without 
full index line
error: parquet_int_decimal_1.parquet: patch does not apply
error: cannot apply binary pat
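(Aside: the "cannot apply binary patch ... without full index line" errors above mean 
the attachment carries the binary Parquet test files but the diff was generated 
without full index lines, so the pre-commit tooling cannot apply it. Regenerating the 
patch in git's binary mode is the usual remedy; a hypothetical example, with the 
output file name as a placeholder:)

{noformat}
git diff --binary master > HIVE-21987.N.patch
{noformat}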

[jira] [Commented] (HIVE-21987) Hive is unable to read Parquet int32 annotated with decimal

2019-08-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918861#comment-16918861
 ] 

Hive QA commented on HIVE-21987:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12978908/HIVE-21987.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 16746 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[type_change_test_fraction]
 (batchId=20)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testCancelRenewTokenFlow 
(batchId=298)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testConnection (batchId=298)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testIsValid (batchId=298)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testIsValidNeg (batchId=298)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testNegativeProxyAuth 
(batchId=298)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testNegativeTokenAuth 
(batchId=298)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testProxyAuth (batchId=298)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testRenewDelegationToken 
(batchId=298)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testTokenAuth (batchId=298)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18423/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18423/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18423/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 10 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12978908 - PreCommit-HIVE-Build



[jira] [Commented] (HIVE-21987) Hive is unable to read Parquet int32 annotated with decimal

2019-08-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918850#comment-16918850
 ] 

Hive QA commented on HIVE-21987:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
38s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
51s{color} | {color:blue} ql in master has 2248 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
39s{color} | {color:red} ql: The patch generated 1 new + 15 unchanged - 0 fixed 
= 16 total (was 15) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
55s{color} | {color:red} root: The patch generated 1 new + 15 unchanged - 0 
fixed = 16 total (was 15) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18423/dev-support/hive-personality.sh
 |
| git revision | master / 1cbff4d |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18423/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18423/yetus/diff-checkstyle-root.txt
 |
| modules | C: ql . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18423/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HIVE-21987) Hive is unable to read Parquet int32 annotated with decimal

2019-08-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918738#comment-16918738
 ] 

Hive QA commented on HIVE-21987:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12978898/HIVE-21987.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18421/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18421/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18421/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-08-29 15:59:17.911
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-18421/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-08-29 15:59:17.915
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 1cbff4d HIVE-22148: S3A delegation tokens are not added in the 
job config of the Compactor. (Harish JP, reviewd by Anishek Agarwal)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 1cbff4d HIVE-22148: S3A delegation tokens are not added in the 
job config of the Compactor. (Harish JP, reviewd by Anishek Agarwal)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-08-29 15:59:18.650
+ rm -rf ../yetus_PreCommit-HIVE-Build-18421
+ mkdir ../yetus_PreCommit-HIVE-Build-18421
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-18421
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-18421/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: cannot apply binary patch to 'data/files/parquet_int_decimal_1.parquet' 
without full index line
Falling back to three-way merge...
error: cannot apply binary patch to 'data/files/parquet_int_decimal_1.parquet' 
without full index line
error: data/files/parquet_int_decimal_1.parquet: patch does not apply
error: cannot apply binary patch to 'data/files/parquet_int_decimal_2.parquet' 
without full index line
Falling back to three-way merge...
error: cannot apply binary patch to 'data/files/parquet_int_decimal_2.parquet' 
without full index line
error: data/files/parquet_int_decimal_2.parquet: patch does not apply
error: cannot apply binary patch to 'files/parquet_int_decimal_1.parquet' 
without full index line
Falling back to three-way merge...
error: cannot apply binary patch to 'files/parquet_int_decimal_1.parquet' 
without full index line
error: files/parquet_int_decimal_1.parquet: patch does not apply
error: cannot apply binary patch to 'files/parquet_int_decimal_2.parquet' 
without full index line
Falling back to three-way merge...
error: cannot apply binary patch to 'files/parquet_int_decimal_2.parquet' 
without full index line
error: files/parquet_int_decimal_2.parquet: patch does not apply
error: 
src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java: does 
not exist in index
error: 
src/java/org/apache/hadoop/hive/ql/io/parquet/vector/ParquetDataColumnReaderFactory.java:
 does not exist in index
error: cannot apply binary patch to 'parquet_int_decimal_1.parquet' without 
full index line
Falling back to three-way merge...
error: cannot apply binary patch to 'parquet_int_decimal_1.parquet' without 
full index line
error: parquet_int_decimal_1.parquet: patch does not apply
error: cannot apply binary patch to 'parquet_int_decimal_2.parquet' without 
full index line
Falling back to three-way merge...
error: cannot apply b