[jira] [Commented] (HIVE-11820) export tables with size of >32MB throws "java.lang.IllegalArgumentException: Skip CRC is valid only with update options"

2015-09-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877423#comment-14877423
 ] 

Hive QA commented on HIVE-11820:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12761290/HIVE-11820.2.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 9446 tests executed
*Failed tests:*
{noformat}
TestSparkNegativeCliDriver - did not produce a TEST-*.xml file
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5350/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5350/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5350/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12761290 - PreCommit-HIVE-TRUNK-Build

> export tables with size of >32MB throws "java.lang.IllegalArgumentException: 
> Skip CRC is valid only with update options"
> 
>
> Key: HIVE-11820
> URL: https://issues.apache.org/jira/browse/HIVE-11820
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Takahiko Saito
>Assignee: Takahiko Saito
> Attachments: HIVE-11820.2.patch, HIVE-11820.2.patch, HIVE-11820.patch
>
>
> Tested a patch of HIVE-11607 and saw the following exception:
> {noformat}
> 2015-09-14 21:44:16,817 ERROR [main]: exec.Task 
> (SessionState.java:printError(960)) - Failed with exception Skip CRC is valid 
> only with update options
> java.lang.IllegalArgumentException: Skip CRC is valid only with update options
> at 
> org.apache.hadoop.tools.DistCpOptions.validate(DistCpOptions.java:556)
> at 
> org.apache.hadoop.tools.DistCpOptions.setSkipCRC(DistCpOptions.java:311)
> at 
> org.apache.hadoop.hive.shims.Hadoop23Shims.runDistCp(Hadoop23Shims.java:1147)
> at org.apache.hadoop.hive.common.FileUtils.copy(FileUtils.java:553)
> at org.apache.hadoop.hive.ql.exec.CopyTask.execute(CopyTask.java:82)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1655)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1414)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {noformat}
> A possible resolution is to reverse the order of the following two lines from 
> a patch of HIVE-11607:
> {noformat}
> +options.setSkipCRC(true);
> +options.setSyncFolder(true);
> {noformat}
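
A minimal sketch of the reordered calls, assuming the standard Hadoop 2.x 
DistCpOptions API (the wrapper class below is illustrative and is not part of 
the HIVE-11607 or HIVE-11820 patches): setSkipCRC() is validated against the 
update/sync-folder flag, which is why the order of the two calls matters.

{code:java}
import java.util.Collections;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.tools.DistCpOptions;

// Illustrative only: build DistCp options in the order that passes validation.
class DistCpOptionsOrderSketch {
  static DistCpOptions buildOptions(Path src, Path dst) {
    DistCpOptions options = new DistCpOptions(Collections.singletonList(src), dst);
    options.setSyncFolder(true); // equivalent to -update; must be set first
    options.setSkipCRC(true);    // only valid once update/sync-folder mode is on
    return options;
  }
}
{code}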



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11882) Fetch optimizer should stop source files traversal once it exceeds the hive.fetch.task.conversion.threshold

2015-09-19 Thread Illya Yalovyy (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877430#comment-14877430
 ] 

Illya Yalovyy commented on HIVE-11882:
--

When I last checked, the API for the early exit was in place, but the actual 
code didn't use it. 
I'll go ahead and double-check trunk (master). If it is already fixed, I'll 
close this ticket.

> Fetch optimizer should stop source files traversal once it exceeds the 
> hive.fetch.task.conversion.threshold
> ---
>
> Key: HIVE-11882
> URL: https://issues.apache.org/jira/browse/HIVE-11882
> Project: Hive
>  Issue Type: Improvement
>  Components: Physical Optimizer
>Affects Versions: 1.0.0
>Reporter: Illya Yalovyy
>
> Hive 1.0's fetch optimizer tries to convert queries of the form "select ... 
> from ... where ... limit ..." to a fetch task (see the 
> hive.fetch.task.conversion property). This optimization gets the lengths of 
> all the files in the specified partition and compares the total against a 
> threshold value to determine whether it should use a fetch task or not (see 
> the hive.fetch.task.conversion.threshold property). Getting the length of 
> every file can be expensive. The main problem with this optimization is that 
> the fetch optimizer doesn't stop once the total exceeds 
> hive.fetch.task.conversion.threshold. It works fine on HDFS, but it could 
> cause a significant performance degradation on other supported file systems. 
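
A hedged sketch of the early-exit traversal the description is asking for; the 
class and method names below are made up for this example and this is not 
Hive's actual optimizer code.

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sum input sizes, but stop as soon as the configured
// hive.fetch.task.conversion.threshold is exceeded instead of stat-ing
// every remaining file first.
class FetchThresholdSketch {
  static boolean underThreshold(FileSystem fs, Path dir, long threshold) throws IOException {
    long total = 0;
    for (FileStatus file : fs.listStatus(dir)) {
      total += file.getLen();
      if (total > threshold) {
        return false; // early exit: the remaining files cannot lower the total
      }
    }
    return true;
  }
}
{code}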



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11217) CTAS statements throws error, when the table is stored as ORC File format and select clause has NULL/VOID type column

2015-09-19 Thread Yongzhi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877341#comment-14877341
 ] 

Yongzhi Chen commented on HIVE-11217:
-

The four failures are not related:

org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation has 
failed more than 100 times.

The 3 org.apache.hive.hcatalog.streaming.TestStreaming failures are caused by:

Table/View 'TXNS' already exists in Schema 'APP'.

> CTAS statements throws error, when the table is stored as ORC File format and 
> select clause has NULL/VOID type column 
> --
>
> Key: HIVE-11217
> URL: https://issues.apache.org/jira/browse/HIVE-11217
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Affects Versions: 0.13.1
>Reporter: Gaurav Kohli
>Assignee: Yongzhi Chen
>Priority: Minor
> Attachments: HIVE-11217.1.patch, HIVE-11217.2.patch, 
> HIVE-11217.3.patch, HIVE-11217.4.patch, HIVE-11271.5.patch
>
>
> If you try to use a create-table-as-select (CTAS) statement to create an 
> ORC-format table, then you can't use NULL as a column value in the select 
> clause: 
> CREATE TABLE empty (x int);
> CREATE TABLE orc_table_with_null 
> STORED AS ORC 
> AS 
> SELECT 
> x,
> null
> FROM empty;
> Error: 
> {quote}
> 347084 [main] ERROR hive.ql.exec.DDLTask  - 
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.IllegalArgumentException: Unknown primitive type VOID
>   at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:643)
>   at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4242)
>   at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:285)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1554)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1321)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1139)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:952)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:269)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:221)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:431)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:367)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:464)
>   at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:474)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:756)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:694)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:633)
>   at org.apache.oozie.action.hadoop.HiveMain.runHive(HiveMain.java:323)
>   at org.apache.oozie.action.hadoop.HiveMain.run(HiveMain.java:284)
>   at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:39)
>   at org.apache.oozie.action.hadoop.HiveMain.main(HiveMain.java:66)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:227)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
> Caused by: java.lang.IllegalArgumentException: Unknown primitive type VOID
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcStruct.createObjectInspector(OrcStruct.java:530)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcStruct$OrcStructInspector.<init>(OrcStruct.java:195)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcStruct.createObjectInspector(OrcStruct.java:534)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcSerde.initialize(OrcSerde.java:106)
>   at 
> 

[jira] [Updated] (HIVE-11875) JDBC Driver does not honor delegation token mechanism when reading params from ZooKeeper

2015-09-19 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-11875:

Attachment: HIVE-11875.2.patch

> JDBC Driver does not honor delegation token mechanism when reading params 
> from ZooKeeper
> -
>
> Key: HIVE-11875
> URL: https://issues.apache.org/jira/browse/HIVE-11875
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 1.3.0, 2.0.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-11875.1.patch, HIVE-11875.2.patch
>
>
> Regression introduced in HIVE-11581. When the driver reads connection params 
> from ZK in a secure cluster, it overrides the delegation token mechanism 
> (specified on the client side) with a TGT-requiring mechanism (kinit).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11898) support default partition in MetaStoreDirectSql

2015-09-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877344#comment-14877344
 ] 

Hive QA commented on HIVE-11898:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12761246/HIVE-11898.patch

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 9452 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_change_col
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_table_cascade
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_part
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynamic_partition_skip_default
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.hcatalog.streaming.TestStreaming.testRemainingTransactions
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5347/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5347/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5347/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12761246 - PreCommit-HIVE-TRUNK-Build

> support default partition in MetaStoreDirectSql
> ---
>
> Key: HIVE-11898
> URL: https://issues.apache.org/jira/browse/HIVE-11898
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-11898.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11762) TestHCatLoaderEncryption failures when using Hadoop 2.7

2015-09-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877310#comment-14877310
 ] 

Hive QA commented on HIVE-11762:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12761245/HIVE-11762.4.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 9452 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5345/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5345/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5345/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12761245 - PreCommit-HIVE-TRUNK-Build

> TestHCatLoaderEncryption failures when using Hadoop 2.7
> ---
>
> Key: HIVE-11762
> URL: https://issues.apache.org/jira/browse/HIVE-11762
> Project: Hive
>  Issue Type: Bug
>  Components: Shims, Tests
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-11762.1.patch, HIVE-11762.2.patch, 
> HIVE-11762.3.patch, HIVE-11762.4.patch
>
>
> When running TestHCatLoaderEncryption with -Dhadoop23.version=2.7.0, we get 
> the following error during setup():
> {noformat}
> testReadDataFromEncryptedHiveTableByPig[5](org.apache.hive.hcatalog.pig.TestHCatLoaderEncryption)
>   Time elapsed: 3.648 sec  <<< ERROR!
> java.lang.NoSuchMethodError: 
> org.apache.hadoop.hdfs.DFSClient.setKeyProvider(Lorg/apache/hadoop/crypto/key/KeyProviderCryptoExtension;)V
>   at 
> org.apache.hadoop.hive.shims.Hadoop23Shims.getMiniDfs(Hadoop23Shims.java:534)
>   at 
> org.apache.hive.hcatalog.pig.TestHCatLoaderEncryption.initEncryptionShim(TestHCatLoaderEncryption.java:252)
>   at 
> org.apache.hive.hcatalog.pig.TestHCatLoaderEncryption.setup(TestHCatLoaderEncryption.java:200)
> {noformat}
> It looks like between Hadoop 2.6 and Hadoop 2.7, the argument to 
> DFSClient.setKeyProvider() changed:
> {noformat}
>@VisibleForTesting
> -  public void setKeyProvider(KeyProviderCryptoExtension provider) {
> -    this.provider = provider;
> +  public void setKeyProvider(KeyProvider provider) {
> {noformat}
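
One common way to tolerate such a signature change is to resolve the method 
reflectively so the same shim code runs against both Hadoop 2.6 and 2.7. The 
sketch below is purely illustrative and is not necessarily what the 
HIVE-11762 patch does.

{code:java}
import java.lang.reflect.Method;

// Illustrative shim-style helper: locate the single-argument setKeyProvider
// method at runtime, whatever its declared parameter type happens to be.
class SetKeyProviderCompat {
  static void setKeyProvider(Object dfsClient, Object provider) throws Exception {
    for (Method m : dfsClient.getClass().getMethods()) {
      if (m.getName().equals("setKeyProvider") && m.getParameterTypes().length == 1) {
        m.invoke(dfsClient, provider);
        return;
      }
    }
    throw new NoSuchMethodException("DFSClient.setKeyProvider");
  }
}
{code}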



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11786) Deprecate the use of redundant columns in column stats related tables

2015-09-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877373#comment-14877373
 ] 

Hive QA commented on HIVE-11786:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12761244/HIVE-11786.1.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 9452 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.hcatalog.streaming.TestStreaming.testRemainingTransactions
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchEmptyCommit
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5348/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5348/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5348/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12761244 - PreCommit-HIVE-TRUNK-Build

> Deprecate the use of redundant columns in column stats related tables
> 
>
> Key: HIVE-11786
> URL: https://issues.apache.org/jira/browse/HIVE-11786
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Attachments: HIVE-11786.1.patch, HIVE-11786.1.patch, HIVE-11786.patch
>
>
> The stats tables such as TAB_COL_STATS and PART_COL_STATS have redundant 
> columns such as DB_NAME, TABLE_NAME, and PARTITION_NAME, since these tables 
> already have foreign keys like TBL_ID or PART_ID referencing TBLS or 
> PARTITIONS. 
> These redundant columns violate database normalization rules and cause a lot 
> of inconvenience (and sometimes difficulty) in implementing column-stats 
> related features. For example, when renaming a table, we have to update the 
> TABLE_NAME column in these tables as well, which should be unnecessary.
> This JIRA first deprecates the use of these columns at the HMS code level. A 
> follow-up JIRA will be opened to focus on the DB schema change and upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-11875) JDBC Driver does not honor delegation token mechanism when reading params from ZooKeeper

2015-09-19 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14876642#comment-14876642
 ] 

Vaibhav Gumashta edited comment on HIVE-11875 at 9/19/15 7:27 PM:
--

Whoops, pressed "Add" before I was done with creating the table. Logic supposed 
to match this:

|*AUTH MODE* | *AUTH_TYPE param not set*| *AUTH_SIMPLE* | *AUTH_TOKEN* | 
*AUTH_OTHER* | 
|*NOSASL*| Set AUTH_TYPE=AUTH_SIMPLE | Do nothing | Set AUTH_TYPE=AUTH_SIMPLE | 
Set AUTH_TYPE=AUTH_SIMPLE |
|*KERBEROS* | Use TGT from Ticket Cache | Use TGT from Ticket Cache | Do 
nothing | Use TGT from Ticket Cache |
|*OTHER*|Do nothing |Do nothing |Do nothing |Do nothing |


was (Author: jdere):
Whoops, pressed "Add" before I was done with creating the table. Logic supposed 
to match this:

|AUTH MODE | *AUTH_TYPE param not set*| *AUTH_SIMPLE* | *AUTH_TOKEN* | 
*AUTH_OTHER* | 
|*NOSASL*| Set AUTH_TYPE=AUTH_SIMPLE | Do nothing | Set AUTH_TYPE=AUTH_SIMPLE | 
Set AUTH_TYPE=AUTH_SIMPLE |
|*KERBEROS* | Use TGT from Ticket Cache | Use TGT from Ticket Cache | Do 
nothing | Use TGT from Ticket Cache |
|*OTHER*|Do nothing |Do nothing |Do nothing |Do nothing |
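
Roughly, the decision table above reduces to the following sketch; the enum 
and method names are illustrative and this is not the actual Hive JDBC driver 
code.

{code:java}
// Illustrative mapping of the table above: the server-side auth mode read from
// ZooKeeper only overrides the client's choice where the table says so; in
// particular KERBEROS + AUTH_TOKEN leaves the delegation token mechanism alone.
class AuthReconcileSketch {
  enum ServerAuthMode { NOSASL, KERBEROS, OTHER }
  enum Action { SET_AUTH_SIMPLE, USE_TGT_FROM_TICKET_CACHE, DO_NOTHING }

  static Action reconcile(ServerAuthMode mode, String clientAuthParam) {
    switch (mode) {
      case NOSASL:
        return "AUTH_SIMPLE".equals(clientAuthParam) ? Action.DO_NOTHING : Action.SET_AUTH_SIMPLE;
      case KERBEROS:
        return "AUTH_TOKEN".equals(clientAuthParam) ? Action.DO_NOTHING : Action.USE_TGT_FROM_TICKET_CACHE;
      default:
        return Action.DO_NOTHING; // OTHER: never touch what the client specified
    }
  }
}
{code}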

> JDBC Driver does not honor delegation token mechanism when reading params 
> from ZooKeeper
> -
>
> Key: HIVE-11875
> URL: https://issues.apache.org/jira/browse/HIVE-11875
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 1.3.0, 2.0.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-11875.1.patch
>
>
> Regression introduced in HIVE-11581. When the driver reads connection params 
> from ZK in a secure cluster, it overrides the delegation token mechanism 
> (specified on the client side) with a TGT-requiring mechanism (kinit).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-11875) JDBC Driver does not honor delegation token mechanism when reading params from ZooKeeper

2015-09-19 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14876642#comment-14876642
 ] 

Vaibhav Gumashta edited comment on HIVE-11875 at 9/19/15 7:26 PM:
--

Whoops, pressed "Add" before I was done with creating the table. Logic supposed 
to match this:

|AUTH MODE | *AUTH_TYPE param not set*| *AUTH_SIMPLE* | *AUTH_TOKEN* | 
*AUTH_OTHER* | 
|*NOSASL*| Set AUTH_TYPE=AUTH_SIMPLE | Do nothing | Set AUTH_TYPE=AUTH_SIMPLE | 
Set AUTH_TYPE=AUTH_SIMPLE |
|*KERBEROS* | Use TGT from Ticket Cache | Use TGT from Ticket Cache | Do 
nothing | Use TGT from Ticket Cache |
|*OTHER*|Do nothing |Do nothing |Do nothing |Do nothing |


was (Author: jdere):
Whoops, pressed "Add" before I was done with creating the table. Logic supposed 
to match this:

| | *AUTH_TYPE param not set*| *AUTH_SIMPLE* | *AUTH_TOKEN* | *AUTH_OTHER* | 
|*NOSASL*| Set AUTH_TYPE=AUTH_SIMPLE | Do nothing | Set AUTH_TYPE=AUTH_SIMPLE | 
Set AUTH_TYPE=AUTH_SIMPLE |
|*KERBEROS* | Use TGT from Ticket Cache | Use TGT from Ticket Cache | Do 
nothing | Use TGT from Ticket Cache |
|*OTHER*|Do nothing |Do nothing |Do nothing |Do nothing |

> JDBC Driver does not honor delegation token mechanism when reading params 
> from ZooKeeper
> -
>
> Key: HIVE-11875
> URL: https://issues.apache.org/jira/browse/HIVE-11875
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 1.3.0, 2.0.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-11875.1.patch
>
>
> Regression introduced in HIVE-11581. When the driver reads connection params 
> from ZK in a secure cluster, it overrides the delegation token mechanism 
> (specified on the client side) with a TGT-requiring mechanism (kinit).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11897) JDO rollback can throw pointless exceptions

2015-09-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877333#comment-14877333
 ] 

Hive QA commented on HIVE-11897:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12761242/HIVE-11897.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 9425 tests executed
*Failed tests:*
{noformat}
TestContribCliDriver - did not produce a TEST-*.xml file
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchCommit_Delimited
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5346/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5346/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5346/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12761242 - PreCommit-HIVE-TRUNK-Build

> JDO rollback can throw pointless exceptions
> ---
>
> Key: HIVE-11897
> URL: https://issues.apache.org/jira/browse/HIVE-11897
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-11897.patch
>
>
> DataNucleus does a bunch of stuff before the actual rollback, with each 
> subsequent step in a finally block; that way, even if the prior steps fail, 
> the rollback should still happen. However, an exception from some 
> questionable pre-rollback logic, like manipulating a result set after a 
> failure, can affect the DirectSQL fallback.
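
A generic illustration of the pattern being described, assuming nothing about 
DataNucleus internals: the rollback chained in the inner finally block still 
runs, but an exception thrown by the pre-rollback step still propagates to the 
caller.

{code:java}
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;

// Generic example only (not DataNucleus code): rollback still runs even if the
// earlier steps throw, but an exception from the "questionable pre-rollback
// logic" escapes to the caller if the rollback itself succeeds.
class RollbackSketch {
  static void rollbackWithCleanup(Connection conn, ResultSet rs) throws SQLException {
    try {
      rs.getMetaData();   // pre-rollback step that may throw after a failure
    } finally {
      try {
        rs.close();
      } finally {
        conn.rollback();  // still executes; the earlier exception then propagates
      }
    }
  }
}
{code}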



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10598) Vectorization borks when column is added to table.

2015-09-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877388#comment-14877388
 ] 

Hive QA commented on HIVE-10598:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12761265/HIVE-10598.04.patch

{color:red}ERROR:{color} -1 due to 58 failed/errored test(s), 9451 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_select
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_delete
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_delete_own_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_update
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_update_own_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_opt_vectorization
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_optimization2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lineage3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_create
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_diff_part_cols
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_diff_part_cols2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_int_type_promotion
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge_incompat2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_min_max
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_null_check
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_non_string_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_partitioned_date_time
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_part
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_part_project
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_part_varchar
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_outer_join1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_outer_join4
org.apache.hadoop.hive.ql.TestTxnCommands.testTimeOutReaper
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.majorCompactAfterAbort
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.majorCompactWhileStreaming
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.minorCompactAfterAbort
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.minorCompactWhileStreaming
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.testStatsAfterCompactionPartTbl
org.apache.hadoop.hive.ql.txn.compactor.TestWorker.majorTableLegacy
org.apache.hadoop.hive.ql.txn.compactor.TestWorker.majorTableNoBase
org.apache.hadoop.hive.ql.txn.compactor.TestWorker.majorTableWithBase
org.apache.hadoop.hive.ql.txn.compactor.TestWorker.majorWithAborted
org.apache.hadoop.hive.ql.txn.compactor.TestWorker.majorWithOpenInMiddle
org.apache.hadoop.hive.ql.txn.compactor.TestWorker.minorTableLegacy
org.apache.hadoop.hive.ql.txn.compactor.TestWorker.minorTableNoBase
org.apache.hadoop.hive.ql.txn.compactor.TestWorker.minorTableWithBase
org.apache.hadoop.hive.ql.txn.compactor.TestWorker.minorWithAborted
org.apache.hadoop.hive.ql.txn.compactor.TestWorker.minorWithOpenInMiddle
org.apache.hadoop.hive.ql.txn.compactor.TestWorker2.majorTableLegacy
org.apache.hadoop.hive.ql.txn.compactor.TestWorker2.majorTableNoBase
org.apache.hadoop.hive.ql.txn.compactor.TestWorker2.majorTableWithBase
org.apache.hadoop.hive.ql.txn.compactor.TestWorker2.majorWithAborted
org.apache.hadoop.hive.ql.txn.compactor.TestWorker2.majorWithOpenInMiddle
org.apache.hadoop.hive.ql.txn.compactor.TestWorker2.minorTableLegacy
org.apache.hadoop.hive.ql.txn.compactor.TestWorker2.minorTableNoBase
org.apache.hadoop.hive.ql.txn.compactor.TestWorker2.minorTableWithBase
org.apache.hadoop.hive.ql.txn.compactor.TestWorker2.minorWithAborted
org.apache.hadoop.hive.ql.txn.compactor.TestWorker2.minorWithOpenInMiddle
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler
org.apache.hive.hcatalog.mapreduce.TestHCatExternalPartitioned.testHCatPartitionedTable[3]
org.apache.hive.hcatalog.mapreduce.TestHCatMutablePartitioned.testHCatPartitionedTable[3]
org.apache.hive.hcatalog.mapreduce.TestHCatPartitioned.testHCatPartitionedTable[3]
org.apache.hive.hcatalog.pig.TestHCatStorer.testBagNStruct[3]
org.apache.hive.hcatalog.pig.TestHCatStorer.testEmptyStore[3]
org.apache.hive.hcatalog.pig.TestHCatStorer.testStoreFuncAllSimpleTypes[3]
{noformat}

Test results: 

[jira] [Commented] (HIVE-11642) LLAP: make sure tests pass #3

2015-09-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877269#comment-14877269
 ] 

Hive QA commented on HIVE-11642:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12761232/HIVE-11642.06.patch

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 9510 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_llap
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.initializationError
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_llapdecider
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.hcatalog.streaming.TestStreaming.testTimeOutReaper
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5344/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5344/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5344/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12761232 - PreCommit-HIVE-TRUNK-Build

> LLAP: make sure tests pass #3
> -
>
> Key: HIVE-11642
> URL: https://issues.apache.org/jira/browse/HIVE-11642
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-11642.01.patch, HIVE-11642.02.patch, 
> HIVE-11642.03.patch, HIVE-11642.04.patch, HIVE-11642.05.patch, 
> HIVE-11642.06.patch, HIVE-11642.patch
>
>
> Tests should pass against the most recent branch and Tez 0.8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11827) STORED AS AVRO fails SELECT COUNT(*) when empty

2015-09-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14876939#comment-14876939
 ] 

Hive QA commented on HIVE-11827:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12761239/HIVE-11827.2.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 9410 tests executed
*Failed tests:*
{noformat}
TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file
org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.testSaslWithHiveMetaStore
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5337/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5337/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5337/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12761239 - PreCommit-HIVE-TRUNK-Build

> STORED AS AVRO fails SELECT COUNT(*) when empty
> ---
>
> Key: HIVE-11827
> URL: https://issues.apache.org/jira/browse/HIVE-11827
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
> Environment: CDH5.4.5
>Reporter: Johndee Burks
>Assignee: Yongzhi Chen
>Priority: Minor
> Attachments: HIVE-11827.2.patch
>
>
> If you create a table stored as Avro and run select count(*) against it while 
> it is empty, the query fails. The following shows this; an empty table here 
> means a table with no data files. 
> {code}
> hive> create table j2 (a int) stored as avro;
> OK
> Time taken: 1.069 seconds
> hive> select count(*) from j2;
> Query ID = johndee_20150915113434_d4fe99d4-7fb9-42fe-9b91-ad560eeacc48
> Total jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=
> In order to set a constant number of reducers:
>   set mapreduce.job.reduces=
> java.io.IOException: org.apache.hadoop.hive.serde2.avro.AvroSerdeException: 
> Neither avro.schema.literal nor avro.schema.url specified, can't determine 
> table schema
>   at 
> org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat.getHiveRecordWriter(AvroContainerOutputFormat.java:65)
>   at 
> org.apache.hadoop.hive.ql.exec.Utilities.createEmptyFile(Utilities.java:3430)
>   at 
> org.apache.hadoop.hive.ql.exec.Utilities.createDummyFileForEmptyPartition(Utilities.java:3463)
>   at 
> org.apache.hadoop.hive.ql.exec.Utilities.getInputPaths(Utilities.java:3387)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:370)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1640)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1399)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1183)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1039)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:207)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:159)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:370)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:756)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: org.apache.hadoop.hive.serde2.avro.AvroSerdeException: Neither 
> avro.schema.literal 

[jira] [Updated] (HIVE-10598) Vectorization borks when column is added to table.

2015-09-19 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-10598:

Attachment: HIVE-10598.04.patch

> Vectorization borks when column is added to table.
> --
>
> Key: HIVE-10598
> URL: https://issues.apache.org/jira/browse/HIVE-10598
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Mithun Radhakrishnan
>Assignee: Matt McCline
> Attachments: HIVE-10598.01.patch, HIVE-10598.02.patch, 
> HIVE-10598.03.patch, HIVE-10598.04.patch
>
>
> Consider the following table definition:
> {code:sql}
> create table foobar ( foo string, bar string ) partitioned by (dt string) 
> stored as orc;
> alter table foobar add partition( dt='20150101' ) ;
> {code}
> Say the partition has the following data:
> {noformat}
> 1 one 20150101
> 2 two 20150101
> 3 three   20150101
> {noformat}
> If a new column is added to the table schema (and the partition continues to 
> have the old schema), vectorized reads from the old partition fail as follows:
> {code:sql}
> alter table foobar add columns( goo string );
> select count(1) from foobar;
> {code}
> {code:title=stacktrace}
> java.lang.Exception: java.lang.RuntimeException: Error creating a batch
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
> Caused by: java.lang.RuntimeException: Error creating a batch
>   at 
> org.apache.hadoop.hive.ql.io.orc.VectorizedOrcInputFormat$VectorizedOrcRecordReader.createValue(VectorizedOrcInputFormat.java:114)
>   at 
> org.apache.hadoop.hive.ql.io.orc.VectorizedOrcInputFormat$VectorizedOrcRecordReader.createValue(VectorizedOrcInputFormat.java:52)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.createValue(CombineHiveRecordReader.java:84)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.createValue(CombineHiveRecordReader.java:42)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.createValue(HadoopShimsSecure.java:156)
>   at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.createValue(MapTask.java:180)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: No type entry 
> found for column 3 in map {4=Long}
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatchCtx.addScratchColumnsToBatch(VectorizedRowBatchCtx.java:632)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatchCtx.createVectorizedRowBatch(VectorizedRowBatchCtx.java:343)
>   at 
> org.apache.hadoop.hive.ql.io.orc.VectorizedOrcInputFormat$VectorizedOrcRecordReader.createValue(VectorizedOrcInputFormat.java:112)
>   ... 14 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-4243) Fix column names in FileSinkOperator

2015-09-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14876980#comment-14876980
 ] 

Hive QA commented on HIVE-4243:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12761148/HIVE-4243.patch

{color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 9448 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union_fast_stats
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_union_fast_stats
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_outer_join1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_outer_join4
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testNegativeCliDriver_minimr_broken_pipe
org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.testMetastoreProxyUser
org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.testSaslWithHiveMetaStore
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5339/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5339/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5339/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 10 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12761148 - PreCommit-HIVE-TRUNK-Build

> Fix column names in FileSinkOperator
> 
>
> Key: HIVE-4243
> URL: https://issues.apache.org/jira/browse/HIVE-4243
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Affects Versions: 1.3.0, 2.0.0
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: HIVE-4243.patch, HIVE-4243.patch, HIVE-4243.patch, 
> HIVE-4243.tmp.patch
>
>
> All of the ObjectInspectors given to SerDes by FileSinkOperator have virtual 
> column names. Since the files are part of tables, Hive knows the column 
> names. For self-describing file formats like ORC, having the real column 
> names will improve understandability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11699) Support special characters in quoted table names

2015-09-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877054#comment-14877054
 ] 

Hive QA commented on HIVE-11699:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12761163/HIVE-11699.03.patch

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 9413 tests executed
*Failed tests:*
{noformat}
TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_special_character_in_tabnames_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_special_character_in_tabnames_3
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.hcatalog.streaming.TestStreaming.testEndpointConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5340/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5340/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5340/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12761163 - PreCommit-HIVE-TRUNK-Build

> Support special characters in quoted table names
> 
>
> Key: HIVE-11699
> URL: https://issues.apache.org/jira/browse/HIVE-11699
> Project: Hive
>  Issue Type: New Feature
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-11699.01.patch, HIVE-11699.02.patch, 
> HIVE-11699.03.patch
>
>
> Right now table names can only match "[a-zA-Z_0-9]+". This patch investigates 
> how much change would be needed to support special characters, e.g., "/", in 
> table names.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11820) export tables with size of >32MB throws "java.lang.IllegalArgumentException: Skip CRC is valid only with update options"

2015-09-19 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-11820:

Attachment: HIVE-11820.2.patch

> export tables with size of >32MB throws "java.lang.IllegalArgumentException: 
> Skip CRC is valid only with update options"
> 
>
> Key: HIVE-11820
> URL: https://issues.apache.org/jira/browse/HIVE-11820
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Takahiko Saito
>Assignee: Takahiko Saito
> Attachments: HIVE-11820.2.patch, HIVE-11820.2.patch, HIVE-11820.patch
>
>
> Tested a patch of HIVE-11607 and saw the following exception:
> {noformat}
> 2015-09-14 21:44:16,817 ERROR [main]: exec.Task 
> (SessionState.java:printError(960)) - Failed with exception Skip CRC is valid 
> only with update options
> java.lang.IllegalArgumentException: Skip CRC is valid only with update options
> at 
> org.apache.hadoop.tools.DistCpOptions.validate(DistCpOptions.java:556)
> at 
> org.apache.hadoop.tools.DistCpOptions.setSkipCRC(DistCpOptions.java:311)
> at 
> org.apache.hadoop.hive.shims.Hadoop23Shims.runDistCp(Hadoop23Shims.java:1147)
> at org.apache.hadoop.hive.common.FileUtils.copy(FileUtils.java:553)
> at org.apache.hadoop.hive.ql.exec.CopyTask.execute(CopyTask.java:82)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1655)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1414)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {noformat}
> A possible resolution is to reverse the order of the following two lines from 
> a patch of HIVE-11607:
> {noformat}
> +options.setSkipCRC(true);
> +options.setSyncFolder(true);
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11217) CTAS statements throws error, when the table is stored as ORC File format and select clause has NULL/VOID type column

2015-09-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877177#comment-14877177
 ] 

Hive QA commented on HIVE-11217:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12761218/HIVE-11271.5.patch

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 9453 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.hcatalog.streaming.TestStreaming.testEndpointConnection
org.apache.hive.hcatalog.streaming.TestStreaming.testRemainingTransactions
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchCommit_Json
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5342/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5342/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5342/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12761218 - PreCommit-HIVE-TRUNK-Build

> CTAS statements throws error, when the table is stored as ORC File format and 
> select clause has NULL/VOID type column 
> --
>
> Key: HIVE-11217
> URL: https://issues.apache.org/jira/browse/HIVE-11217
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Affects Versions: 0.13.1
>Reporter: Gaurav Kohli
>Assignee: Yongzhi Chen
>Priority: Minor
> Attachments: HIVE-11217.1.patch, HIVE-11217.2.patch, 
> HIVE-11217.3.patch, HIVE-11217.4.patch, HIVE-11271.5.patch
>
>
> If you try to use a create-table-as-select (CTAS) statement to create an 
> ORC-format table, then you can't use NULL as a column value in the select 
> clause: 
> CREATE TABLE empty (x int);
> CREATE TABLE orc_table_with_null 
> STORED AS ORC 
> AS 
> SELECT 
> x,
> null
> FROM empty;
> Error: 
> {quote}
> 347084 [main] ERROR hive.ql.exec.DDLTask  - 
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.IllegalArgumentException: Unknown primitive type VOID
>   at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:643)
>   at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4242)
>   at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:285)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1554)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1321)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1139)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:952)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:269)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:221)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:431)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:367)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:464)
>   at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:474)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:756)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:694)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:633)
>   at org.apache.oozie.action.hadoop.HiveMain.runHive(HiveMain.java:323)
>   at org.apache.oozie.action.hadoop.HiveMain.run(HiveMain.java:284)
>   at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:39)
>   at org.apache.oozie.action.hadoop.HiveMain.main(HiveMain.java:66)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:227)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
>   at 

[jira] [Commented] (HIVE-11807) Set ORC buffer size in relation to set stripe size

2015-09-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877125#comment-14877125
 ] 

Hive QA commented on HIVE-11807:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12761202/HIVE-11807.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 9449 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.hcatalog.streaming.TestStreaming.testRemainingTransactions
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5341/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5341/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5341/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12761202 - PreCommit-HIVE-TRUNK-Build

> Set ORC buffer size in relation to set stripe size
> --
>
> Key: HIVE-11807
> URL: https://issues.apache.org/jira/browse/HIVE-11807
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: HIVE-11807.patch, HIVE-11807.patch
>
>
> A customer produced ORC files with very small stripes (10k rows/stripe) by 
> setting a small 64MB stripe size and a 256K buffer size for a 54-column 
> table. At that size, each of the streams only gets a buffer or two before the 
> stripe size is reached. The current code uses the available memory instead of 
> the stripe size and thus doesn't shrink the buffer size when the JVM has much 
> more memory than the stripe size.
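
As a rough worked example of why that configuration starves the streams (the 
streams-per-column count below is an assumption; the real number varies by 
column type and encoding):

{code:java}
// Back-of-the-envelope arithmetic for the scenario above: with 54 columns,
// an assumed ~3 streams per column and 256 KB buffers, one full round of
// buffers is roughly 40 MB, so a 64 MB stripe flushes after only a buffer
// or two per stream.
class OrcBufferMath {
  public static void main(String[] args) {
    long stripeSize = 64L << 20;   // 64 MB stripe size
    int bufferSize = 256 << 10;    // 256 KB compression buffer
    int columns = 54;
    int streamsPerColumn = 3;      // assumption, varies by column type
    long bytesPerRound = (long) columns * streamsPerColumn * bufferSize;
    System.out.println("bytes per round of buffers = " + bytesPerRound);
    System.out.println("buffer rounds per stripe   = " + stripeSize / bytesPerRound);
  }
}
{code}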



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11889) Add unit test for HIVE-11449

2015-09-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877213#comment-14877213
 ] 

Hive QA commented on HIVE-11889:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12761223/HIVE-11889.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 9453 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5343/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5343/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5343/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12761223 - PreCommit-HIVE-TRUNK-Build

> Add unit test for HIVE-11449
> 
>
> Key: HIVE-11889
> URL: https://issues.apache.org/jira/browse/HIVE-11889
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.3.0, 2.0.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Attachments: HIVE-11889.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11891) Add basic performance logging to metastore calls

2015-09-19 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-11891:

Summary: Add basic performance logging to metastore calls  (was: Add basic 
performance logging at trace level to metastore calls)

> Add basic performance logging to metastore calls
> 
>
> Key: HIVE-11891
> URL: https://issues.apache.org/jira/browse/HIVE-11891
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 1.0.0, 1.2.0, 1.1.0
>Reporter: Brock Noland
>Assignee: Brock Noland
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HIVE-11891.patch, HIVE-11891.patch, HIVE-11891.patch
>
>
> At present it's extremely difficult to debug slow calls to the metastore. 
> Ideally there would be some basic means of doing so.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)